About the Workshop
This half-day workshop is designed to foster awareness and understanding of user-centred secure robotics within HRI (and related disciplines), and to equip participants with appropriate foci, frameworks, and collaborations to inform their ongoing robotics activities with this critical perspective.
The end goal of this workshop is to enable the development of methodologies and recommendations that empower people to understand, and make decisions about, how their robots (and/or robots they use) are secured, together with approaches for raising awareness of the impact of an attack.
Image: L. M. Bishop, P. M. Asquith, and P. L. Morgan, “The Employee Cybersecurity Awareness Framework,” Human Behavior and Emerging Technologies, vol. 2025, no. 1, p. 1025045, 2025
Final Program
Welcome and Introductions
Organisers
2:30 pm
Professor Phillip L Morgan
School of Psychology, Cardiff University, UK
Opening Address: “How to prevent an AI and robot apocalypse”: Designing and deploying AI, robots and other autonomous systems responsibly, safely, securely and ethically
Abstract: Over the past 15-20 years, we have seen rapid technological developments in AI, robotic and autonomous systems, such that they are fast becoming ubiquitous within many workplace domains (e.g. healthcare, logistics, manufacturing, transport) and ever present within domestic and social contexts. Self-driving cars, industrial and domestic robots, augmented reality, and smart AI agents are no longer the stuff of fiction. Such technologies can, for example: increase productivity; complete repetitive tasks; streamline operations; reduce errors, incidents and accidents typically caused by humans; and, in a growing number of cases, support decision making. However, they are not flawless, yet are being developed and deployed at a rapid pace. Lisanne Bainbridge (1983) warned of the ‘ironies of automation’; Raja Parasuraman and Victor Riley (1997) of the ‘misuse, disuse, and abuse of automation’; John Lee and Katrina See (2004) of ‘designing automation for appropriate reliance’; and Alexandra Kaplan and colleagues (2023) of ‘factors that have no bearing on AI performance impacting trust in AI’. Are we then risking an AI and robot apocalypse? A judgement day? Quite possibly, unless such technologies are designed, developed, and tested responsibly, safely, securely, and ethically by humans, and crucially with end-users. I will present example research findings, recommendations, notes of caution and many tales of hope from projects spanning a 20+ year career (to date) in Human Factors Psychology and Cognitive Science, across application domains including aerospace, defence, emergency services, environmental intelligence, healthcare, and transportation. Furthermore, almost all of these technologies are at risk of being cyber attacked, due to us – humans – often being the weakest link. I will discuss how we can better understand and measure our cyber vulnerabilities in order to fight back and achieve a state of seamless security and privacy in symbiosis with the AI, robotic and autonomous systems with which we increasingly share the world.
Bio:
Prof Phil Morgan holds a Personal Chair (as a Senior Professor) within the School of Psychology at Cardiff University. He is Director of the Cardiff University Human Factors Excellence (HuFEx) Group, Director of Research within the Centre for AI, Robotics, and Human-Machine Systems (IROHMS), Transportation and Human Factors and Cognitive Science Lead within the Digital Transformation Innovation Institute (DTII), Director of the Airbus – Cardiff University Academic Centre of Excellence in Human-Centric Cyber Security (H2CS), and Co-Academic Lead of a partnership between Airbus and Cardiff University. Prof Morgan is also Visiting Professor at Luleå University of Technology – Psychology, Division of Health, Medicine & Rehabilitation, Sweden, and Distinguished Visiting Fellow within the Faculty of Education, Science, Technology and Mathematics at the University of Canberra, Australia.
Formally trained as a Cognitive Experimental Psychologist, Prof Morgan is an international expert in human aspects of AI and automation, trust in new/disruptive technologies, Cyberpsychology, transportation human factors, HMI design, HCI, interruption and distraction effects, and adaptive cognition, and has published extensively (>130 outputs) across these areas. With >50 grants (~£40 million; funders include Airbus, CREST, ERDF, ESRC, EPSRC, HSSRC, IUK, NCSC, SOS Alarm, and Wellcome), often as Principal Investigator / Institution Lead, he has significant project management experience. He supervises PhD students (with many past completions) in areas including human aspects of AI, automation, cyber security, transportation and robotics.
Prof Morgan was a Human Factors lead on the IUK (~£5m, 2015-18) Venturer Autonomous Vehicles for UK Roads project, Co-I and Human Factors lead on the IUK (~£5.5m, 2016-19) Flourish Connected Autonomous Vehicles project, and PI on an ESRC-JST (~£750k, 2020-2023, with universities in Japan, e.g. Kyoto and Osaka) project, Rule of Law in the Age of AI: Distributive Liability for Multi-Agent Societies, focussing on factors such as trust, blame, and implications for standards and legislation in the event of accidents involving autonomous vehicles. Amongst other current projects, Prof Morgan is Co-Leading a cross-cutting Human-Centred Design Work Package within an EPSRC (~£12m, 2024-2029) AI for Collective Intelligence (AI4CI) hub (https://ai4ci.ac.uk/).
Recently, Prof Morgan established HumaniFAI Ltd – a research and consultancy company focussed on human-centred, assured, ethical, responsible, and safe design and use of AI, robotic and autonomous systems.
Keynote session 1
2:40 pm

Prof Tatsuhiko Inatani
Professor of Legal and Political Studies, Graduate School of Law, Kyoto University, JP
Keynote 01: The Flick: Leveraging Designed Frictions to Foster Agency during Human-Robot Interactions
Abstract: As social robot interactions become more commonplace, we are beginning to see a clear picture of both their positive and negative impacts. This shift has brought the ELSI (Ethical, Legal, and Social Implications) of social robotics to the forefront of academic and public discourse. On one hand, these interactions offer promising benefits for well-being, such as slowing cognitive decline and preventing mental health issues. On the other hand, there are growing concerns that over-reliance on social robots could lead to psychological distress and a decline in overall well-being. To ensure that social robots truly enhance human well-being, what should we demand of developers and users through law and ethics? This talk explores the ideal form of human-robot interaction and the role of legal and ethical frameworks in achieving it, drawing on cognitive science and sociocultural anthropology. Using active inference as a theoretical foundation, I argue that the “frictions” or “gaps” in communication with social robots are, in fact, closely linked to a user’s sense of agency and well-being. Specifically, I propose that introducing “Flick” (designed gaps or subtle frictions in interaction) is a necessary legal and ethical requirement to prevent passive dependency, and that cultural differences must be treated as a critical variable when designing and regulating these interactions.
Bio: Tatsuhiko Inatani is a Professor at the Kyoto University Graduate School of Law, specializing in criminal law, criminology, and law and technology. His research primarily focuses on corporate crime and the legal governance of emerging science and technology. Taking an interdisciplinary approach, Professor Inatani integrates insights from philosophy, cognitive science, and economics into his legal analysis. He serves as the Principal Investigator (PI) for the AI and Law research team at Kyoto University’s Center for Interdisciplinary Studies on Law and Policy and is a visiting researcher at RIKEN AIP. Beyond academia, he contributes to various committees for Japan’s Cabinet Secretariat, the Ministry of Economy, Trade and Industry (METI), the Digital Agency, IPA-DADC, the World Economic Forum (WEF), and OECD.

Prof. Praminda Caleb-Solly
Professor of Embodied Intelligence, Faculty of Science, University of Nottingham, UK
Keynote 02: When ‘intelligent’ behaviour risks safety: adaptation, end-user agency and governance gaps in physically assistive AI
Abstract: Adaptive ‘intelligence’ is increasingly presented as the route to safer, more personalised physical assistance; however, in embodied systems, adaptation is also a potential pathway for harm. In this keynote, Professor Praminda Caleb-Solly will draw on her research on physically assistive robots to examine a central dilemma for secure robotics – when should a system adapt, what counts as adaptation across different timeframes, and who has the authority to permit, constrain, and reverse it? Grounded in her safety-focused work on dressing assistance and physical human–robot interaction (including formal safety assessment approaches and evidence on how distraction and cognitive overload alter human movement), she argues that safety is co-produced through system behaviour, human understanding, and the ability to intervene. These are also conditions that can be destabilised when learning systems drift or are manipulated. Building from her contributions on standards and regulation for physically assistive robots, robot ethics with older adults, longitudinal field deployment insights, and recent work on benchmarking for trustworthy robots, she highlights a persistent governance mismatch – existing assurance mechanisms largely assume static behaviour, while adaptive physical AI changes what it does, what it senses, and what it learns after deployment. The talk proposes an agency-centred framing for “secure adaptation”, which introduces enforceable user controls (opt-in/out, boundaries, override), transparency about what changed and why, and accountability structures that make ongoing verification, maintenance and auditability core requirements rather than afterthoughts.
Bio: Professor Praminda Caleb-Solly is Professor of Embodied Intelligence in the School of Computer Science at the University of Nottingham, University Academic Lead for the National Rehabilitation Centre, and an Honorary Visiting Professor at NUH NHS Trust. Her research focus is assistive robotics and intelligent sensing, with 25+ years’ experience spanning academia and health technology translation, including four years in an assistive-technology SME/charity, Designability. She holds a PhD in Computer Science, an MSc in Biomedical Instrumentation Engineering, and a BEng (Hons) in Electronic Systems Engineering. She leads the CHART research group, delivering interdisciplinary programmes and living-lab testbeds focused on evaluation, safety assurance and adoption of robotics and connected health technologies. Her governance and standards roles include Co-Chair of the IEEE RAS Technical Committee on Robot Ethics and membership of ISO TC299/WG2 and UK BSI robotics/ethics committees. She led the EPSRC Healthcare Technologies Network+ EMERGENCE, which produced a White Paper on Robotics in Health and Social Care, and co-leads the NIHR RehabHRC Enabling Participation theme. She is also Co-Founder and Director of Robotics for Good CIC, supporting responsible development and deployment of robotics and AI for public benefit.
Discussion and questions
3:00 pm
Keynote session 2
3:05 pm

Prof Matthew Ewart Studley
Professor of Ethics and Technology
Director of Engineering Research and Enterprise
Bristol Robotics Laboratory, UWE, Bristol, UK
Keynote 03: On the Ethical Framing...
Synopsis: I will present a brief exploration of possible ethical framings within which we can consider the impacts of cyber attacks on social robots. I will argue that the most important impact is on trust, and that we should assess moral value here within the framing of ecological ethics.
Bio: Matthew Studley’s career in robotics has been driven by a commitment to ensuring that Godlike Technology results in more good than harm, and that it should be possible for citizens to influence the future beyond the laissez-faire approach of surrendering responsibility to market forces. As the UWE Bristol Engineering Director of Research, he drives strategic initiatives, fosters international collaborations, and ensures research aligns with ethical standards. His work spans robotics, machine learning and AI, autonomous weapons, standards and sustainability, with a keen focus on embedding ethics into every stage of technological development. He has served as an advisor for the World Economic Forum and UK Government, and as a board member for the Engineering Professors Council and the European Robotics League, where he advances the consideration of ethical and social impacts of robotics through international competitions.

Prof David Cotterrell
Research Professor of Fine Art, Sheffield Creative Industries Institute
College of Social Sciences and Arts, Sheffield Hallam University, Sheffield, UK
Keynote 04: What Robots Inherit: Labour, Power, and the Ethics We Allow to Scale
Abstract: While robotics and artificial intelligence are often discussed in terms of efficiency, automation, and future capability, their development also serves as a mirror, reflecting how we already understand agency, labour, value, and conflict. As intelligent systems increasingly participate in social, economic, and creative domains, they risk inheriting not only our technical ambitions but also our unresolved ethical compromises.
Within human–robot interaction, questions of coexistence are therefore inseparable from questions of responsibility. The ways in which robots are designed to work, serve, assist, or replace human labour reveal long-standing societal tolerances: the normalisation of exploitation, an enduring obsession with productivity, and a persistent acceptance of conflict as a structuring force. As these systems accelerate beyond our capacity for collective ethical consensus, the urgency for safeguards and moral reflection becomes acute—not simply to protect humans from machines, but to confront what these machines expose about ourselves.
This keynote proposes that the challenge of cohabitation with intelligent technologies is less about control or alignment, and more about recognition: recognising which values we encode, which behaviours we reproduce, and which injustices we quietly allow to scale. From an artistic perspective, robotics is not only a technological project, but a cultural one—inviting us to reconsider how we manage rights, dignity, and care, both for emerging forms of agency and for one another.
Bio: David Cotterrell is a British visual artist. David works internationally and regularly collaborates with artists, civil servants, academics and administrators to realise art, advocacy and social research projects. David’s work spans galleries, architecture and the public realm. He has realised over 105 exhibitions or public artworks, 40 publications and 75 papers and public lectures in the UK, North America, Europe, the Middle East and Asia. In recent years, David has been working to develop multidisciplinary interventions within visual arts, theatre and policy.
Cotterrell’s work is diverse and at times playful, but it is consistently informed by research and an analysis of its role within the shared social and physical space that it inhabits. In addition to commissioned artworks and interventions, Cotterrell has collaborated with architects, engineers, and masterplanners on strategic projects in British Cities and has been involved in debates, exhibitions, and events concerning the challenges of urban design, experience, and policy in England, China, and the US.
David has worked in conflicted landscapes and considered the ethical and practical challenges of humanitarian and military engagement at sites of tension around the world, including Palestine, Tunisia, Afghanistan and other regional contexts. David co-founded Empathy & Risk in 2016.
David has held academic posts within the UK since 2000. He was first awarded a personal chair in 2008, was the recipient of the Philip Leverhulme Prize for research in 2010 and was appointed Director of Research and Development at the University of Brighton in 2016. Since 2018, he has held the post of Research Professor at Sheffield Hallam University and was the Director of the Culture and Creativity Research Institute from 2020-2025.
Discussion and questions
3:25 pm
Scenario Planning Activity
First developed in defence settings, and now widely used across many fields, scenario planning is designed to scaffold strategic thinking, consideration of diverse options and outcomes, and development of tactical approaches to problem-solving (Cordova-Pozo et al., 2023). This activity will help develop participants’ understanding of different contextual factors relevant to secure robotics, with a focus on conceptualising the user’s experience and possible impacts from different examples of system violations.
3:30 pm
Break
4:10 pm
Lightning Talks
4:40 pm
Talk 01: Trust Is Not Reliance: A Psychological Perspective on Secure Social Robotics
By Wenwen Gao
Abstract: As robotic systems become increasingly autonomous and embedded in everyday contexts, security is no longer only a technical concern but a lived psychological experience for users. This lightning talk introduces a psychological perspective on user-centred secure robotics, focusing on how trust and reliance are distinct but interrelated processes, each with different implications for security.
Drawing on research from human–robot interaction and trust dynamics, the talk highlights how users may continue to rely on a robot despite diminished trust—for example due to task dependency, lack of alternatives, or situational pressure. Conversely, users may express trust while still limiting reliance through cautious or compensatory behaviours. These mismatches become especially consequential following system failures, anomalous behaviour, or perceived cyber-attacks.
Disruptions to trust and reliance can lead to different user responses, including over-reliance, inappropriate delegation, safeguard circumvention, or premature disengagement. Importantly, such responses may undermine safety and security objectives even when technical protections are functioning as intended. Users also differ in how they interpret security-relevant events and in how trust and reliance evolve over time.
By explicitly distinguishing trust from reliance, this talk contributes a human-centred perspective to secure robotics. It reframes security not only as a matter of technical robustness, but as the management of dynamic human–robot relationships, where understanding when and why users trust or rely on a system is essential for effective security design and response.
Talk 02: Identical Performance Does Not Produce Identical Trust: Psychosocial Implications for Safe AI Systems Adoption
By Otter, M., Honey, R. C., & Morgan, P. L.
Abstract: If we hope to support the safe and successful adoption of next-generation robots and AI-enabled technologies, we must grapple with the psychosocial factors that shape whether users will accept and adopt these systems. This contribution presents findings from an experiment in which participants were tasked with protecting a data centre. Participants had concurrent access to two advisors, one labelled AI Expert and the other Human Expert, whose recommendations were identically accurate (although participants were not made aware of this). The human advisor was selected significantly more frequently (p < .001), and its recommendations were rejected proportionally far less often (p < .001). Moreover, whilst the self-reported trust scores for the two expert advisors converged over time (5 blocks, over 120 trials), this did not translate into behavioural change.
This dissociation between trust-as-attitude and trust-as-behaviour has direct implications for evaluating the safety of robots and other autonomous and AI-enabled systems. Self-report, adoption-based metrics may mask persistent behavioural resistance, imposing financial costs on organisations and creating ongoing security risks.
This experiment, focused on reactions to cyber-attacks, shows that equal competence of the expert advisors is insufficient to overcome initial bias against AI, and additionally challenges the assumption that exposure alone produces adoption. The implications of this work go beyond cyber security operations to the integration of social robots in domestic and workplace environments. We argue that legal/regulatory frameworks governing deployment must incorporate behavioural evidence alongside attitudinal measures and include strategies that support safe, willing integration by directly addressing psychosocial barriers.
Talk 03: The Cybersecurity Implications of Social Presence in Human–Robot Interaction
By Sharni Konrad
Abstract: Human-robot interaction research has shown that social presence – the extent to which an agent recognises the existence of another agent and their potential for shared experience – plays a crucial role in variables such as trust, engagement, future adoption, and anthropomorphism. But what happens when social presence shapes not only interaction outcomes, but also users’ assumptions about the robot itself?
This lightning talk will explore whether varying levels of social presence in HRI influence how anthropomorphised and “human-like” a robot is perceived to be, and whether these perceptions extend to implicit assumptions about cybersecurity and data safety. I propose that social presence does more than enhance engagement; it may also prompt automatic assumptions about agency and discretion.
When robots appear socially responsive and relational, users may attribute to them human-like qualities such as responsibility, autonomy, and even discretion. These inferences may reduce perceived vulnerability, fostering beliefs that a robot can manage its own data, protect user information, or resist malicious interference – when, in reality, most social robots are networked machines embedded within broader technical infrastructures.
This perspective positions social presence as a human-factor variable in cyber safety: if heightened social presence inflates perceived safety, it may also reduce critical scrutiny of privacy risks. Conversely, lower social presence may draw attention to the robot as a connected machine, increasing privacy vigilance. This highlights the need to design robots that feel engaging without encouraging misplaced assumptions about their security.
Talk 04
By Ffion Evans, Dr Dominic Guittard, Dr Thomas Vaughan-Johnston, Dr Katy Burgess and Professor Phillip Morgan (Cardiff University/Prifysgol Caerdydd)
Abstract: As artificial intelligence (AI) becomes increasingly embedded into everyday life, it is rapidly expanding into high-stakes domains such as medicine and healthcare. The question is no longer whether humans can trust AI, but whether that trust is appropriately calibrated. Current approaches often rely on self-reported measures or observable patterns of AI-influenced behaviour in isolation. While informative, these indicators risk fragmenting a construct that is dynamic, relational, and shaped by continuous interaction.
This research argues that understanding trust in AI requires a new integrative framework centred on calibrated trust, defined as the alignment between perceived and actual system capability. Drawing on trust calibration theory (Lee & See, 2004), dual-process models of decision making, and research on algorithm aversion (Dietvorst et al., 2014), this work reconceptualises trust as a dynamic learning process. Rather than remaining stable, trust may fluctuate following system success or failure, creating tipping points that lead to over-reliance or withdrawal. Crucially, research should examine how individuals recalibrate trust after errors occur. Calibration is not treated as the mere product of system accuracy, but as emerging from the interaction between system characteristics, individual differences, and task demands.
By integrating multiple theoretical perspectives from cognitive psychology and human-AI interaction, this work aims to move beyond binary notions of trust versus distrust. In safety-critical contexts, from clinical decision support to autonomous vehicles, failures of trust calibration may impair decision quality and increase vulnerability within socio-technical systems, making calibrated trust central to the design of appropriate and secure systems.
Small Group Discussions: Applications
The final discussion session will take place in small groups and will focus on applying the issues, frameworks, and knowledge introduced across the workshop to real-world applications. Each group will be given a specific contextual framing within which to consider potential security harms and vulnerabilities, system strengths, mechanisms to support awareness, and ways to encourage security-conscious application development in robotics.
5:00 pm
Collation of Key Themes, Future Planning and Wrap-Up
5:40 pm
Secure Robotics: Interdependence of Trust, Safety, and Security
The escalating development and integration of interactive, assistive, and social robotic and autonomous systems (RAS) into our lives has brought a pressing need to address the issue of human trust in them (and in those developing them). Trust fundamentally impacts acceptance, adoption, and continued usage. Optimally calibrated trust is crucial for effective human-robot collaboration and successful long-term acceptance and adoption, as well as continued use following situations where something goes wrong.
Security is the non-negotiable foundation upon which both Trust and Safety are built. Robust cybersecurity measures (including those based on assurance, ethical, and responsible-by-design principles) are essential to optimise the confidentiality and integrity of the robot’s control systems and data, thereby ensuring its predictability and preventing adversarial manipulation that could compromise safe operation and erode user trust.
Safety in this context must extend beyond mitigating engineering faults to include defences against malicious attacks. The capacity for a robot to operate safely, even when under a cyber attack, is the final element that validates human trust. The synthesis of Trust, Safety, and Security into a unified and interdependent set of design requirements is the core definition of Secure Robotics and the focus of this workshop.
Call for Participation
The increasing prevalence of interactive, mobile robots in domestic and social spaces requires not only a comprehensive examination of the security challenges inherent in their large-scale deployment, but also an understanding of how we, as a community, can support the safe and successful adoption of robots. This workshop provides a forum for researchers, practitioners, and stakeholders from a range of disciplines to build expertise and networks in safety, trust, psychosocial, legal, and economic aspects for the secure deployment of social robots in domestic environments.
We invite applications to participate in this workshop as a Lightning Talk speaker (2-minute presentation per speaker). We particularly encourage early-career researchers (ECRs) and PhD students to apply, as well as those from a range of disciplinary backgrounds (including ethics, law, engineering, complex systems, human factors, and art). Topics include (but are not limited to):
- Robot safety, security, and trust
- Robot ethics
- Legal and economic implications of interactive robots
- Cyber attacks on social robots
- Psychosocial and human-factor considerations in deploying interactive robots
Please provide a 250-word abstract to the workshop email by 27 February 2026 (AoE).
Organisers
Prof. Phillip L Morgan
School of Psychology, AI, Robotics & Human-Machine Systems Centre, Cardiff University, UK
Prof. Damith Herath
Collaborative Robotics Lab, University of Canberra, Bruce, AU
Prof. Praminda Caleb-Solly
School of Computer Science, University of Nottingham, Nottingham, UK
Prof. Matthew Studley
Bristol Robotics Laboratory, University of the West of England, Bristol, UK
Dr. Elizabeth Williams
School of Engineering, Australian National University, AU
Aurora An-Lin Hu
Collaborative Robotics Lab, University of Canberra, Bruce, AU
Dr. Eduardo B. Sandoval
School of Art and Design, University of New South Wales, Sydney, AU
Dr. Maleen Jayasuriya
Collaborative Robotics Lab, University of Canberra, Bruce, AU
Dr. Min Wang
Collaborative Robotics Lab, University of Canberra, Bruce, AU
Assoc Prof. Janie Busby Grant
Collaborative Robotics Lab, University of Canberra, Bruce, AU
Join Our Workshop Today!
Don’t miss the opportunity to be part of a transformative experience in secure robotics. Register now to engage with leading experts and explore cutting-edge innovations that are shaping the future of domestic robotics. Stay informed and connected by following us on social media for the latest updates and insights.