Araya Reinforcement Learning Team

Levels of Shared Autonomy
in Brain-Robot Interfaces

Enabling Multi-Human Multi-Robot Collaboration for Activities of Daily Living

Hannah Douglas†, Marina Di Vincenzo†, Rousslan Fernand Julien Dossa†,
Luca Nunziante†, Shivakanth Sujit, Kai Arulkumaran
Araya Inc., Tokyo, Japan
†Equal contributions

Abstract

Individuals with ALS and other severe motor impairments often rely on caregivers for daily tasks, which limits their independence and sense of control. Brain-robot interfaces (BRIs) have the potential to restore autonomy, but many existing systems are task-specific and highly automated, which reduces the users’ sense of empowerment and limits opportunities to exercise autonomy. In particular, shared autonomy approaches hold promise for overcoming current BRI limitations, by balancing user control with increased robot capabilities.

In this work, we introduce a collaborative BRI that integrates non-invasive EEG, EMG, and eye tracking to enable multi-user, multi-robot interaction in a shared kitchen environment with mobile manipulators. Our system modulates assistance through three levels of autonomy—Assisted Teleoperation, Shared Autonomy, and Full Automation—allowing users to retain meaningful control over task execution while reducing effort for routine operations. We conducted a controlled user study comparing autonomy conditions, evaluating performance, workload, ease of use, and agency.

Our results show that, while Full Automation was generally preferred by users due to lower workload and higher usability, Shared Autonomy provided higher reliability and preserved user agency, especially in the presence of noisy EEG decoding. Although there was significant individual variability in EEG decoding performance, our post-hoc analysis revealed the potential benefits of customizing pipelines for each user.

Motivation

We aim to advance assistive robotics to empower people with physical disabilities in activities of daily living (ADLs). By using shared autonomy to enable multi-agent collaboration between humans and robots, our research explores how technology can support independence and improve quality of life.

[Figure: Motivation]

Assistive Kitchen

Our experiments take place in a simulated kitchen, designed to study how people and robots can collaborate in everyday tasks. The environment includes a central table, storage areas, and common objects such as plates, cutlery, and cups, arranged to mimic a realistic household setup.

Within this setting, participants and robots work together on various tasks, such as moving food and drink containers from one area to another, organizing items on the table, or retrieving them from storage. These tasks allow us to evaluate different forms of human input and robot autonomy in a familiar and meaningful context.

The assistive kitchen environment is more than just a testbed: it provides a controlled yet realistic setting where we can explore how collaboration unfolds, how users adapt to varying autonomy levels, and what design choices make robotic systems more usable and supportive in everyday life.

[Figure: The assistive kitchen environment]

Mobile Manipulators

Each participant is given control of a mobile manipulator robot, a Franka Emika Panda arm mounted on a TidyBot++ holonomic mobile base. The mobile base is controlled via path planning, while the robot arm is trained to perform manipulation tasks via imitation learning. For the latter, we use the RISE point cloud encoder, trained with a flow matching objective on 70 human demonstrations per task. We ensure safe and reliable motion using an operational space control barrier function controller on top of the learned policy. A simplified sketch of the training objective is given after the task demonstrations below.

Pick Pasta

Pick Soup

Pick Wine

Pick Beer

Place Pasta

Place Soup

Place Wine

Place Beer
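
As a rough illustration of how the manipulation policy is trained, the sketch below shows a linear-interpolant flow matching loss and Euler-step action sampling in PyTorch. The network, tensor shapes, and conditioning are simplified stand-ins rather than the actual RISE-based pipeline, and the control barrier function safety layer is not shown.

import torch
import torch.nn as nn


class VelocityNet(nn.Module):
    """Predicts the flow velocity for a noised action chunk, conditioned on an
    observation embedding (e.g. from a point cloud encoder) and a time t."""

    def __init__(self, obs_dim=512, act_dim=7, horizon=16, hidden=256):
        super().__init__()
        self.horizon, self.act_dim = horizon, act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + horizon * act_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, horizon * act_dim),
        )

    def forward(self, obs_emb, noisy_actions, t):
        x = torch.cat([obs_emb, noisy_actions.flatten(1), t], dim=-1)
        return self.net(x).view(-1, self.horizon, self.act_dim)


def flow_matching_loss(model, obs_emb, actions):
    """Linear-interpolant flow matching: regress the velocity (data - noise)."""
    x0 = torch.randn_like(actions)                        # noise endpoint
    t = torch.rand(actions.shape[0], 1, device=actions.device)
    xt = (1 - t[:, :, None]) * x0 + t[:, :, None] * actions
    v_pred = model(obs_emb, xt, t)
    return ((v_pred - (actions - x0)) ** 2).mean()


@torch.no_grad()
def sample_actions(model, obs_emb, steps=10):
    """Integrate the learned velocity field from noise with Euler steps."""
    x = torch.randn(obs_emb.shape[0], model.horizon, model.act_dim,
                    device=obs_emb.device)
    for i in range(steps):
        t = torch.full((obs_emb.shape[0], 1), i / steps, device=obs_emb.device)
        x = x + model(obs_emb, x, t) / steps
    return x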

User Interface Devices and Control

[Figure: Participants]

Our platform, built on top of M4Bench, integrates multiple input devices to enable intuitive human–robot interaction. Participants control the robots using a combination of EEG signals (with the motor imagery paradigm), EMG activity (muscle signals), and eye-tracking (gaze selection).

These signals are mapped via different user interfaces to increasing levels of robot autonomy: from Assisted Teleoperation, where the user directs each movement step by step, to Shared Autonomy, where decoded human intent guides the robot, up to Full Automation, where the system acts independently after goal selection. A simplified sketch of this mapping follows the level overview below.

Level 1
Assisted Teleoperation

Level 2
Shared Autonomy

Level 3
Full Automation
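
To make the mapping concrete, the Python sketch below shows one plausible way decoded inputs could be routed to robot commands at each autonomy level. It is not the study's exact control logic; the enums, field names, and command dictionaries are hypothetical placeholders.

from dataclasses import dataclass
from enum import Enum, auto


class Autonomy(Enum):
    ASSISTED_TELEOP = auto()   # user drives each motion step
    SHARED_AUTONOMY = auto()   # user intent guides autonomous skills
    FULL_AUTOMATION = auto()   # robot executes the task after goal selection


@dataclass
class DecodedInput:
    gaze_target: str    # object currently fixated, from eye tracking
    mi_class: str       # EEG motor imagery class, e.g. "left", "right" or "rest"
    emg_trigger: bool   # muscle activation used as a confirm signal


def to_command(level: Autonomy, x: DecodedInput) -> dict:
    """Route decoded user input to a (hypothetical) robot command."""
    if level is Autonomy.ASSISTED_TELEOP:
        # Every step is user-driven: gaze selects the target, motor imagery
        # chooses the next motion primitive, EMG confirms its execution.
        return {"target": x.gaze_target, "step": x.mi_class,
                "execute": x.emg_trigger}
    if level is Autonomy.SHARED_AUTONOMY:
        # The user specifies intent; the robot plans and performs the motion
        # autonomously once the intent is confirmed.
        return {"target": x.gaze_target,
                "skill": "manipulate" if x.mi_class != "rest" else "wait",
                "execute": x.emg_trigger}
    # Full Automation: a single confirmed goal triggers the whole task.
    return {"goal": x.gaze_target, "execute": x.emg_trigger}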

Shared Autonomy Study

We conducted a within-subjects user study to examine how different levels of robot autonomy influence human–robot collaboration in activities of daily living. Each participant interacted with the system across three autonomy levels (Assisted Teleoperation, Shared Autonomy, and Full Automation), presented in a randomized order.

Our study evaluated how autonomy shapes user experience, performance, cognitive workload, and perceived control when operating an assistive robot in a kitchen scenario. Our results highlight how humans adapt to increasing autonomy and how shared control can balance efficiency and engagement.

[Figure: Study pipeline]

Results

Our study revealed clear differences across the three autonomy levels. Higher autonomy consistently improved task performance: Shared Autonomy and Full Automation required less user interaction, reduced workload, and enabled smoother task execution compared to Assisted Teleoperation. Participants reported the highest cognitive and physical effort during Assisted Teleoperation, highlighting the difficulty of fully manual gaze–EEG control.

Usability also increased with autonomy, with system usability scores rising from Assisted Teleoperation to Shared Autonomy and Full Automation. Measures of agency and ownership showed that Assisted Teleoperation and Shared Autonomy provided a stronger sense of control and involvement, while Full Automation offered the greatest efficiency but at the cost of reduced direct control. Qualitative feedback confirmed that participants appreciated the balance offered by Shared Autonomy, recognized Full Automation as ideal for routine tasks, and found Assisted Teleoperation demanding and effortful.

[Figures: NASA Task Load Index (NASA-TLX), System Usability Scale (SUS), and Sense of Agency Rating Scale (SoARS) results]
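
For reference, the System Usability Scale reported above follows a standard scoring procedure, sketched below. The example responses are illustrative and not taken from the study.

def sus_score(responses):
    """Standard System Usability Scale scoring for 10 items rated 1 to 5.

    Odd-numbered items are positively worded (contribution = response - 1),
    even-numbered items are negatively worded (contribution = 5 - response);
    the summed contributions are scaled to a 0-100 range."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)   # i is 0-based
                for i, r in enumerate(responses))
    return total * 2.5


print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # illustrative responses -> 85.0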

Takeaways

Assisted Teleoperation

The most demanding mode, requiring continuous attention and fine control. Participants felt strongly involved, but the effort and fatigue were noticeably higher.

Shared Autonomy

The most balanced mode. Users remained in control while receiving meaningful assistance, resulting in smoother interaction, lower workload, and a natural sense of agency.

Full Automation

The fastest and most efficient level. Workload was minimal, though participants reported reduced direct control as the robot handled most of the task independently.

General Findings

Higher autonomy improved performance and reduced workload, while lower autonomy increased engagement and sense of control. Overall, Shared Autonomy emerged as the most appreciated balance between effort and efficiency.

Citation

@article{douglas2026levels,
    title     = {Levels of shared autonomy in brain-robot interfaces: enabling multi-robot multi-human collaboration for activities of daily living},
    author    = {Douglas, Hannah and Di Vincenzo, Marina and Dossa, Rousslan Fernand Julien and Nunziante, Luca and Sujit, Shivakanth and Arulkumaran, Kai},
    journal   = {Frontiers in Human Neuroscience},
    volume    = {19},
    pages     = {1718713},
    publisher = {Frontiers}
}

Meet the Team

Hannah Kodama Douglas
Marina Di Vincenzo
Rousslan Dossa, Ph.D.
Luca Nunziante
Shivakanth Sujit
Kai Arulkumaran, Ph.D.

Funding

This work was supported by JST under Moonshot R&D Grant Number JPMJMS2012.

Araya Internet of Brains