Overview
As the development of automated driving systems (ADS) accelerates, the safety of human-system interaction during vehicle control transitions remains one of the most pressing challenges for automation researchers and policymakers. This study, led by Dr. Camila Correa-Julian at the UCLA Risk Sciences Institute, explores how human drivers and vehicle automation systems interact during takeover events in Level 3 ADS, when responsibility for driving shifts from the system back to the human driver.
Using a combination of driving simulation experiments and causal modeling techniques, this research investigates how environmental complexity and system warnings shape driver performance, workload, and safety outcomes. The study represents a significant step toward quantifying human reliability and teamwork in semi-automated vehicles—laying the foundation for safer design and regulation of shared-autonomy systems.
Research Approach
The study introduces a Human-Autonomy Team (HAT) performance model developed within the framework of Human Reliability Analysis (HRA)—a methodology commonly used in high-risk industries such as aviation and nuclear energy.
A Bayesian Belief Network (BBN) was constructed to represent the causal relationships between human and system factors influencing takeover performance. Controlled simulator-based experiments were then conducted to empirically validate the model.
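The structure of such a network can be illustrated with a minimal sketch: a collision node conditioned on two scenario factors, with inference done by direct enumeration. All probability values below are hypothetical placeholders for illustration, not the study's estimated parameters.

```python
# Minimal two-parent Bayesian Belief Network node, evaluated by
# direct enumeration (no external libraries). All numbers are
# hypothetical placeholders, NOT the study's estimates.

# Priors over the two scenario factors.
p_warning = {"yes": 0.5, "no": 0.5}      # takeover warning issued?
p_traffic = {"low": 0.5, "high": 0.5}    # traffic complexity

# CPT: P(collision = True | warning, traffic) -- hypothetical values.
p_collision = {
    ("yes", "low"): 0.02,
    ("yes", "high"): 0.10,
    ("no", "low"): 0.20,
    ("no", "high"): 0.50,
}

def p_collision_given_warning(warning: str) -> float:
    """Marginalize traffic complexity out of the collision CPT."""
    return sum(p_traffic[t] * p_collision[(warning, t)] for t in p_traffic)

print(f"P(collision | warning)    = {p_collision_given_warning('yes'):.2f}")
print(f"P(collision | no warning) = {p_collision_given_warning('no'):.2f}")
```

A full model would add intermediate nodes (workload, reaction time, intervention quality) between the scenario factors and the outcome, but the conditional-probability machinery is the same.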
Seventy-two participants completed controlled driving scenarios in a Level 3 ADS simulator built on the OpenCDA open-source platform.
Scenarios varied by two main factors:
(A) Takeover Request/Warning Availability
(B) Traffic Complexity
Data collection integrated simulation telemetry (e.g., reaction times, braking, steering behavior) with subjective measures such as trust in automation and perceived workload (NASA-TLX).
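As a sketch of how such a record might be assembled, the snippet below pairs invented telemetry values with a Raw (unweighted) NASA-TLX score, which is the mean of the six standard 0–100 subscale ratings. The field names and sample values are assumptions for illustration only.

```python
# Hedged sketch: pairing simulator telemetry with a Raw NASA-TLX
# workload score. Subscale names follow the standard NASA-TLX
# instrument; the telemetry fields and sample ratings are invented.

TLX_SUBSCALES = (
    "mental_demand", "physical_demand", "temporal_demand",
    "performance", "effort", "frustration",
)

def raw_tlx(ratings: dict) -> float:
    """Raw (unweighted) NASA-TLX: mean of the six 0-100 subscale ratings."""
    return sum(ratings[s] for s in TLX_SUBSCALES) / len(TLX_SUBSCALES)

# One participant's record, pairing objective and subjective measures.
participant = {
    "reaction_time_s": 2.4,    # time from warning to first driver input
    "max_brake_pct": 65.0,     # peak brake pedal application
    "tlx": {
        "mental_demand": 80, "physical_demand": 30,
        "temporal_demand": 70, "performance": 40,
        "effort": 60, "frustration": 55,
    },
}

score = raw_tlx(participant["tlx"])
print(f"Raw TLX workload: {score:.1f} / 100")
```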
Key Findings
Warnings improve safety. Drivers who received timely takeover warnings had significantly lower collision probabilities (6%) compared to those without warnings (up to 52%). Warnings increased time-to-collision and reduced the likelihood of severe braking or tailgating.
Traffic complexity drives risk. High traffic environments amplified mental workload, steering variability, and human intervention rates—often outweighing the benefits of system warnings.
Cognitive load matters. Over 70% of participants reported high mental demand during takeovers. Complex traffic and ambiguous system behavior increased frustration and decreased trust in the ADS.
Trust in automation is conditional. Participants were generally comfortable delegating control to the ADS under low-demand conditions but preferred to intervene in complex or uncertain environments.
Team performance modeling shows promise. The Bayesian model effectively captured the relationships among human, environmental, and system factors, supporting future integration of risk-informed metrics into ADS safety evaluation frameworks.
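The time-to-collision (TTC) metric cited in the findings above can be sketched in a few lines: the gap to the lead vehicle divided by the closing speed, with no projected collision when the gap is not closing. The speeds and gap below are invented for illustration.

```python
# Hedged sketch of the time-to-collision (TTC) safety metric:
# gap to the lead vehicle divided by the closing speed.
# Input values in the example are invented.

def time_to_collision(gap_m: float, ego_speed_mps: float,
                      lead_speed_mps: float) -> float:
    """TTC in seconds; returns infinity when the gap is not closing."""
    closing = ego_speed_mps - lead_speed_mps
    if closing <= 0:
        return float("inf")   # not closing: no projected collision
    return gap_m / closing

# Ego at 30 m/s approaching a lead vehicle at 25 m/s, 40 m ahead.
print(time_to_collision(40.0, 30.0, 25.0))  # -> 8.0 seconds
```

Larger TTC values indicate a larger safety margin, which is why a warning that prompts earlier driver response shows up as an increase in TTC.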
Implications
This research highlights the importance of human-centered design and risk modeling in the path toward higher levels of vehicle automation. Even as technology advances, human oversight remains a critical safety layer. By integrating causal models with empirical human-in-the-loop data, developers and regulators can better anticipate system failures, optimize takeover alerts, and design safer handover protocols.
Future work will focus on refining the BBN model to predict the probability of collision or near-miss events based on driver profiles and scenario conditions, offering a proactive tool for human-system reliability assessment.
Project Team
Lead Researcher: Dr. Camila Correa-Julian, Postdoctoral Scholar, UCLA Risk Sciences Institute
Collaborating Institutions: UC Institute of Transportation Studies, UC Davis MoSAIC Initiative (Mobility Science, Automation, and Inclusion Center)
Citation
Correa-Julian, C. (2025). Empirical Validation of a Human-Autonomy Team Model for Level 3 Automated Driving Systems. UCLA Risk Sciences Institute. (Review draft; inquire for access.)