Model-Centric Safety Assurance for AI-Driven Autonomous Vehicles

Learn how AI-driven autonomous vehicles are assured for safety through requirements, evaluation, verification, runtime monitoring, and standards-based evidence.

Micro-credential developed in partnership with

Model-Centric Safety Assurance for AI-Driven Autonomous Vehicles focuses on how safety is established and maintained for AV AI systems once they are placed in a safety-critical context. It explains how safety requirements, acceptance criteria, KPIs, stress testing, verification and validation, runtime monitoring, fallback strategies, and safety cases work together to support safe operation within the Operational Design Domain (ODD). The course emphasizes how evidence is gathered, interpreted, and used to justify that AI-enabled vehicle behaviour remains acceptably safe across development, deployment, and ongoing operation. It also connects this assurance view to standards and governance topics such as ISO 26262, ISO 21448/SOTIF, ISO/PAS 8800, organizational safety management, and public trust.

Topics

Autonomous Vehicle, Hazard Assessment, ML Model

Intermediate

Price:

Included in subscription

Time to complete:

30 hours

Outcome

+3000 points

What you'll learn

  • Explain how the AV stack and the Operational Design Domain (ODD) shape safety expectations for autonomous vehicle operation

  • Connect hazards, HARA, safety goals, and acceptance criteria to concrete safety requirements for AV systems

  • Evaluate AI model behaviour using safety-oriented concepts such as missed detections, false alarms, latency, uncertainty, confidence, and residual risk

  • Interpret how safety KPIs and acceptance thresholds link model behaviour to system-level safety outcomes

  • Describe how stress testing, scenario-based simulation, hardware-in-the-loop, integration testing, and regression testing contribute to AV safety assurance

  • Understand how runtime monitoring, anomaly detection, drift handling, and fallback strategies support continuous assurance after deployment

  • Explain how safety cases are built and updated using test evidence, field data, incidents, and near-misses

  • Recognize how standards, governance, compliance, and public trust shape the safe deployment of AI-driven autonomous vehicles

Program outline

Developed with top post-secondary institutions and leading organizations, this course lets you earn a credential you can share online upon completion.

Industry-recognized

Downloadable certificate
