Are we too hard on artificial intelligence for autonomous driving?

I recently attended and presented at the “Implementation of ISO 26262 & SOTIF” conference in Detroit, subtitled “Taking an Integrated Approach to Automotive Safety”. After three days, my head was spinning from the sheer number of ISO, SAE, and other standards. And at the end of day two, after yet another example that fooled self-driving prototypes, I sighed and asked whether anyone else felt bad for those AIs. It’s like asking the AI to do more than any human ever could. My question elicited a few laughs but sparked an honest discussion about how to quantify self-driving capabilities.

Automotive IQ, a division of IQPC, organized this conference and drew a crowd of approximately 70 people. It was an intimate setting with great networking among a mix of researchers and industry participants.

Day one was a discussion day, dedicated to the official release of ISO 21448, “Safety of the Intended Functionality” (SOTIF). It sits somewhat above ISO 26262 and guides the practical design, verification, and validation measures and the operations-phase activities necessary to achieve and maintain SOTIF. In discussions with other vendors whose tools address self-driving scenarios higher up the design chain, I heard that customers consider SOTIF important and that some vendors in the area of scenario modeling have offerings that could help.

Day two began with a panel discussion moderated by Rami Debouk, GM Technical Fellow for Systems Safety, with Mathieu Blazy-Winning, Director of Functional Safety, NXP Semiconductors, and Philip Koopman, Associate Professor, Carnegie Mellon University.

Philip Koopman described a standards-based systems engineering approach in which critical vehicle safety functions are addressed by FMVSS and NCAP, cybersecurity by SAE J3061 and ISO/SAE 21434, and equipment faults by the functional safety mechanisms of ISO 26262. For environmental and on-board cases of dynamic driving functions, ISO 21448 and SaFAD/ISO TR 4804 should be applied, and ANSI/UL 4600 addresses the system safety case for highly automated vehicles beyond dynamic driving. And let’s not forget road test safety, covered by SAE J3018.

From an OEM perspective, that’s a lot of standards to cover.

Mathieu Blazy-Winning of NXP explained how, as a Tier 2 supplier, they strive to comply with four standards to ensure functional safety for automotive and industrial applications:

  • IATF 16949 – harmonization of different assessment and certification systems worldwide in the automotive supply chain.
  • ISO 26262:2018 – notably using the hazard analysis and risk assessment (HARA) built into the standard.
  • IEC 61508:2010 – allowing more flexibility than ISO 26262 for hazard and risk analysis to assess hazards, including common techniques in ISO 12100.
  • Automotive SPICE – “Automotive Software Process Improvement and Capability Determination” to assess the performance of automotive OEM supplier development processes.
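As an aside, the HARA mentioned above classifies each hazardous event by severity (S1–S3), exposure (E1–E4), and controllability (C1–C3) and derives an ASIL from the lookup table in ISO 26262-3. A minimal sketch of that derivation, relying on the commonly cited observation that the table’s result depends only on the sum S+E+C (the function name is mine, not from the standard):

```python
def derive_asil(s: int, e: int, c: int) -> str:
    """Derive the ASIL for a hazardous event per ISO 26262-3.

    s: severity class       (1..3, i.e. S1..S3)
    e: exposure class       (1..4, i.e. E1..E4)
    c: controllability class (1..3, i.e. C1..C3)

    The standard defines the result as a lookup table; the table is
    equivalent to mapping the sum s+e+c to QM/A/B/C/D.
    """
    if not (1 <= s <= 3 and 1 <= e <= 4 and 1 <= c <= 3):
        raise ValueError("S must be 1-3, E must be 1-4, C must be 1-3")
    total = s + e + c
    if total <= 6:
        return "QM"                       # quality management only, no ASIL
    return "ASIL " + "ABCD"[total - 7]    # 7 -> A, 8 -> B, 9 -> C, 10 -> D

# e.g. a severe, frequent, barely controllable hazard: S3, E4, C3
print(derive_asil(3, 4, 3))  # -> ASIL D
print(derive_asil(1, 1, 1))  # -> QM
```

The real standard presents this as a table rather than a formula, but the sum rule reproduces every cell and makes the "one class down, one ASIL down" structure obvious.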

In addition to compliance with these, Mathieu’s team also monitors ISO 21448 SOTIF, ISO TR 9839 “Predictive Maintenance”, UL 4600 “Evaluation of Autonomous Products”, J3131 “Definition of Terms for Autonomous”, IEEE P2851 “Data Format for Interoperability” and the Accellera Functional Safety Working Group for automation, interoperability and traceability.

Is your head still spinning?

During the discussion, I asked how vendors assess the return on investment for adopting all of these standards and whether some are more important than others. Professor Koopman referred to the hierarchy of concurrent safety needs presented above. It is not possible to take shortcuts; the safety aspects build on each other. That’s why requirements trickle down the design chain to IP vendors and need to be traced.

Safety needs are complex and differ at each level of the design chain. And they build on each other.

On the third day, my presentation as a design-chain vendor focused on the capabilities customers expect in the area of Network-on-Chip (NoC) interconnects, which require built-in safety features when used in automobiles. I also re-emphasized the view of scalability as a looming problem in safety analyses that our Fellow and Chief Safety Officer, Stefano Lorenzini, mapped out earlier this year. Additionally, I addressed the issue of requirements tracing.
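In practice, requirements tracing of the kind discussed here boils down to maintaining a traceability matrix and flagging every requirement that flows down the design chain without a verification artifact linked back to it. A toy sketch of that check (all requirement and artifact IDs below are invented for illustration):

```python
# Toy traceability check: every requirement flowing down the design
# chain must be linked to at least one verification artifact.
# All IDs are invented for illustration only.
requirements = {"SYS-001", "SYS-002", "NOC-010", "NOC-011"}
trace_links = {                      # requirement -> verification artifacts
    "SYS-001": ["TEST-100"],
    "SYS-002": ["TEST-101", "REVIEW-7"],
    "NOC-010": ["TEST-205"],
}

def untraced(reqs, links):
    """Return requirements that have no linked verification artifact."""
    return sorted(r for r in reqs if not links.get(r))

print(untraced(requirements, trace_links))  # -> ['NOC-011']
```

Real flows use requirements-management tooling rather than dictionaries, but the gap analysis is the same: an empty link set at any level of the chain is a hole in the safety case.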

Much of the discussion during the conference focused on AI and its related safety aspects. We discussed examples that lead to bad AI behavior: sunlight at a particular angle, for instance, changes the perception of a traffic light so that multiple lights appear to be on. Most humans, myself included, would probably fail here too. According to NCAP, Europe recorded 51 road deaths per million inhabitants in 2019, a 6% reduction in road fatalities over the past five years alone. Autonomous technologies aim to steepen this downward curve. A recent study by IDTech found that poor system performance was the cause of just 1% of self-driving vehicle crashes, or 2 out of 83 cases.

So, are we being too hard on the AI, forcing it to conform to all these standards? Most likely. But it is still necessary, given that responsibility and blame for accidents shift from humans to machines and their creators. And there are still many questions to answer.

Exciting times ahead, let’s make sure they’re safe!

Frank Schirrmeister


Frank Schirrmeister is Vice President of Solutions and Business Development at Arteris. He leads business in the automotive, data center, 5G/6G communications, mobile, and aerospace verticals, as well as in the horizontal technology areas of artificial intelligence, machine learning, and security. Prior to Arteris, Schirrmeister held various leadership positions at Cadence Design Systems, Synopsys, and Imperas, focusing on product marketing and management, solutions, strategic ecosystem partner initiatives, and customer engagement.
