
Futureproofing Automotive AI to Manage Lifetime Cost


Cars and trucks are expected to continue their 10- to 20-year lifetimes for the foreseeable future, with corresponding implications for electronics reliability, as we already know. More challenging is managing long service times for Automotive AI systems, especially given the rapid evolution of AI technology and the need to push updates in response to problems discovered in the field or to regulatory changes. Recalls to upgrade hardware would be a very expensive option. Equally, Automotive AI software and model service updates will depend on scalable systems to support service technicians handling many product lines across many locations. Hardware and software must be scalable both to support and simplify updates over long vehicle lifetimes and to support advancing vehicle architectures for new cars.

Automatic parking

Futureproofing Automotive AI hardware

Automotive AI is quickly spreading throughout the car: in obvious functions such as ADAS using intelligent vision and ranging around the car, but now also in intelligent battery management for EVs, in cabin monitoring systems that track driver awareness and occupant status, and in infotainment for intelligent noise cancellation. These instances run on custom AI processors distributed around the car and are what we must futureproof against evolving auto architectures and Automotive AI model needs.

 

“Evolving” is an important constraint here. Occasionally AI foundation models take big revolutionary leaps, such as from CNNs to transformers, but more commonly models evolve in smaller steps over car lifetimes and across product lines. Taking the common view of an AI model as a graph, many sub-graphs may be reused, a few may be added or modified, and sequential stages between graphs may be refined. If the hardware platform can scale to handle such graphs, from classic DNNs to generative models, at competitive performance and power, it should be able to handle this level of model evolution. With one important caveat, which we will get to in the next section.
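As a rough illustration of this kind of evolution (a generic PyTorch sketch, not Ceva tooling), the snippet below reuses a backbone sub-graph unchanged between model generations and replaces only the head sub-graph, so an update only needs to ship the modified portion.

```python
# Minimal sketch: model evolution that touches only one sub-graph.
# The backbone is reused as-is; only the head is replaced in the new generation.
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Reused sub-graph: unchanged across model generations."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
    def forward(self, x):
        return self.features(x)

class ModelV1(nn.Module):
    """First generation: backbone plus a simple classifier head."""
    def __init__(self):
        super().__init__()
        self.backbone = Backbone()
        self.head = nn.Linear(32, 10)
    def forward(self, x):
        return self.head(self.backbone(x))

class ModelV2(nn.Module):
    """Evolved model: same backbone sub-graph, refined head sub-graph."""
    def __init__(self, pretrained_backbone: Backbone):
        super().__init__()
        self.backbone = pretrained_backbone       # reused sub-graph and weights
        self.head = nn.Sequential(                # only this part changes
            nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 12),
        )
    def forward(self, x):
        return self.head(self.backbone(x))

v1 = ModelV1()
v2 = ModelV2(v1.backbone)                         # the update ships only the new head
print(v2(torch.randn(1, 3, 64, 64)).shape)        # torch.Size([1, 12])
```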

Ceva's NeuPro-M block diagram

 

Ceva NeuPro-M is a very powerful NPU IP family, providing a foundation for embedded Automotive AI design. It is a scalable platform configurable to meet exactly these needs. NeuPro-M can scale from one to eight engines running in parallel, each hosting a mixed-precision neural engine, a complementary activation unit and a state-of-the-art sparsity engine that manages any form of sparsity across weights and data. Together with a vector processing unit (VPU), these all share common memory local to that engine to maximize local throughput, and a local controller orchestrates flow between these functions. A common subsystem provides top-level orchestration between engines, the next level of shared memory, compression and decompression for weights and data, and interfaces to the host design.
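To make the scaling axes concrete, here is a purely illustrative Python record of the configuration choices described above. The field names and values are hypothetical and this is not the NeuPro-M SDK or its real parameter set.

```python
# Illustrative only: a hypothetical configuration record mirroring the scaling
# axes described in the text. NOT the NeuPro-M SDK or its actual parameters.
from dataclasses import dataclass, field

@dataclass
class EngineConfig:
    """One engine: neural engine, activation unit, sparsity engine and VPU
    sharing local memory under a local controller."""
    neural_precision_bits: tuple = (4, 8, 16)   # mixed-precision options (assumed)
    sparsity_enabled: bool = True               # sparsity across weights and data
    vpu_enabled: bool = True                    # programmable vector unit
    local_memory_kb: int = 1024                 # hypothetical size

@dataclass
class NpuConfig:
    """Top-level subsystem: 1-8 engines plus shared memory, (de)compression
    for weights and data, and host interfaces."""
    num_engines: int = 4                        # scalable from 1 to 8
    shared_memory_kb: int = 4096                # hypothetical size
    weight_compression: bool = True
    engines: list = field(default_factory=list)

    def __post_init__(self):
        assert 1 <= self.num_engines <= 8
        if not self.engines:
            self.engines = [EngineConfig() for _ in range(self.num_engines)]

cfg = NpuConfig(num_engines=8)
print(len(cfg.engines), "engines configured")
```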

VPU – The Key to High Performance Customization

Product differentiation depends on customization, in Automotive AI as much as anywhere else in design. Some improvements are possible simply in how the product developer designs the graph using operations natively supported in the Automotive AI hardware. However, big advantages often depend on adding custom functions that are not natively supported by the AI processor hardware. Sensor fusion provides a good example.

Think about automatic parking. This must fuse inputs from ultrasonic, radar and video sensors (or a subset of these) to figure out a workable path to reverse into a parking space. The sensors will each do their job in determining proximity to obstacles, but there is no out-of-the-box solution for fusing these inputs together to determine a path and next steps in steering and moving forward or back. These are algorithms that each car maker must design.

The algorithm must figure out curved paths for the car relative to obstacles and other sensed information in a 2D (or 3D) space while understanding velocities and the possibility that front or rear obstacles might move. This requires vector-based analysis.
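As a toy illustration of that kind of vector math (a hypothetical NumPy sketch, not a production parking algorithm), the snippet below fuses made-up obstacle estimates from ultrasonic, radar and camera sensors into a single 2D set, then tests whether candidate reversing arcs keep a safe clearance from them.

```python
# Hypothetical sketch: fuse per-sensor obstacle estimates into a 2D map, then
# test whether points along a candidate reversing arc stay clear of them.
import numpy as np

# Obstacle positions (x, y in metres, car at origin) per sensor; values made up.
ultrasonic = np.array([[0.8, -1.5], [1.0, -1.6]])
radar      = np.array([[0.9, -1.55], [3.0,  2.0]])
camera     = np.array([[0.85, -1.5]])

# Naive fusion: cluster detections within 0.3 m of each other and average them
# (a real system would weight by sensor confidence and track obstacle velocity).
detections = np.vstack([ultrasonic, radar, camera])
fused, used = [], np.zeros(len(detections), dtype=bool)
for i, d in enumerate(detections):
    if used[i]:
        continue
    close = np.linalg.norm(detections - d, axis=1) < 0.3
    fused.append(detections[close].mean(axis=0))
    used |= close
fused = np.array(fused)

def arc_points(radius, sweep_rad, n=50):
    """Points along a circular reversing arc of given radius and sweep angle."""
    t = np.linspace(0.0, sweep_rad, n)
    return np.column_stack([radius * np.sin(t), -radius * (1 - np.cos(t))])

def path_is_clear(radius, sweep_rad, clearance=0.5):
    """True if every arc point keeps at least `clearance` metres from every obstacle."""
    pts = arc_points(radius, sweep_rad)
    dists = np.linalg.norm(pts[:, None, :] - fused[None, :, :], axis=2)
    return bool(dists.min() >= clearance)

for r in (3.0, 4.5, 6.0):
    print(f"turning radius {r} m clear: {path_is_clear(r, np.pi / 3)}")
```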

Many processors would offload such a calculation to an external DSP or GPU, with a significant and undesirable latency overhead considering the car is moving in a very constrained space. In contrast, in the NeuPro-M NPU IP a programmable VPU is embedded in each engine. Parts of a fusion algorithm can be programmed to run on it with performance comparable to other neural operations running in the same engine, since they share the same local memory, delivering fast, low-latency sensing plus fusion. In fact, at Ceva we have built our SensPro2 sensor hub DSP around this capability, so it can be used in conjunction with a custom AI engine for an efficient chip and system design.

This level of performance for custom algorithms extending an Automotive AI pipeline provides strong futureproofing against almost any advance you could imagine, whether in core DNN layers or in extensions such as fusion.

Futureproofing for evolving AI stacks

Futureproofing the hardware is a start, but it must also be supported by futureproofed Automotive AI models and software stacks. Ceva has extensive experience in supporting interfaces to all the standard networks, together with compiler and optimization functions to map them to NeuPro-M-based hardware.

We are proud to share that our software stack and models now leverage open-source elements, including TVM, ONNX Runtime and DeepSpeed. This commitment to open-source components is your assurance that your interface to evolving models and emerging capabilities in the AI inference and optimization ecosystem will track seamlessly with NeuPro-M-based implementations.
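For example, a model exported to the ONNX interchange format can be exercised with the open-source ONNX Runtime before any vendor-specific compilation for the target NPU. The sketch below is a generic ONNX Runtime usage example, not the Ceva stack itself, and the model file name is a placeholder.

```python
# Generic ONNX Runtime sketch: load an exported model and run one inference.
# "driver_monitor.onnx" is a hypothetical placeholder file.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("driver_monitor.onnx",
                               providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)   # stand-in camera frame
outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)
```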

Frequently Asked Questions

What is the future of AI in vehicles?

AI is ubiquitous in modern vehicle design, improving safety and driver support while at the same time adding more convenience and appeal to driver and passenger experiences. These capabilities are now believed to count for more in buyer attraction than traditional mechanical differentiators such as horsepower and passenger seating.

How do smart cars use AI?

AI is everywhere! In sensing proximity to other cars, pedestrians, and other obstacles. In lane-keeping, self-parking support, and detecting small children and dogs when reversing. In the cabin, monitoring driver attention and checking for children still in the car when you leave to go shopping. In voice-based commands for navigation and other infotainment features.

How will AI affect cars?

Power and cost are two important considerations. Increased power consumption reduces range for EVs and, through increased load, affects even conventional ICE vehicles. Multiple AI applications around the car can add substantially to the already significant cost of modern vehicles, pushing many out of reach of consumers. Equally important is retaining long-term benefit: vehicles in service for 10-20 years must remain current with safety and regulatory updates.

