Nvidia debuts new AI models and tools for robotics, smart cities and autonomous vehicles

Wednesday 11 June 2025 23:43

Nvidia Corp. today made several announcements aimed at empowering developers and industry professionals building autonomous vehicles, robot fleets and smart cities.

During Nvidia GTC Paris at VivaTech, the company’s technology conference, Nvidia showcased Nvidia Drive, an autonomous vehicle development platform now in production. It’s enabling leading brands, including some of Europe’s premier automakers, to build self-driving cars.

Drive consists of several interconnected systems, including DGX systems and graphics processing units designed for training artificial intelligence models and developing artificial intelligence software. It also incorporates the Nvidia Omniverse and Cosmos platforms, which are used for simulation and synthetic data generation to facilitate the testing and validation of autonomous driving scenarios. There’s also the AGX in-vehicle computer, responsible for processing real-time sensor data to ensure safe and automated driving capabilities.

Nvidia said Drive is a unified software stack that uses deep learning and AI foundation models trained on large datasets of human driving behavior to process sensor data directly, eliminating the need for predefined rules.

“AV software development has traditionally been based on a modular approach, with separate components for perception, prediction, planning and control,” Xinzhou Wu, Nvidia vice president of auto, said in a blog post. “While there are benefits to this approach, it also opens up potential inefficiencies and errors that can hinder development at scale.”

The company said safety is an important component of all autonomous vehicle development. Earlier this year, Nvidia launched Nvidia Halos, a safety system that integrates hardware, software, AI models and tools to ensure safe AV development and deployment from cloud to car.

Halos includes an advanced AI Systems Inspection Lab with membership that includes Continental AG, Ficosa International S.A., OmniVision Technologies, Inc., On Semiconductor Corp. and Sony Semiconductor Solutions Corp. Newly announced automotive leaders joining to verify the safe integration of their products with Nvidia technologies to advance AV safety include Robert Bosch GmbH, Easyrain i.S.p.A. and Nuro Inc.

Nvidia releases AI models and developer tools for smart cars

To help accelerate the development of next-generation autonomous vehicle architectures, Nvidia released Cosmos Predict-2, a new world foundation model with improved world prediction capabilities for high-quality synthetic data generation.

World foundation models are a type of generative AI model that understands the dynamics of the real world, including physical and spatial properties. They can be used to represent and predict dynamics such as motions, force and spatial relationships from sensor data. That means they can be used to assist in the training of robot and AV models by generating simulations of the real world, predicting human behavior and guiding robots, thus increasing safety and accuracy.
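As a loose illustration of the kind of dynamics such a model captures, the sketch below rolls a tracked object's state forward in time with a constant-velocity assumption. The `ObjectState` class and `predict_next_state` function are hypothetical stand-ins for a learned world model, not part of any Nvidia API.

```python
from dataclasses import dataclass

@dataclass
class ObjectState:
    """Position (m) and velocity (m/s) of a tracked object in the scene."""
    x: float
    y: float
    vx: float
    vy: float

def predict_next_state(state: ObjectState, dt: float) -> ObjectState:
    """Constant-velocity rollout: a toy stand-in for the learned dynamics
    a world foundation model would infer from sensor data."""
    return ObjectState(
        x=state.x + state.vx * dt,
        y=state.y + state.vy * dt,
        vx=state.vx,
        vy=state.vy,
    )

# Roll a pedestrian's state forward by half a second.
ped = ObjectState(x=10.0, y=2.0, vx=-1.5, vy=0.0)
future = predict_next_state(ped, dt=0.5)
print(future.x, future.y)  # 9.25 2.0
```

A real world foundation model replaces the hand-written rollout with dynamics learned from video and sensor data, which is what lets it generalize to forces, occlusions and human behavior that no fixed formula covers.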

The Nvidia Research team post-trained the Cosmos models on 20,000 hours of real-world driving data. By using the AV-specific models to generate multiview video data, the company said, the team improved model performance when representing challenging conditions such as fog and rain.

In addition to Cosmos Predict-2, Nvidia released Cosmos Transfer as an Nvidia NIM microservice, which allows easy deployment on data center GPUs. The microservice augments datasets and generates photorealistic videos using structured input or ground-truth simulations from Nvidia Omniverse, the company’s 3D simulation platform. In combination, it works with the NuRec Fixer model to inpaint and resolve gaps in reconstructed AV data.

CARLA, short for Car Learning to Act, the world’s leading open-source AV simulator, has integrated Cosmos Transfer and Nvidia NuRec into its latest release. By doing so, CARLA’s user base of more than 150,000 AV developers can now render generative simulation scenes and viewpoints with high fidelity and generate endless variations of lighting, weather and terrain using simple prompts.
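A rough sketch of what enumerating scene variations from simple prompts might look like at the text level; the condition lists and prompt template below are illustrative assumptions, not CARLA or Cosmos Transfer API calls.

```python
from itertools import product

# Hypothetical condition lists for illustration only.
LIGHTING = ["dawn", "noon", "dusk", "night"]
WEATHER = ["clear", "fog", "rain"]
TERRAIN = ["urban", "suburban", "rural highway"]

def scene_prompts():
    """Yield a simple text prompt for every combination of conditions."""
    for lighting, weather, terrain in product(LIGHTING, WEATHER, TERRAIN):
        yield f"{terrain} scene at {lighting} in {weather} conditions"

prompts = list(scene_prompts())
print(len(prompts))   # 36 prompts from 4 x 3 x 3 conditions
print(prompts[0])     # urban scene at dawn in clear conditions
```

In the actual pipeline, prompts like these would condition a generative model such as Cosmos Transfer to re-render a single recorded drive under each variation, multiplying the coverage of one dataset.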

A blueprint for smart city AI

As cities continue to grow and planners need to address issues such as sustainable services, many are turning to digital twins and AI models to get the job done.

This very scenario was addressed by IBM Corp.’s Smarter Cities initiative in 2014, and the need is only expected to expand as urban populations are set to double by 2050.

Building a digital twin of a city and testing smart city AI agents can be a daunting task: the process is complex and resource-intensive because of the technical and operational challenges involved.

To help deal with these challenges, Nvidia today announced the Nvidia Omniverse Blueprint for smart city AI, a reference framework that combines Nvidia Omniverse, Cosmos, NeMo and Metropolis platforms.

Using the blueprint, developers can generate simulation-ready, or SimReady, photorealistic digital twins of cities to build and test AI agents that can monitor and optimize city operations. Leading companies including 22nd Century Group, AVES Reality GmbH, Bentley Motors Ltd. and Milestone Systems Inc. are among the first to use the new blueprint.

Linker Vision Corp. was among the first to partner with Nvidia to deploy smart city digital twins and AI agents for Kaohsiung City, Taiwan, working with digital twin company AVES Reality. Linker Vision uses aerial imagery of the city and its infrastructure to generate 3D geometry and, ultimately, SimReady digital twins. It now scales to analyze 50,000 video streams in real time with generative AI to understand and narrate complex urban events such as floods and traffic accidents.