
Helm.ai Introduces VidGen-2: Generative AI for Higher-Resolution, More Realistic Multi-Camera Video for Autonomous Driving

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Helm.ai, a leading provider of advanced AI software for high-end ADAS, autonomous driving, and robotics automation, today announced the launch of VidGen-2, its next-generation generative AI model for producing highly realistic driving video sequences. VidGen-2 offers twice the resolution of its predecessor, VidGen-1, improved realism at 30 frames per second, and multi-camera support with twice the resolution per camera, providing automakers with a scalable and cost-effective solution for autonomous driving development and validation.

Trained on thousands of hours of diverse driving footage using NVIDIA H100 Tensor Core GPUs, VidGen-2 leverages Helm.ai’s innovative generative deep neural network (DNN) architectures and Deep Teaching™, an efficient unsupervised training method. It generates highly realistic video sequences at 696 x 696 resolution, double that of VidGen-1, at frame rates ranging from 5 to 30 fps. The model also improves video quality at 640 x 384 resolution and 30 fps, delivering smoother, more detailed simulations. VidGen-2 can generate videos without any input prompt, or from a single image or an input video supplied as the prompt.

VidGen-2 also supports multi-camera views, generating footage from three cameras at 640 x 384 resolution each. The model ensures self-consistency across all camera perspectives, providing accurate simulation for a variety of sensor configurations.

VidGen-2 generates driving-scene videos across multiple geographies, camera types, and vehicle perspectives. It not only produces highly realistic appearances and temporally consistent object motion, but also learns and reproduces human-like driving behaviors, simulating the motions of the ego-vehicle and surrounding agents in accordance with traffic rules. It creates a wide range of scenarios, including highway and urban driving, multiple vehicle types, pedestrians, cyclists, intersections, turns, weather conditions, and lighting variations. In multi-camera mode, scenes are generated consistently across all perspectives.

VidGen-2 gives automakers a significant scalability advantage over traditional non-AI simulators by enabling rapid asset generation and imbuing agents in simulations with sophisticated, real-life behaviors. Helm.ai’s approach not only reduces development time and cost but also closes the “sim-to-real” gap, offering a highly realistic and efficient solution that broadens the scope of simulation-based training and validation.

“The latest enhancements in VidGen-2 are designed to meet the complex needs of automakers developing autonomous driving technologies,” said Vladislav Voroninski, Helm.ai’s CEO and founder. “These advancements enable us to generate highly realistic driving scenarios while ensuring compatibility with a wide variety of automotive sensor stacks. The improvements made in VidGen-2 will also support advancements in our other foundation models, accelerating future developments across autonomous driving and robotics automation.”

About Helm.ai

Helm.ai develops next-generation AI software for ADAS, autonomous driving, and robotics automation. Founded in 2016 and headquartered in Redwood City, CA, the company reimagines AI software development to make scalable autonomous driving a reality. Helm.ai offers full-stack real-time AI solutions, including deep neural networks for highway and urban driving, end-to-end autonomous systems, and development and validation tools powered by Deep Teaching™ and generative AI. The company collaborates with global automakers on production-bound projects. For more information on Helm.ai, including products, SDK, and career opportunities, visit https://helm.ai or follow Helm.ai on LinkedIn.
