Helm.ai Announces VidGen-1: State of the Art Generative AI Video for Autonomous Driving

VidGen-1: Generative AI Video Model for Autonomous Driving

REDWOOD CITY, Calif.--(BUSINESS WIRE)--Helm.ai, a leading provider of advanced AI software for high-end ADAS, Level 4 autonomous driving, and robotic automation, today announced the launch of VidGen-1, a generative AI model that produces highly realistic video sequences of driving scenes for autonomous driving development and validation. The technology follows Helm.ai’s announcement of GenSim-1 for AI-generated labeled images and is significant both for prediction tasks and for generative simulation.

Trained on thousands of hours of diverse driving footage, Helm.ai’s generative AI video model combines innovative deep neural network (DNN) architectures with Deep Teaching, a highly efficient unsupervised training technology, to create realistic video sequences of driving scenes. The videos are produced at a resolution of 384 x 640, at variable frame rates of up to 30 frames per second, and at lengths of up to several minutes; they can be generated at random without an input prompt, or prompted with a single image or an input video.
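To make those prompting modes concrete, the sketch below models a request to a video generator with the specifications stated above. It is a minimal illustration under assumed names only; the class and fields are not part of Helm.ai’s actual SDK.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical request schema illustrating the three prompting modes
# described above (unconditional, single-image prompt, video prompt).
# These class and field names are illustrative assumptions, not part
# of Helm.ai's SDK.

@dataclass
class VideoGenRequest:
    width: int = 640                    # stated output resolution: 384 x 640
    height: int = 384
    fps: float = 30.0                   # variable frame rate, up to 30 fps
    num_frames: int = 900               # e.g., 30 seconds at 30 fps
    image_prompt: Optional[str] = None  # path to a single conditioning frame
    video_prompt: Optional[str] = None  # path to a conditioning clip

    def mode(self) -> str:
        """Classify the request by which prompt, if any, is supplied."""
        if self.video_prompt is not None:
            return "video-prompted"
        if self.image_prompt is not None:
            return "image-prompted"
        return "unconditional"

# Example: an unconditional clip, and one continuing from a real frame.
print(VideoGenRequest().mode())                          # unconditional
print(VideoGenRequest(image_prompt="frame.png").mode())  # image-prompted
```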

VidGen-1 can generate videos of driving scenes across different geographies and for multiple camera types and vehicle perspectives. The model not only produces highly realistic appearances and temporally consistent object motion, but also learns and reproduces human-like driving behaviors, generating motion for the ego-vehicle and surrounding agents that conforms to traffic rules. It simulates realistic video footage of scenarios across multiple cities internationally, encompassing urban and suburban environments; a variety of vehicles, pedestrians, bicyclists, intersections, and turns; weather conditions (e.g., rain, fog); illumination effects (e.g., glare, night driving); and even accurate reflections on wet road surfaces, reflective building walls, and the hood of the ego-vehicle.

Video data is the most information-rich sensory modality in autonomous driving, and it comes from the most cost-effective sensor: the camera. However, the high dimensionality of video data makes AI video generation a challenging task. Achieving high image quality while accurately modeling the dynamics of a moving scene, and hence overall video realism, is a well-known difficulty in video generation applications.
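A back-of-the-envelope calculation illustrates that dimensionality gap, using VidGen-1’s stated resolution and frame rate; the clip length and sentence length below are our own illustrative choices, not figures from the release.

```python
# Raw values a generator must produce for a short clip at VidGen-1's
# stated 384 x 640 resolution and 30 fps, versus tokens in a typical
# sentence. Clip and sentence lengths are illustrative assumptions.

height, width, channels = 384, 640, 3
fps, seconds = 30, 30
values_per_clip = height * width * channels * fps * seconds
print(f"30 s video: {values_per_clip:,} values")  # 663,552,000 values

words_per_sentence = 20
print(f"sentence:   {words_per_sentence} tokens")
```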

"We've made a technical breakthrough in generative AI for video to develop VidGen-1, setting a new bar in the autonomous driving domain. Combining our Deep Teaching technology, which we’ve been developing for years, with additional in-house innovation on generative DNN architectures results in a highly effective and scalable method for producing realistic AI-generated videos. Our technology is general and can be applied equally effectively to autonomous driving, robotics, and any other domain of video generation without change," said Helm.ai’s CEO and Co-Founder, Vladislav Voroninski.

VidGen-1 offers automakers significant scalability advantages over traditional non-AI simulation by enabling rapid asset generation and imbuing the agents in the simulation with sophisticated real-life behaviors. Helm.ai’s approach not only reduces development time and cost but also effectively closes the “sim-to-real” gap, providing a highly realistic and efficient solution that greatly widens the applicability of simulation-based training and validation.

"Predicting the next frame in a video is similar to predicting the next word in a sentence but much more high dimensional,” added Voroninski. “Generating realistic video sequences of a driving scene represents the most advanced form of prediction for autonomous driving, as it entails accurately modeling the appearance of the real world and includes both intent prediction and path planning as implicit sub-tasks at the highest level of the stack. This capability is crucial for autonomous driving because, fundamentally, driving is about predicting what will happen next."

About Helm.ai

Helm.ai is developing the next generation of AI software for high-end ADAS, Level 4 autonomous driving, and robotic automation. Founded in 2016 and headquartered in Redwood City, CA, the company has re-envisioned the approach to AI software development, aiming to make truly scalable autonomous driving a reality. For more information on Helm.ai, including its products, SDK, and open career opportunities, visit https://www.helm.ai/ or find Helm.ai on LinkedIn.

Contacts

Media Contact:
Satoko Nakayama
Helm.ai
press@helm.ai

Release Summary

VidGen-1, Helm.ai's generative AI video model, produces realistic video sequences of driving scenes for autonomous driving development and validation.
