Xiaomi has officially launched and open-sourced its new autonomous driving framework, Xiaomi OneVL. This innovative model aims to improve how self-driving systems understand, reason about, and predict complex road situations.

Xiaomi OneVL combines multiple AI technologies in one system
According to Xiaomi, OneVL is the industry’s first framework to integrate several key technologies—Vision-Language-Action (VLA), world models, and latent space inference—into a single cohesive system. Building on the reasoning strengths of the earlier XLA model, OneVL enhances both the speed and accuracy of inference in autonomous driving tasks.
Traditionally, autonomous driving research has treated VLA and world models as separate paths. VLA systems focus on interpreting traffic scenes and generating driving actions, while world models predict how those scenes might evolve over time. Xiaomi’s OneVL unifies these approaches through latent space reasoning, offering a more comprehensive understanding of dynamic driving environments.
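To make the idea concrete, here is a minimal toy sketch of what such a unification can look like: a shared latent encoder feeds both an action head (the VLA side) and a dynamics head (the world-model side), so the system can act on the current latent state and also "imagine" a future one and reason over it. This is purely illustrative under assumed toy dimensions; it is not Xiaomi's actual architecture, and all function and weight names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(observation, W_enc):
    # Map raw scene features into a shared latent space (hypothetical encoder).
    return np.tanh(observation @ W_enc)

def action_head(z, W_act):
    # VLA-style branch: propose a driving action from the latent state.
    return z @ W_act

def world_head(z, W_dyn):
    # World-model branch: predict the next latent state of the scene.
    return np.tanh(z @ W_dyn)

# Toy dimensions: 8-dim observation, 4-dim latent, 2-dim action.
W_enc = rng.normal(size=(8, 4))
W_act = rng.normal(size=(4, 2))
W_dyn = rng.normal(size=(4, 4))

obs = rng.normal(size=8)
z = encode(obs, W_enc)

action = action_head(z, W_act)               # act on the current scene
z_next = world_head(z, W_dyn)                # imagine how the scene evolves
future_action = action_head(z_next, W_act)   # reason over the imagined future
```

The key design point the sketch captures is that both branches read from (and the dynamics branch writes back into) the same latent space, so prediction and action selection share one representation instead of running as separate pipelines.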
Strong performance with improved interpretability
Xiaomi reports that OneVL performs strongly across multiple mainstream benchmarks for perception, reasoning, and planning. The framework reportedly surpasses the accuracy of explicit Chain-of-Thought (CoT) reasoning methods while maintaining processing speeds comparable to latent space CoT systems.
Interpretability is a key feature of OneVL. The system can explain its decision-making process using both natural language and visual representations. This means the model can describe why a vehicle should take a particular driving action and visually display predictions of potential future road scenarios.
Open source release signals Xiaomi’s AI ambitions
Xiaomi’s release of OneVL follows its recent open-source debut of the Omnivoice audio generation model. By making OneVL open source, Xiaomi is stepping up its presence in the competitive AI and smart mobility sectors, encouraging wider development and collaboration in autonomous driving technologies.