Deepen AI Unveils Next-Generation Targetless Multi-Sensor Calibration Platform for Physical AI Applications

Automating multi-sensor calibration with higher accuracy, minimal data requirements, and real-world deployment flexibility

“With this release, teams can move to a system-level approach that delivers both speed and precision using real-world data.”

— Mohammad Musa, CEO and Co-Founder at Deepen AI

SANTA CLARA, CA, UNITED STATES, April 14, 2026 /EINPresswire.com/ — Deepen AI today announced the release of its latest targetless calibration platform, built to simplify and accelerate calibration for complex autonomous vehicle (AV), automotive ADAS, and robotics sensor suites.

As sensor stacks become more sophisticated, traditional calibration methods are increasingly a bottleneck in deploying autonomous systems at scale. These approaches are often manual, iterative, and dependent on physical targets. Deepen AI’s new solution introduces a fully automated and unified approach that calibrates all sensors simultaneously.

The platform estimates intrinsic, extrinsic, and temporal parameters across the entire sensor suite in a single streamlined workflow, removing the need for sensor-by-sensor calibration. It supports a wide range of configurations including multiple LiDARs, radars, cameras, IMUs, and GNSS receivers, and processes all inputs in one pass using a single continuous dataset such as a ROS bag. This approach streamlines operations while delivering high performance, achieving up to 0.05° angular accuracy and 0.7 cm positional accuracy, exceeding traditional target-based calibration techniques.
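To illustrate what the intrinsic and extrinsic parameters described above represent, the sketch below shows the standard pinhole projection of a LiDAR point into a camera image once calibration has produced a camera matrix and a LiDAR-to-camera rigid transform. The matrix values are hypothetical placeholders for illustration only, not outputs of Deepen AI’s platform.

```python
import numpy as np

# Hypothetical intrinsic parameters: camera matrix K with focal lengths
# (fx, fy) and principal point (cx, cy), all in pixels.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Hypothetical extrinsic parameters: rigid transform from the LiDAR frame
# to the camera frame (rotation R and translation t in metres).
R = np.eye(3)
t = np.array([0.1, 0.0, 0.0])

def project_lidar_point(p_lidar):
    """Map a 3D LiDAR point to camera pixel coordinates (u, v)."""
    p_cam = R @ p_lidar + t   # apply extrinsics: LiDAR frame -> camera frame
    u, v, w = K @ p_cam       # apply intrinsics: camera frame -> image plane
    return u / w, v / w       # perspective divide

u, v = project_lidar_point(np.array([1.0, 0.0, 5.0]))
print(u, v)  # -> 860.0 360.0
```

In a full pipeline, a temporal calibration parameter (a per-sensor timestamp offset) would also be applied so that points and images captured at the same instant are associated before projection.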

Key capabilities include:
– Simultaneous calibration across all sensors using a single dataset
– Support for multi-LiDAR, camera, radar, IMU, and GNSS configurations
– Angular accuracy of up to 0.05° and positional accuracy of up to 0.7 cm
– No strict requirement for loop closure or fixed driving patterns

“Calibration has traditionally been one of the most time-consuming, complex, and fragmented steps in deploying autonomous systems,” said Mohammad Musa, CEO and Co-Founder of Deepen AI. “With this release, teams can move to a system-level approach that delivers both speed and precision using real-world data.”

The system is designed to work without controlled environments or rigid data collection protocols, allowing teams to seamlessly integrate calibration into existing workflows for both research and large-scale production deployments. It requires only simple and practical conditions, with calibration possible in locations such as parking lots, garages, or quiet streets, provided the environment is mostly static with minimal moving objects. A minimum of 30 seconds of continuous driving data is required.

The platform is already being deployed with customers working on highly complex sensor configurations, where multiple LiDARs and cameras need to be calibrated together as a single system. In these deployments, the full sensor stack was calibrated during a normal drive in an everyday environment such as a parking garage, parking lot, or small residential street, without any special driving patterns or looped trajectories. Using only a short duration of driving data, Deepen AI simultaneously performed intrinsic, extrinsic, and temporal calibration across all sensors in a single workflow. This unified approach not only simplifies operations and improves consistency, but also delivers accuracy that surpasses traditional target-based calibration methods, making it well suited for both research and production environments.

Deepen AI provides AI powered data infrastructure solutions for Physical AI and autonomous systems, including data collection, annotation, sensor fusion, calibration, and validation. The company works with leading automotive, robotics, and mapping organizations to build safer and more reliable AI systems at scale.

Reach out to info@deepen.ai or visit www.deepen.ai for more information.

Mohammad Musa
Deepen AI
+1 650-560-7130
Visit us on social media:
LinkedIn

Legal Disclaimer:

EIN Presswire provides this news content “as is” without warranty of any kind. We do not accept any responsibility or liability
for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this
article. If you have any complaints or copyright issues related to this article, kindly contact the author above.
