
These performance metrics are for demonstrative purposes only, based on configurations with proven results. Actual performance may vary by setup. Our algorithms are optimized for use with any chip, platform, or sensor. Contact us for details.
Reprojection Error: ±1 px in typical environments
Runtime: 30-90 min
Required Recording Length: 3-5 min
Supported Companion Hardware: Nvidia Jetson, AMD64 device with Nvidia GPU
Base SW/OS: Linux; Docker required
Interfaces: ROS 2 bag
Input - Sensors
• Any type of camera (sensor-agnostic)
• Any type of IMU or GPS
Input - Data
• Camera video frames
• Raw IMU measurements
• GPS points
Output - Data
• Intrinsic and extrinsic calibration in Kalibr format
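The Kalibr output format is a YAML "camchain" file listing each camera's intrinsics and the chained extrinsic transform to the previous camera. A minimal illustrative sketch (all numeric values and topic names below are placeholders, not measured results):

```yaml
cam0:
  camera_model: pinhole
  intrinsics: [461.6, 460.3, 367.2, 248.6]      # fx, fy, cx, cy
  distortion_model: radtan
  distortion_coeffs: [-0.28, 0.07, 0.0002, 0.00002]
  resolution: [752, 480]
  rostopic: /cam0/image_raw
cam1:
  camera_model: pinhole
  intrinsics: [460.0, 458.7, 379.9, 255.2]
  distortion_model: radtan
  distortion_coeffs: [-0.28, 0.07, 0.00008, 0.00003]
  resolution: [752, 480]
  rostopic: /cam1/image_raw
  T_cn_cnm1:                                    # extrinsic: cam1 w.r.t. cam0 (4x4)
  - [ 0.9999, -0.0075,  0.0106, -0.1100]
  - [ 0.0075,  0.9999,  0.0035,  0.0005]
  - [-0.0107, -0.0034,  0.9999, -0.0009]
  - [ 0.0,     0.0,     0.0,     1.0]
```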
         Minimum                 Recommended
RAM      16 GB                   32 GB
Storage  30 GB                   50 GB
Camera   640 x 480 px, 10 FPS    1920 x 1080 px, 30 FPS
IMU      100 Hz or GPS           300 Hz or GPS
The information provided reflects recommended hardware specifications based on insights gained from successful customer projects and integrations. These recommendations are not limitations, and actual requirements may vary depending on the specific configuration.
Our algorithms are compatible with any chip, platform, sensor, and individual configuration. Please contact us for further information.
See how it works
Here is an example of the intrinsic calibration of an infrared camera.

visionairy®
Auto Extrinsic Camera Calibration
Targetless extrinsic camera calibration enables precise relative alignment between multiple cameras without requiring a physical calibration target.
Performance Metrics
Position Accuracy: ±2.5 cm in typical environments
Update Rate: up to 200 Hz
Initialization Time: <1 second
Maximum Velocity: 20 m/s with full accuracy
Operating Range: unlimited (environment-dependent)
Drift: <0.1% of distance traveled
Benefits
Target-free: Eliminates the hassle and limitations of traditional calibration setups.
Faster & flexible calibration: Integrates seamlessly with diverse camera sensors for maximum deployment flexibility.
Robust performance: Capable of re-identifying objects instantly in live video streams for responsive applications.
Why Auto Extrinsic Camera Calibration?
Traditional camera calibration relies on physical targets, which can be time-consuming, error-prone, and impractical in many scenarios.
Our targetless approach leverages natural scene features and advanced optimization techniques to deliver accurate, consistent calibration without the hassle of setting up calibration patterns. It's faster, more flexible, and robust in real-world environments.
Enables multi-camera fusion
Improves spatial accuracy
Supports scalable deployments
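The relative-alignment idea behind multi-camera fusion can be sketched in a few lines: once each camera's pose in a shared frame has been recovered (for instance from natural scene features), the extrinsic between any two cameras is just a composition of homogeneous transforms. The poses and the 10 cm baseline below are hypothetical, chosen only to illustrate the computation:

```python
import numpy as np

def make_T(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical world-frame poses of two cameras (e.g. recovered by a
# targetless structure-from-motion step); values are illustrative only.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
T_w_c0 = make_T(np.eye(3), [0.0, 0.0, 0.0])   # cam0 at the world origin
T_w_c1 = make_T(Rz90, [0.1, 0.0, 0.0])        # cam1: 10 cm baseline, yawed 90 deg

# Relative extrinsic: pose of cam0 expressed in cam1's frame.
T_c1_c0 = np.linalg.inv(T_w_c1) @ T_w_c0
```

Composing world-frame poses this way is what makes the calibration targetless: no checkerboard corners are needed, only a consistent shared frame.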
Features
Targetless Auto Extrinsic Camera Calibration uses visual information and motion data to achieve precise relative calibration—without requiring external targets:
• Supports Thermal and RGB Cameras - Easily calibrate across both thermal and RGB modalities for greater flexibility in diverse environments.
• 3D Reconstruction of the Scene - Reconstruct the surrounding scene in 3D to enable accurate spatial understanding and alignment.
• Extrinsic and Intrinsic Estimation with GPS or IMU - Improve calibration precision by incorporating GPS or IMU data into intrinsic and extrinsic parameter estimation.
• Supports Simultaneous Multi-Sensor Calibration - Simultaneously calibrate multiple cameras or sensor types for synchronized, multi-view systems.
• Supports Common Camera Models - Pinhole, Radtan, Equidistant, and Double Sphere.
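Of the supported camera models, the pinhole model with radial-tangential ("radtan") distortion is the most common; a minimal sketch of the projection, and of the reprojection error quoted in the metrics above, follows. The intrinsics and the 3D point are placeholder values, not product output:

```python
import numpy as np

def project_pinhole_radtan(X_cam, fx, fy, cx, cy,
                           k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Project a 3D point (camera frame) to pixels using the pinhole model
    with radial-tangential ('radtan') distortion."""
    x, y = X_cam[0] / X_cam[2], X_cam[1] / X_cam[2]        # normalized coords
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2                  # radial term
    x_d = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    y_d = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return np.array([fx * x_d + cx, fy * y_d + cy])        # to pixel coords

# Reprojection error = pixel distance between an observed feature and the
# same 3D point reprojected through the calibrated model.
observed = np.array([640.0, 360.0])
reprojected = project_pinhole_radtan(np.array([0.0, 0.0, 2.0]),
                                     fx=900.0, fy=900.0, cx=640.0, cy=360.0)
err = np.linalg.norm(observed - reprojected)
```

Calibration quality is typically reported as the mean of this error over all observed features, which is the "±1 px" figure quoted above.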
