VISIONAIRY® GPS-Denied Navigation
VISIONAIRY® GPS-Denied Navigation enables drones, ground vehicles, and maritime platforms to navigate autonomously and reliably when GNSS is unavailable, unreliable, or compromised. It combines Visual-Inertial Odometry (VIO) with Map-Based Relocalization to deliver robust, high-precision localization for complex defense missions across all domains.
Why GPS-Denied Navigation?
Defense operations frequently require autonomous navigation in environments where GPS is jammed, spoofed, or entirely inaccessible—such as urban canyons, indoor facilities, dense forests, deep valleys, or active electronic warfare zones.
VISIONAIRY® GPS-Denied Navigation addresses this challenge by combining:
• Visual-Inertial Odometry (VIO) – Fuses camera and inertial sensor data for continuous motion estimation.
• Map-Based Relocalization – Enables assets to reconnect to known reference maps to correct drift and maintain localization accuracy over long missions.
This dual-layer approach ensures resilient, precise navigation—providing operational freedom even in the most GPS-challenged scenarios.
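Conceptually, the two layers can be combined in a simple way: VIO integrates relative motion at high rate, and each successful relocalization against the reference map re-anchors the accumulated estimate, absorbing the drift built up since the last match. The sketch below illustrates that idea only; the 2D pose representation, function names, and trigger logic are illustrative assumptions, not the VISIONAIRY® interface.

```python
# Simplified illustration of the dual-layer idea: high-rate VIO increments
# accumulate a relative pose, and an occasional map-based relocalization
# re-anchors the estimate to bound drift. All names and values here are
# illustrative placeholders, not the VISIONAIRY(R) API.
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform (planar pose) for compactness."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Pose of the last map anchor in the map frame, and motion since that anchor.
anchor_in_map = se2(0.0, 0.0, 0.0)
motion_since_anchor = se2(0.0, 0.0, 0.0)

def on_vio_increment(delta_pose):
    """Called at VIO rate with the relative motion since the last update."""
    global motion_since_anchor
    motion_since_anchor = motion_since_anchor @ delta_pose

def on_relocalization(pose_in_map):
    """Called whenever the relocalizer matches the current view to the map."""
    global anchor_in_map, motion_since_anchor
    anchor_in_map = pose_in_map          # accumulated drift is absorbed here
    motion_since_anchor = se2(0, 0, 0)   # restart relative integration

def current_pose_in_map():
    return anchor_in_map @ motion_since_anchor

# Toy usage: drift accumulates from VIO, then a relocalization corrects it.
for _ in range(100):
    on_vio_increment(se2(0.10, 0.0, 0.001))   # ~10 m of slightly curved motion
print(current_pose_in_map()[:2, 2])            # drifted position estimate
on_relocalization(se2(10.0, 0.0, 0.0))         # map match re-anchors the pose
print(current_pose_in_map()[:2, 2])            # corrected position estimate
```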
Reliable GPS-Independent Localization
Enhanced Accuracy via Map-Based Relocalization
Lightweight, Real-Time, Multi-Domain Operation


Benefits
GPS-Free, Robust Navigation
Operates autonomously using onboard cameras, IMUs, and pre-existing maps without reliance on GNSS.
Lightweight, Real-Time Performance
Runs efficiently on resource-constrained platforms with minimal CPU and memory footprint.
Enhanced Accuracy with Map-Based Relocalization
Minimizes long-term drift by using environmental maps as a dynamic reference for high-precision relocalization during the mission.
Features
VISIONAIRY® GPS-Denied Navigation integrates advanced sensor fusion, real-time map alignment, and scalable autonomy for defense missions across air, land, and sea.
Visual Cameras - High-resolution RGB imaging for feature detection and tracking.
Thermal Imaging - Enables navigation in low-light, night, or visually obscured conditions.
Inertial Measurement Units (IMU) - Provides precise motion tracking, supporting continuous estimation without external signals.
Visual-Inertial Odometry (VIO) - Combines visual and inertial data to estimate motion in real time.
Map-Based Relocalization - Utilizes pre-existing maps or previously generated data to correct drift and improve positional accuracy over time.
Real-Time Onboard Processing - Runs fully on embedded computing platforms, requiring no cloud or ground-based processing.
Multi-Domain Compatibility - Optimized for seamless deployment on drones, ground vehicles, and maritime platforms.
These performance metrics are for demonstrative purposes only, based on configurations with proven results. Actual performance may vary by setup. Our algorithms are optimized for use with any chip, platform, or sensor. Contact us for details.
Update Rate
Up to 200 Hz
Initialization Time
<1 second
Maximum Velocity
20 m/s with full accuracy
Operating Range
Unlimited (environment-dependent)
Drift
<0.1% of distance traveled
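As a rough worked example, drift below 0.1% of distance traveled means a 10 km GNSS-denied leg would accumulate no more than about 10 m of position error before any map-based relocalization correction is applied.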
Supported companion hardware
NVIDIA Jetson, ModalAI VOXL 2 / Mini, Qualcomm RB5, i.MX 7, i.MX 8, Raspberry Pi
Supported flight controllers
PX4, APX4, ArduPilot
Base SW/OS
Linux, Docker required
Interfaces
ROS 2 or MAVLink
Input - Sensors
• Any type of camera (sensor agnostic)
• Any type of global orientation (optional)
Input - Data
• Any type of camera (sensor agnostic)
• Any type of IMU
• GNSS position data for initial position referencing
• For aerial vehicles: current flight altitude
• Intrinsic & extrinsic sensor calibration
Output - Data
• Real-time visual position estimate
• Accurate metric 3D velocity estimate
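As an illustration of how the real-time position output might be consumed over the MAVLink interface listed above, the sketch below forwards an externally computed pose to a PX4 or ArduPilot flight controller using pymavlink. The connection string, update rate, and pose values are placeholder assumptions; the actual VISIONAIRY® output stream and message mapping may differ.

```python
# Hedged sketch: forwarding an externally computed visual position estimate
# to a PX4/ArduPilot flight controller over MAVLink using pymavlink.
# Connection endpoint, frame conventions, and the pose source are
# placeholder assumptions for illustration only.
import time
from pymavlink import mavutil

# Listen for the autopilot's MAVLink stream; the endpoint is setup-specific.
master = mavutil.mavlink_connection('udpin:0.0.0.0:14550')
master.wait_heartbeat()

def send_vision_pose(x, y, z, roll, pitch, yaw):
    """Publish a VISION_POSITION_ESTIMATE message (local NED frame, radians)."""
    usec = int(time.time() * 1e6)
    master.mav.vision_position_estimate_send(usec, x, y, z, roll, pitch, yaw)

# Toy loop: in practice x, y, z, roll, pitch, yaw would come from the
# navigation output stream (ROS 2 topic or MAVLink passthrough).
for i in range(100):
    send_vision_pose(x=0.1 * i, y=0.0, z=-2.0, roll=0.0, pitch=0.0, yaw=0.0)
    time.sleep(0.05)  # ~20 Hz is a common rate for external vision input
```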
Minimum / Recommended
RAM: 2 GB / 4 GB
Storage: 20 GB / 50 GB
Camera: 640 x 480 px, 10 FPS / 1920 x 1080 px, 30 FPS
IMU: 100 Hz / 300 Hz
The information provided reflects recommended hardware specifications based on insights gained from successful customer projects and integrations. These recommendations are not limitations, and actual requirements may vary depending on the specific configuration.
Our algorithms are compatible with any chip, platform, sensor, and individual configuration. Please contact us for further information.
Spleenlab GmbH is a highly specialized AI perception software company founded with the vision of redefining safety and artificial intelligence.
© 2025 Spleenlab GmbH | All Rights Reserved