Benefits
High performance
Built-in pre- and post-processing ensures maximum real-time performance.
Unified Runtime Core
One foundation powering the entire VISIONAIRY® suite
Modular
Easily extensible to new devices thanks to its modular design
Features
AI Runtime is designed to streamline and accelerate AI model deployment across a wide range of hardware platforms. Its flexible architecture and optimized components make it a powerful solution for real-time, embedded, and edge AI applications. Core features include:
Vendor-Agnostic Acceleration - Compatible with a wide range of hardware vendors and platforms.
Multi-Backend Support - Built-in backends for ONNX Runtime, LiteRT, NVIDIA CUDA, TensorRT, Triton Inference Server, NXP i.MX, Texas Instruments TIDL, Hailo, Qualcomm QNN, and Rockchip.
Hardware-Optimized Pre- & Post-Processing - Ensures efficient, low-latency inference tailored to each device.
YAML-Based Configuration - Simple and intuitive setup without the need for custom code; see the configuration sketch after this list.
Easy Deployment Anywhere - Designed for fast integration and deployment on embedded, edge, and custom hardware.
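As an illustration of the idea, a pipeline description might look like the sketch below. The keys shown (pipeline, backend, preprocessing, and so on) are assumptions made for this example, not the actual AI Runtime schema; they only convey how a deployment can be declared in YAML instead of custom code.

```yaml
# Hypothetical AI Runtime pipeline configuration (illustrative only;
# key names are assumptions, not the actual schema).
pipeline:
  model: models/object_detector.onnx   # network to deploy
  backend: tensorrt                    # e.g. onnxruntime, litert, tidl, qnn
  precision: fp16                      # precision hint for the backend
  preprocessing:
    resize: [640, 640]                 # scale input to the network resolution
    normalize:
      mean: [0.485, 0.456, 0.406]
      std: [0.229, 0.224, 0.225]
  postprocessing:
    type: nms                          # non-maximum suppression on raw outputs
    score_threshold: 0.5
    iou_threshold: 0.45
```

Switching hardware then amounts to editing the backend line rather than touching application code.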

SPLEENLAB® AI Runtime
Spleenlab AI Runtime is a C++-based inference framework that abstracts hardware complexity to seamlessly connect VISIONAIRY® Software with various devices, offering optimized pre-/post-processing and configurable multi-backend support via YAML.
Why AI Runtime?
Running neural networks efficiently on diverse embedded devices is a complex challenge: each platform has its own constraints, requirements, and optimization needs. Spleenlab AI Runtime simplifies this process by offering a powerful, C++-based runtime layer that supports multiple inference backends across various hardware architectures.
Its built-in hardware abstraction, optimized pre- and post-processing, and intuitive YAML-based configuration enable rapid deployment, reduced development overhead, and faster time to market. As the foundation between Spleenlab VISIONAIRY® Software and your target hardware, AI Runtime ensures scalable, high-performance AI integration with minimal effort.
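To make that integration pattern concrete, here is a minimal C++ sketch of how a config-driven, backend-agnostic runtime layer is typically used. All names in it (the airuntime namespace, Runtime, infer) are hypothetical stand-ins, not the actual Spleenlab API; the point is that the application configures the pipeline once from YAML and then calls a single backend-independent inference entry point.

```cpp
// Illustrative sketch only: this mirrors the *pattern* of a config-driven,
// backend-agnostic runtime layer. All names (airuntime::Runtime, infer)
// are hypothetical, not the actual Spleenlab AI Runtime API.
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

namespace airuntime {

struct Detection {
    float x, y, w, h;   // bounding box (normalized coordinates)
    float score;        // confidence after built-in post-processing
    int   class_id;
};

class Runtime {
public:
    // Parse the YAML pipeline description; backend selection, pre- and
    // post-processing are resolved here, so application code stays generic.
    explicit Runtime(const std::string& yaml_path) {
        std::cout << "loading pipeline from " << yaml_path << "\n";
    }

    // Run one frame through pre-processing, the chosen backend, and
    // post-processing; the caller never touches backend-specific types.
    std::vector<Detection> infer(const std::vector<uint8_t>& rgb_frame,
                                 int width, int height) {
        (void)rgb_frame; (void)width; (void)height;
        return {};  // stub: a real backend would return detections here
    }
};

}  // namespace airuntime

int main() {
    airuntime::Runtime runtime("pipeline.yaml");   // one-time setup from YAML
    std::vector<uint8_t> frame(640 * 480 * 3, 0);  // dummy RGB frame
    auto detections = runtime.infer(frame, 640, 480);
    std::cout << detections.size() << " detections\n";
    return 0;
}
```

Because the backend choice lives in the configuration rather than in the application, the same integration can move between, for example, TensorRT on an NVIDIA Jetson and TIDL on a Texas Instruments TDA4 without recompiling.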
Fast & Easy Deployment
Streamlined Workflow
Future-Proof Architecture
Supported companion hardware
NVIDIA Jetson, ModalAI VOXL 2 / VOXL 2 Mini, Qualcomm RB5, NXP i.MX 7, NXP i.MX 8, Raspberry Pi, Texas Instruments TDA4
Base software / OS
Linux (Docker required)
Input sensors
Any input
             Minimum    Recommended
RAM          2 GB       4 GB
Storage      20 GB      50 GB
The hardware specifications above are recommendations based on insights from successful customer projects and integrations. They are not hard limits; actual requirements may vary depending on the specific configuration.
Our algorithms are compatible with any chip, platform, sensor, and individual configuration. Please contact us for further information.

