Q.AI Intelligence logo representing the proprietary artificial intelligence algorithms that power all Quasi autonomous robot solutions and allow for extensive data analytics.

The Artificial Intelligence of Advanced AMR Robots

Q.AI is the artificial intelligence (AI) software built into each Quasi autonomous robot. It’s the brain behind our AMRs, empowering advanced self-navigation, unparalleled ease of use, and extensive insight into operational data.

Q.AI Robot Autonomy

Precise Self-Navigation

Q.AI’s sophisticated autonomy stack combines rich sensor data with intelligent path planning for reliable, efficient navigation in any setting.

Diagram of Q.AI-enabled LiDAR sensor, creating and updating a real-time facility map for reliable autonomous navigation.
360° LiDAR – Long-Range

Real-Time Area Mapping

Our AMR robots’ LiDAR system continuously scans the surroundings in 360°, feeding the data into Q.AI’s live, appearance-based mapping system. This creates and updates a dynamic reference map, crucial for precise localization, object detection, and reliable navigation.
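As an illustration of this kind of live mapping, here is a minimal occupancy-grid sketch: each beam endpoint raises a cell’s occupancy score, and the cells along the beam lower it. The grid size, resolution, and update weights are assumptions for the example, not Q.AI internals.

```python
import numpy as np

# Assumed grid parameters for illustration (not Q.AI's actual values)
RESOLUTION = 0.05                    # meters per cell
SIZE = 400                           # 20 m x 20 m map
grid = np.zeros((SIZE, SIZE))        # log-odds occupancy values

def fold_in_scan(robot_xy, ranges, angles, hit=0.9, miss=-0.4):
    """Fold one 360-degree LiDAR scan into the live occupancy grid:
    beam endpoints raise occupancy, cells along each beam lower it."""
    for r, a in zip(ranges, angles):
        if not np.isfinite(r):
            continue
        direction = np.array([np.cos(a), np.sin(a)])
        # Sample points along the beam are evidence of free space
        for t in np.linspace(0.0, 0.95, 10):
            c = np.round((robot_xy + t * r * direction) / RESOLUTION).astype(int) + SIZE // 2
            if 0 <= c[0] < SIZE and 0 <= c[1] < SIZE:
                grid[c[0], c[1]] += miss
        # The beam endpoint is evidence of an obstacle
        c = np.round((robot_xy + r * direction) / RESOLUTION).astype(int) + SIZE // 2
        if 0 <= c[0] < SIZE and 0 <= c[1] < SIZE:
            grid[c[0], c[1]] += hit
```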

16 ToFs – Short-Range

Split-Second Obstacle Avoidance

The Q.AI navigation stack integrates perimeter Time-of-Flight (ToF) sensors to detect objects at close range to Quasi AMRs. This enables instant reaction and avoidance of obstacles, accurate maneuvers in tight spaces, and safe interaction with human coworkers.
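A minimal sketch of the close-range reaction logic such a ToF ring enables, with assumed (not Quasi-specified) distance thresholds:

```python
# Illustrative thresholds; not Q.AI's actual safety distances
STOP_DISTANCE = 0.15   # meters: halt before contact
SLOW_DISTANCE = 0.50   # meters: reduce speed and replan

def obstacle_reaction(tof_distances_m):
    """Map the closest of the 16 perimeter ToF readings to a motion command."""
    nearest = min(tof_distances_m)
    if nearest < STOP_DISTANCE:
        return "stop"    # immediate halt
    if nearest < SLOW_DISTANCE:
        return "slow"    # creep and steer around the obstacle
    return "cruise"

print(obstacle_reaction([1.2] * 15 + [0.12]))  # -> "stop"
```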

Diagram of the Q.AI-enabled Time-of-Flight (ToF) sensors scanning the surrounding environment for short-range obstacles, enabling instantaneous reaction to dynamic interference from humans, moving objects, and static objects.
Diagram of the Model C2 3D stereo camera sensor, used to scan and identify surrounding objects in its environment to accurately localize and update the robot’s live location within the facility map.
3D Depth Camera – Localization

Reliable Location & Orientation

3D stereo cameras provide Q.AI with detailed depth information, allowing Quasi AMRs to accurately identify their position based on outside elements, even after a system restart. This ensures uninterrupted operation without lost map data or re-localization, as well as exact positioning at each stop.
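One standard way to recover position from matched 3D landmarks, shown here as a simplified stand-in for full visual relocalization, is rigid (Kabsch) alignment of observed points to their stored map counterparts:

```python
import numpy as np

def estimate_pose(map_points, observed_points):
    """Recover rotation and translation by rigidly aligning landmarks seen
    by the depth camera (N x 3 array) with the same landmarks stored in the
    facility map (classic Kabsch alignment; a simplification of full
    visual relocalization)."""
    mu_m = map_points.mean(axis=0)
    mu_o = observed_points.mean(axis=0)
    H = (observed_points - mu_o).T @ (map_points - mu_m)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T                    # rotation: observed -> map frame
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_m - R @ mu_o               # robot translation in the map frame
    return R, t
```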

Q.AI Intelligent Collaboration

Learning & Adaptation

Embedded machine learning and instant knowledge sharing let your entire Quasi fleet integrate, scale, and grow more reliable with each delivery.

Robot Operating System (ROS) logo
ROS 2 – Robotic System Base

Total System & Hardware Integration

We’ve selected ROS 2 as the basis of our autonomous mobile robot software. This industry-standard platform allows seamless integration between a wide range of software and hardware, ensuring that Quasi AMRs adapt to existing infrastructure and workflows.

Leveraging ROS 2, Q.AI achieves real-time data exchange, precision robotic control, and effortless connection to warehouse and hospital management systems, ERPs, and other critical platforms.
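For a concrete feel of the pattern this enables, here is a minimal rclpy node in the standard ROS 2 style; the topic names and the stop rule are placeholders, not Quasi’s actual interfaces:

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import Twist

class MinimalNavNode(Node):
    """Subscribe to LiDAR scans and publish velocity commands:
    the standard ROS 2 pub/sub pattern AMR software builds on."""
    def __init__(self):
        super().__init__('quasi_demo_node')
        self.sub = self.create_subscription(LaserScan, 'scan', self.on_scan, 10)
        self.pub = self.create_publisher(Twist, 'cmd_vel', 10)

    def on_scan(self, msg: LaserScan):
        cmd = Twist()
        # Naive rule for illustration: creep forward, stop if anything is close
        cmd.linear.x = 0.0 if min(msg.ranges) < 0.3 else 0.2
        self.pub.publish(cmd)

def main():
    rclpy.init()
    rclpy.spin(MinimalNavNode())

if __name__ == '__main__':
    main()
```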

Fleet-Wide Intelligence

Collaborative Fleet Learning

Each completed delivery, avoided obstacle, and route taken adds to your robot’s Q.AI knowledge database. Past experiences refine future navigation strategies, and learnings are automatically shared across the entire fleet of robots for fast, global adaptation.
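A toy model of what such shared learning can look like; the structure is hypothetical (Q.AI’s internal knowledge format is not public): each robot reports route outcomes, and the pooled statistics steer future route choices.

```python
from collections import defaultdict

class FleetKnowledge:
    """Pooled fleet experience: robots report route outcomes, and the
    shared statistics bias every robot's future route selection."""
    def __init__(self):
        self.route_stats = defaultdict(lambda: {"runs": 0, "blocked": 0})

    def report(self, route_id, blocked: bool):
        s = self.route_stats[route_id]
        s["runs"] += 1
        s["blocked"] += int(blocked)

    def best_route(self, candidates):
        # Prefer the route with the lowest observed blockage rate fleet-wide
        def risk(rid):
            s = self.route_stats[rid]
            return s["blocked"] / s["runs"] if s["runs"] else 0.5
        return min(candidates, key=risk)

fleet = FleetKnowledge()
fleet.report("aisle_3", blocked=True)    # robot A hit an obstacle
fleet.report("aisle_4", blocked=False)   # robot B sailed through
print(fleet.best_route(["aisle_3", "aisle_4"]))  # -> "aisle_4"
```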

Expand Your Fleet Instantly

Scalability with Zero Downtime

AMRs powered by Q.AI scale easily alongside your growing operations. Instant knowledge transfer syncs facility maps and learned behaviors with new fleet robots in seconds – with no need to remap areas, retrain systems, or add personnel – for zero downtime and immediate productivity.

Manipulator Path Planning and Trajectory Planning

Q.AI implements various AI algorithms for motion planning, inverse kinematics, manipulation, 3D perception, and collision checking to calculate optimal manipulator (robotic arm and gripper) poses and movements in 3D space relative to the target object’s location.

Once calculated, time-parameterized joint trajectories are executed by our proprietary motion controllers. Q.AI sends speed and position commands to each R2 motor and monitors motor feedback to cross-check calculated versus actual positions, compensating for variations accordingly.
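A minimal sketch of executing such a time-parameterized trajectory with a feedback cross-check; the motor driver interface and the tolerance are assumptions for the example, not Quasi’s controller API:

```python
import time

POSITION_TOLERANCE = 0.02  # radians; an assumed tolerance, not Quasi's

def execute_trajectory(waypoints, motors):
    """waypoints: list of (t_seconds, [joint positions]) pairs, already
    time-parameterized by the planner. `motors` is a hypothetical driver
    exposing command(positions) and read_positions()."""
    start = time.monotonic()
    for t, target in waypoints:
        # Wait until this waypoint's scheduled time
        delay = t - (time.monotonic() - start)
        if delay > 0:
            time.sleep(delay)
        motors.command(target)              # send position set-points
        actual = motors.read_positions()    # encoder feedback
        # Cross-check calculated vs. actual positions
        error = max(abs(a - b) for a, b in zip(actual, target))
        if error > POSITION_TOLERANCE:
            # Compensate by nudging set-points toward the target
            corrected = [b + (b - a) for a, b in zip(actual, target)]
            motors.command(corrected)
```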

Q.AI monitors motor temperatures, voltages, and currents to detect motor stalling due to collisions or other malfunctions, terminating trajectory execution when an allowed threshold is exceeded.
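Illustrative threshold monitoring along those lines, with assumed limits rather than Quasi’s production values:

```python
# Assumed safety limits for illustration; the production thresholds
# are not public.
MAX_TEMP_C, MAX_CURRENT_A, MIN_VOLTAGE_V = 80.0, 6.0, 22.0

def should_abort(temp_c, voltage_v, current_a):
    """True if trajectory execution must be terminated: a current spike
    with adequate voltage typically indicates a stalled (blocked) motor."""
    return (temp_c > MAX_TEMP_C
            or current_a > MAX_CURRENT_A
            or voltage_v < MIN_VOLTAGE_V)

print(should_abort(45.0, 24.1, 7.3))  # -> True (current spike: likely collision)
```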

Model R2 Robot gripper view

AMR Robotic Object Recognition and Object Detection

Q.AI’s sophisticated object detection is built around segmentation of stereo depth camera point clouds to detect individual objects, matching the desired objects using the camera’s video stream and projecting recognized objects from 2D images into 3D space.
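The 2D-to-3D projection step relies on the standard pinhole camera model: given a detected pixel and its stereo depth, the point is back-projected into camera coordinates. The intrinsics below are placeholder values, not the R2 camera’s:

```python
import numpy as np

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a detected 2D pixel into 3D camera coordinates using
    the stereo depth value and the pinhole camera model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Example: object detected at pixel (400, 260), 0.85 m away
# (intrinsics are placeholder values, not the R2 camera's)
print(pixel_to_3d(400, 260, 0.85, fx=615.0, fy=615.0, cx=320.0, cy=240.0))
```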

Robotic arm of Quasi Model R2 manipulator robot identifying objects within its environment with stereo vision and Q.AI Intelligence

Once the desired object is detected, the Q.AI grasp detection algorithm takes over, finding the best gripper position to acquire the item. From there, Q.AI uses inverse kinematics to determine the best possible arm pose and approach to reach the item and retrieve it while avoiding collisions.
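A simplified sketch of choosing among grasp candidates: collision-free candidates are scored by trading off detector confidence against reach distance. The candidate fields and the weighting are hypothetical, for illustration only:

```python
import numpy as np

def pick_grasp(candidates, arm_base_xyz):
    """Score grasp candidates and return the best one. Each candidate is a
    dict with 'position' (xyz), 'quality' (0..1 from the grasp detector),
    and 'collides' (bool from the collision checker); schema is assumed."""
    feasible = [c for c in candidates if not c["collides"]]
    if not feasible:
        return None   # replan the approach or reposition the robot
    def score(c):
        reach = np.linalg.norm(np.asarray(c["position"]) - np.asarray(arm_base_xyz))
        return c["quality"] - 0.1 * reach   # assumed weighting
    return max(feasible, key=score)
```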

We trained the Q.AI detection pipeline using state-of-the-art convolutional neural network (CNN) based deep learning grasp detection algorithms for intelligent visual grasping across many R2 robot usage scenarios. These algorithms also let R2 end users quickly and easily introduce new, previously unseen objects into the pipeline.

Intelligent Reporting

Another side of Q.AI’s data processing capabilities is its intelligent reporting. Alongside the ability to process enormous streams of data from various inputs, we’ve added a generalization and learning block.

Q.AI monitors patterns in data streams and learns to extract the information relevant for reporting, auditing, and dashboard visualizations.
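As a toy illustration of this kind of reduction, the sketch below folds a raw event stream into dashboard-ready metrics; the event schema is assumed, not the actual Q.AI stream format:

```python
from collections import Counter

def summarize(events):
    """Reduce a raw event stream into dashboard-ready metrics.
    events: list of dicts like {"type": "delivery", "duration_s": 412}
    (a toy schema for illustration)."""
    deliveries = [e for e in events if e["type"] == "delivery"]
    avg = sum(e["duration_s"] for e in deliveries) / max(len(deliveries), 1)
    return {
        "deliveries": len(deliveries),
        "avg_delivery_s": round(avg, 1),
        "events_by_type": dict(Counter(e["type"] for e in events)),
    }

print(summarize([
    {"type": "delivery", "duration_s": 412},
    {"type": "obstacle_avoided", "duration_s": 0},
    {"type": "delivery", "duration_s": 388},
]))
```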

Q.AI training charts

More about Q.AI in the Cloud

Discover More