
Sim2Real: Bridging the gap between simulation and reality

By: Karthigeyan Ganesh Shankar & Chinmayee Wamorkar

Robotic systems tend to exhibit high entropy despite being composed of ordered subsystems that blend mechanical, electrical, and software elements. For production-ready deployments, it is essential to maintain persistent, predictable states across all these verticals. Without this, we risk deviations, collisions, and unpredictable maneuvers that disrupt the seamless flow of operations.

Given the complexity of the problem at hand, hardware iterations are expensive in both time and money. The predictability and reliability of the system can be understood far more cheaply by simulating the robot in a controlled, high-fidelity environment.

Gentle introduction to NVIDIA's Isaac Sim

Ati’s simulation journey started with Unity, which we used as a controls simulator to test the integrity of the vehicle in controlled environments. While Unity had its limitations, it was useful for translating physics models directly from reality and tweaking different configurations to observe the model's behavior. The bot's perception capabilities, with the lidar acting as its sensory input, were incorporated into Unity to produce a virtual stream of lidar scans as the bot navigated through a warehouse environment built in-house.

Recently, we made a conscious decision to move our simulation framework entirely to NVIDIA's Isaac Sim, given that our bots are powered by NVIDIA's Jetson boards.

NVIDIA Isaac Sim is a powerful reference application that allows developers to design, simulate, test, and train AI-based robots and autonomous machines in a physically accurate virtual environment. Built on the NVIDIA Omniverse platform, it enables high-fidelity simulation and reliable synthetic data generation for training machine learning models.

Simulation frameworks

Once Isaac Sim was installed (refer to the Isaac Sim documentation for your environment), we set out to map the different use cases it could deliver, namely:

  • Support the software development journey of Ati's rich product portfolio – Tug, Lite, Pivot, Lifter and Pallet Mover – from a controls, perception, and localization standpoint.
  • Accelerate training and develop mature object detection pipelines to enhance the safety of the bot.
  • Provide high-fidelity, realistic simulations that visually convey the functionality of the bot to customers in their own work environments.

Let's take a look at one such use case: table detection, which allows the Lifter to dock under and lift a table in Isaac Sim.

Transferring the learning to reality

To train the Lifter to detect and lift the table, we started by rigging the robot, adding joint articulations and drive configurations. To accomplish this, we used the Onshape importer plugin provided by NVIDIA and followed the rigging best-practices guide.
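Once the rigged USD asset is in place, the articulation can be loaded and exercised from a standalone Python script. Below is a minimal sketch, assuming the Lifter has been imported at a hypothetical prim path /World/Lifter with a single prismatic lift joint; the asset path, joint count, and target values are illustrative, not our production configuration.

```python
from omni.isaac.kit import SimulationApp

# Boot Isaac Sim headlessly; this must happen before any other omni.isaac imports.
simulation_app = SimulationApp({"headless": True})

import numpy as np
from omni.isaac.core import World
from omni.isaac.core.articulations import Articulation
from omni.isaac.core.utils.stage import add_reference_to_stage
from omni.isaac.core.utils.types import ArticulationAction

world = World()
# Hypothetical USD asset path and prim path for the rigged Lifter.
add_reference_to_stage(usd_path="/path/to/lifter.usd", prim_path="/World/Lifter")
lifter = world.scene.add(Articulation(prim_path="/World/Lifter", name="lifter"))
world.reset()

# Command the (assumed) single prismatic lift joint to raise by 0.1 m, then step physics.
lifter.apply_action(ArticulationAction(joint_positions=np.array([0.1])))
for _ in range(120):
    world.step(render=False)

print("joint positions:", lifter.get_joint_positions())
simulation_app.close()
```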

Lifter in a warehouse environment

Once we were satisfied with the Lifter's kinematics, we moved to Omniverse Replicator, NVIDIA's recently introduced toolkit for generating synthetic data through domain randomization.

Omniverse Replicator

We need an extensive amount of data to train our vision models. To avoid overfitting, the data had to be augmented to account for different angles of approach, locations, lighting, potential distractors, and many variants of the objects we actually need to detect. Capturing this variety of data, and then labeling each frame, is laborious. We achieve the variety while maintaining good data quality by generating synthetic data.
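As a concrete illustration, here is a minimal Omniverse Replicator sketch that randomizes the pose of a table prim and the lighting, then writes out labeled RGB frames. The prim path, output directory, and frame count are hypothetical placeholders, not our actual pipeline.

```python
import omni.replicator.core as rep

# Hypothetical prim path for the table asset on the stage.
TABLE_PATH = "/World/Table"

with rep.new_layer():
    camera = rep.create.camera(position=(3.0, 3.0, 2.0), look_at=(0, 0, 0))
    render_product = rep.create.render_product(camera, resolution=(1024, 1024))

    def randomize_table():
        # Scatter the table over the floor with a random heading.
        table = rep.get.prims(path_pattern=TABLE_PATH)
        with table:
            rep.modify.pose(
                position=rep.distribution.uniform((-2.0, -2.0, 0.0), (2.0, 2.0, 0.0)),
                rotation=rep.distribution.uniform((0, 0, -180), (0, 0, 180)),
            )
        return table.node

    def randomize_lights():
        # Vary lighting intensity and color temperature between frames.
        lights = rep.create.light(
            light_type="Dome",
            intensity=rep.distribution.uniform(500, 3000),
            temperature=rep.distribution.uniform(3000, 7000),
        )
        return lights.node

    rep.randomizer.register(randomize_table)
    rep.randomizer.register(randomize_lights)

    with rep.trigger.on_frame(num_frames=1000):
        rep.randomizer.randomize_table()
        rep.randomizer.randomize_lights()

    # BasicWriter emits RGB images plus 2D bounding-box labels for each frame.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_table_detection", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```

Each registered randomizer fires on every frame trigger, so distractors and materials can be varied the same way.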

Using the pallet detection pipeline as inspiration, we developed a table detection workflow and used the trained model to detect the table and dock in reality, as shown below.

Lifter in Simulation
Lifter’s view through lidar
Lifter docking in reality
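To make the sim-to-real hand-off concrete, the sketch below shows one way a trained detector's output could be converted into a docking target for the base planner. Everything here is hypothetical: the ONNX model file, the input/output names, the intrinsics, and the flat-floor projection are stand-ins for the actual stack.

```python
from typing import Optional, Tuple

import numpy as np
import onnxruntime as ort  # assuming the trained detector was exported to ONNX

# Hypothetical model file, I/O names, intrinsics, and camera height.
session = ort.InferenceSession("table_detector.onnx")
FX, FY, CX, CY = 600.0, 600.0, 512.0, 384.0  # illustrative pinhole intrinsics
CAMERA_HEIGHT = 0.4                          # meters above the floor

def detect_table(frame: np.ndarray) -> Optional[np.ndarray]:
    """Run the detector and return the highest-confidence box (x1, y1, x2, y2)."""
    blob = frame.astype(np.float32)[None].transpose(0, 3, 1, 2) / 255.0
    boxes, scores = session.run(None, {"images": blob})
    if scores[0].size == 0 or scores[0].max() < 0.5:
        return None
    return boxes[0][scores[0].argmax()]

def box_to_docking_target(box: np.ndarray) -> Tuple[float, float]:
    """Project the box's bottom-centre pixel onto the ground plane (flat floor)."""
    u = (box[0] + box[2]) / 2.0  # horizontal centre of the box
    v = box[3]                   # bottom edge, where the table meets the floor
    x = CAMERA_HEIGHT * FY / (v - CY)  # forward distance in meters
    y = -x * (u - CX) / FX             # lateral offset (positive = left)
    return x, y
```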

In addition to this, NVIDIA’s Omniverse Replicator and Isaac Sim have been effective in helping us study a diverse set of problems: 

  • Robust testing of obstacle detection and avoidance across a set of assets and scenarios.
  • Mimicking noise on different sensors (GPS, IMU, cameras, lidars and ultrasound) and testing against it (see the sketch after this list).
  • Replicating customer sites using 3D NeRF models to calculate takt time, sizing, and site-specific requirements.
  • Detecting and resolving conflicts between Ati and third-party bots at intersections in brownfield deployments.
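For the sensor-noise studies, a simple additive model is often enough to stress-test downstream filters before touching hardware. The sketch below is a minimal, illustrative noise model for a gyro stream; the noise-density and bias-walk figures are placeholders, not measured values from our sensors.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Placeholder noise parameters; real values come from the sensor datasheet
# or an Allan-variance calibration, not from this sketch.
GYRO_NOISE_DENSITY = 0.005   # rad/s/sqrt(Hz), white-noise density
GYRO_BIAS_WALK = 0.0001      # rad/s^2/sqrt(Hz), bias random-walk density
DT = 0.01                    # 100 Hz IMU sample period

gyro_bias = np.zeros(3)

def corrupt_gyro(clean_rate: np.ndarray) -> np.ndarray:
    """Add white noise plus a slowly random-walking bias to one gyro sample."""
    global gyro_bias
    gyro_bias += rng.normal(0.0, GYRO_BIAS_WALK * np.sqrt(DT), size=3)
    white = rng.normal(0.0, GYRO_NOISE_DENSITY / np.sqrt(DT), size=3)
    return clean_rate + gyro_bias + white

# Example: corrupt the ground-truth angular velocity reported by the simulator.
clean = np.array([0.0, 0.0, 0.1])  # rad/s, e.g. a slow yaw
noisy = corrupt_gyro(clean)
```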

We encourage our customers to experience our robots operating in their warehouses and factory environments through our simulation stack, so they can make informed decisions about automating their workflows.