
Accelerating Robotic Innovation through Open Source Technologies

In this blog, we talk with a panel of industry experts about the importance of simulation in robotics, along with current trends. We also dive into challenges that developers face, and how they can be addressed through open source solutions.

Q: We see the changes that have happened in the past couple of years with ROS, OGRE, and Gazebo. What are some of the other trends and challenges that you see in the robotics industry?

Mabel Zhang: We’ve seen in the last few years that organizations are increasingly trusting open source simulators. Take our work with NASA VIPER, where they put ROS in the ground software to send something to the moon. Mission-critical simulations are really important to NASA because they only get one try in the real world. We see the same in autonomous driving through our work with Apex.AI, which stripped down ROS 2 and got it certified; these are examples of people actually trusting ROS 2 on the road. And then in marine robotics, there’s a real need for simulation because, much like space, the cost of real-world testing is really high.

Robin Rowe: The biggest challenge in robotics is always the hardware: we’re always limited by what we can build in the physical world. Good simulation is one way we can address this challenge. Hardware is what makes robotics hard.

Matt Hansen: I think simulation is how you, as a robot developer, enable yourself to make progress without being completely constrained by the hardware, right? In modern robotics software development workflows, you’re going to test 1,000 times in simulation before you test on the physical hardware, and that’s probably good, because the hardware development team needs the hardware in their hands most of the time in order to verify that all the mechanical and electrical systems are working. You don’t want two teams, your software team and your hardware team, both bottlenecked by a limited number of prototype hardware systems. Simulation lets you break that bottleneck and decouple the software from the hardware.

I think scaling is a huge issue today, and I work mostly in that area, helping companies take their simulations from running on local desktop machines to scaling out and running in the cloud: running tens of thousands of simulations a day to do large-volume data collection, validate their software, and prove that their robot is safe and meets the necessary requirements.

Everyone thinks of adding more compute to a single simulation so that you can do more within that simulation, and that’s one obvious way of increasing the performance of your simulator. But I think the ability to scale out horizontally, and to spatially distribute simulations, is the next big wave that needs to happen. Say you want to simulate robots building a human settlement on the surface of the moon. That takes spatially dividing the world across many simulators, with many robots working together in a swarm to collaborate and cooperate, while you run reinforcement learning algorithms on some of them. All of that requires scaling your compute horizontally, and that’s something I’m really excited to see happen in this next generation of simulation, say, in the next 10-ish years.
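Editor’s note: to make the spatial-distribution idea concrete, here is a minimal sketch in plain Python, not any particular simulator’s API. The world is split into tiles, one worker process per tile, and robots are handed off between workers when they cross a tile boundary. All names and parameters here (tile width, step count, handoff queues) are illustrative.

```python
import multiprocessing as mp
import random

TILE_WIDTH = 100.0  # each worker owns the x-range [tile_id * W, (tile_id + 1) * W)
NUM_TILES = 4
STEPS = 50

def simulate_tile(tile_id, inbox, outboxes):
    """One worker process simulates one spatial tile of the shared world."""
    lo = tile_id * TILE_WIDTH
    robots = [{"id": f"{tile_id}-{n}", "x": lo + 50.0} for n in range(3)]
    for _ in range(STEPS):
        while not inbox.empty():                # accept robots handed off to us
            robots.append(inbox.get())
        for r in robots[:]:
            r["x"] += random.uniform(-5.0, 5.0)            # stand-in for real physics
            owner = int(r["x"] // TILE_WIDTH) % NUM_TILES  # wraps at world edges
            if owner != tile_id:                # crossed a boundary: hand off
                robots.remove(r)
                outboxes[owner].put(r)
    print(f"tile {tile_id} ends with {len(robots)} robots")

if __name__ == "__main__":
    queues = [mp.Queue() for _ in range(NUM_TILES)]
    workers = [mp.Process(target=simulate_tile, args=(i, queues[i], queues))
               for i in range(NUM_TILES)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

The handoff queue is the toy version of what a distributed simulator would do over the network: each machine owns a region of space, and entities migrate between machines as they move.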

Royal O’Brien: Yeah, I can definitely see that. When you’re talking about simulators and engines, having them Linux-based is key so they can scale out in the cloud very quickly and use some of the new advanced GPU features. You’re only going to get so many physical devices that you can operate, and you need simulation to generate those hundreds of thousands of hours of operation. So, I think having compute like that is really important.

Lars Gleim: I personally see two big trends driving simulation and robotics. The first is that DevOps is coming to more areas of robotics, so you’re no longer physically constrained by the hardware. Like Matt said, you develop and iterate on the software stack for your robot and test it in simulation, and this works for each individually simulated robot at faster-than-real-time speed. And you can, of course, run as many simulations in parallel as you need. This really opens up new avenues for testing, and it also makes it possible to test scenarios that may not be safe in the real world, right?
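Editor’s note: as a hedged illustration of that DevOps workflow, the sketch below fans a batch of seeded test scenarios out across local processes, the same pattern a CI job uses to fan them out across cloud machines. `run_scenario` is a placeholder; in a real pipeline it would launch a headless simulator run for the given seed and report whether the robot met its requirements.

```python
import concurrent.futures as cf
import random

def run_scenario(seed: int) -> tuple[int, bool]:
    """Placeholder for one headless simulation run.

    A real pipeline would launch the simulator with a scenario derived
    from `seed` and evaluate the robot's behavior against requirements.
    """
    rng = random.Random(seed)
    passed = rng.random() > 0.02          # pretend 2% of scenarios fail
    return seed, passed

if __name__ == "__main__":
    seeds = range(1000)                   # 1,000 simulated test runs
    with cf.ProcessPoolExecutor() as pool:
        results = list(pool.map(run_scenario, seeds))
    failures = [seed for seed, ok in results if not ok]
    print(f"{len(seeds) - len(failures)}/{len(seeds)} scenarios passed")
    if failures:
        print("re-run locally with seeds:", failures[:10])
```

Because each scenario is identified by its seed, a failing run found in the cloud can be replayed deterministically on a developer’s desktop.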

The second big trend, which is currently happening in academia and slowly moving to industry, is AI in robotics. With AI, we always have the problem that training AI models is very sample-inefficient, at least today. If you wanted to train a neural network on a physical robot from the data it can collect in real time, you would have to wait years before it learned anything useful. But if you run it in simulators optimized for this, which can run at 100x, maybe 1,000x, real time, you can learn helpful behavior much more quickly. And with learned models you always have the challenge that they may be unsafe, so you can also generate a lot of scenarios in simulation to test corner-case behavior of your control stack. This applies to traditional control stacks too, of course, and we already see a lot of applications for that, for example, in autonomous driving.
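Editor’s note: the speed gap Lars describes is easy to measure for yourself. This minimal sketch, assuming the open source `gymnasium` package is installed, counts how many simulated control steps a lightweight environment executes in two wall-clock seconds and compares that to a hypothetical 50 Hz physical robot; the environment choice and control rate are illustrative.

```python
import time
import gymnasium as gym

env = gym.make("CartPole-v1")             # lightweight stand-in simulator
obs, info = env.reset(seed=0)

steps, start = 0, time.perf_counter()
while time.perf_counter() - start < 2.0:  # simulate for 2 wall-clock seconds
    action = env.action_space.sample()    # random policy; RL would learn here
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
    steps += 1

sim_hz = steps / 2.0
real_robot_hz = 50.0                      # hypothetical real control rate
print(f"simulated {steps} steps (~{sim_hz:.0f} Hz)")
print(f"~{sim_hz / real_robot_hz:.0f}x faster than a {real_robot_hz:.0f} Hz robot")
```

Even a single-core toy environment like this typically runs orders of magnitude faster than real time, and the gap multiplies again when you run many such environments in parallel.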

Adam Dabrowski: I fully agree with Lars. I will add that I still see a lot of companies that do not appreciate the value of simulation for building robotic use cases. This is still happening, so I think one of the challenges for the advancement of robotics is actually knowing the best practices and understanding which tool is good for what.

It’s important to understand the best modern simulation practices for your particular approach, because simulation serves different goals.

Simulation can be used to quickly prototype a solution. It can be used for validation, CI regression testing, integration testing, and so on. We can use it for synthetic data generation for machine learning, to enrich and enlarge our data sets and improve the performance of our machine learning detectors. And then, very importantly, we can also use it to visualize use cases for customers, to help convince them: “This is what it will look like when I deploy the robots in your environment, and this is the productivity you can expect.” You can work with this as a kind of feedback loop between the real use case and the simulation.
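Editor’s note: the appeal of synthetic data generation is that the simulator knows the ground truth for free. The toy sketch below uses only `numpy` as a stand-in for a real renderer such as Gazebo: it emits images containing a bright square together with pixel-perfect bounding-box labels, the same image-plus-label pairing a real simulator would export at scale, with no human annotation step. The rendering and label format are illustrative.

```python
import json
import numpy as np

rng = np.random.default_rng(42)

def render_sample(size: int = 64) -> tuple[np.ndarray, dict]:
    """Toy 'renderer': one bright square on a noisy background.

    A real simulator would rasterize a full 3D scene; either way the
    ground-truth label comes for free, straight from the scene state.
    """
    img = rng.normal(0.1, 0.05, (size, size)).clip(0, 1)
    w = int(rng.integers(8, 20))                     # object size in pixels
    x = int(rng.integers(0, size - w))
    y = int(rng.integers(0, size - w))
    img[y:y + w, x:x + w] = 0.9                      # the "object"
    label = {"bbox": [x, y, w, w], "class": "box"}   # exact, by construction
    return img, label

dataset = [render_sample() for _ in range(100)]      # 100 labeled samples
with open("labels.json", "w") as f:
    json.dump([label for _, label in dataset], f)
print(f"generated {len(dataset)} images with pixel-perfect labels")
```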

So I think the knowledge of best practices and which tools you can use is something that is still lacking. It’s a challenge.

Royal O’Brien: I think the reality is that we’re looking at a future where the digital and the physical are going to converge, and simulation is going to be the key that keeps you immersed in both. A lot of people don’t appreciate how important the field of robotics truly is, and how it will tie into generative AI and the things they want to see in the future. It’s interesting that things outside the robotics industry have such an influence on it, and vice versa. So, even if you’re not from this industry, you should still find it interesting, because eventually you may get pulled into it one way or another.

What trends and challenges are you seeing in the robotics industry?

This Q&A is an excerpt from an earlier panel discussion. View the live recording of the full discussion here.