Nvidia May Be Lone Rider on Next Big Technology Wave: Robotics
Dell's most forward-looking people spoke about the future at Dell World a few weeks ago. One of the sessions I attended dovetailed with something that appears to be glaringly obvious -- to me, anyway -- which is that robots likely will be the next big technology wave.
I then wandered around to find out what Dell was doing in robotics, and I couldn't find anything. Dell is not alone, as I'm not aware of any of the current leading technology firms doing anything in robotics with one exception: Nvidia.
Nvidia also figured out early that autonomous cars were going to be a thing and largely pivoted from the mobile device efforts that were not going much of anyplace to self-driving cars. It now dominates the important part of that trend -- the brain.
Well, last week Nvidia announced Isaac, which is based on its Jetson platform and targets robotics. Once again, Nvidia has anticipated the future and, in its segment, is largely going it alone.
Applying what it learned developing autonomous vehicles gave the company a huge jump on this segment, and its initial offering looks surprisingly mature as a result. I'll share some observations about what Nvidia's Isaac is going to enable and close with my product of the week: Cinego, a movie-watching solution that provides a big screen experience on your head and actually is damn comfortable.
The Elements of Success
Self-driving cars are basically robots that carry people. They are very advanced, because these robots must be able to deal with a massive variety of changing conditions in real time. Using a blend of cameras and technologies like LIDAR, they must look for and anticipate problems, respond to them in milliseconds, and ensure the safety of the vehicle, passengers, and anyone near the vehicle.
They are far more advanced and faster, in terms of being able to think and make decisions, than most defense systems, most computer systems, and most traffic control systems. They have to be -- otherwise, they wouldn't be safe on the road.
One of the elements Nvidia realized it needed late in the process was the ability to create electronic simulations of various traffic, road and weather conditions, and train the autonomous driving computers at computer speed.
Previously, training had been done at human speed on real roads, which significantly limited the system's learning speed and created potentially life-threatening risks. Training on a virtual system entails little or no risk, so the result of the pivot to simulation was a massive increase in system capabilities.
Nvidia has applied these same tools to Isaac, and the result is that its robotic solution starts out years ahead of where it otherwise might be.
The end result is a robotic intelligence system with much of the power of Nvidia's autonomous vehicle system, giving it the ability to navigate, see and make decisions. Even voice command is built in, given that you largely will interface with an autonomous vehicle using your voice. Autonomous cars can read signs, so the robots based on this technology should be able to read as well.
The Robotic Future of Isaac
Using this system, developers should be able to give the robot the ability to respond to commands, read labels on food packaging and medicine bottles, and perform many of the same tasks as a caregiver over time.
Unlike with a monkey or a dog, should something happen to the robot requiring a replacement, the specific training could be passed on, so that the new robot wouldn't need to be retrained. Able to come when called, recognize danger, and automatically call for help, this emerging generation of care robots could massively reduce the cost of caring for those who have limited mobility.
Applied to a class of cleaning robots, this technology could make the Roombas of today look positively ancient. They would be able to dust, vacuum, mop, clean windows, and potentially even cook food -- initially basic meals like TV dinners. Eventually, they could evolve into full home care providers.
Outside, the robotic lawnmowers of today are very limited, requiring electronic borders and generally bouncing around the lawn like the first-generation Roombas. With this advanced ability to make decisions, the robot could not only make the lawn look better, but also issue alerts about problems, make recommendations about how to fix them, and begin to execute those fixes as its capabilities grow.
Trimming hedges -- and, depending on the model, trees -- as well as doing menial labor like pulling weeds would be well within the platform's capabilities, once trained. I'm thinking shoveling snow or running the snow blower on those frigid winter days could be the killer app in colder climates.
Wrapping Up: Nvidia Is Right Again
It amazes me that Nvidia has been able to do this twice: It anticipated the technology need for autonomous cars, and now it has anticipated the far larger coming wave of robotics.
I don't think we yet realize how the coming wave of robots will change our lives, hopefully for the better. It certainly will be amazing, and with the boost Nvidia got from autonomous cars, the result will come far faster than I think any of us realize.
I just wonder how long it will be before the other tech companies catch on. I'm just looking forward to being able to sleep in and let something else do my winter morning chores.