
Big Tech is pouring big money into the development of autonomous, general-purpose humanoid robots that will rely on AI software to move with a wide range of motion and learn new tasks. Unlike today’s single-purpose robots, the new general-purpose machines are designed in the short term to be employed across a wide range of industries, including manufacturing, logistics, warehousing, retail and even home settings. Longer term, expect humanoid AI robots to join astronauts in space, assuming they don’t replace them completely.

The latest big investment target is a startup called Figure AI. The California-based company has received hundreds of millions of dollars in funding from Amazon founder Jeff Bezos, AI chip powerhouse Nvidia, OpenAI, Microsoft and Samsung, among others, though the investors have remained essentially mum about their moves. Reports estimate Figure AI raised $675 million in its latest funding round, valuing the company at more than $2 billion. About $100 million comes from Bezos, while Microsoft has kicked in $95 million.

For its part, Figure AI is fairly forthcoming about its ambitions. According to the company’s website, the master plan is to build a feature-complete electromechanical robot with hands capable of human-like manipulation that can be integrated into the workforce. The goal is volume production that lowers the cost of an individual unit to an affordable price, and Figure AI expects prices to drop further once the robots become capable of building other robots. For Figure AI, the choice is between millions of different types of robots or one robot with a general interface capable of millions of tasks. One caveat: “We will not place humanoids in military or defense applications.” A skeptic might say: “Good luck with that.”

Figure AI is also specific about what its Figure 01 robot will be like beyond its two hands, two legs and a screen for a face. Figure 01 will stand 5 feet 6 inches tall, weigh about 132 pounds, walk at 1.2 meters per second and lift up to 44 pounds. It will be able to operate for five hours before it needs recharging.

Figure 01 has already been embraced by German carmaker BMW, which says it will deploy the robot at its giant, non-unionized plant in Spartanburg, SC over the next 12 to 24 months, according to Reuters. The robots will be integrated into the manufacturing process with roles in the body shop, sheet metal and warehouse departments. The BMW plant currently employs about 11,000 people.

Figure 01 is likely to have a number of robotic buddies to keep it company. In Norway, a humanoid robot with a butler-like vibe named Neo is under development by an OpenAI-backed startup called 1X Technologies AS. The company received an influx of $100 million earlier this year. Neo is designed with both household and manufacturing tasks in mind.

Also on the humanoid robot roster is Elon Musk’s Optimus, which recently demonstrated an improved walking style and touch sensitivity adroit enough to pick up an egg, and which may ship as early as next year. Amazon is testing a humanoid robot called Digit, built by Agility Robotics, which expects to mass-produce 10,000 units per year at a new “RoboFab” facility in Salem, OR, billed as the world’s first humanoid robotics factory. And Vancouver-based Sanctuary AI is developing a 5-foot-7-inch, 155-pound humanoid robot dubbed Phoenix, powered by an AI system called Carbon.


“We designed Phoenix to be the most sensor-rich and physically capable humanoid ever built and to enable Carbon’s rapidly growing intelligence to perform the broadest set of work tasks possible,” says Geordie Rose, co-founder and CEO of Sanctuary AI. “We see a future where general purpose robots are as ubiquitous as cars.” 

The humanoid models in development come alongside advances in how to train them to operate in the physical world. In January, for example, Google’s DeepMind robotics team unveiled a suite of tools called AutoRT, SARA-RT and RT-Trajectory that leverage large language models (LLMs) and visual language models (VLMs) to help robots navigate and understand their environment. The VLM describes what the robot sees to the LLM, which then suggests manipulation tasks. The tools also speed up decision making with a technique that lowers computational requirements. Working in conjunction, they could respond to a command like “cook us a delicious, healthy meal.”
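To make that division of labor concrete, here is a minimal, hypothetical sketch in Python of the loop described above: a vision-language model turns a camera frame into a scene description, and a language model turns that description plus a high-level command into candidate manipulation tasks. The function names and canned responses are illustrative stand-ins, not DeepMind’s actual APIs.

```python
# Hypothetical sketch of the VLM -> LLM loop the article describes:
# the VLM describes the scene, the LLM proposes manipulation tasks.

from dataclasses import dataclass


@dataclass
class SceneDescription:
    objects: list[str]   # objects the VLM claims to see
    summary: str         # one-sentence description of the scene


def describe_scene_with_vlm(camera_image: bytes) -> SceneDescription:
    """Stand-in for a real VLM call; returns a canned description."""
    return SceneDescription(
        objects=["sponge", "countertop", "coffee mug"],
        summary="A kitchen counter with a mug and a sponge next to the sink.",
    )


def propose_tasks_with_llm(scene: SceneDescription, command: str) -> list[str]:
    """Stand-in for a real LLM call; maps a high-level command plus the
    VLM's scene description to concrete manipulation tasks."""
    return [
        f"pick up the {scene.objects[0]}",
        "wipe the countertop",
        f"place the {scene.objects[2]} in the sink",
    ]


if __name__ == "__main__":
    image = b"<camera frame bytes>"            # placeholder sensor input
    scene = describe_scene_with_vlm(image)     # VLM: what do I see?
    tasks = propose_tasks_with_llm(scene, "tidy the kitchen")  # LLM: what could I do?
    for task in tasks:
        print("candidate task:", task)
```

In a real system the proposal step would be followed by filtering and execution on physical robots; the sketch stops at the proposal stage.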

Another tool the Google team devised is a robot control model (RT-1 or RT-2) that can be used to deploy a small army of robots (20 operating simultaneously, or up to 52 operating individually) to gather training data in novel environments. In a nod to the “Three Laws of Robotics” developed by legendary science fiction author Isaac Asimov, the foremost safety rule is that a robot “may not injure a human being.” Additional guardrails stipulate that no robot attempt tasks involving humans, animals, sharp objects or electrical appliances, and a further safeguard requires all robots to operate within the line of sight of a human supervisor equipped with a physical deactivation switch.
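Those guardrails read like a small rule-based filter layered on top of whatever tasks the language model proposes. The sketch below, again hypothetical Python rather than DeepMind’s implementation, shows one way such a filter could be wired up: blocked keywords approximate the categories the article lists, and execution is refused outright unless a supervisor with a deactivation switch is in the line of sight.

```python
# Hypothetical rule-based guardrail, modeled on the stipulations described above.
# Keyword lists and function names are illustrative only.

BLOCKED_TERMS = {
    "human", "person", "people",       # no tasks involving humans
    "animal", "dog", "cat",            # ... or animals
    "knife", "scissors", "blade",      # ... or sharp objects
    "toaster", "oven", "microwave",    # ... or electrical appliances
}


def task_is_allowed(task: str) -> bool:
    """Reject any proposed task that mentions a blocked category."""
    words = task.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)


def run_task(task: str, supervisor_in_sight: bool, kill_switch_armed: bool) -> str:
    # Refuse to operate at all without a supervisor holding a deactivation switch.
    if not (supervisor_in_sight and kill_switch_armed):
        return "refused: no human supervisor with a deactivation switch"
    if not task_is_allowed(task):
        return f"refused: '{task}' violates a safety guardrail"
    return f"executing: {task}"


if __name__ == "__main__":
    print(run_task("wipe the countertop", True, True))        # allowed
    print(run_task("hand the knife to a person", True, True)) # blocked
```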

For the record, Asimov’s three laws of robotics are: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law; and (3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Sounds like good warranty material.
