
AI agents that play video games will change future robots

Video games have played an important role in the development of AI. Many early demonstrations of machine learning involved teaching computers to play games. Eventually, Google DeepMind's mastery of StarCraft II was seen as evidence that machines could now compete with us in areas where we were previously undisputed champions.

Now games are being used as a test bed for some of the most exciting new areas of AI, including autonomous agents, real-world robotics and maybe even the search for AGI.

At this year's Game Developers Conference, Google's DeepMind AI division demonstrated its research into what it describes as a scalable, instructable, multiworld agent (SIMA).

The idea is to show that machines can learn to navigate the 3D worlds of video game environments. They can then use what they have learned to navigate very different worlds and tasks, each with its own rules, using whatever tools are available to solve problems.

It may sound like child's play, but this research could shape the development of the agentic AI we will use in our work and personal lives. So let's take a look at what it could mean, and whether it could solve the ultimate AI challenge: creating machines that can adapt to any situation, as people can.

Virtual worlds

Video games offer a great environment for training AI, since the variety of tasks and challenges is almost infinite. Importantly, the player usually solves these challenges with a standard set of tools, all accessed through the game controller.

This mirrors the way AI agents tackle problems, selecting which tools to use from a predefined set, as the rough sketch below illustrates.
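
To make the analogy concrete, here is a minimal Python sketch of an agent routing a task to one of a fixed set of tools. All names here (TOOLS, choose_tool, run_agent) are hypothetical illustrations, not from DeepMind's SIMA work, and a real agent would typically use a language model rather than keyword matching for the routing step.

```python
# A minimal, hypothetical sketch of tool selection in an agentic AI:
# a fixed toolkit (the "game controller") plus a routing step that
# decides which tool fits the task at hand.

from typing import Callable

# The predefined set of tools the agent can draw on.
TOOLS: dict[str, Callable[[str], str]] = {
    "search": lambda query: f"results for '{query}'",
    "calculate": lambda expr: str(eval(expr, {"__builtins__": {}})),  # toy use only
    "navigate": lambda place: f"moving towards {place}",
}

def choose_tool(task: str) -> str:
    """Crude keyword routing; a real agent would let a language model decide."""
    if any(op in task for op in "+-*/"):
        return "calculate"
    if task.startswith("go to"):
        return "navigate"
    return "search"

def run_agent(task: str) -> str:
    tool = choose_tool(task)
    argument = task.removeprefix("go to ").strip()
    return TOOLS[tool](argument)

print(run_agent("go to the blue building"))  # moving towards the blue building
print(run_agent("7 * 6"))                    # 42
```

Whatever the task, the agent works through the same fixed interface, just as a player meets every in-game challenge through the same controller.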

Game worlds also offer safe, observable and scalable environments in which the effects of subtle changes to variables or behaviors can be examined at little real-world cost.

DeepMind's SIMA agents were trained in nine different video game environments taken from popular games, including No Man's Sky, Valheim and Goat Simulator. The agents were given the ability to interact with and control the games using natural-language commands, such as "pick up the key" or "go to the blue building".

Among the standout findings, the research showed that the agents are very effective at transferable learning, taking what they learn in one game and using it to improve in another.

This was supported by observations that agents trained on eight of the nine games, then dropped into the one game they had not seen, performed better than specialized agents trained on that game alone.

This dynamic learning ability will be crucially important in a world where agents work alongside us, helping us explore, interpret and understand messy real-world problems and situations.

But what about looking a little further ahead, to a time when it is common for robots to help us with physical as well as digital tasks?

Physical robots

The development of real-world robots that perform physical tasks has accelerated over the past decade, hand in hand with the development of AI. However, they are generally still the preserve of large companies, due to the high cost of training them for specialist tasks.

Using virtual and video game environments can dramatically reduce these costs. The theory is that transferable learning enables physical robots to use their hands, arms or whatever tools they have to tackle many different physical challenges, even ones they have not encountered before.

For example, a robot that learns to use its hands effectively in a warehouse could also learn to use them to build a house.

Before it released ChatGPT, OpenAI showcased research in this area. Dactyl is a robotic hand, trained in simulated virtual environments, that learned how to solve a Rubik's Cube. It was one of the first demonstrations of the potential for skills learned in virtual environments to transfer to complex physical-world tasks.

More recently, Nvidia developed its Isaac platform expressly to train robots in virtual environments to perform real-world tasks.

Today, physical AI-powered robots are being put to work in warehouses, agriculture, healthcare, deliveries and many other jobs. In most cases, however, these robots still carry out only the tasks they were specifically trained for, at enormous cost, for companies with very deep pockets.

But new models of "affordable" robots are on the horizon. Tesla plans to produce thousands of its Optimus robots this year and to put many of them to work in its factories. And Chinese robotics developer Unitree recently unveiled a US$16,000 humanoid robot that can turn its hand to many tasks.

As the price of robots falls and their AI brains grow more powerful by the day, the talking humanoid robots of science fiction may become everyday reality sooner than we think.

Towards AGI?

Almost 30 years ago, machines scored their first big victory over humans by defeating Garry Kasparov at chess. Back then, few would have predicted that a computer would one day be able to beat the world champion not just at one game, but at every game.

This ability to "generalize" information, taking knowledge from one task and applying it to solve a completely different one, comes naturally to humans but not, so far, to machines. That could be about to change.

All of this will be of great interest to those pursuing the holy grail of AI development: artificial general intelligence (AGI).

Evidence that agents such as DeepMind's SIMA can transfer learning from one virtual game environment to another indicates that they may be developing some of the qualities required for AGI. It shows they are gradually building skills that can be applied to solving future problems.

Google, together with OpenAI, Anthropic and Microsoft, has stated that developing AGI is its ultimate goal, and AGI is clearly the logical end point of the current focus on agentic AI. Could video games hold another piece of the puzzle?
