– Google DeepMind researchers have created an AI model, SIMA, that can play multiple 3D games like a human and act on verbal instructions.
– SIMA was trained on video of human gameplay and can perform a variety of actions and recognize objects and interactions.
– The goal is to create a more natural game-playing companion that can adapt and produce emergent behaviors, unlike traditional hard-coded game characters.
Google DeepMind researchers have developed SIMA, a model that can play multiple 3D games and understand and act on verbal instructions, much as a human player would. The model was trained on many hours of video of human gameplay, learning to associate visual representations with actions, objects, and interactions.
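To make that setup concrete, here is a minimal sketch of what such an agent's interface might look like: screen pixels plus a text instruction in, keyboard-and-mouse-style actions out. All of the names below (`Observation`, `Action`, `InstructableAgent`) are illustrative assumptions, not DeepMind's actual API.

```python
from dataclasses import dataclass
from typing import List, Sequence


@dataclass
class Observation:
    """One timestep of agent input: the rendered frame plus the current instruction."""
    frame: Sequence[Sequence[Sequence[float]]]  # H x W x RGB screen pixels (assumed)
    instruction: str                            # e.g. "open the map"


@dataclass
class Action:
    """A low-level control output, analogous to keyboard-and-mouse input."""
    keys: List[str]   # keys held this step, e.g. ["w"]
    mouse_dx: float   # cursor movement since last step
    mouse_dy: float
    click: bool


class InstructableAgent:
    """Maps (pixels, text) -> actions; in SIMA-style training the mapping is
    learned from recorded human play rather than hand-coded rules."""

    def act(self, obs: Observation) -> Action:
        # A trained model would encode obs.frame and obs.instruction and
        # decode an action; this placeholder just idles.
        return Action(keys=[], mouse_dx=0.0, mouse_dy=0.0, click=False)
```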
The model was able to generalize what it learned from one set of games to perform well in games it hadn't been trained on, though some games have unique mechanics that could challenge even the best-prepared AI. The researchers' goal is a more natural game-playing companion that can cooperate with human players and take instructions from them.
SIMA recognizes several dozen actions and can combine them to perform tasks across different games. DeepMind's approach differs from traditional simulator-based training with reinforcement learning, because the games used in this study did not provide a conventional reward signal.
By using imitation learning from human behavior together with text-based goals, SIMA is trained to perform a wide variety of tasks without the limits of a strict reward structure, allowing it to adapt to and learn from a wide range of scenarios.
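As an illustration of this kind of training, the sketch below runs one behavioral-cloning step in PyTorch: a toy policy encodes a frame and a text instruction, and is optimized with cross-entropy against the action the human actually took, with no reward signal anywhere. The architecture, shapes, and data are invented stand-ins, not SIMA's actual implementation.

```python
import torch
import torch.nn as nn

# A toy behavioral-cloning policy (illustrative, not DeepMind's code):
# encode the frame and the instruction, then predict the human's next action.
class BCPolicy(nn.Module):
    def __init__(self, n_actions: int, vocab: int = 10_000, dim: int = 128):
        super().__init__()
        self.vision = nn.Sequential(              # 3x96x96 frame -> dim features
            nn.Conv2d(3, 32, 8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Flatten(), nn.Linear(64 * 10 * 10, dim),
        )
        self.text = nn.EmbeddingBag(vocab, dim)   # bag-of-tokens instruction encoder
        self.head = nn.Linear(2 * dim, n_actions)

    def forward(self, frames, tokens, offsets):
        feats = torch.cat([self.vision(frames), self.text(tokens, offsets)], dim=-1)
        return self.head(feats)

policy = BCPolicy(n_actions=40)                   # "several dozen" actions
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Synthetic stand-ins for one batch of human demonstrations:
frames = torch.rand(8, 3, 96, 96)                 # recorded screen pixels
tokens = torch.randint(0, 10_000, (64,))          # tokenized instructions
offsets = torch.arange(0, 64, 8)                  # 8 instructions, 8 tokens each
human_actions = torch.randint(0, 40, (8,))        # what the player actually pressed

# One imitation-learning step: match the human, no reward signal involved.
logits = policy(frames, tokens, offsets)
loss = nn.functional.cross_entropy(logits, human_actions)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the loss only asks the policy to match recorded human behavior, any game that can log screen frames, instructions, and inputs becomes usable training data, which is how this style of training sidesteps hand-crafted reward functions.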
Other companies are exploring related approaches to AI in games, such as using chatbots to drive NPC conversations and observing how simulated AI agents interact in research experiments. With advancements like SIMA and similar models, the future of AI in gaming looks promising.