Google is pitting AI against AI to see if they work together

Researchers are experimenting with social dilemmas to see how AI agents interact

With the advancements researchers have made so far in the field of artificial intelligence, it's not hard to imagine a future where large swathes of human society, if not all of it, are managed by AI systems. So it's only prudent that, if we're planning to hand over day-to-day operations to multiple different AI systems, we ensure they're capable of working together for the greater good.

Google subsidiary DeepMind is doing just that, essentially locking two AI systems into a social situation to see whether they cooperate or instead sabotage each other to further their individual goals. These experiments use the rules of a social dilemma: an individual can profit from selfishness, but if everyone is selfish, everyone loses.

Both experiments were framed as basic video games that the AI agents played. In the first, called ‘Gathering’, the two agents were tasked with collecting virtual apples from a slowly replenishing central pile. Each agent also had the option to “tag” the other player and temporarily remove them from the game, giving the tagger a precious few moments to collect apples unopposed.
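To make the incentives concrete, here is a minimal Python sketch of Gathering's payoff logic. It is not DeepMind's implementation, which is a full 2D gridworld game; the constants and function names below are invented purely for illustration.

```python
# A toy sketch of the 'Gathering' incentive structure. All names
# and numbers here are hypothetical, not DeepMind's actual values.

APPLE_REWARD = 1    # points for collecting an apple
TAG_TIMEOUT = 25    # steps a tagged player is removed from the game

class Player:
    def __init__(self, name):
        self.name = name
        self.score = 0
        self.frozen_until = 0  # timestep at which the player may act again

    def active(self, step):
        return step >= self.frozen_until

def collect_apple(player, apples_left):
    """Apples are the only source of reward, drawn from a shared pile."""
    if apples_left > 0:
        player.score += APPLE_REWARD
        apples_left -= 1
    return apples_left

def tag(tagger, target, step):
    """Tagging earns nothing directly; its only value is removing a rival
    from the shared pile for a while, which pays off when apples are scarce."""
    if tagger.active(step) and target.active(step):
        target.frozen_until = step + TAG_TIMEOUT
```

The key design choice is that tagging has no reward of its own, so an agent only learns to tag when removing a rival indirectly wins it more apples.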

In the second situation, ‘Wolfpack’, two players have to hunt a third through an arena filled with obstacles. Here, points go not only to the player that makes the capture, but to every player near the prey when it's caught.
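An equally hypothetical sketch of Wolfpack's shared capture reward: every wolf close to the prey at the moment of capture is paid, not just the one that made the catch, so staying near teammates is itself worth points.

```python
# A toy sketch of the 'Wolfpack' reward rule; names and values invented.

CAPTURE_REWARD = 10
CAPTURE_RADIUS = 2.0

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def capture_payout(wolf_positions, prey_position):
    """Return each wolf's reward when the prey is caught: full reward
    to every wolf within the capture radius, nothing to the rest."""
    return {
        name: CAPTURE_REWARD if distance(pos, prey_position) <= CAPTURE_RADIUS else 0
        for name, pos in wolf_positions.items()
    }

# The distant wolf gets nothing, so cooperation (hunting close together)
# is the rewarded strategy.
print(capture_payout({"wolf_a": (0, 0), "wolf_b": (1, 1), "loner": (9, 9)},
                     prey_position=(0, 1)))
```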

The researchers found that the two agents chose at times to work together and at other times to turn on each other, depending on the situation. In the apple-gathering game, for instance, both agents started out collecting apples and essentially ignoring each other. As the apple supply dwindled, however, they increasingly took to tagging each other out of the game in order to grab more apples for themselves. And when a more computationally powerful agent was introduced, it began zapping the other player regardless of the size of the stockpile.

So, does that mean the smarter AI decided that going on the offensive was the best strategy? The researchers suggest that because tagging requires more computation, eating into valuable apple-gathering time, the smarter agent could afford to fire first, knowing it could handle the extra work without falling behind on resource gathering. The less capable agents were instead happy to cooperate when that was the easier option.

Conversely, in the Wolfpack scenario, the smarter the AI, the more likely it was to cooperate with other players, since learning to work together to capture the prey also demanded more computing power.

These experiments cast light on the unique challenge AI developers will face in the years to come: agents change their behaviour based on the rules they face. If an AI is rewarded for being aggressive towards other systems (zap another player to get more apples for yourself), it will adopt that behaviour as soon as it can. If it is rewarded for cooperating (everyone near the prey gets points when it's captured), it will work with other systems instead. Developers therefore have to design these reward structures with care. An unchecked AI facing limited resources might be prone to prioritising itself above other systems, whatever the overall consequences. And if multiple AI systems are jointly running something like a country's electric grid, economy, or traffic networks, in-fighting could have disastrous consequences.
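As a rough illustration of that design point, here is a toy learner (invented for this piece, not DeepMind's training setup) that converges on whichever action its reward table pays more for. The action names and payoffs are made up.

```python
import random

# Bandit-style learning: keep a running value estimate per action
# and mostly pick the highest-valued one. The same code turns
# "aggressive" or "cooperative" depending only on the reward rules.

def learn_best_action(rewards, steps=2000, eps=0.1):
    values = {a: 0.0 for a in rewards}
    counts = {a: 0 for a in rewards}
    for _ in range(steps):
        if random.random() < eps:
            action = random.choice(list(rewards))  # occasional exploration
        else:
            action = max(values, key=values.get)   # otherwise exploit
        counts[action] += 1
        # incremental running mean of the observed reward
        values[action] += (rewards[action] - values[action]) / counts[action]
    return max(values, key=values.get)

# Pay aggression more and the agent learns to zap; pay cooperation
# more and the very same learner hunts together instead.
print(learn_best_action({"zap_rival": 2.0, "share_apples": 1.0}))    # zap_rival
print(learn_best_action({"zap_rival": 1.0, "hunt_together": 3.0}))   # hunt_together
```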

