How a Challenging Game Made Waves in the World of AI

Since the beginning of the computer age, games have served as a vital test for assessing how effectively machines learn and make complex decisions. In recent years, automated agents have learned quickly, surpassing humans at challenging games like Atari titles, Go, and variations of poker.

Checkers, chess, and backgammon were the precursors of these game domains, providing complicated yet well-defined challenges for AI practitioners. AI has made impressive progress because these games have a defined set of rules to follow.

Hanabi is a game of memory and strategy, and it has become something of a holy grail for AI researchers: a constantly evolving puzzle that has proved far more challenging than games like poker or blackjack.

Once AI masters Hanabi, it will raise the bar for future benchmarking projects.

What is Hanabi?

The award-winning game was created in 2010 by Antoine Bauza for French board game publisher Asmodée Éditions. Since 2018, Asmodée Éditions has been the second most prominent board game publisher globally, possibly thanks in part to Hanabi.

Hanabi is Japanese for fireworks, and if you want to play a table game, there’s no better option. Hanabi is a unique game in which players cannot see their own cards but can see the other players’ hands. Players must then play cards in a specific order to simulate a fireworks show. It sounds easy, but of course, it’s not.

Information passed between players is limited, making this a skilled game of memory and interpretation.
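The inverted visibility described above can be sketched in a few lines of code. This is a minimal illustration of the idea only; the class names, card values, and the `visible_to` helper are my own, not drawn from any official Hanabi implementation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Card:
    color: str  # e.g. "red"
    rank: int   # 1 through 5

def visible_to(player: int, hands: List[List[Card]]) -> List[List[Card]]:
    """Each player sees every hand except their own."""
    return [hand if i != player else [] for i, hand in enumerate(hands)]

hands = [
    [Card("red", 1), Card("blue", 3)],   # player 0's hand, hidden from player 0
    [Card("green", 2), Card("red", 4)],  # player 1's hand, hidden from player 1
]

# Player 0 sees player 1's cards but not their own:
view = visible_to(0, hands)
print(view[0])  # [] -- own hand is hidden
print(view[1])  # player 1's full hand
```

The point of the sketch is that every player works from a different partial view of the same state, which is exactly what makes the limited hints so valuable.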

Why is Hanabi Important to AI?

Limited information is a developer’s nightmare. After all, as much as we think AI is taking over, it is only as good as the computer scientists who write the programs.

Because the amount of information given to each player in Hanabi is limited, players must pick up on implied information such as body language and the intent behind hints. The challenges presented by Hanabi were therefore like no other, and developers chose Hanabi to train AI and expand their techniques.

Additionally, to create the best program possible, an AI must learn to convey maximum information through its gameplay to help the other players and, in turn, expand its repertoire.

The key for AI researchers is to navigate the muddy waters of the human mindset and an ever-changing environment of imperfect reasoning. Do that, and you have cracked the code.

New Algorithms Required

Google commissioned a white paper, published via ScienceDirect, whose contributors agree that Hanabi’s cooperative objective distinguishes it from other games. The complex mix of partial observability and collaborative incentives poses significant problems for AI in learning policies and communicating with others.

Rather than flowing through a separate channel, communication in Hanabi is interleaved with actions in the environment itself. Moreover, Antoine Bauza designed Hanabi’s coordination and communication problems specifically to challenge human players.

Computer scientists will require a new generation of algorithms to address these issues, drawing on game theory, reinforcement learning, and emergent communication. Emergent communication is the study of how communication arises between AI agents in collaborative settings, building on the work of multiple AI sub-fields.
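Emergent communication can be demonstrated with a toy example. The sketch below is my own minimal construction, not code from the research: a "speaker" agent sees a hidden state and sends one of two signals, a "listener" agent sees only the signal and must guess the state, and both are rewarded when the guess is correct. Simple tabular score updates are enough for a shared signaling convention to emerge.

```python
import random

random.seed(0)
STATES, SIGNALS, ACTIONS = 2, 2, 2

# Preference tables: speaker[state][signal], listener[signal][action]
speaker = [[0.0] * SIGNALS for _ in range(STATES)]
listener = [[0.0] * ACTIONS for _ in range(SIGNALS)]

def choose(prefs, eps=0.1):
    """Epsilon-greedy choice over a row of preference scores."""
    if random.random() < eps:
        return random.randrange(len(prefs))
    return max(range(len(prefs)), key=lambda i: prefs[i])

for _ in range(5000):
    state = random.randrange(STATES)
    signal = choose(speaker[state])      # speaker picks a signal
    action = choose(listener[signal])    # listener acts on the signal alone
    reward = 1.0 if action == state else -0.1
    speaker[state][signal] += reward     # both agents share the reward
    listener[signal][action] += reward

# After training, greedy play should decode every state correctly:
correct = 0
for state in range(STATES):
    signal = max(range(SIGNALS), key=lambda s: speaker[state][s])
    action = max(range(ACTIONS), key=lambda a: listener[signal][a])
    correct += (action == state)
print(f"states decoded correctly: {correct}/{STATES}")
```

Neither agent is told what the signals "mean"; the convention emerges purely from the shared reward, which is the core idea behind the emergent-communication work Hanabi motivates.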

Facebook and Google Agree On One Thing

Facebook and Google may be competitors in the race for AI technology, but they agree that Hanabi is a benchmark for testing whether algorithms can learn other agents’ rules. Both companies know that a complete understanding of how others develop their strategies would push artificial intelligence forward.

Both are working on techniques for situations where planning and execution must be centrally coordinated, for instance, robots operating as a team in a factory, or self-driving vehicles, while holding to the general principle that one agent’s perception alone will probably not be accurate.

This means the agents need to collaborate and adjust their actions based on others’ behavior, using what’s known as “the wisdom of crowds.”

Examining how search performs given an imperfect model of one’s partner, and making search more resistant to faults in that model, remain works in progress for any AI model.

The mix of cooperative gameplay and imprecise knowledge makes Hanabi an exciting research problem for machine learning in multi-agent settings. Researchers have analyzed the most up-to-date reinforcement learning techniques using deep neural networks and demonstrated that they are largely insufficient even to exceed existing hand-coded bots when assessed in a self-play setting, which shows how much room remains to upgrade the algorithms.

In addition, developers have shown that similar approaches do not work in ad-hoc teams, where agents play with unknown partners. Learning to play Hanabi well draws on the human theory of mind.

A better understanding of the role that theory-of-mind reasoning plays could therefore benefit AI systems. Advances in both self-play learning and adapting to unfamiliar partners will enhance agents’ ability to work with others, including humans.

Although AI is learning fast, we are still far away from being able to replace human input.