Nine Men's Morris

Each year, as part of the Fundamentals of Artificial Intelligence M course, groups of students are invited to design a small prototype of an intelligent agent able to play the game of Nine Men's Morris, a popular board game also known as Mill, Cowboy Checkers, Merrils, Mulino, Mulinello Grisia, Tris, or Filetto.

This challenge has made it possible to acquire data, which have been used for:

 

For any questions, feel free to contact professor Paola Mello (paola.mello *at* unibo.it), engineer Federico Chesani (federico.chesani *at* unibo.it) or Andrea Galassi (a.galassi *at* unibo.it). 

 

Useful resources

 

Nine Men's Morris Good Moves Dataset

Brief history of these datasets

These datasets are collections of states of the game Nine Men's Morris, created for the purpose of applying machine learning to train software to play the game and to test its performance.

These datasets have been created by Andrea Galassi as part of his master's thesis in Computer Science Engineering ("Ingegneria Informatica" in Italian) and of a subsequent work.
Please feel free to contact him (a.galassi *at* unibo.it) or his thesis supervisors (Paola Mello, paola.mello *at* unibo.it, Federico Chesani federico.chesani *at* unibo.it) for further questions.

For further information, the thesis is available here: Symbolic versus sub-symbolic approaches: a case study on training Deep Networks to play Nine Men’s Morris game.

 

Good Moves Dataset (Matches Dataset)

The dataset consists of 100,154 game states, each paired with a good move computed by an Artificial Intelligence for the game of Nine Men's Morris.

No state in the dataset is symmetric to any other, so users can handle the symmetries as they prefer.
If all the symmetric states are expanded, the dataset grows to 1,628,673 pairs.
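As an illustration, symmetry expansion can be sketched as follows. The sketch assumes boards encoded as 24-character strings (three rings of 8 positions each, listed clockwise); this layout is an assumption for illustration, not the dataset's actual format.

```python
# Assumed layout: positions 0-7 outer ring, 8-15 middle, 16-23 inner,
# each ring listed clockwise. This is illustrative, not the dataset's format.

def rotate(board):
    """Rotate the board by 90 degrees: shift each ring by two positions."""
    return "".join(board[r * 8 + (i + 2) % 8] for r in range(3) for i in range(8))

def mirror(board):
    """Reflect the board: reverse the clockwise ordering of each ring."""
    return "".join(board[r * 8 + (-i) % 8] for r in range(3) for i in range(8))

def swap_rings(board):
    """Swap the inner and outer rings, a symmetry specific to this board."""
    return board[16:24] + board[8:16] + board[0:8]

def symmetries(board):
    """All boards equivalent to `board` under rotations, mirroring and ring swap."""
    result = set()
    for b in (board, swap_rings(board)):
        for m in (b, mirror(b)):
            for _ in range(4):
                result.add(m)
                m = rotate(m)
    return result

board = "W" + "O" * 22 + "B"      # one white and one black checker, rest empty
print(len(symmetries(board)))     # at most 16 distinct equivalent states
```

Each state thus has up to 16 equivalent variants, which is consistent with the expanded dataset being roughly an order of magnitude larger than the canonical one.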

The dataset contains states both reachable and unreachable during a normal match, which decreases the probability of encountering a training state during a testing match. The moves it contains may differ from the optimal ones; nevertheless, it constitutes a good knowledge base from which other AI systems can learn to play the game.

All the data have been generated by having an Artificial Intelligence called Deep Mill play against other artificial intelligences and gathering the choices made by Deep Mill during the games.

Three versions of the dataset are available:

 

Reachable States Dataset

The dataset consists of 2,085,613 states which are reachable through a finite sequence of legal moves starting from the initial empty board configuration. It has been generated by exploring the space of game states, applying random moves from already reachable configurations.

None of the states contained in this dataset is present in the Good Moves Dataset.

 

Data format

The dataset does not distinguish between black and white checkers, but only between the player's checkers and the enemy's. An entry of the dataset consists of a string of 31 to 35 characters:
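The player-relative convention itself can be illustrated with a short sketch. The characters used here ('M' for the player's checkers, 'E' for the enemy's, 'O' for empty) are hypothetical placeholders, not necessarily the dataset's actual encoding.

```python
# Hypothetical illustration of the player-relative convention: colour-based
# 'W'/'B' checkers are mapped to 'M' (mine, the player to move) and 'E'
# (enemy), so mirrored situations produce identical strings. The characters
# and layout here are assumptions, not the dataset's actual encoding.

def to_player_relative(board, to_move):
    """Map a colour-based board string to a player-relative one."""
    enemy = "B" if to_move == "W" else "W"
    return "".join("M" if c == to_move else "E" if c == enemy else c
                   for c in board)

print(to_player_relative("WOB", "W"))  # -> MOE
print(to_player_relative("BOW", "B"))  # -> MOE (the same relative situation)
```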

 

NNMM: Neural Nine Men's Morris

This dataset has been used to train a system of neural networks called Neural Nine Men's Morris to play the game, without providing it with symbolic knowledge about the game rules.
After training, the system has been tested on the whole expanded dataset (therefore considering every symmetric state), with the following outcome:

 

For more information about neural networks, NNMM and its testing, see Neural Nine Men's Morris and Symbolic versus sub-symbolic approaches: a case study on training Deep Networks to play Nine Men’s Morris game.

 

Useful resources

 

Neural Nine Men's Morris

NNMM logo

Neural Nine Men's Morris (NNMM) is a software system capable of building, training, testing and using Deep Neural Networks to play the game of Nine Men's Morris.
It has been demonstrated that the system is capable of learning to play by the rules of the game, even though knowledge of those rules was never provided to it.

It has been designed by Andrea Galassi as part of his master's thesis in Computer Science Engineering ("Ingegneria Informatica" in Italian), and it has been improved as part of a subsequent work.
Please feel free to contact him (a.galassi *at* unibo.it) or his thesis supervisors (Paola Mello, paola.mello *at* unibo.it, Federico Chesani federico.chesani *at* unibo.it) for further questions.

For further information, the thesis is available here: Symbolic versus sub-symbolic approaches: a case study on training Deep Networks to play Nine Men’s Morris game.

 

Software and implementation

The software is under the MIT License and is available here: NNMM.

The system is written in Python, relying on the Lasagne, Theano and NumPy libraries.

 

NNMM architecture

System Architecture

NNMM relies on three neural networks to decide the move to make. Each network is dedicated to choosing one of the three parts of the final move.

These parts are:

  • TO (T): where the checker will be placed
  • FROM (F): which checker will be moved
  • REMOVE (R): which of the adversary's checkers will be removed.

The system is designed to allow any ordering of the three decisions. The image depicts the system workflow with the TFR configuration.
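The chained workflow can be sketched as follows. The 24-value state encoding and the random stub networks are placeholder assumptions standing in for NNMM's real input format and trained models; the real system also has to deal with illegal choices.

```python
import random

# Sketch of the TFR decision pipeline: each "network" maps the game state
# (plus the decisions already taken) to a score for each of the 24 board
# positions. The stub networks are random linear scorers standing in for
# NNMM's trained models; the 24-value state encoding is also an assumption.

random.seed(0)

def stub_network(n_inputs):
    """Stand-in for a trained network: random linear weights -> 24 scores."""
    w = [[random.uniform(-1, 1) for _ in range(24)] for _ in range(n_inputs)]
    return lambda x: [sum(xi * wi[j] for xi, wi in zip(x, w)) for j in range(24)]

N_STATE = 24                              # assumed size of the state encoding
net_to = stub_network(N_STATE)            # chooses the TO position
net_from = stub_network(N_STATE + 24)     # sees state + chosen TO
net_remove = stub_network(N_STATE + 48)   # sees state + TO + FROM

def one_hot(i):
    return [1.0 if j == i else 0.0 for j in range(24)]

def argmax(scores):
    return max(range(len(scores)), key=scores.__getitem__)

def decide_move_tfr(state):
    """Chain the three decisions in TO -> FROM -> REMOVE order."""
    to = argmax(net_to(state))
    frm = argmax(net_from(state + one_hot(to)))
    rem = argmax(net_remove(state + one_hot(to) + one_hot(frm)))
    return to, frm, rem

state = [1.0] * 9 + [0.0] * 15            # an arbitrary toy state encoding
print(decide_move_tfr(state))             # three indices in 0..23
```

The same chaining works for any ordering of the three networks, which is why configurations such as FTR or RFT are possible.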

 

Training and Testing results

NNMM has proved able to learn to play by the game rules without having any knowledge of them.

Three configurations of the system (TFR, FTR and RFT) have been trained on the Matches Dataset and subsequently tested on the same dataset and on the Reachable States Dataset. Both datasets are available here: Nine Men's Morris Good Moves Dataset.

The accuracy scores on the Matches Dataset were about 37%.

All the configurations, tested on both datasets, were able to suggest a legal move (a move that follows the rules of the game) in more than 99% of cases.

The networks used for these tests are available for download: TFR, FTR, RFT.

 

Useful resources

 

Nine Men's Morris Students Challenge

Each year, as part of the Fundamentals of Artificial Intelligence M course, groups of students are invited to design a small prototype of an Artificial Intelligence able to play the game of Nine Men's Morris, a popular board game also known as Mill, Cowboy Checkers, Merrils, Mulino, Mulinello Grisia, Tris, or Filetto.

 

This experience has led to:

This research has been presented at the XVIth International Conference of the Italian Association for Artificial Intelligence: A Game-Based Competition as Instrument for Teaching Artificial Intelligence.

 

For any questions, feel free to contact professor Paola Mello (paola.mello *at* unibo.it), engineer Federico Chesani (federico.chesani *at* unibo.it) or Andrea Galassi (a.galassi *at* unibo.it).

 

2016/2017 Challenge

Slides of the current challenge (in Italian): 2017 slides

 

Nine Men's Morris Good Moves Dataset

Thanks to this competition, it has been possible to create a dataset of good moves for this game, which can be freely used for any machine learning approach: dataset page.

 

Acknowledgements

Thanks to every student who has participated in this competition:

Arianna Zanetti, Alessio Paccoia, Mirco Nani, Alessio Tonioni, Francesco Burelli, Antonio Murgia, Francesco Lo Cascio, Mirko Collura, Costantino Casaburi, Silvio Olivastri, Andrea Battaglia, Andrea Zanotti, Pierluigi Zama Ramirez, Niccolò Aimi, Francesco Giorgianni, Massimiliano La Ferrara, Angelo Maglione, Raffaele Rotella, Alessio Durinzi, Stefano Nicoletti, Alessandro Di Cesare, Filippo Franzoni, Tommaso Madonia, Andrea Galassi, Francesco Ricci, Cosimo Scarcella, Leo Gioia, Amedeo Palopoli, Alfonso Solimeo, Alessandro Vadruccio, Paolo Sarti, Marco Di Vincenzo, Marcello Ballanti, Dario Alise, Giuseppe Di Marzo, Federico Ruggeri, Dario Pasquali, Sara Bevilacqua, Nicola Lazzari, Andrea Rossetto, Nicola Ghiselli, Michele Gelsomini, Giocanni Grammatico, Alessandro Celi, Matteo Carano, Daniele Pettinari, Leonardo Mora, Leonardo Baroncelli, Francesco Calabria, Kevin Zaini, Alex Casadio, Simone Nigro, Nikos Thomopulos, Boschini Matteo, Silvestri Mattia, Guardati Simone, Buratti Luca, Gianessi Mattia, Berlati Alessandro, Corni Gabriele, Andraghetti Lorenzo, Bombino Andrea, Di Felice Marco, Damini Enrico, Stivala Giada, Cinesi Andrea, Neri Giacomo, Paternesi Claudio, Foti Daniele, Alessi Nicola, Zirondelli Alberto, Buscaroli Riccardo.

2016/2017 Challenge Results

Winner

The winner of the 2016/2017 competition, with 9 wins and 5 draws out of 14 matches, is Akatsuki.

A summary of the competition results can be found in the attached file "2017end". The detailed results are presented in the file "2017results", while the output of each match is available in the file "2017matches".

 

Participant Teams

Any student who has participated is welcome to report any inaccuracy to Andrea Galassi (a.galassi *at* unibo.it).

Team 1: MoulinBleu

Negascout algorithm.

State as a 3D byte matrix.

Use of a hash table (Zobrist hashing).

Heuristic: number of open mills and number of checkers.
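As a generic sketch (not the team's actual implementation), Zobrist hashing assigns a random key to every (position, checker colour) pair and XORs together the keys of occupied positions, so a move updates the hash incrementally instead of rehashing the whole board:

```python
import random

# Minimal, generic Zobrist hashing sketch for a 24-position board with two
# checker colours; details (key width, board encoding) are assumptions, not
# MoulinBleu's actual implementation.

random.seed(42)
# One random 64-bit key per (position, colour) pair.
ZOBRIST = [[random.getrandbits(64) for _ in range(2)] for _ in range(24)]

def zobrist_hash(board):
    """board[i] is None (empty), 0 (white) or 1 (black)."""
    h = 0
    for pos, colour in enumerate(board):
        if colour is not None:
            h ^= ZOBRIST[pos][colour]
    return h

def toggle(h, pos, colour):
    """Incrementally add/remove a checker (XOR is its own inverse)."""
    return h ^ ZOBRIST[pos][colour]

h = zobrist_hash([None] * 24)    # 0 for the empty board
h = toggle(h, 5, 0)              # place a white checker on position 5
assert h == zobrist_hash([0 if i == 5 else None for i in range(24)])
h = toggle(h, 5, 0)              # remove it again
assert h == 0
```

The resulting 64-bit value is typically reduced modulo the transposition-table size to obtain an index.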

 

Team 2: Akatsuki

Heuristic inspired by the Petcu-Holban one, but also taking the opponent's situation into account.

 

Team 3: EUropean Genius

State as a 3x8 matrix. 

The heuristic function changes according to the moment of the game (not only the phase, but also sub-phases within a phase).

 

Team 4: Benchwarmers

Iterative Deepening Search.

The heuristic function takes into account the opponent's checkers.

 

Team 5: MOLINARO

Negascout search, using the Peter Brook library.

Iterative Deepening.

Heuristic inspired by Petcu-Holban, with further parameter tuning.

 

Team 6: Mulino Bianco

State as a 64-bit long integer.

Symmetries considered to limit the generation of possible actions.

Iterative Deepening with a different search algorithm for each phase.

Use of transposition table.

Heuristic function parameters tuned using genetic algorithms.
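A 64-bit packing of the state can be sketched as follows. The bit layout here (bits 0-23 for white checkers, bits 24-47 for black, the remaining bits free for counters) is an assumption for illustration; Mulino Bianco's actual layout may differ.

```python
# Hypothetical sketch of packing a Nine Men's Morris board into one 64-bit
# integer: bits 0-23 mark white checkers, bits 24-47 mark black checkers,
# and the top bits remain free for counters. Not the team's actual layout.

WHITE_SHIFT, BLACK_SHIFT = 0, 24

def place(state, pos, colour):
    """Set the bit for `colour` (0=white, 1=black) at board position `pos`."""
    shift = WHITE_SHIFT if colour == 0 else BLACK_SHIFT
    return state | (1 << (shift + pos))

def occupied(state, pos):
    """True if either colour has a checker on `pos`."""
    return bool(((state >> pos) | (state >> (BLACK_SHIFT + pos))) & 1)

s = 0
s = place(s, 3, 0)   # white on position 3
s = place(s, 7, 1)   # black on position 7
assert occupied(s, 3) and occupied(s, 7) and not occupied(s, 0)
```

Such a representation keeps state copies cheap and lets mill patterns be tested with a single bitwise AND against precomputed masks.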

 

Team 7: Jar Jar

State as two 4-byte integers.

Use of four different heuristics according to the game phase.

Use of Best Node Search.

 

Team 8: bejoke

Heuristic function based on Fibonacci weights.

Attachments:
  • 2017matches.zip (84.49 KB)
  • 2017results.xls (49.5 KB)
  • 2017end.pdf (1.64 MB)

2015/2016 Challenge Results

Winner

The winner of the 2015/2016 competition, with 14 wins out of 18, is called Samaritan.

 

Participant Teams

Any student who has participated is welcome to report any inaccuracy to Andrea Galassi (andrea.galassi7 *at* unibo.it).

Team 1: Alphabot

Binary representation of the board using two integers.

Iterative deepening search with alpha-beta pruning.

Multi-threaded idle-time searching.

The heuristic function is based on the value of the player's checkers.

Result achieved: 10/18 wins.
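A generic sketch of iterative deepening on top of alpha-beta pruning, in the spirit of this approach, is shown below; the game-specific parts (`moves`, `apply_move`, `evaluate`) are placeholders, not Alphabot's actual code.

```python
import math
import time

# Generic iterative-deepening alpha-beta sketch. The `moves`, `apply_move`
# and `evaluate` callbacks stand in for the game-specific engine parts.

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    succ = moves(state)
    if depth == 0 or not succ:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for m in succ:
            value = max(value, alphabeta(apply_move(state, m), depth - 1, alpha,
                                         beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:    # beta cutoff: the minimizer avoids this line
                break
        return value
    value = math.inf
    for m in succ:
        value = min(value, alphabeta(apply_move(state, m), depth - 1, alpha,
                                     beta, True, moves, apply_move, evaluate))
        beta = min(beta, value)
        if alpha >= beta:        # alpha cutoff
            break
    return value

def iterative_deepening(state, time_limit, moves, apply_move, evaluate):
    """Re-run the search at depth 1, 2, 3, ... until the time budget expires,
    keeping the best move found at the last completed depth."""
    best, depth = None, 1
    deadline = time.monotonic() + time_limit
    while time.monotonic() < deadline:
        best = max(moves(state),
                   key=lambda m: alphabeta(apply_move(state, m), depth - 1,
                                           -math.inf, math.inf, False,
                                           moves, apply_move, evaluate))
        depth += 1
    return best
```

Iterative deepening turns a fixed-depth search into an anytime algorithm: whenever the clock runs out, the result of the deepest completed iteration is used.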

Team 2: BotQuixote

State represented as arrays of bytes.

Performs a state space search with the Negamax algorithm and iterative deepening.

Uses a revised "Petcu-Holban" heuristic function and relies on a transposition table to speed up the search.

Idle-time searching.

The project is available here (courtesy of the authors).

Team members: Ballanti Marcello, Di Vincenzo Marco and Sarti Paolo.

Result achieved: 10/18 wins.

Team 3: Samaritan

Board state represented as two integers.

MTD(f) search algorithm with alpha-beta pruning and transposition tables.

The heuristic function considers many aspects of the board with different weights. The weights have been determined through genetic algorithms.

Iterative deepening search.

Result achieved: 14/18 wins.
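For reference, the MTD(f) driver itself is compact. The sketch below is generic rather than Samaritan's actual code: the `search` callback stands for any alpha-beta routine, and sharing a transposition table across the repeated probes is what makes the method efficient in practice.

```python
import math

# Generic MTD(f) driver: repeated zero-window alpha-beta probes converge on
# the minimax value. `search(root, depth, alpha, beta)` is a placeholder for
# the engine's alpha-beta function (ideally with a transposition table).

def mtdf(root, depth, first_guess, search):
    g = first_guess
    lower, upper = -math.inf, math.inf
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = search(root, depth, beta - 1, beta)   # zero-window probe
        if g < beta:
            upper = g   # the probe failed low: g is an upper bound
        else:
            lower = g   # the probe failed high: g is a lower bound
    return g

# Toy usage: a stand-in search that always returns the true value 3.
oracle = lambda root, depth, alpha, beta: 3
print(mtdf(None, 4, 0, oracle))   # converges to 3
```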

Team 4: Cogito ergo Expando

Board state represented as an integer array.

Searches the state space with the Negamax algorithm.

Transposition table to speed up the search.

Idle-time search based on prediction of the opponent's move.

Iterative search and a static state representation to improve memory usage.

Result achieved: 1/18 wins.

Team 5: Mulinator

The project is available here (courtesy of the authors).

Team members: Lazzari Nicola and Rossetto Andrea

Result achieved: 1/18 wins.

Team 6: TI GUSTA LA MANGUSTA?!

Alpha-beta search with Iterative Deepening.

Heuristic function based on the mill possibilities of the two players.

Result achieved: 8/18 wins.

Team 7: DEEPLELE

Use of the AIMA library.

It performs an alpha-beta search that can be stopped with a timeout.

Parametric heuristic functions that change according to the game phase.

Result achieved: 12/18 wins.

Team 8: Cook Iothin

Alpha-beta search with Iterative deepening.

Random choice of the next move during exploration.

The project is available here (courtesy of the authors).

Team members: Baroncelli Leonardo, Calabria Francesco and Zaini Kevin.

Result achieved: 9/18 wins.

Team 9: BarbaMill

Board represented as two integers.

Alpha-beta search with iterative deepening and move sorting.

Idle-time search.

Members: Casadio Alex, Nigro Simone.

The project is available here (courtesy of the authors).

Result achieved: 9/18 wins.

Team 10: La gallina Rosita

Negamax search.

Parametric heuristic function with different weights for each phase.

Use of enumerated types instead of strings for the representation of the state.

To avoid memory limits, a single game state is maintained and modified during the search.

The project is available here (courtesy of the authors).

Team members: Thomopulos Nikos

Result achieved: 11/18 wins.

Attachments:
  • risultati15-16.zip (74.61 KB)

2014/2015 Challenge Results

Winner

The winner of the 2014/2015 competition, with 12 wins out of 12, is called DeepMill and is available on GitHub.

 

Participant Teams

Any student who has participated is welcome to report any inaccuracy to Andrea Galassi (andrea.galassi7 *at* unibo.it).

Team 1: negaGlik

Project written in C.

Uses the NegaScout algorithm to perform a state space search.

Focuses on limiting the number of memory accesses through static data structures and on limiting the occupied space.

The heuristic function considers many elements, giving each one a different weight according to the game phase.

Result achieved: 10/12 wins.
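A generic NegaScout (principal variation search) sketch illustrates the null-window idea; the game-specific callbacks are placeholders, not negaGlik's C implementation. The first child is searched with a full window, later children with a null window, re-searching only when a probe fails high.

```python
import math

# Generic NegaScout sketch in negamax form. `moves`, `apply_move` and
# `evaluate` are placeholders for the game-specific parts; `evaluate`
# scores a state from the perspective of the side to move.

def negascout(state, depth, alpha, beta, moves, apply_move, evaluate):
    succ = moves(state)
    if depth == 0 or not succ:
        return evaluate(state)
    for i, m in enumerate(succ):
        child = apply_move(state, m)
        if i == 0:
            score = -negascout(child, depth - 1, -beta, -alpha,
                               moves, apply_move, evaluate)
        else:
            # null-window probe around alpha
            score = -negascout(child, depth - 1, -alpha - 1, -alpha,
                               moves, apply_move, evaluate)
            if alpha < score < beta:
                # probe failed high: re-search with a full window
                score = -negascout(child, depth - 1, -beta, -score,
                                   moves, apply_move, evaluate)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                # beta cutoff
    return alpha

# Toy usage on a 2-ply game tree.
tree = [[3, 5], [2, 9]]
moves = lambda s: list(range(len(s))) if isinstance(s, list) else []
apply_move = lambda s, m: s[m]
evaluate = lambda s: s if isinstance(s, int) else 0
print(negascout(tree, 2, -math.inf, math.inf, moves, apply_move, evaluate))  # 3
```

With good move ordering most null-window probes fail low immediately, which is why NegaScout usually visits fewer nodes than plain alpha-beta.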

Team 2: Squadra Ginew

Project written in Java.

Performs a state space search with Iterative Deepening and the alpha-beta algorithm.

Members: Aimi Niccolò, Zama Ramirez Pierluigi, Zanotti Andrea.

The project is available here (courtesy of the authors).

Result achieved: 4/12 wins.

Team 3: Nexus

Project written in Java.

Result achieved: 1/12 wins.

Team 4: Duniro

Project written in Java.

The board representation contains a lot of information; it is based upon "positions" linked to each other.

Performs a state space search using Limited Iterative Deepening and the alpha-beta algorithm.

The heuristic function is slightly different for each phase and analyzes many factors, among which are the number of completed and almost completed mills.

Result achieved: 4/12 wins.

Team 5: DeepMill

Project written in Java.

Compact representation of the state: two binary representations of the board (two 32-bit integers) and two integer counters for the number of checkers in the players' hands.

A state space search is performed using Iterative Deepening and the NegaScout algorithm (an improvement of alpha-beta search).

Uses a different heuristic function for each phase, considering the number of blocked checkers, the number of mills present and other factors, focusing on blocking the adversary's checkers.

Uses a transposition table: during the search, each visited state is saved in a hash map which exploits some symmetries of the problem to evaluate the heuristic faster and therefore speed up the search.

The original project is available here (courtesy of the authors).

The updated project is available at Github.

Members: Madonia Tommaso, Di Cesare Alessandro and Franzoni Andrea.

Result achieved: 12/12 wins.

Team 6: UgoBugo

Project written in Java.

Uses the NegaScout algorithm to perform the search in the state space.

Members: Galassi Andrea.

Result achieved: 4/12 wins.

Team 7: Unknown

Project written in Java.

Members: Scarcella Cosimo and Ricci Francesco.

The project is available here (courtesy of the authors).

Result achieved: 7/12 wins.

 

 

Attachments:
  • risultati1415.zip (35.99 KB)

A Game-Based Competition as Instrument for Teaching Artificial Intelligence

Title: A Game-Based Competition as Instrument for Teaching Artificial Intelligence
Publication Type: Conference Proceedings
Year of Conference: 2017
Authors: Chesani, F., A. Galassi, P. Mello, and G. Trisolini
Editors: Esposito, F., R. Basili, S. Ferilli, and F. A. Lisi
Conference Name: AI*IA 2017 Advances in Artificial Intelligence: XVIth International Conference of the Italian Association for Artificial Intelligence, Bari, Italy, November 14-17, 2017, Proceedings
Series Title: Lecture Notes in Computer Science
Pagination: 72–84
Publisher: Springer International Publishing
Conference Location: Cham
ISBN: 978-3-319-70169-1

Abstract

This paper reports on teaching Artificial Intelligence (AI) by applying the experiential approach called "learning by doing", where traditional, formal teaching is integrated with a practical activity (a game competition, in our case) that is relevant for the AI discipline and allows for an active and playful participation of students.

URL: https://doi.org/10.1007/978-3-319-70169-1_6
DOI: 10.1007/978-3-319-70169-1_6