
Project Title: Escape the Maze

Project Description

This project involves the development of a competitive game in which multiple AI agents, created by teams of students, navigate programmatically generated mazes. The student teams are responsible both for developing the AI agents (clients) and for setting up a server that connects them. The server also loads the programmatically generated maze maps and acts as a viewer, displaying the progress of the game.

Objective

The goal is to determine the most efficient AI solution through direct competition among the agents, with each player's current position highlighted on screen. The player whose AI exits the maze first is declared the winner. The server monitors win conditions and awards points accordingly.

Game Mechanics

Maze Generation Requirements

The maze must be generated according to the following constraints:

Each generated maze must be validated to check that it respects the imposed constraints.
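
Since the constraint list is open, here is a minimal validity-check sketch (in Python) for the one property every maze will need: the exit must be reachable from the entrance. The values WALL = 0 and PATH = 255 match the view examples later in this document, but are otherwise assumptions.

from collections import deque

WALL, PATH = 0, 255  # assumed values; match them to your color table

def is_solvable(maze, start, exit_pos):
    """Breadth-first search over path tiles; True if exit_pos is reachable."""
    height, width = len(maze), len(maze[0])
    visited = {start}
    queue = deque([start])
    while queue:
        x, y = queue.popleft()
        if (x, y) == exit_pos:
            return True
        for dx, dy in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            nx, ny = x + dx, y + dy
            if (0 <= nx < width and 0 <= ny < height
                    and maze[ny][nx] != WALL and (nx, ny) not in visited):
                visited.add((nx, ny))
                queue.append((nx, ny))
    return False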

Files

The maze generator must output an 8 bpp grayscale image, with the following color representation for each pixel:

The maze generator can take the same image back as input to regenerate the exact same maze, or other images to generate new mazes. Note: images containing undefined pixel colors must be rejected.
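
A minimal sketch of this image round-trip, using Pillow's 8 bpp grayscale ("L") mode. DEFINED_VALUES and the default file name are placeholders for the color table defined above.

from PIL import Image

DEFINED_VALUES = {0, 255}  # placeholder; use the values your color table defines

def save_maze(maze, path="maze.png"):
    """Write the maze (a list of rows of ints) as an 8 bpp grayscale image."""
    height, width = len(maze), len(maze[0])
    img = Image.new("L", (width, height))
    img.putdata([value for row in maze for value in row])
    img.save(path)

def load_maze(path):
    """Read a maze image back, rejecting images with undefined pixel colors."""
    img = Image.open(path).convert("L")
    width, height = img.size
    pixels = list(img.getdata())
    if any(value not in DEFINED_VALUES for value in pixels):
        raise ValueError("image contains undefined pixel colors")
    return [pixels[row * width:(row + 1) * width] for row in range(height)]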

Server

The client, represented by an agent strategy, and the server communicate through a series of JSON commands. The first time a client connects to the server, it is assigned a UUID; the server then distinguishes between a new connection and a reconnection attempt based on this UUID.

The first connection from an agent is always an empty JSON, whereas every reconnection is a JSON containing the UUID, in the following format:

{
  "UUID": ""
}
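
On the server side, the new-connection/reconnection decision might look like the sketch below. The transport and message framing are not fixed by this spec, and the per-agent state fields are placeholders.

import json
import uuid

agents = {}  # per-agent state, keyed by UUID

def handle_hello(raw_message):
    """First message of a session: {} for a new agent, {"UUID": ...} on reconnect."""
    hello = json.loads(raw_message)
    agent_id = hello.get("UUID")
    if agent_id and agent_id in agents:
        return agent_id, agents[agent_id]          # reconnection attempt
    agent_id = str(uuid.uuid4())                   # brand-new connection
    agents[agent_id] = {"triggered_traps": set()}  # placeholder state
    return agent_id, agents[agent_id]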

To simplify the problem, the server can work in a friendly mode, in which it tells an agent its initial coordinates in the maze and the maximum maze size (width, height), alongside the 5×5 area of tiles it initially sees. Thus, the server's first JSON response should have the following format:

{
  "UUID": "",
  "x [optional]": "",
  "y [optional]": "",
  "width [optional]": "",
  "height [optional]": "",
  "view": "string of the matrix representation of the visible area around the agent",
  "moves": "total number of moves/commands available for the agent in the first turn"
}
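
A client-side sketch of this handshake, assuming TCP with one JSON message per line (the spec does not fix the transport; HOST and PORT are placeholders). The optional fields are read defensively, since they may be absent outside friendly mode.

import json
import socket

HOST, PORT = "localhost", 5000  # placeholders

sock = socket.create_connection((HOST, PORT))
stream = sock.makefile("rw", encoding="utf-8")

def send(message):
    stream.write(json.dumps(message) + "\n")
    stream.flush()

def recv():
    return json.loads(stream.readline())

send({})                                       # first connection: empty JSON
state = recv()                                 # server's first response
agent_uuid = state["UUID"]                     # keep for reconnection attempts
x = int(state["x"]) if "x" in state else None  # optional outside friendly mode
y = int(state["y"]) if "y" in state else None
moves = int(state["moves"])                    # command budget for the first turn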

In a normal turn the agent sends a JSON to the server in the following format: {"input": "string of commands, up to length 10"}

The server will reply with a JSON in the following format:

{
  "command_1": {
    "name": "name of the command, e.g. N",
    "successful": "0|1",
    "view": "string of the matrix representation of the visible area around the agent after the move; e.g. for 3x3: [0, 255, 255; 0, 255, 0; 0, 255, 0]"
  },
  "command_2": {
    "name": "",
    "successful": "",
    "view": ""
  },
  ...
  "command_N": {
    "name": "",
    "successful": "",
    "view": ""
  },
  "moves": "total number of available moves for the next turn"
}
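
Reusing send and recv from the handshake sketch above, a full turn might look like the following; parse_view converts the semicolon-separated matrix string into rows of integers.

def parse_view(view):
    """E.g. "[0, 255, 255; 0, 255, 0]" -> [[0, 255, 255], [0, 255, 0]]."""
    return [[int(v) for v in row.split(",")]
            for row in view.strip("[]").split(";")]

def play_turn(commands):
    """Send up to 10 commands (e.g. "NNEE") and collect per-command results."""
    send({"input": commands})
    reply = recv()
    for i in range(1, len(commands) + 1):
        result = reply.get(f"command_{i}")
        if result is None:                  # server processed fewer commands
            break
        moved = result["successful"] == "1"
        grid = parse_view(result["view"])
        # ...update the agent's internal map from `moved` and `grid`...
    return int(reply["moves"])              # command budget for the next turn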

In the case of a friendly solve, the server always outputs the value of a trap whenever it is inside the agent's visible area. In the case of an unfriendly solve, however, traps are only shown when the agent is 1 tile away from them, and their type is hidden behind the generic value 90.
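
A sketch of how the server could apply this rule per tile. Whether "1 tile away" includes diagonals, and what a hidden distant trap is rendered as, are assumptions here.

PATH = 255         # assumed value of an ordinary walkable tile
HIDDEN_TRAP = 90   # generic value masking the trap type (unfriendly solve)

def visible_value(tile_value, is_trap, distance, friendly):
    """Value the server reports for one tile of the agent's view."""
    if not is_trap:
        return tile_value
    if friendly:
        return tile_value     # friendly solve: real trap value, always
    if distance <= 1:
        return HIDDEN_TRAP    # unfriendly solve: masked, only when adjacent
    return PATH               # assumption: distant traps look like paths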

Once an agent solves the maze, or the server decides the agent is taking too long and times it out, the server sends a JSON in the following format:

{
  "end": "0|1, depending on whether the agent solved the maze"
}

Following a solve, the server can test the agents on a new maze; to do so, it sends a request in the following format:

{
  "x [optional]": "",
  "y [optional]": "",
  "width [optional]": "",
  "height [optional]": "",
  "view": "string of the matrix representation of the visible area around the agent",
  "moves": "total number of moves/commands available for the agent in the first turn"
}

The server can store generated mazes as images and output them back on request.

In the case where multiple agents are on the same server, they do not interact with one another, so that each agent has a fair chance at solving the maze. For this reason, every trap triggered by an agent only affects that specific agent, so the server needs to keep track of which agent triggered which trap.
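
A sketch of that bookkeeping: one set of triggered traps per agent UUID. Whether a trap can fire more than once per agent is not specified; this sketch assumes it fires only the first time.

triggered_traps = {}  # agent UUID -> set of trap coordinates

def trigger_trap(agent_uuid, trap_pos):
    """Record the trigger; True only the first time this agent hits this trap."""
    traps = triggered_traps.setdefault(agent_uuid, set())
    if trap_pos in traps:
        return False      # already triggered by this agent
    traps.add(trap_pos)
    return True           # apply the trap effect to this agent only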

Agents

An agent can work in one of two modes:

Each agent's performance is measured in one of three ways:

In real-time mode, each agent has a maximum time allotted for sending each command. If the allotted time expires, the agent is timed out and disqualified, and the maze is considered unsolved. The maximum time can be set before each run or preset depending on the maze difficulty.
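
Server-side, the deadline could be enforced with a per-connection socket timeout, as in this sketch (MAX_SECONDS is a placeholder; conn and stream wrap the agent's connection as in the handshake sketch above).

import socket

MAX_SECONDS = 2.0  # placeholder; set per run or per maze difficulty

def read_turn(conn, stream):
    """Return the agent's next JSON line, or None if it timed out."""
    conn.settimeout(MAX_SECONDS)
    try:
        return stream.readline()
    except socket.timeout:
        return None  # timed out: disqualify the agent, maze counts as unsolved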

Each AI agent's behaviour must be unique; avoid creating different agents that differ only in a minor part of their strategy.

Viewer

The viewer should output the maze and the agents solving it in the following manner:

Team collaboration

After the initial phase of developing the project, the teams will work together. For a successful collaboration, the teams should consider the following:

Teams still need to develop individual solutions. They are allowed to implement and test the same strategies, but each team must implement that strategy in its own code.

Milestones and Grading

The project is worth 6 points and is split into 3 major milestones, which will happen in Labs 3, 8, and 12. The milestones and their allocated points are as follows:

  1. Setup milestone (0.5p), lab 3, 21-25 October 2024
    1. team members and their respective roles
    2. the chosen programming language, potential tools etc.
    3. the chosen development methodology (e.g. Agile, Scrum, etc.)
    4. The teams will have to create private projects on the MPS gitlab, named in the form Day_Hour_TeamName, e.g. Monday_8_Team1
      1. All documents for the next milestones will be part of each team's repository, in the wiki or other files.
      2. The README of the repo will contain any other links used, for example if the team uses a third-party solution for project tracking.
      3. The README will also contain the names of all participant members and the student groups they are a part of.
  2. First demo solution (2.5p), lab 8, 25-29 November 2024, which will be graded on the following:
    1. Maze Generator (0.3p)
    2. Server (0.3p)
    3. Viewer (0.4p)
    4. AI Agents (at least one; points awarded based on the agents' solving efficiency) (0.6p)
    5. Documentation, which must contain the following (0.6p):
      1. SDD (software design document)
      2. testing reports
      3. meeting minutes
    6. Respecting the chosen methodology (0.3p):
      1. respecting team roles
      2. respecting scheduling and set tasks
  3. Final demo solution (3p), lab 12, 8-14 January 2025, which will be graded on the following:
    1. Potential improvements and bug fixes on (0.5p):
      1. Maze Generator
      2. Server
      3. Viewer
    2. Improved AI Agents based on team collaboration (at least one additional solution, different from the originally developed one) (1p)
    3. Documentation (0.7p)
      1. new testing reports
      2. new meeting minutes (collaborative + individual)
    4. Presentation highlighting the results, potentially the evolution of the solution (0.5p)
    5. Respecting the chosen methodology (0.3p):
      1. respecting team roles
      2. respecting scheduling and set tasks

Try developing the project in several distinct phases:

  1. Develop core components:
    1. Implement a barebones maze generator (it only places walls and a path in a predefined rectangular area)
    2. Implement a dummy agent (random movement) and check if it works as intended.
    3. In parallel with the maze generator, implement the server and the viewer (preferably with one person assigned to each component, though you may also choose not to develop them in parallel).
    4. You can design the viewer/server to only interact with one agent in this phase.
    5. Test that all components interact properly.
    6. Implement some actual solving strategy on an agent and see how it performs.
  2. Improve solution:
    1. Add special tiles to the maze that don't complicate the solving strategy too much (e.g. movement traps, fog, tower).
    2. Update the viewer to reflect these changes.
    3. Add the possibility for the agent to use X-RAY points.
    4. Try developing a different strategy on a different agent.
    5. Test how they perform.
  3. Team collaboration phase
    1. Test each other's agents.
    2. Obtain feedback related to your strategies, and decide on some better solutions.
    3. Add portals to the maze and see how these affect your agent strategies.
    4. Try designing new strategies to combat the addition of portals.
  4. Final improvements
    1. Add consumables to the maze.
    2. Try creating mazes with tricky layouts and test how the agents perform.
    3. Update the viewer/server so multiple agents can run at once on the same maze.
    4. Fix any potential issues in the code.