Outline
Group Members: Jake Cillay and Wil Troxel
In our project, we will train an agent to solve a randomized maze from a first-person camera using reinforcement learning. The environment the agent will be placed in has already been created and has a clear start point and end point. Our project will first focus on having the agent solve the maze with a single texture (concrete); once that task is accomplished, we will attempt to have the agent solve the maze with different textures (mirror, carpet, etc.). We also plan to add this agent and maze to OpenAI Gym to gain practice in reinforcement learning. Our main goal is to translate this simulation into the real world so that the agent can solve a maze in any environment it is placed in.

Although we are only training this agent in simulation, reinforcement learning deployed in the real world can have ethical implications. For example, a real-world agent capable of driving a car, a task much like solving a maze, could take work away from gig-economy drivers for Uber or Lyft. Reinforcement learning can help humans in many ways, but there are also situations in which it negatively affects people's lives. It is important to consider these implications as the field of AI continues to grow.
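To make the plan concrete, the sketch below shows the kind of Gym-style environment interface (reset/step with observation, reward, and done flag) we would expose for the maze. This is an illustrative toy with a hypothetical 4x4 grid layout and a random-action agent, not our actual first-person environment or the real OpenAI Gym API:

```python
import random

class GridMazeEnv:
    """Toy Gym-style maze environment (illustrative sketch only).
    0 = open cell, 1 = wall. Observation is the agent's (row, col)."""

    MAZE = [
        [0, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1],
        [1, 1, 0, 0],
    ]
    START, GOAL = (0, 0), (3, 3)
    # Actions 0-3: up, down, left, right, as (row, col) deltas.
    ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def reset(self):
        self.pos = self.START
        return self.pos

    def step(self, action):
        dr, dc = self.ACTIONS[action]
        r, c = self.pos[0] + dr, self.pos[1] + dc
        # Stay in place if the move hits a wall or leaves the grid.
        if 0 <= r < 4 and 0 <= c < 4 and self.MAZE[r][c] == 0:
            self.pos = (r, c)
        done = self.pos == self.GOAL
        # Small step penalty encourages shorter solutions.
        reward = 1.0 if done else -0.01
        return self.pos, reward, done, {}

if __name__ == "__main__":
    # A random agent as a baseline; a trained policy would replace this.
    env = GridMazeEnv()
    obs = env.reset()
    done, steps = False, 0
    random.seed(0)
    while not done and steps < 500:
        obs, reward, done, info = env.step(random.randrange(4))
        steps += 1
    print("solved:", done, "in", steps, "steps")
```

Registering a real environment with Gym would additionally require subclassing `gym.Env` and declaring observation and action spaces; for a first-person camera the observation would be an image array rather than a grid coordinate.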