
The Plan - Getting To v1.0 #1

Open
6 of 26 tasks
mfekadu opened this issue Aug 30, 2019 · 0 comments
Labels
enhancement New feature or request


mfekadu commented Aug 30, 2019

Overview

This issue describes the plan to get to a minimum viable simulation. It is similar to a software requirements document, but allows for more technical detail.

Step 0: Make A Simple Physics-based 2D Environment 🛠

This step partially satisfies my need for cognitive closure by starting with something easy to check off, and it also specifies the environment where the organisms live.

Requirements For The Simulation Environment

  • The simulation has basic physics/collisions/etc
    • Pymunk takes care of physics
  • The simulation can draw shapes (at least circles, squares, triangles, line-segments)
    • Pymunk makes shapes and Pyglet displays shapes
  • The simulation has outer borders, or at least some way to keep organisms in a finite space (limited energy per organism plus a food source clustered in a small area is a reasonable alternative)
  • The simulation has arbitrarily placed walls/caves/etc.
    • Edit: September 8, 2019 Marking as a low priority task for now because I want to minimize the complexity of the basic simulation to begin with.
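
The border requirement above can be sketched in a few lines. In the real project Pymunk would provide the physics and Pyglet the drawing; this plain-Python stand-in (all names here are illustrative assumptions) just shows the "finite space" behavior: a circle whose velocity reflects off the outer borders.

```python
# Minimal sketch of the Step 0 environment: a finite 2D space with
# outer borders and one circular organism bouncing around inside it.
from dataclasses import dataclass

WIDTH, HEIGHT = 640.0, 480.0  # outer borders of the simulation

@dataclass
class Circle:
    x: float
    y: float
    vx: float
    vy: float
    radius: float

def step(c: Circle, dt: float) -> None:
    """Advance one timestep, reflecting velocity off the borders."""
    c.x += c.vx * dt
    c.y += c.vy * dt
    # Keep the circle inside the finite space (elastic bounce).
    if c.x - c.radius < 0 or c.x + c.radius > WIDTH:
        c.vx = -c.vx
        c.x = min(max(c.x, c.radius), WIDTH - c.radius)
    if c.y - c.radius < 0 or c.y + c.radius > HEIGHT:
        c.vy = -c.vy
        c.y = min(max(c.y, c.radius), HEIGHT - c.radius)
```

With Pymunk, the rough equivalent is four static `pymunk.Segment` shapes placed around the edge of the `pymunk.Space`.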

Step 1: Implement A "Save State" Mechanism

Requirements

  • The simulation's "state" can be saved, including every object and relevant data attached to each object (e.g., location history?) to:
    • 1. be able to restart from that state
      • Edit: September 8, 2019 Marking as low priority because I am not sure restarting from a given state is worthwhile. Restarting from a given state is already possible because the replay feature in Step 1: Implement A "Save State" Mechanism #2 works by saving the binary representation of the entire "space" (a Pymunk object), which includes all shapes inside the simulation. Perhaps this feature will be more useful as the simulation gets more complex, but even then the code should be trivial to implement. The non-trivial part is considering how that initial state might interact with any randomly generated numbers, and whether those consequences are acceptable. ¯\_(ツ)_/¯
    • 2. be able to replay a timelapse of the simulation
  • The "save state" mechanism will be fast, perhaps saving to a file in batches after building a cache?
  • Consider using pickle, since this example and @ryanprior suggest it.

Step 2: Extend The 2D Environment With Important Stuff For Evolution

Requirements

  • The simulation can arbitrarily spawn food
  • The simulation will allow the user to increase/decrease the food spawn rate
  • The organisms can move in any direction (up, down, left, right, etc.)
  • The organisms can sense another object inside its "field of view" (FOV)
  • The organism has a limited angle and limited range for FOV.
    here's a crude drawing of ray casting to calculate "field of view."
        /|
       / |
     /   |
   /     |
O----o--.|
   \     |
     \   |
       \ |
        \|
  • There exists some way for an organism to know that a food object in its FOV is genuine food
  • The organism can choose to eat food
  • The organism can physically interact (collision) with food
  • The organism can "grab" food and "un-grab" food (throw?)
  • Can the organism see color? (Yes, via a multi-channel FOV.) But is that necessary? A single hex code per object would use fewer data points.
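
The limited-angle, limited-range FOV requirement above boils down to a small geometric test: an object is "seen" when it is within sensing range and its bearing is within the view cone. The function name and parameters below are assumptions for illustration; a real ray-cast version would also handle occlusion by walls.

```python
# Sketch of the FOV check: within range AND within the view cone.
import math

def in_fov(org_x, org_y, heading, fov_half_angle, fov_range, obj_x, obj_y):
    """True if (obj_x, obj_y) is inside the organism's field of view."""
    dx, dy = obj_x - org_x, obj_y - org_y
    dist = math.hypot(dx, dy)
    if dist > fov_range:
        return False  # out of sensing range
    # Angle between heading and the direction to the object,
    # wrapped into [-pi, pi] so the comparison is symmetric.
    angle_to_obj = math.atan2(dy, dx)
    diff = (angle_to_obj - heading + math.pi) % (2 * math.pi) - math.pi
    return abs(diff) <= fov_half_angle
```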

Step 3: Extend The Organisms With Simple Brains

After reading Up and Down the Ladder of Abstraction (UDLA), I think the simulation would benefit significantly from incremental development with lots of visual representations of each step in the development process. So before adding fancy neural network brains, the organisms in the simulation should be able to follow a simple handwritten ruleset.
Requirements

  • The organisms can follow a simple rule like, "if food is in range of sensors, then move to it and eat."
  • The simulation's state can be recorded while running a simple ruleset.
  • The simulation will display an interactive visualization of all states, at all times
    • see UDLA for inspiration
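
The handwritten ruleset above can be sketched as a single function. The constants and return shape are assumed for illustration; the point is that the whole "brain" of Step 3 is a few explicit if-statements, easy to visualize before neural networks enter the picture.

```python
# Sketch of the Step 3 rule: "if food is in range of sensors,
# then move to it and eat."
import math

SENSOR_RANGE = 50.0  # how far the organism can sense food (assumed)
EAT_RANGE = 1.0      # close enough to eat (assumed)
SPEED = 5.0          # distance moved per step (assumed)

def simple_brain(org, food):
    """Return (next_position, action) given one food location."""
    ox, oy = org
    fx, fy = food
    dist = math.hypot(fx - ox, fy - oy)
    if dist > SENSOR_RANGE:
        return org, "idle"   # food not sensed: do nothing
    if dist <= EAT_RANGE:
        return org, "eat"    # close enough to eat
    # Otherwise, move toward the food at a fixed speed.
    step = min(SPEED, dist)
    return (ox + (fx - ox) / dist * step,
            oy + (fy - oy) / dist * step), "move"
```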

Step 4: Extend The Organisms With Automated Brains

This step is the fuzziest in my mind. Should a convolutional neural network be used? How about a recurrent neural network? Reinforcement learning? I have no clue what's best.

Requirements

  • The organisms have neural network brains
  • The brains can output
    • move_up
    • move_down
    • move_left
    • move_right
    • eat
    • grab
    • ungrab
  • The brains take as input
    • An array of ray-cast-projections for FOV in Red
    • An array of ray-cast-projections for FOV in Blue
    • An array of ray-cast-projections for FOV in Green
    • the previous K actions, where K is some arbitrary number
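
To make the input/output contract above concrete, here is a minimal sketch of such a brain: a single random-weight linear layer mapping the R/G/B ray arrays plus a one-hot encoding of the previous K actions to scores over the seven outputs. The sizes (`N_RAYS`, `K`) are assumptions, the weights are untrained, and the choice of architecture (CNN, RNN, RL) is deliberately left open, as in the text above.

```python
# Sketch of a Step 4 brain: one linear layer from the proposed inputs
# (R/G/B ray-cast arrays + previous K actions) to the seven actions.
import random

ACTIONS = ["move_up", "move_down", "move_left", "move_right",
           "eat", "grab", "ungrab"]
N_RAYS = 8   # rays per color channel (assumed)
K = 3        # number of previous actions fed back in (assumed)

random.seed(0)
N_INPUTS = 3 * N_RAYS + K * len(ACTIONS)  # R, G, B rays + one-hot history
WEIGHTS = [[random.uniform(-1, 1) for _ in range(N_INPUTS)]
           for _ in range(len(ACTIONS))]

def brain(red, green, blue, prev_actions):
    """Return the action with the highest weighted-sum score."""
    # One-hot encode the last K actions into the input vector.
    history = [0.0] * (K * len(ACTIONS))
    for slot, action in enumerate(prev_actions[-K:]):
        history[slot * len(ACTIONS) + ACTIONS.index(action)] = 1.0
    x = list(red) + list(green) + list(blue) + history
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in WEIGHTS]
    return ACTIONS[max(range(len(ACTIONS)), key=scores.__getitem__)]
```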
@mfekadu mfekadu added the enhancement New feature or request label Aug 30, 2019
@mfekadu mfekadu self-assigned this Aug 30, 2019