elliotwoods/MuscleMemory

Introduction

An open source 'industrial servo motor' which uses reinforcement learning to train itself to drive a load as well as possible. The user decides what 'best' means for their application, e.g. speed, accuracy or energy efficiency. The model runs locally on a low-cost microcontroller (e.g. an ESP32), but is trained remotely where more computational resources are available (e.g. in the cloud).
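For example, 'best' could be encoded as a weighted reward function which the training process then maximises. The sketch below is a minimal illustration in Python; the signal names and weights are assumptions made for the sake of example, not part of the MuscleMemory code.

```python
# Illustrative only: one way to encode 'best' as a reward, with user-chosen weights.
# position_error and current_draw are assumed motor signals, not MuscleMemory's API.
def reward(position_error, current_draw, w_accuracy=1.0, w_energy=0.01):
    # Penalising the error at every time step also rewards reaching the target
    # quickly (speed); penalising current draw rewards energy efficiency.
    return -w_accuracy * abs(position_error) - w_energy * abs(current_draw)
```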

Project status

If you would like to get involved with development and prototyping then please get in touch via elliot@kimchiandchips.com.

Update 2021-05

Reinforcement learning

Network architecture

  • Server
    • High performance hardware (e.g. desktop CPU + GPU)
    • FastAPI REST service
    • TensorFlow implemented RL algorithms (e.g. DDPG / NAF)
  • Client
    • Low cost hardware (e.g. ESP32 microcontroller)
    • MicroPython
    • TensorFlow Lite module
    • Training loop: download the model from the server, run the actor, gather samples, send them back to the server; repeat (see the sketches after this list)
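A minimal sketch of what the server side could look like, assuming a FastAPI service with one endpoint that serves the latest TensorFlow Lite actor and one that receives samples from the client. The endpoint paths, the Sample schema and the 'actor.tflite' filename are assumptions, not MuscleMemory's actual API.

```python
from typing import List

from fastapi import FastAPI, Response
from pydantic import BaseModel

app = FastAPI()

class Sample(BaseModel):
    state: List[float]
    action: List[float]
    reward: float
    next_state: List[float]

replay_buffer: List[Sample] = []

@app.get("/model")
def get_model() -> Response:
    # Serve the most recently exported TensorFlow Lite actor to the client.
    with open("actor.tflite", "rb") as f:
        return Response(content=f.read(), media_type="application/octet-stream")

@app.post("/samples")
def post_samples(samples: List[Sample]):
    # Collect transitions from the microcontroller for the next training round.
    replay_buffer.extend(samples)
    return {"received": len(samples), "buffer_size": len(replay_buffer)}
```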
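And a corresponding sketch of the client loop, written in MicroPython style. SERVER, read_motor_state(), apply_action(), run_actor() and reward() are hypothetical placeholders; the real firmware (and the TensorFlow Lite module it uses) may look quite different.

```python
import time
import urequests  # MicroPython HTTP client

SERVER = "http://192.168.0.10:8000"  # assumed address of the training server

def read_motor_state():
    # Placeholder: on real hardware this would read the encoder / current sensors.
    return [0.0, 0.0]

def apply_action(action):
    # Placeholder: on real hardware this would set the coil drive outputs.
    pass

def run_actor(model_bytes, state):
    # Placeholder for TensorFlow Lite inference on the ESP32.
    return [0.0]

def reward(state):
    # Placeholder reward; the real one encodes the user's notion of 'best'.
    return 0.0

def run_episode(model_bytes, n_steps=200):
    # Run the downloaded actor on the motor and collect (s, a, r, s') samples.
    samples = []
    for _ in range(n_steps):
        state = read_motor_state()
        action = run_actor(model_bytes, state)
        apply_action(action)
        time.sleep_ms(10)
        next_state = read_motor_state()
        samples.append({"state": state, "action": action,
                        "reward": reward(next_state), "next_state": next_state})
    return samples

while True:
    model = urequests.get(SERVER + "/model").content   # download the latest actor
    samples = run_episode(model)                        # gather experience locally
    urequests.post(SERVER + "/samples", json=samples)   # send it back for training
```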

Prior work

Muscle Memory builds on the work of previous projects, most notably Mechaduino by Tropical Labs. A list of other open source motor driver projects is maintained at https://github.com/cajt/list_of_robot_electronics.

Credits

Muscle Memory is a project of Kimchi and Chips art studio, and is partly funded by the Arts Council of Korea via the State, Action, Reward project.
