Clone the repository:

```
git clone https://github.com/mitmedialab/doodlebot-controller-js
cd doodlebot-controller-js
```

For both of the cases below, the backend should be running. To do this, open a new terminal and run:

```
cd game-server/
npm run dev
```

If it's the first time, don't forget to run `npm install` first. As of now, the backend runs at http://localhost:4000.
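The backend's role is to keep clients in sync: when one client creates, updates, or deletes an object, the server re-broadcasts that event to every other client. As a rough sketch of that relay pattern in plain JavaScript (the class and method names here are hypothetical, not the actual `game-server.js` API):

```javascript
// Sketch of the relay pattern the game server implements: every connected
// client receives the events that other clients emit, which keeps all
// boards synchronized. Names are illustrative, not the real API.
class RelayHub {
  constructor() {
    this.clients = new Map(); // client id -> event handler
  }
  connect(id, handler) {
    this.clients.set(id, handler);
  }
  // Re-broadcast an event from `senderId` to every *other* client.
  emit(senderId, event) {
    for (const [id, handler] of this.clients) {
      if (id !== senderId) handler(event);
    }
  }
}

// Usage: an update emitted by "a" reaches "b" but is not echoed back to "a".
const hub = new RelayHub();
const seenByB = [];
hub.connect("a", () => {});
hub.connect("b", (e) => seenByB.push(e));
hub.emit("a", { type: "update", object: "bot-1" });
```

The real server does this over a socket connection rather than in-process callbacks, but the broadcast-to-everyone-else shape is the same.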
To run the frontend:

1. Connect a camera to your computer.
2. Serve the frontend with `python3 -m http.server`.

For the camera connection:

- Install cam2ip on the computer that will have the camera connection. Following its README, I downloaded the macOS 64bit OpenCV build, unzipped it, and executed it.
- To avoid problems with CORS, activate the Chrome or Firefox "allow CORS" extensions.
- On the computer that has the camera installed, run the following script in 2 different terminals:

  ```
  /Users/prg/Downloads/cam2ip-1.6-darwin-cv2/cam2ip -bind-addr :<port>
  ```

  where `<port>` is a different number in each terminal (I used `56000` and `56001`). Note the colon before the port number. If you want to show the camera to more devices, use more terminals.
- On each computer, type the address `<ip_address>:<port>` in the interface, with the available ports. You can find the `<ip_address>` by running `ifconfig | grep 192.168` (on the computer that has the camera connection).
To serve the frontend:

```
cd virtual-board/
python3 -m http.server
```

Python serves the frontend at http://localhost:8000. If, for some reason, you want to serve it on a different port, you can run e.g. `python3 -m http.server 8001`.
To be run from the doodlebot-controller-js folder:

```
python3 -m http.server
```

Then go to http://localhost:8000/virtual-board/test.html.
As of now, there are several folders: some with a clear purpose, and others that are just me testing stuff. Since that description is probably not useful, here's a more detailed explanation:
| Folder name | Description | Important? |
|---|---|---|
| virtual-board | This is the core of this project. It provides `virtual-board/grid.js`, which handles the creation/deletion and update of bots/obstacles/coins (for more details, you can check out its documentation). The folder also provides `graph.js`, with general code to find the shortest path on a graph using Dijkstra, and `grid-graph.js`, which provides `VirtualGrid`-specific methods to find the shortest way for a bot to reach a given object (e.g., a coin). | Yes |
| game-server | The `game-server.js` file serves the socket connection between different clients as a means of staying synchronized. It basically makes it so that when an object is created/updated/deleted in one client, this also happens in the other clients. | Yes |
| marker_detector | Contains the code necessary to manage the data that comes from the camera stream. The most important files are `camera-controller.js`, which provides the `CameraController` class to find ArUco codes and detect a given color in an image; `constants.js`, with camera-specific values; and `opencv_compiled.js`, a compiled version of OpenCV.js. I had to compile it myself because the build provided by OpenCV doesn't include opencv-contrib, which is the part that has the ArUco detection. | Not much; only changing the `deviceId` in `constants.js` for correct camera detection. |
| doodlebot_control | Initial test of connecting the (real) Doodlebots to the browser using Bluetooth. `doodlebot_control/doodlebot.js` provides the `Doodlebot` class, which is used in the final physical version of the game. A lot of this was taken from Randi's scratch3_doodlebot repo. | Very little, as I don't think much (if anything) has to change. |
| arucogen | A clone of a PR from arucogen that allows printing colored ArUco codes (the main website doesn't). Tried this to see if detection was good; sadly, it isn't. | No |
| camera_calibration | An attempt to calibrate the camera I use (and get a `cameraMatrix` and `distCoeffs`). Ultimately didn't show great promise. | No |
| color-detection | Playground to test color detection using OpenCV. Not important now, as most of my findings got written into the `filterColor` method in `camera-controller.js`. | No |
| server | A server I created when I thought I'd let students train their own ML model; it acted as a way to store training data. Not used anymore. | No |
| virtual-board-training | UI connected to the previous server that sent information to gather training data. Not used anymore. | No |
| workers | Playground for checking out how Web Workers work. My reasoning was that I wanted two different threads to control different bots so that they act independently. This never worked, as Web Workers have limited functionality and currently don't support Bluetooth connections. They are also limited in the types of data they can receive from the main thread (as it always has to be copied). | No |
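Since `graph.js` and `grid-graph.js` are built around Dijkstra's algorithm, here is a minimal sketch of the idea in plain JavaScript (the function signature and graph shape are illustrative, not the actual `graph.js` API):

```javascript
// Minimal Dijkstra sketch: `graph` maps each node to an object of
// { neighbor: edgeWeight }. Returns the shortest distance from `source`
// to every node. (Illustrative only, not the real graph.js API.)
function dijkstra(graph, source) {
  const nodes = Object.keys(graph);
  const dist = {};
  const visited = new Set();
  for (const node of nodes) dist[node] = Infinity;
  dist[source] = 0;
  while (visited.size < nodes.length) {
    // Pick the unvisited node with the smallest tentative distance.
    let current = null;
    for (const node of nodes) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (current === null || dist[current] === Infinity) break; // unreachable rest
    visited.add(current);
    // Relax every edge leaving the current node.
    for (const [neighbor, weight] of Object.entries(graph[current])) {
      if (dist[current] + weight < dist[neighbor]) {
        dist[neighbor] = dist[current] + weight;
      }
    }
  }
  return dist;
}

// Usage: going a -> b -> c (cost 3) beats the direct a -> c edge (cost 4).
const g = {
  a: { b: 1, c: 4 },
  b: { c: 2 },
  c: {},
};
const d = dijkstra(g, "a"); // → { a: 0, b: 1, c: 3 }
```

The real `grid-graph.js` additionally has to translate grid cells and obstacles into such a graph before searching.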
See it on the documentation page
So far the only workaround I've found is to explicitly set the camera's `deviceId` in the camera constraints. To find the `deviceId`, open the console (in a window where you've granted camera access) and run:

```
(await navigator.mediaDevices.enumerateDevices()).filter(
  (x) => x.kind === "videoinput"
);
```
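Once you have the `deviceId`, it can be pinned in the `getUserMedia` constraints. A minimal sketch (the helper function is mine, not part of this repo; the `deviceId: { exact: … }` shape is standard `MediaTrackConstraints`):

```javascript
// Build getUserMedia constraints that pin a specific camera by deviceId.
// `deviceId` is the value obtained from the enumerateDevices() snippet above.
function cameraConstraints(deviceId) {
  return { video: { deviceId: { exact: deviceId } } };
}

// In the browser (not runnable in Node):
//   const stream = await navigator.mediaDevices.getUserMedia(
//     cameraConstraints("<your-device-id>")
//   );
```

Using `exact` makes the browser fail loudly if that camera is unavailable, instead of silently falling back to a different one.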