Floating World View Widget or JupyterLab Support? #11
Hi Tony! I will say that a couple of colleagues and I are thinking about writing a book with associated notebooks, and we would like Jyro to be useful to that end. So, if it doesn't do what you need it to do, then it probably doesn't do what we need it to. I'm willing to try to make it viable. I think making it work like ipyturtle and making it JupyterLab-compatible are both doable. There are a couple of other items I'd like to improve, including making the lighting more realistic and making the UI a little less clunky. Improving the 3D view might be beyond what we can do (we tried to keep the computational requirements low). I'd be happy to help put together a wishlist for a new version. What is your timeline?
Hi Doug, timeline is... OU courses take forever (months and months), but we're meeting to decide (maybe?!) on a way forward for the course update at the end of next week. I will hopefully post a review of sorts of the possibilities I've found by the end of today or over the w/e; my doodlings so far w/ Jyro are here.

I need to move on to other packages now, but things I got stuck on include: no ability to do a line follower (I figure this would complicate the camera view, but the camera could be blind to marks on the floor? Though you do have a trace? Maybe lines on the floor would complicate that too!). I also couldn't offhand see how to introduce a puck into the world and then, e.g., grab it w/ the gripper, close the gripper, reverse w/ the puck, and place the puck s/where else.

I notice in other issues an issue on the camera view of other robots in the world (I didn't try: do you just pass a list of different robot objects into the world?); I haven't really thought about multi-robot sim, but it could be useful, esp. w/ multiple pucks too. As other considerations come to mind, I'll try them out in my demo or raise them here. One thing I do want to explore is embedding streaming data charts using data collected from the robot in the visual simulator as it runs.

PS your book project sounds exciting :-)
@psychemedia Yes, no downward-facing sensors. A fun task that we did in intro CS, but it wasn't the goal of Jyro (which is mostly camera- and range-sensor oriented, for deep learning models). Downward-facing sensors might be possible (and it would be interesting to have different colors of flooring, too). Yes, another limit is not being able to see other robots or pucks. Streaming data charts as it runs should be easy, I think.
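A minimal sketch of what "streaming data charts as it runs" could look like, assuming a Jyro-style brain callback that fires once per simulation step. The `SensorLog` class and the `brain` function here are illustrative, not part of Jyro's API: the idea is just to record readings into a bounded buffer each step, then hand the buffer to whatever live-charting widget you prefer (bqplot, matplotlib, etc.).

```python
from collections import deque

class SensorLog:
    """Bounded buffer of (step, reading) pairs for live charting."""
    def __init__(self, maxlen=200):
        self.steps = deque(maxlen=maxlen)   # x-axis: simulation step
        self.values = deque(maxlen=maxlen)  # y-axis: sensor reading

    def record(self, step, value):
        self.steps.append(step)
        self.values.append(value)

# Illustrative "brain" callback: in Jyro you would read the robot's
# actual range sensors here instead of this fake reading.
log = SensorLog(maxlen=100)

def brain(robot_step):
    fake_range = 1.0 / (1 + robot_step)  # stand-in for a sonar reading
    log.record(robot_step, fake_range)
    # ...then push log.steps / log.values into a live chart widget

for step in range(250):
    brain(step)
```

Because the deque is bounded, the chart naturally becomes a scrolling window over the most recent readings rather than growing without limit.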
Something related that could be useful: I ported Jyro over to Java (converted into JavaScript) for my Processing-based CS intro: https://jupyter.brynmawr.edu/services/public/dblank/CS110%20Intro%20to%20Computing/2017-Spring/Lectures/Robot%20Control.ipynb
Ah, yes, thanks for the reminder of that Processing one... I thought I recalled something along those lines ;-) I've cribbed the files into my review repo, if that's okay, just so I can run multiple demos in one Binder container. (I really should have thought about doing the review as a Jupyter Book from the start... ho hum... if nothing else, it may be a useful exercise pondering what needs to change in order to make an effective Jupyter Book.)
Hi Doug, we decided yesterday we would be going with …

One comment from others in the module team regarded the UI / screen real-estate management, with the inline simulator being fixed in location inside the notebook seen as not ideal. Although I far prefer the notebook UI (I'm guessing cribbing from the …), another comment was that the Processing-port simulator seemed much smoother. Whilst we're committed to the Python code, I did start idly wondering about Processing-style widgets in a Python notebook, e.g. https://github.com/jtpio/ipyp5 or the new canvas widget https://github.com/martinRenou/ipycanvas, but coding animations / visual simulations isn't something I've ever really played with (now might be the time for me to start!). Although I haven't started thinking about …
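For anyone starting to play with canvas-style animations along these lines: the usual pattern is a pure-Python state update per frame plus a redraw call. A sketch, assuming a simple unicycle/differential-drive pose model (the `step_pose` function is illustrative, not from any of the libraries mentioned); the ipycanvas drawing calls are left as comments since they only run in a notebook.

```python
import math

def step_pose(x, y, theta, v, w, dt):
    """Advance a robot pose by one frame: v = forward speed, w = turn rate."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += w * dt
    return x, y, theta

# Animation-loop sketch: with ipycanvas you would clear and redraw the
# robot inside this loop, e.g.:
#   canvas.clear(); canvas.fill_circle(x, y, radius)
x, y, theta = 0.0, 0.0, 0.0
for frame in range(100):
    x, y, theta = step_pose(x, y, theta, v=1.0, w=0.0, dt=0.1)
```

Keeping the physics step separate from the drawing code like this also means the same simulation loop can drive an ipycanvas view, a Processing sketch, or a headless test run.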
Well, thank you very much! I take that vote with a sense of responsibility. Some notes: …

Very exciting! I look forward to working with you to make this pedagogically useful!
@dsblank It's actually we who should be thanking you for sharing the code :-) (I did suggest we try to get you over for a few days... hmm, thinks... there's a community workshop call out, isn't there?) We will contribute as and where we can, and share back content WIP privately if not publicly (I'll share as much on open repos as I can get away with!). I'm also happy to test things, and I will be exploring various teaching-strategy approaches to explore the space a bit more, even if we don't end up going there for the teaching material (e.g. we use screencasts for some things, so I'll probably have a look at Jupyter Graffiti, and maybe even some bits of Selenium automation as a tool for creating screencasts).

Accessibility is another thing we always have to bear in mind, both in terms of keyboard accessibility / support for visually impaired users, and also equivalent activities for students who can't access a particular activity. Again, I'll be looking for strategies around and about to support this.

Deployment-wise, which is something we need to consider from the start (1k students twice a year, all at a distance, on a wide variety of machines, some of which may be pretty ropey...), I'd be looking at delivering personal environments via a Docker container (ContainDS is already looking promising in this regard; I always did have a soft spot for Kitematic!), as well as a hosted service behind JupyterHub, probably spawned via DockerSpawner, although a plugin for a TLJH custom environment could be on the cards too. The course has a couple of other software requirements which require virtualisation, so I'm keen to explore Docker delivery.
@dsblank "The Processing port is smoother in that it runs in the browser. But now, having written both kinds of systems, maybe there is something to be learned." To what extent could that be wrapped as an ipywidget, then controlled from Python? By the by, …
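On the "wrap it as an ipywidget" question: ipywidgets synchronise state between the Python kernel and the browser over Jupyter comms, which carry JSON-serialisable messages. A sketch of the kind of payload a Python-side wrapper could send to drive a browser-side (JS/Processing) simulator; the message shape, `make_command` name, and field names here are all illustrative, not an existing protocol.

```python
import json

def make_command(robot_id, translate, rotate):
    """Build the JSON payload a Python-side wrapper could send over a
    Jupyter comm to drive a robot in the browser-side simulator.
    This message shape is hypothetical, not an existing Jyro protocol."""
    return json.dumps({
        "method": "move",
        "robot": robot_id,
        "translate": translate,  # forward speed
        "rotate": rotate,        # turn rate
    })

# The JS side would parse this and update the corresponding sprite.
msg = make_command("red-robot", 0.5, 0.1)
decoded = json.loads(msg)
```

The appeal of this split is that the physics and rendering stay in the browser (smooth animation), while the student's control code stays in Python cells.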
@psychemedia Interesting idea! Going to take a deep dive on this, this weekend.
We're looking for a new simulator for an online distance-education course, and I'm currently playing with every simulator I can find (which isn't many; the review will be here).
Playing with Jyro (the leading candidate :-), I noted that embedding the simulator world view beneath a single notebook code cell can make it hard to see what's going on in the world as code is built up, perhaps across several cells.
The ipyturtle widget allows a turtle canvas to be embedded below a code cell or floated over the notebook as the notebook scrolls beneath it. It'd be handy if this floating feature were available in jyro.
At the moment it looks like the simulator is rendered via `display()` (https://github.com/Calysto/jyro/blob/master/jyro/simulator/simulator.py#L593), whereas ipyturtle creates a `div` with an appropriate style attribute (`fixed` or `static`; https://github.com/gkvoelkl/ipython-turtle-widget/blob/master/js/lib/example.js#L60).

How easy / hard would it be to support a floating simulator world view widget? Any issues likely to arise?
I did also wonder whether Jyro would work in JupyterLab and allow the simulator window to be torn off into its own panel, but that seemed not to work? (I don't really understand JupyterLab. It seems far more complex to build extensions in than the classic notebook?!)