
Floating World View Widget or JupyterLab Support? #11

Open
psychemedia opened this issue Nov 15, 2019 · 10 comments


@psychemedia

We're looking for a new simulator for an online distance education course, and I'm currently playing with every simulator I can find (which isn't many; review will be here).

Playing with Jyro (leading candidate:-), I noted that embedding the simulator world view beneath a single notebook code cell can make it hard to see what's going on in the world as the code is built up, perhaps across several cells.

The ipyturtle widget allows a turtle canvas to be embedded below a code cell or floated over the notebook as the notebook scrolls beneath it. It'd be handy if this floating feature were available in Jyro.

At the moment it looks like the simulator is rendered via display() [ https://github.com/Calysto/jyro/blob/master/jyro/simulator/simulator.py#L593 ], whereas ipyturtle creates a div with an appropriate style attribute (fixed or static; https://github.com/gkvoelkl/ipython-turtle-widget/blob/master/js/lib/example.js#L60 ).
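
Something like this from the Python side is the kind of thing I'm imagining, as a rough sketch; the floating-sim class and float_widget() helper are just made-up names for illustration, not anything in Jyro or ipyturtle:

```python
# A minimal sketch: float any ipywidgets DOM widget over the notebook by
# tagging it with a CSS class and injecting a style block. "floating-sim"
# and float_widget() are invented names, not part of any library.
import ipywidgets as widgets
from IPython.display import HTML, display

FLOAT_CSS = HTML("""
<style>
.floating-sim {
    position: fixed;   /* stay put while the notebook scrolls underneath */
    top: 10px;
    right: 10px;
    z-index: 1000;
    background: white;
    border: 1px solid #ccc;
}
</style>""")

def float_widget(w):
    """Pin a widget to the viewport instead of leaving it inline."""
    w.add_class("floating-sim")  # add_class() is standard ipywidgets API
    display(FLOAT_CSS)
    display(w)

# e.g. float_widget(widgets.HTML("<b>simulator world view would go here</b>"))
```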

How easy / hard would it be to support a floating simulator world view widget? Any issues likely to arise?

I did also wonder whether Jyro would work in JupyterLab, and allow the simulator window to be torn off into its own panel, but that didn't seem to work? (I don't really understand JupyterLab. It seems far more complex to build extensions for than the classic notebook?!)

@dsblank (Contributor) commented Nov 15, 2019

Hi Tony! I will say that a couple of colleagues and I are thinking about writing a book with associated notebooks, and we would like Jyro to be useful to that end. So, if it doesn't do what you need it to do, then it probably doesn't do what we need it to. I'm willing to try to make it viable.

I think making it work like ipyturtle and making it JupyterLab compatible are both doable. There are a couple of other items I'd like to improve, including making the lighting more realistic and making the UI a little less clunky. Improving the 3D view might be beyond what we can do (we tried to keep the computational requirements low).

I'd be happy to help put together a list of wishlist items for a new version. What is your timeline?

@psychemedia (Author) commented Nov 15, 2019

Hi Doug

Timeline is... OU courses take forever (months and months), but we're meeting to decide (maybe?!) on a way forward for the course update at the end of next week. Will hopefully post a review of sorts of the possibilities I've found by end of today or over the w/e; conx is also looking a likely candidate...

My doodlings so far w/ Jyro are here. I need to move on to other packages now, but things I got stuck on include: no ability to do a line follower (I figure this would complicate the camera view, but the camera could be blind to marks on the floor? Though you do have a trace? Maybe lines on the floor would complicate that too!); and I couldn't offhand see how to introduce a puck into the world and then e.g. grab it w/ the gripper, close the gripper, reverse w/ the puck, and place the puck somewhere else.
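
For what it's worth, the kind of line-follower control loop I have in mind is sketched below; read_line_sensors() and drive() are hypothetical stand-ins, not actual Jyro API:

```python
# A bang-bang line-follower sketch. Both stub functions below are
# hypothetical placeholders for whatever downward-facing sensors and motor
# commands a simulator might expose; nothing here is actual Jyro API.
def read_line_sensors():
    """Return (left, right) floor brightness in 0..1; a dark line reads low."""
    raise NotImplementedError("would sample the floor colour under the robot")

def drive(translate, rotate):
    """Set forward speed and turn rate."""
    raise NotImplementedError("would issue a motor command to the robot")

def follow_line_step(threshold=0.5):
    """One step of a simple bang-bang controller."""
    left, right = read_line_sensors()
    on_left, on_right = left < threshold, right < threshold
    if on_left and on_right:   # straddling the line: drive straight
        drive(0.5, 0.0)
    elif on_left:              # drifting right: steer back left
        drive(0.3, 0.4)
    elif on_right:             # drifting left: steer back right
        drive(0.3, -0.4)
    else:                      # lost the line: rotate slowly to search
        drive(0.0, 0.3)
```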

I noticed among the other issues one about the camera view of other robots in the world (I didn't try it: do you just pass a list of different robot objects into the world?). I haven't really thought about multi-robot sim, but it could be useful, esp. w/ multiple pucks too.

As other considerations come to mind, I'll try them out in my demo or raise them here.

One thing I do want to explore is embedding streaming data charts using data collected from the robot in the visual simulator as it runs.
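
The pattern I'd probably reach for first is the classic clear_output(wait=True) redraw loop, something like this rough sketch (get_sonar_reading() is just a made-up stand-in for a real sensor poll):

```python
# A minimal live-chart sketch: redraw a matplotlib plot in place on each
# simulation step. get_sonar_reading() is a made-up stand-in for polling
# a real sensor on the simulated robot.
import random
import time
import matplotlib.pyplot as plt
from IPython.display import clear_output

def get_sonar_reading():
    return random.uniform(0.0, 3.0)  # fake range data for the sketch

readings = []
for step in range(50):
    readings.append(get_sonar_reading())
    clear_output(wait=True)  # replace the previous frame rather than stacking
    plt.plot(readings)
    plt.xlabel("simulation step")
    plt.ylabel("sonar range (m)")
    plt.show()
    time.sleep(0.1)
```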

PS your book project sounds exciting:-)

@dsblank (Contributor) commented Nov 15, 2019

@psychemedia Yes, no downward-facing sensors. A fun task that we did in intro CS, but it wasn't the goal of Jyro (mostly camera and range sensor oriented, for deep learning models). Downward-facing sensors might be possible (and it would be interesting to have different colors of flooring, too). Yes, another limit is not being able to see other robots or pucks. Streaming data charts as it runs should be easy, I think.

@dsblank (Contributor) commented Nov 15, 2019

Something related that could be useful: I ported Jyro over to Java (converted into JavaScript) for my Processing-based CS intro: https://jupyter.brynmawr.edu/services/public/dblank/CS110%20Intro%20to%20Computing/2017-Spring/Lectures/Robot%20Control.ipynb

@psychemedia (Author)

Ah, yes, thanks for the reminder of that Processing one... I thought I recalled something along those lines;-) Cribbed the files into my review repo, if that's okay, just so I can run multiple demos in one Binder container. (I really should have thought about doing the review as a Jupyter Book from the start... ho hum... if nothing else, it may be a useful exercise pondering what needs to change in order to make an effective Jupyter Book...)

@psychemedia (Author)

Hi Doug

We decided yesterday that we would be going with Jyro and conx for our course rewrite, with an April 2020 handover I think. My first line of attack will be to just see how far I can convert our current activities to near equivalents using the packages as they currently stand, maybe with tiny itch-scratching additions where required / quickly figured out.

One comment from others on the module team regarded the UI / screen real estate management: the inline simulator being fixed in location inside the notebook is not ideal. Although I far prefer the notebook UI (I'm guessing cribbing from the ipyturtle extension to the DOMWidgetModel and DOMWidgetView would be one way of floating the simulator?), I don't know how much work is involved in getting the widget working as a tear-offable one in JupyterLab (I haven't tried to build anything for JupyterLab).

I'll get a much better feel for the usability requirements for our use case once I've ported the original activities over, which hopefully will be in the next 2-3 weeks. I probably should also spend some time in JupyterLab just to get a feel for how we might write materials in that sort of UI. I think someone else has also volunteered to help with the activities, so I'll see if their skills / interest / time availability also extends to contributing to the simulator. (One thing we really lack at the OU is a pool of grad students / Masters project students who could work on things like this...)

Another comment was that the Processing port simulator seemed much smoother. Whilst we're committed to the Python code, I did start idly wondering about Processing-style widgets in a Python notebook (e.g. https://github.com/jtpio/ipyp5 or the new canvas widget https://github.com/martinRenou/ipycanvas), but coding animations / visual simulations isn't something I've ever really played with (now might be the time for me to start!).
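
For reference, the basic ipycanvas animation loop looks something like the sketch below, if I've read its docs right (hold_canvas batches the draw calls so each frame paints in one go):

```python
# A tiny ipycanvas animation sketch: a disc gliding across the canvas.
# Assumes ipycanvas's documented Canvas / hold_canvas API.
import time
from ipycanvas import Canvas, hold_canvas
from IPython.display import display

canvas = Canvas(width=300, height=100)
display(canvas)

for frame in range(60):
    with hold_canvas(canvas):      # batch draw calls into one frame update
        canvas.clear()
        canvas.fill_style = "steelblue"
        canvas.fill_circle(10 + frame * 4, 50, 10)  # a "robot" on the move
    time.sleep(0.05)
```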

Although I haven't started thinking about conx activities at all, I suspect that one way of using it will be to develop Voilà-driven apps that are widget controlled, and then perhaps reveal to students how they were put together. Our time budget in terms of student study time for the NN topics is really limited in this course, though there is a third-year machine learning course that conx might provide a good on-ramp for; I'll try to pitch it to that module team if they haven't already started exploring it.

@dsblank (Contributor) commented Nov 23, 2019

Well, thank you very much! I take that vote with a sense of responsibility. Very exciting!

Some notes:

  1. I will take a look at some kind of tear-off or singleton viewer for the Jyro simulator. I'll need to figure out some API, I think, for the slightly different semantics. The conx "dashboard" will probably want the same kind of treatment.

  2. The Processing port is smoother in that it runs in the browser. But now, having written both kinds of systems, maybe there is something to be learned.

  3. Being able to sense other objects (robots, pucks, etc.) needs to be explored... it can easily be done, but the computational requirements can easily exceed the processing power of, and bandwidth to, the server.

  4. conx is written to work with Keras and circa TensorFlow 1.14. There is a new TensorFlow 2 with built-in Keras, but that will require some re-engineering to work with conx. (Our next big rewrite of conx will probably be as stand-alone packages to work alongside the new tensorflow.keras (version 2).)

Very exciting! I look forward to working with you to make this pedagogically useful!

@psychemedia (Author)

@dsblank It's actually we who should be thanking you for sharing the code:-) (I did suggest we try to get you over for a few days...hmm, thinks... there's a community workshop call out, isn't there?)

Will contribute as and where we can, and share back content WIP privately if not publicly (I'll share as much on open repos as I can get away with!).

I'm also happy to test things, and will be trying out various teaching strategy approaches to explore the space a bit more, even if we don't end up going there for the teaching material. (E.g. we use screencasts for some things, so I'll probably have a look at Jupyter Graffiti, and maybe even some bits of Selenium automation as a tool for creating screencasts.) Accessibility is another thing we always have to bear in mind, both in terms of keyboard accessibility / support for visually impaired users, and in terms of equivalent activities for students who can't access a particular activity. Again, I'll be looking for strategies around and about to support this.

Deployment-wise, which is something we need to consider from the start (1k students twice a year, all at a distance, on a wide variety of machines, some of which may be pretty ropey...), I'd be looking at delivering personal environments via a Docker container (ContainDS is already looking promising in this regard — I always did have a soft spot for Kitematic!), as well as a hosted service behind JupyterHub, probably Docker-spawned, although a plugin for a TLJH custom environment could be on the cards too. The course has a couple of other software requirements which require virtualisation, so I'm keen to explore Docker delivery.

@psychemedia (Author)

@dsblank "The Processing port is smoother in that it runs in the browser. But now, having written both kinds of systems, maybe there is something to be learned"

To what extent could that be wrapped as an ipywidget, then controlled from Python? By the by, jp_proxy_widget seems to straightforwardly wrap JS packages in an ipywidgets container (I had a quick play here). It's also supposed to work in JupyterLab (I think), though IIRC my demo didn't work for me when I tried it (I was hoping to be able to tear out the spectrogram into its own window).
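
For reference, the basic jp_proxy_widget pattern is roughly the sketch below, if I've understood its README; the message content is obviously just an example:

```python
# A minimal jp_proxy_widget sketch: run a few lines of JS against the
# widget's DOM node and pass a Python value in. js_init() binds keyword
# arguments as JS variables; "element" is the widget's jQuery element.
import jp_proxy_widget

w = jp_proxy_widget.JSProxyWidget()
w.js_init("""
    element.html("<b>" + message + "</b>");
""", message="hello from Python")
w  # display as the last expression in a notebook cell
```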

@dsblank (Contributor) commented Jan 16, 2020

@psychemedia Interesting idea! Going to take a deep dive on this, this weekend.
