Pass variables between exercises #72
That's by design so that each exercise is completely independent (users can execute any exercise without worrying about whether they completed previous exercises correctly). You can compensate for this with setup chunks that "prime" each exercise with the correct preconditions. See the documentation here: https://rstudio.github.io/learnr/exercises.html#exercise_setup
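As an illustrative sketch of that pattern (the chunk names and data here are invented, not from the thread): a non-exercise chunk creates the objects an exercise needs, and the exercise references it via the `exercise.setup` chunk option, so the exercise runs correctly even if no earlier exercise was completed:

```{r prepare-data}
# Runs before the exercise below, recreating its preconditions
d <- data.frame(x = 1:10, y = (1:10)^2)
```

```{r plot-data, exercise = TRUE, exercise.setup = "prepare-data"}
# The student's code sees `d` without depending on any earlier exercise
plot(d$x, d$y)
```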
I was hoping to use this for a class but students would need to enter the data. I can see the advantage of independent exercises, but the tutorial creator could make them independent even if you allow connections among the exercises.
Part of the reasoning there is to make tutorials compatible with an
architecture that runs everything in another process (or even on another
machine). As soon as you allow dependencies on previous exercises to creep
in this becomes much, much more difficult. Agreed though that this makes
entry of arbitrary data impossible, perhaps we can eventually reach a
middle ground that supports that approach as an option.
@rachelss I'd like to talk with you about this. I also have been having to work around the each-exercise-its-own-session design. I've been finding ways to do this, but I'm still trying to sort out whether the difficulty is my limited imagination of how to write exercises or a genuine need for communication among the exercise chunks. Could you describe to me the problem you're facing? With modest probability, I might help you find a workaround, but I know it would help me sharpen my thinking about this situation.
BTW it's not technically difficult to allow the sharing, it's more of a
policy decision. I'm open to doing whatever the consensus is here.
@dtkaplan I would like students in our intro bio course to graph in R. To do this they need a very low barrier so they can (initially) focus on how to load data and make a single graph.

Exercise 1 is then to load data. Assume example data are given. The student then changes the data file, and loads data with and without a header, and with/without rownames as data.

Exercise 2 is to make a graph. Instructions are shown with initial data (loaded and hidden by the author). The student then changes to their own data. Students could be asked questions after each exercise. The goal is to break it up as much as possible to allow focusing on a single task. Finally, students would need to be able to print.

@jjallaire If there are concerns about sharing data among exercises but you're willing to allow this, could you add warnings at publish time that alert the author about using data that is not built in? I can see where it could be done unintentionally and with problematic results. I do understand I am asking you to extend learnr to a whole new use that might be beyond its scope. However, I think this interface would be appealing to students. It is currently challenging to get students over the initial learning curve for R, and I am looking for anything that helps.
This is exactly the sort of situation I've faced, @rachelss. Here are some of the ways to deal with it using stand-alone exercises, although you know best what would work for your students.
I can certainly be wrong about (2) --- I'll have my first chance to test it out this week at USCOTS. @jjallaire, I think the policy decision you made is a good one, since it's good to encourage people to make the exercises as independent as possible. One way that I've thought might encourage the independent-exercise model while allowing an override by the exercise author would be to have a
@dtkaplan I do like the Google sheets idea and that is the plan.
The biggest downside to allowing environment sharing is that the exercises
now cannot in principle be run on an "exercise farm". The package doesn't
currently take advantage of this but it would be nice to swap this in
underneath all the existing tutorials later. A secondary downside is that
users must now execute the chunks in order (obviously in an instructor-led
classroom setting this isn't such a big problem).
Please keep me apprised of whether you think it's urgent that environment
sharing be allowed as an option.
@jjallaire I will try this with students. Thanks for all the discussion and advice.
You can also "seed" code from the previous chunks in the new chunks (this would work especially well with progressive reveals of sections, https://rstudio.github.io/learnr/exercises.html#progressive_reveal). In other words, you just insert the correct solution from the previous chunks at the beginning of the new chunks. This allows "building on" the previous chunks in a fashion but still preserves independent execution.
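A sketch of that seeding pattern (the chunk names, file name, and columns are invented for illustration): the second exercise's starter code simply begins with the correct solution to the first, so it still runs on its own:

```{r load-data, exercise = TRUE}
# Exercise 1: the student writes this code themselves
cars <- read.csv("cars.csv")
```

```{r plot-loaded-data, exercise = TRUE}
# Exercise 2 is pre-seeded with the solution to Exercise 1, so it
# executes independently of whether Exercise 1 was actually completed
cars <- read.csv("cars.csv")
plot(cars$speed, cars$dist)
```

A side effect the thread notes approvingly: the final exercise ends up containing a complete script.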
Recently, I taught a bunch of people (all R newbies) the data science process. The emphasis was on understanding the DS process and not on learning R. Within the tutorial the students went through the typical phases: importing data, exploring, transforming, modeling, and evaluating models. What I ended up doing was to calculate all the major results of the different DS process steps and put them either into the global environment or into setup chunks which were shared by several exercises. Within the tutorial the students themselves calculated those results again in the respective chunks. However, all of the results were already available to them right from the start (as R newbies they did not know the
Well, this could be one solution, but it can be an ugly one: continuing with the DS process example, you would need to "seed" the complete code for splitting the data into a training and test set, plus the model-building steps, in order to predict new values using the respective model and the test set in an exercise at the end. In tutorials in which you really follow a certain kind of process, this setup/solution seems to be too much repetition. Long story short, I think "exercise chaining" would be a valuable feature.
+1 on this one. Allowing input from Shiny widgets (sliderInput, numericInput, ...) would be incredibly useful as well.
Environment sharing would be very valuable for my teaching project as well. I'm currently exploring exercise setup, https://rstudio.github.io/learnr/exercises.html#exercise_setup, but this does not suffice when I need the first chunk to be an exercise. I'm voting for an option where you can explicitly define the environment for each chunk.
I agree - chaining is useful, but so far not having a problem with self-generating the chain in a series of setup code chunks. Obviously it is laborious writing lots of setup chunks, but it at least makes you think explicitly about what an exercise relies on. |
I will also agree that having the option to chain would be incredibly useful for a class I'm helping develop. The idea would be to have learnR simulate what is essentially a Jupyter Notebook for students, and that could definitely come in handy, letting students run code in the browser without requiring any setup.
Because the chaining options don't exist in LearnR, we're unfortunately going to have to migrate our development thus far to RTutor, which offers very similar features to LearnR but with chaining enabled.
really wish this was an option in learnr :) |
+1 for adding optional environment sharing between code chunks in learnr. We have been (painstakingly) recreating environments in setup chunks, but we are now writing a new course (for an MSc in Health Data Science) on deep learning, and re-running the set-up code before each chunk is no longer viable, nor is pre-computing everything and stuffing it in the .GlobalEnv at start-up ideal (which is what the "setup" chunk does).

You can of course explicitly assign objects to the global environment, and they are then available to subsequent code chunks, like this:

Write the R code required to add two plus two:

```{r two-plus-two, exercise=TRUE}
answer <- 2 + 2
assign("answer", answer, envir = .GlobalEnv)
answer
```

Exercise with Code

```{r what-is-four, exercise=TRUE}
answer
```

but you need to explain to the students that all those assign() calls are needed as a consequence of using the learnr environment, and aren't otherwise required. That's ugly, and confusing for the students.

If there were post-chunk hooks that shared the chunk environment, then certain objects could be assigned invisibly to .GlobalEnv and it would all just work. Well, if the execution order of chunks is correct... ways around this might be to support pre-chunk check function calls, or to maintain a list of chunks on which the current chunk depends and whether they have all been run in the correct order. That sounds messy and complex... How does Jupyter handle this issue?

Actually, maybe just a chunk option that causes that chunk to execute in the global environment? Such chunks could even be rendered with a slightly different background colour to distinguish them from stand-alone chunks.

We teach our students about R environments and scoping rules when we introduce functions. It confuses them at first, but it is worth it later on. Thus explaining that some code chunks in a learnr document execute in the global environment and thus share their state isn't a problem for us. So a single flag on a code chunk to make it a global environment chunk would suffice, I think. This could use the same hook as the special-casing of the chunk name "setup", which causes it to execute in the global environment. Hmmm, this might be a very easy change indeed.
Has there been any progress on this issue? I spent my whole day today learning Learnr and porting a tutorial on MNL, SVM, and ordinal regression to Learnr, only to be met with a terrible error message that it couldn't find the data set that we read in the previous chunk! It's a great package but unfortunately, I can't use it. I looked at RTutor but it's not as polished as Learnr.
@ashgreat, yes, each learnr chunk executes in its own environment. There are chunk hooks provided to allow other chunks to set up the environment for a target chunk, or all chunks. You just need to use those. They work well provided the chunk set-up is fast and simple. If it involves a lot of code or computation, however, it's a problem. See my comment above for some work-arounds for that, but a proper solution is needed. Apart from that major flaw, learnr is very good.
Thanks @timchurches. Yes, I saw your solution. Unfortunately, my exercise is pretty complex and using

@jjallaire, you wrote above

Could you please consider altering the policy? It looks like many of us would like to have the option of passing objects between chunks.
+1 to be able to define the environment in which code chunks are run, while keeping them independent as default |
@nischalshrestha has created a fair compromise by allowing setup chunks to be chained together in #390, which was just merged.
Example:

```{r setup, include = FALSE}
library(learnr)
d <- 3
```
```{r setupA}
a <- 5
```
```{r setupB, exercise.setup = "setupA"}
b <- a + d
```
Default behavior.
Chunks that will be executed: setup, user submission for ex0
```{r ex0, exercise=TRUE}
d # 3
```
Chain setup values.
Chunks that will be executed: setup, setupA, setupB, user submission for ex1
```{r ex1, exercise=TRUE, exercise.setup = "setupB"}
x <- b + 1
x # 9
```
Do not need to use the full chain.
Chunks that will be executed: setup, setupA, user submission for ex2
```{r ex2, exercise=TRUE, exercise.setup = "setupA"}
y <- a + 1
y # 6
```
Use other exercise's pre-filled code chunk.
Chunks that will be executed: setup, setupA, setupB, ex1, user submission for ex3
```{r ex3, exercise=TRUE, exercise.setup = "ex1"}
z <- x + 6
z
```

Closing as fixed in #390
If I load data into a variable in one exercise that variable is not found in the next exercise.