
Fixes for assessments dashboard #44887

Merged: 10 commits into staging from jamescodeorg/fix-assessments-1, Feb 25, 2022
Conversation

@jamescodeorg (Contributor) commented on Feb 16, 2022:

This PR contains several fixes for the assessments dashboard.

The biggest change is a medium-term workaround for a bug where the assessments dashboard displays incorrect data. This bug arises because student answers for assessments are stored in two places: (1) the answer for each individual question is stored on the UserLevel corresponding to the individual question (a.k.a. sublevel), and (2) a JSON blob with the answers to all questions in the assessment is stored on the UserLevel corresponding to the assessment (a.k.a. LevelGroup). These two places are updated separately by the client so they can and do get out of sync.
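For concreteness, here is a schematic of the two storage locations described above. The exact field values are illustrative assumptions; per the user_levels_by_user doc comment further down, the answer text itself lives in LevelSource data associated with each UserLevel.

# Illustration only, not the actual schema:
# (1) One UserLevel per question (sublevel), each with its own answer:
#   UserLevel(user: student, level: question_1)  -> LevelSource(data: "B")
#   UserLevel(user: student, level: question_2)  -> LevelSource(data: "free response text")
# (2) One UserLevel for the whole assessment (LevelGroup), with a JSON blob of all answers:
#   UserLevel(user: student, level: level_group) -> LevelSource(data: '{"question_1": "B", "question_2": "free response text"}')
# The client updates (1) and (2) separately, so they can and do drift out of sync.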

This PR changes the assessments dashboard to read student answers from the individual questions, like other parts of the website. To avoid regressing performance, we now process students in batches of 20, retrieving all the UserLevel rows needed to construct the response for those 20 students in a single query.
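As a rough sketch of that batching shape (not the PR's actual code: student_ids, level_ids, script, results, and build_student_entry are hypothetical stand-ins; only the User.user_levels_by_user signature, shown later in this page, comes from the diff):

# Hypothetical sketch of the batched loading described above.
results = []
student_ids.each_slice(20) do |batch_ids|
  # One query per batch of 20 students, returning UserLevel rows keyed by user id (assumed).
  user_levels_by_user = User.user_levels_by_user(batch_ids, script.id, level_ids)

  batch_ids.each do |student_id|
    # build_student_entry stands in for the per-student response construction.
    results << build_student_entry(student_id, user_levels_by_user[student_id] || [])
  end
end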

The long-term direction here is still TBD. After this change, the JSON blob on the assessment's UserLevel is no longer used. I've left the related code and data alone for now and will revisit and clean up as appropriate when we take a deeper look at assessments next year. This is tracked as LP-2220.

While I was in this area, I also fixed some cosmetic issues that made the page look really sloppy.

Before: [screenshot, Feb 11, 2022]

After: [screenshot, Feb 22, 2022]

Links

Testing story

Deployment strategy

Follow-up work

Privacy

Security

Caching

PR Checklist:

  • Tests provide adequate coverage
  • Privacy and Security impacts have been assessed
  • Code is well-commented
  • New features are translatable or updates will not break translations
  • Relevant documentation has been added or updated
  • User impact is well-understood and desirable
  • Pull Request is labeled appropriately
  • Follow-up work items (including potential tech debt) are tracked and linked

@jamescodeorg force-pushed the jamescodeorg/fix-assessments-1 branch 2 times, most recently from e50772a to 46dad3a, on February 21, 2022 00:57
@jamescodeorg changed the title from "Assessments (WIP)" to "Fixes for assessments dashboard" on Feb 22, 2022
@jamescodeorg requested a review from a team on February 22, 2022 19:03
Review thread on a diff excerpt from the dashboard code that builds each level's result:

level_result[:student_result] = student_answer
level_result[:status] = ""
when Multi
answer_indexes = Script.cache_find_level(level.id).correct_answer_indexes_array
Contributor:
Would it make sense to have a method on the Multi model that takes a student answer and returns the result (unsubmitted, correct, incorrect)?

jamescodeorg (Author):
That's a good idea, but I think if we did that, we'd want to also move the logic for the other level types into their respective classes and that feels out of scope for this change. Ok if we save that for another day?
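For illustration, a minimal sketch of what the suggested Multi method might look like; this is not code from the PR, and the method name, answer format, and standalone class definition are assumptions:

# Hypothetical sketch of the reviewer's suggestion: let the Multi level type
# grade a raw student answer itself instead of the dashboard doing it.
class Multi
  # Mirrors the accessor used in the diff excerpt above.
  attr_accessor :correct_answer_indexes_array

  # Returns :unsubmitted, :correct, or :incorrect for a student's answer,
  # assuming the answer is a comma-separated list of selected indexes, e.g. "0,2".
  def grade_answer(student_answer)
    return :unsubmitted if student_answer.nil? || student_answer.strip.empty?

    selected = student_answer.split(',').map(&:to_i).sort
    selected == correct_answer_indexes_array.sort ? :correct : :incorrect
  end
end

# Usage (hypothetical):
#   multi = Multi.new
#   multi.correct_answer_indexes_array = [0, 2]
#   multi.grade_answer("2,0")  # => :correct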

Review thread on the new User.user_levels_by_user helper:

# A given user with no UserLevel matching the given criteria is omitted from
# the returned hash. The associated LevelSource data for each UserLevel is also
# prefetched to prevent n+1 query issues.
def self.user_levels_by_user(user_ids, script_id, level_ids)
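For context, a minimal sketch of a query that would satisfy the documented contract above; the method's actual body is not shown in this excerpt, so the query shape and the level_source association name are assumptions:

# Hypothetical sketch only, assuming a standard ActiveRecord UserLevel model
# with a level_source association.
def self.user_levels_by_user(user_ids, script_id, level_ids)
  # Fetch every matching UserLevel in one query, prefetching LevelSource data
  # to avoid n+1 queries, then key the rows by user id. Users with no matching
  # UserLevel are simply absent from the resulting hash.
  UserLevel.where(user_id: user_ids, script_id: script_id, level_id: level_ids)
    .includes(:level_source)
    .group_by(&:user_id)
end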
Contributor:
I'm curious why you chose to put this in the User model

jamescodeorg (Author):
Good question! Several of our progress-related methods are here, including User#user_levels_by_level right above this method and User#user_levels_by_user_by_level right below, which seemed very similar. I'm open to moving it if there's a better place to put it.

Contributor:
My initial instinct is that a query returning UserLevels for multiple users would make more sense in the UserLevel model, but if this is the pattern, I'm fine with leaving it as-is. I'd love it if we had clearer patterns for what should go in our models and where queries should live, because it's often unclear to me and feels really disorganized.

jamescodeorg (Author):
100% agree that this is confusing and disorganized. 😞

@jamescodeorg merged commit 36a0fdf into staging on Feb 25, 2022
@jamescodeorg deleted the jamescodeorg/fix-assessments-1 branch on February 25, 2022 17:34