
Add facilitator and workshop rollups #29518

Merged (9 commits), Jul 26, 2019
@@ -104,18 +104,18 @@ export class FacilitatorAveragesTable extends React.Component {
<tr>
<td>Total responses</td>
<td>
-              {
+              {JSON.stringify(
                 this.props.facilitatorResponseCounts['this_workshop'][
                   this.props.facilitatorId
                 ]
-              }
+              )}
</td>
<td>
-              {
+              {JSON.stringify(
                 this.props.facilitatorResponseCounts['all_my_workshops'][
                   this.props.facilitatorId
                 ]
-              }
+              )}
</td>
</tr>
{Object.keys(questionOrder).map(category => {
@@ -128,10 +128,12 @@ def generic_survey_report

# GET /api/v1/pd/workshops/experiment_survey_report/:id/
def experiment_survey_report
-    results = report_single_workshop(@workshop, current_user)
+    this_ws_report = report_single_workshop(@workshop, current_user)
+    rollup_report = report_rollups(@workshop, current_user)
+
+    results = this_ws_report.merge(rollup_report)
     results[:experiment] = true

-    # TODO: add rollup report before returning result to client
render json: results
rescue => e
notify_error e
2 changes: 1 addition & 1 deletion dashboard/lib/pd/survey_pipeline/daily_survey_decorator.rb
@@ -33,7 +33,7 @@ def self.process_data(context)

OUTPUT_KEYS.each do |key|
context[key] ||= {}
-    context[key].merge! results[key]
+    context[key].deep_merge! results[key]
end

context
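The switch from `merge!` to `deep_merge!` matters because the values stored under `OUTPUT_KEYS` are nested hashes: a shallow merge replaces each top-level value wholesale, dropping results from earlier pipeline passes. A minimal plain-Ruby sketch of the difference (the production code uses ActiveSupport's `Hash#deep_merge!`; the keys below are illustrative):

```ruby
# Illustrative plain-Ruby stand-in for ActiveSupport's Hash#deep_merge!,
# to show why a shallow merge! would drop nested keys here.
def deep_merge(a, b)
  a.merge(b) do |_key, old_val, new_val|
    old_val.is_a?(Hash) && new_val.is_a?(Hash) ? deep_merge(old_val, new_val) : new_val
  end
end

# Two pipeline passes contribute results for the same question key:
context = {summaries: {q1: {this_workshop: 4.5}}}
results = {summaries: {q1: {all_my_workshops: 4.2}}}

# Shallow merge replaces q1's nested hash wholesale, losing :this_workshop.
shallow = context[:summaries].merge(results[:summaries])

# Deep merge keeps both nested keys under q1.
deep = deep_merge(context[:summaries], results[:summaries])
```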
9 changes: 5 additions & 4 deletions dashboard/lib/pd/survey_pipeline/daily_survey_parser.rb
@@ -2,7 +2,8 @@ module Pd::SurveyPipeline
class DailySurveyParser < SurveyPipelineWorker
include Pd::JotForm::Constants

-    REQUIRED_INPUT_KEYS = [:survey_questions, :workshop_submissions, :facilitator_submissions]
+    REQUIRED_INPUT_KEYS = [:survey_questions]
+    OPTIONAL_INPUT_KEYS = [:workshop_submissions, :facilitator_submissions]
OUTPUT_KEYS = [:parsed_questions, :parsed_submissions]

# @param context [Hash] contains necessary input for this worker to process.
@@ -15,11 +16,11 @@ class DailySurveyParser < SurveyPipelineWorker
def self.process_data(context)
check_required_input_keys REQUIRED_INPUT_KEYS, context

-      results = transform_data context.slice(*REQUIRED_INPUT_KEYS)
+      results = transform_data context.slice(*(REQUIRED_INPUT_KEYS + OPTIONAL_INPUT_KEYS))

OUTPUT_KEYS.each do |key|
context[key] ||= {}
-    context[key].merge! results[key]
+    context[key].deep_merge! results[key]
end

context
@@ -33,7 +34,7 @@ def self.process_data(context)
#
# @return [Hash{:questions, :submissions => Hash}]
#
-  def self.transform_data(survey_questions:, workshop_submissions:, facilitator_submissions:)
+  def self.transform_data(survey_questions:, workshop_submissions: [], facilitator_submissions: [])
workshop_submissions = parse_submissions(workshop_submissions)
facilitator_submissions = parse_submissions(facilitator_submissions)

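Making `workshop_submissions` and `facilitator_submissions` default to `[]` is what lets the rollup paths invoke the parser with only one submission type. A minimal sketch of the keyword-default behavior (simplified signature and body, not the real `transform_data`):

```ruby
# Simplified stand-in for the new signature: the submission lists now default
# to empty arrays, so callers may pass only the keys they have.
def transform_data(survey_questions:, workshop_submissions: [], facilitator_submissions: [])
  {
    question_count: survey_questions.size,
    submission_count: workshop_submissions.size + facilitator_submissions.size
  }
end

# Before this change, omitting the submission keywords raised ArgumentError.
only_questions = transform_data(survey_questions: [:q1, :q2])
with_workshop  = transform_data(survey_questions: [:q1], workshop_submissions: [:s1, :s2])
```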
15 changes: 11 additions & 4 deletions dashboard/lib/pd/survey_pipeline/mapper.rb
@@ -5,7 +5,7 @@ class GenericMapper < SurveyPipelineWorker
attr_reader :group_config, :map_config

REQUIRED_INPUT_KEYS = [:question_answer_joined]
-    OUTPUT_KEYS = [:summaries]
+    OUTPUT_KEYS = [:summaries, :errors]

# Configure mapper object.
#
@@ -52,7 +52,7 @@ def process_data(context)
#
def map_reduce(question_answer_joined:)
groups = group_data question_answer_joined
-      {summaries: map_to_reducers(groups)}
+      map_to_reducers(groups)
end

# Break data into groups using grouping configuration.
@@ -63,6 +63,7 @@ def group_data(data)
# Map groups to reducers using mapping configuration.
def map_to_reducers(groups)
summaries = []
errors = []

groups.each do |group_key, group_records|
# Apply matched reducers on each group.
@@ -71,15 +72,21 @@ def map_to_reducers(groups)
next unless condition.call(group_key)

reducers.each do |reducer|
-              reducer_result = reducer.reduce group_records.pluck(field)
+              # Only process values that are not nil
+              reducer_result = reducer.reduce group_records.pluck(field).compact
next unless reducer_result.present?

summaries << group_key.merge({reducer: reducer.name, reducer_result: reducer_result})
rescue => e
errors << e.message
Contributor:
Not blocking on this PR, but I'm interested in a discussion about how specific we can usefully get with this (and similar) rescue blocks. I know we want to be resilient against failures, but I wonder if we can define useful boundaries of that resilience that will help us notice very unexpected scenarios faster.

Related: Rescue StandardError, Not Exception - ThoughtBot (especially the "Best-case Scenario" example)

Contributor (author):
My train of thought when dealing with errors (StandardError) is:

  1. What code is more likely to produce errors? (Any code could potentially fail, but some carries higher risk than others.)
  2. What errors are expected (everything else is unexpected)?
  3. What errors are fatal at what layers? (An error could be fatal and halt execution of a callee function but at the same time be non-fatal to the caller.)
  4. What to do when catching an expected/unexpected, fatal/non-fatal error? A few options are:
    a) Interrupt the current code flow, throw the error up one layer, and let the upper layer deal with it.
    b) Swallow the error and continue with the current code flow.
    c) Swallow the error but report it via Honeybadger (externally), or record it and bubble the info up to upper layers.

In this particular case, the rescued block executes one reducer function on one group of data, which is part of applying multiple reducers to multiple groups. I chose option (c) here: swallow all errors, record them, and bubble them up. This is a best-effort approach, producing usable results while still notifying higher layers about caught errors.

end
end
end

-      summaries
+      {
+        summaries: summaries,
+        errors: errors
+      }
end
end
end
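The option (c) pattern discussed in the comment thread above can be sketched as follows (the method name and the division-based reducer are illustrative, not from this PR):

```ruby
# Best-effort mapping: rescue StandardError per group, record the message,
# and keep processing the remaining groups; callers get results plus errors.
def apply_reducers(groups)
  summaries = []
  errors = []

  groups.each do |name, values|
    # Integer division by zero raises ZeroDivisionError for an empty group;
    # the error is recorded and the loop moves on to the next group.
    summaries << {name: name, avg: values.sum / values.size}
  rescue StandardError => e
    errors << "#{name}: #{e.message}"
  end

  {summaries: summaries, errors: errors}
end

# 'q2' is empty, so its reducer fails, but 'q1' is still summarized.
result = apply_reducers('q1' => [4, 6], 'q2' => [])
```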
206 changes: 206 additions & 0 deletions dashboard/lib/pd/survey_pipeline/survey_pipeline_helper.rb
@@ -3,8 +3,161 @@
require 'pd/survey_pipeline/daily_survey_joiner.rb'
require 'pd/survey_pipeline/mapper.rb'
require 'pd/survey_pipeline/daily_survey_decorator.rb'
require 'pd/survey_pipeline/survey_rollup_decorator.rb'

module Pd::SurveyPipeline::Helper
QUESTION_CATEGORIES = [
FACILITATOR_EFFECTIVENESS_CATEGORY = 'facilitator_effectiveness',
WORKSHOP_OVERALL_SUCCESS_CATEGORY = 'overall_success',
WORKSHOP_TEACHER_ENGAGEMENT_CATEGORY = 'teacher_engagement'
]

# Summarize facilitator-specific and general workshop results from all workshops
# that a group of selected facilitators have facilitated.
#
# @param workshop [Pd::Workshop]
# @param current_user [User]
#
# @return [Hash]
#
def report_rollups(workshop, current_user)
# Filter list of facilitators that the current user can see.
facilitator_ids =
if current_user.program_manager? || current_user.workshop_organizer? || current_user.workshop_admin?
workshop.facilitators.pluck(:id)
else
[current_user.id]
end

# Roll up facilitator-specific results and general workshop results for each facilitator
results = {}
facilitator_ids.each do |facilitator_id|
results.deep_merge! report_facilitator_rollup(facilitator_id, workshop)
results.deep_merge! report_workshop_rollup(facilitator_id, workshop)
end

results
end

# Summarize facilitator-specific results from all related workshops
# that a facilitator has facilitated.
#
# @param facilitator_id [Number] a valid user id
# @param workshop [Pd::Workshop] a valid workshop
#
# @return [Hash{:facilitators, :facilitator_response_counts, :facilitator_averages, :errors => Hash, Array}]
# facilitators: {facilitator_id => fac_name}
# facilitator_response_counts: {this_workshop, all_my_workshops => {facilitator_id => count}}
# facilitator_averages: {
# fac_name => {qcategory, qname => {this_workshop, all_my_workshops => score}},
# questions => {qname => qtext}
# }
# errors: Array
#
def report_facilitator_rollup(facilitator_id, workshop)
context = {
current_workshop_id: workshop.id,
facilitator_id: facilitator_id,
question_categories: [FACILITATOR_EFFECTIVENESS_CATEGORY],
submission_type: 'Facilitator'
}

# Retrieve data
related_ws_ids = find_related_workshop_ids(facilitator_id, workshop.course)
context[:related_workshop_ids] = related_ws_ids
context.merge! retrieve_facilitator_surveys([facilitator_id], related_ws_ids)

# Process data
process_rollup_data context

# Decorate
Pd::SurveyPipeline::FacilitatorSurveyRollupDecorator.process_data context

context[:decorated_summaries]
end

def report_workshop_rollup(facilitator_id, workshop)
context = {
current_workshop_id: workshop.id,
facilitator_id: facilitator_id,
question_categories: [WORKSHOP_OVERALL_SUCCESS_CATEGORY, WORKSHOP_TEACHER_ENGAGEMENT_CATEGORY],
submission_type: 'Workshop'
}

# Retrieve data
related_ws_ids = find_related_workshop_ids(facilitator_id, workshop.course)
context[:related_workshop_ids] = related_ws_ids
context.merge! retrieve_workshop_surveys(related_ws_ids)

# Process data
process_rollup_data context

# Decorate
Pd::SurveyPipeline::WorkshopSurveyRollupDecorator.process_data context

context[:decorated_summaries]
end

def process_rollup_data(context)
# Transform data
Pd::SurveyPipeline::DailySurveyParser.process_data context
Pd::SurveyPipeline::DailySurveyJoiner.process_data context

# Convert string answers to numbers
context[:question_answer_joined].each do |qa|
if qa.dig(:option_map, qa[:answer])
qa[:answer_to_number] = qa[:option_map][qa[:answer]]
end
end

# Mapper + Reducer
# Summarize results for all workshops
group_config_all_ws = [:name, :type, :answer_type]

is_selected_question_all_ws = lambda do |hash|
context[:question_categories].any? {|category| hash[:name]&.start_with? category}
end

map_config_all_ws = [
{
condition: is_selected_question_all_ws,
field: :answer_to_number,
reducers: [Pd::SurveyPipeline::AvgReducer]
}
]

Pd::SurveyPipeline::GenericMapper.new(
group_config: group_config_all_ws, map_config: map_config_all_ws
).process_data context

# Summarize results for the current workshop
group_config_this_ws = [:workshop_id, :name, :type, :answer_type]

is_selected_question_this_ws = lambda do |hash|
hash[:workshop_id] == context[:current_workshop_id] &&
context[:question_categories].any? {|category| hash[:name]&.start_with? category}
end

map_config_this_ws = [
{
condition: is_selected_question_this_ws,
field: :answer_to_number,
reducers: [Pd::SurveyPipeline::AvgReducer]
}
]

Pd::SurveyPipeline::GenericMapper.new(
group_config: group_config_this_ws, map_config: map_config_this_ws
).process_data context
end

# Summarize all survey results for a workshop.
#
# @param workshop [Pd::Workshop]
# @param current_user [User]
#
# @return [Hash]
#
def report_single_workshop(workshop, current_user)
# Fields used to group survey answers
group_config = [:workshop_id, :form_id, :facilitator_id, :name, :type, :answer_type]
@@ -54,4 +207,57 @@ def create_generic_survey_report(context, workers)
w.process_data context
end
end

# Find all workshops of the same course and facilitated by a facilitator.
#
# @param facilitator_id [Number] valid facilitator id
# @param course [String] valid course name
#
# @return [Array<Number>] list of workshop ids
#
def find_related_workshop_ids(facilitator_id, course)
return [] unless facilitator_id && course.present?

Pd::Workshop.left_outer_joins(:facilitators).
where(users: {id: facilitator_id}, course: course).
distinct.
pluck(:id)
end

# Retrieve facilitator submissions and survey questions for selected facilitators and workshops.
#
# @param facilitator_ids [Array<number>] non-empty list of facilitator ids
# @param ws_ids [Array<number>] non-empty list of workshop ids
#
# @return [Hash{:facilitator_submissions, :survey_questions => Array}]
#
# TODO: Move these functions into a Retriever
#
def retrieve_facilitator_surveys(facilitator_ids, ws_ids)
fac_submissions = Pd::WorkshopFacilitatorDailySurvey.where(
facilitator_id: facilitator_ids, pd_workshop_id: ws_ids
)
form_ids = fac_submissions.pluck(:form_id).uniq

{
survey_questions: Pd::SurveyQuestion.where(form_id: form_ids),
facilitator_submissions: fac_submissions
}
end

# Retrieve workshop daily submissions and survey questions for selected workshops.
#
# @param ws_ids [Array<number>] non-empty list of workshop ids
#
# @return [Hash{:workshop_submissions, :survey_questions => Array}]
#
def retrieve_workshop_surveys(ws_ids)
ws_submissions = Pd::WorkshopDailySurvey.where(pd_workshop_id: ws_ids)
form_ids = ws_submissions.pluck(:form_id).uniq

{
survey_questions: Pd::SurveyQuestion.where(form_id: form_ids),
workshop_submissions: ws_submissions
}
end
end