E2015 Conflict Notification #1714

Closed
wants to merge 25 commits into from
Commits (25)
fb2bea0
Initial Commit
sid189 Apr 2, 2020
8d93ed1
Add conflict report link to mailer
sppapalkar Apr 20, 2020
14deb25
Add conflict report link to conflict email
sppapalkar Apr 20, 2020
b80f077
add view partials and helper methods
carlklier Apr 21, 2020
4011655
Add conflict report to reports dropdown
sppapalkar Apr 21, 2020
c33a8a0
Add helper methods for conflict report
sppapalkar Apr 21, 2020
046836b
Add method to get all reviewee answers
sppapalkar Apr 21, 2020
b47fc5f
Refactor variable names in review_mapping_helper
sppapalkar Apr 21, 2020
42ba93c
Fix wrong teammates bug in conflict report
sppapalkar Apr 21, 2020
9950af5
Move code from report view to helper method
sppapalkar Apr 21, 2020
97f5e7e
sorted graph and added conversion between raw and percents
carlklier Apr 22, 2020
9077ac1
fix convert raw points to % bug, refactored getting score color to se…
carlklier Apr 24, 2020
253b424
Add controller test for review conflict report
sppapalkar Apr 24, 2020
3f67aae
Add test for review mapping helper
sppapalkar Apr 24, 2020
2338b69
Test if Review Conflict Report page is getting rendered or not
sid189 Apr 24, 2020
0e2fa4d
Added comments to review_mapping_helper_spec.rb
sid189 Apr 24, 2020
17f81da
added class tag to ol styling
carlklier Apr 24, 2020
1d41cce
Test for button is report
sppapalkar Apr 24, 2020
546cc46
Add comments in review_mapping_helper
sppapalkar Apr 24, 2020
d1a2e65
Test for answers_by_round_for_reviewee
mehtasahil31 Apr 24, 2020
245ba25
Merge branch 'beta' of https://github.com/sid189/expertiza into beta
mehtasahil31 Apr 24, 2020
ff94783
Reverting changes to Gemfile.lock
mehtasahil31 Apr 24, 2020
d908309
Revert schema changes
sppapalkar Apr 25, 2020
181fb65
Change answer spec to test specific value
sppapalkar May 3, 2020
0e0b4ff
Change conflict report test description
sppapalkar May 3, 2020
50 changes: 50 additions & 0 deletions app/assets/javascripts/conflict_report.js
@@ -0,0 +1,50 @@
function switchPoints() {
  var raws_graph = document.getElementsByClassName("raw_points_graph");
  var percents_graph = document.getElementsByClassName("percentage_points_graph");
  var raws = document.getElementsByClassName("raw_points");
  var percents = document.getElementsByClassName("percentage_points");
  var button = document.getElementById("switchPointsButton");
  var k;

  // Toggle visibility of the raw-point and percentage graphs
  for (k = 0; k < raws_graph.length; k++) {
    switch_graph_points_helper(raws_graph[k]);
  }

  for (k = 0; k < percents_graph.length; k++) {
    switch_graph_points_helper(percents_graph[k]);
  }

  // Toggle visibility of the raw-point and percentage score labels
  for (k = 0; k < raws.length; k++) {
    switch_points_helper(raws[k]);
  }

  for (k = 0; k < percents.length; k++) {
    switch_points_helper(percents[k]);
  }

  switch_button_helper(button);
}

// Toggle a graph element between hidden and block display
function switch_graph_points_helper(x) {
  if (x.style.display === "none") {
    x.style.display = "block";
  } else {
    x.style.display = "none";
  }
}

// Toggle an inline score label between hidden and visible
function switch_points_helper(x) {
  if (x.style.display === "none") {
    x.style.display = "inline";
  } else {
    x.style.display = "none";
  }
}

// Flip the button caption to name the view it will switch to
function switch_button_helper(button) {
  if (button.innerHTML === "Convert Points to Percents") {
    button.innerHTML = "Convert Points to Raw";
  } else {
    button.innerHTML = "Convert Points to Percents";
  }
}
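
For context, a minimal sketch of the markup this script assumes: inline styles start one set hidden, and the class and id names match what the script looks up. The element structure and variable names here are hypothetical; the actual report partial in this PR may differ.

<!-- raw_score and percent_score are placeholder variables for illustration -->
<button id="switchPointsButton" onclick="switchPoints()">Convert Points to Percents</button>
<span class="raw_points"><%= raw_score %></span>
<span class="percentage_points" style="display: none"><%= percent_score %>%</span>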
9 changes: 7 additions & 2 deletions app/controllers/response_controller.rb
@@ -1,3 +1,4 @@
# Initial Commit
class ResponseController < ApplicationController
helper :submitted_content
helper :file
@@ -88,7 +89,9 @@ def update
questions = sort_questions(@questionnaire.questions)
create_answers(params, questions) unless params[:responses].nil? # for some rubrics, there might be no questions but only file submission (Dr. Ayala's rubric)
@response.update_attribute('is_submitted', true) if params['isSubmit'] && params['isSubmit'] == 'Yes'
@response.notify_instructor_on_difference if (@map.is_a? ReviewResponseMap) && @response.is_submitted && @response.significant_difference?
# Notify the instructor when the submitted response conflicts significantly with the previous response
# The request's base URL is passed in because it is not accessible from the model
@response.notify_instructor_on_difference(request.base_url) if (@map.is_a? ReviewResponseMap) && @response.is_submitted && @response.significant_difference?
rescue StandardError
msg = "Your response was not saved. Cause:189 #{$ERROR_INFO}"
end
@@ -171,7 +174,9 @@ def create
error_msg = ""
# only notify if is_submitted changes from false to true
if (@map.is_a? ReviewResponseMap) && (was_submitted == false && @response.is_submitted) && @response.significant_difference?
@response.notify_instructor_on_difference
# Notify the instructor when the submitted response conflicts significantly with the previous response
# The request's base URL is passed in because it is not accessible from the model
@response.notify_instructor_on_difference(request.base_url)
@response.email
end
redirect_to controller: 'response', action: 'save', id: @map.map_id,
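
For context, request.base_url is the scheme-and-host portion of the current request's URL, which is why the controller, not the model, must supply it. A quick illustration (host value invented):

# Inside a controller action:
# request.base_url  # => "http://localhost:3000" on a local dev server
# Models cannot see the request object, so the URL prefix is passed as an argument.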
29 changes: 29 additions & 0 deletions app/helpers/report_formatter_helper.rb
@@ -26,6 +26,35 @@ def review_response_map(params, _session = nil)
@avg_and_ranges = @assignment.compute_avg_and_ranges_hash
end

# Build the reviewer and team maps used by the Review Conflict Report
def review_conflict_response_map(params, _session = nil)
  # Initialize assignment id and name
  assign_basics(params)
  # Find all teams associated with the assignment
  teams = Team.select(:id, :name).where(parent_id: @id).order(:name)
  @teams = {}
  @reviewers = {}
  # Map each reviewee team's name to its reviewers and to its team id
  teams.each do |reviewee|
    @reviewers[reviewee.name] = reviewers_name_id_by_reviewee_and_assignment(reviewee, @id)
    @teams[reviewee.name] = reviewee.id
  end
end

# Get the reviewers of a particular reviewee on a particular assignment for the Review Conflict Report
def reviewers_name_id_by_reviewee_and_assignment(reviewee, id)
  temp_reviewers = User.select("DISTINCT participants.id, users.name")
                       .joins("JOIN participants ON participants.user_id = users.id")
                       .joins("JOIN response_maps ON response_maps.reviewer_id = participants.id")
                       .where("response_maps.reviewee_id = ? and response_maps.reviewed_object_id = ?", reviewee.id, id)
  reviewers = {}
  # Map each reviewer's participant id to the reviewer's name
  temp_reviewers.each do |reviewer|
    reviewers[reviewer[:id].to_s] = reviewer[:name]
  end
  reviewers
end
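
A minimal usage sketch of the structures these helpers build; the team names, ids, and user names below are invented for illustration:

# After review_conflict_response_map runs for an assignment:
# @teams     => { "Team Alpha" => 101, "Team Beta" => 102 }
# @reviewers => { "Team Alpha" => { "501" => "alice", "502" => "bob" },
#                 "Team Beta"  => { "503" => "carol" } }
assignment_id = 42  # hypothetical assignment id
reviewers = reviewers_name_id_by_reviewee_and_assignment(Team.find(101), assignment_id)
# => { "501" => "alice", "502" => "bob" }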

def feedback_response_map(params, _session = nil)
assign_basics(params)
# If review report for feedback is required call feedback_response_report method in feedback_review_response_map model
201 changes: 201 additions & 0 deletions app/helpers/review_mapping_helper.rb
@@ -21,6 +21,207 @@ def get_data_for_review_report(reviewed_object_id, reviewer_id, type)
[response_maps, rspan]
end

# Get the id of the response map for a particular review for the Review Conflict Report
def response_id_review_conflict(reviewer_id, reviewee_id, reviewed_object_id)
  response_id = ResponseMap.select(:id).where(reviewee_id: reviewee_id, reviewer_id: reviewer_id, reviewed_object_id: reviewed_object_id)
  response_id[0][:id]
end

# Get review scores for all rounds of a particular team for the Review Conflict Report
def review_score_for_team(reviewed_object_id, team_name)
  question_answers = []
  reviewee_id = Team.select(:id).where(name: team_name, parent_id: reviewed_object_id)
  # Collect the total score assigned to the reviewee in each round
  reviewee_id.each do |reviewee|
    total_rounds = Assignment.find(reviewed_object_id).rounds_of_reviews
    question_answers = Array.new(total_rounds)
    # Find the per-reviewer total scores for each round
    (0..total_rounds - 1).each do |round|
      review_answers = Answer.answers_by_round_for_reviewee(reviewed_object_id, reviewee, round + 1)
      question_answers[round] = review_score_helper_for_team(review_answers)
    end
  end
  question_answers
end

# Total each reviewer's score for one round of a particular team
def review_score_helper_for_team(review_answers)
  question_answers = {}
  # Sum the answers of each reviewer, treating nil answers as zero
  review_answers.each do |review_answer|
    reviewer_id = review_answer[:reviewer_id]
    score = review_answer[:answer] || 0
    question_answers[reviewer_id] = (question_answers[reviewer_id] || 0) + score
  end
  # Sort reviewers by ascending total score
  Hash[question_answers.sort_by { |_id, score| score }]
end
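
For illustration, a hypothetical input and output for this helper; nil answers count as zero and the result is sorted by ascending total:

# review_answers rows (reviewer_id, answer):
#   (7, 4), (7, 5), (9, nil), (9, 3)
# review_score_helper_for_team(review_answers)
# => { 9 => 3, 7 => 9 }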

# Calculate all metrics used by the report, in both raw points and percentages
def get_score_metrics(review_scores, max_score)
  metrics = {}
  metrics[:average] = average_of_round(review_scores)
  metrics[:std] = std_of_round(metrics[:average], review_scores)
  metrics[:upper_tolerance_limit] = calculate_upper_tolerance(review_scores, max_score)
  metrics[:lower_tolerance_limit] = calculate_lower_tolerance(review_scores)
  # Convert raw scores to percentages of the maximum score
  review_scores_percents = review_scores.inject({}) { |h, (reviewer, answer)| h[reviewer] = (answer.to_f / max_score.to_f) * 100; h }

  metrics[:average_percent] = average_of_round(review_scores_percents)
  metrics[:std_percent] = std_of_round(metrics[:average_percent], review_scores_percents)
  metrics[:upper_tolerance_limit_percent] = calculate_upper_tolerance(review_scores_percents, 100)
  metrics[:lower_tolerance_limit_percent] = calculate_lower_tolerance(review_scores_percents)
  metrics
end
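
A worked example of the tolerance math with invented scores. For raw scores {2, 8, 8} out of 10, the mean is 6.0 and the standard deviation is sqrt(((2-6)^2 + (8-6)^2 + (8-6)^2) / 3) = 2.83, so the limits are mean ± 2σ, clamped to [0, max_score]:

scores = { 1 => 2, 2 => 8, 3 => 8 }  # hypothetical reviewer ids => raw scores
metrics = get_score_metrics(scores, 10)
# metrics[:average]               => 6.0
# metrics[:std]                   => 2.83
# metrics[:upper_tolerance_limit] => 10    (6.0 + 2 * 2.83 = 11.66, capped at max_score)
# metrics[:lower_tolerance_limit] => 0.34  (6.0 - 2 * 2.83)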

# Average score of a particular round for the Review Conflict Report
def average_of_round(review_scores)
  average = 0.0
  count = 0
  # Sum all review scores
  review_scores.each do |_reviewer, answer|
    average += answer
    count += 1
  end
  # Divide the sum by the count to find the average
  average /= count if count != 0
  average.round(2)
end

# Standard deviation of a particular round for the Review Conflict Report
def std_of_round(average, question_answer)
  accum = 0.0
  count = 0
  # Sum the squared differences from the mean
  question_answer.each do |_reviewer, answer|
    accum += (answer - average)**2
    count += 1
  end
  # Take the square root of the mean squared difference
  accum = Math.sqrt(accum / count) if count != 0
  accum.round(2)
end

# Get team members of a particular team for the Review Conflict Report
def team_members(team_name, assignment_id)
  team_id = Team.select(:id).where(name: team_name, parent_id: assignment_id)
  TeamsUser.where(team_id: team_id)
end

# Get the maximum review score for each round of an assignment for the Review Conflict Report
def max_assignment_review_score_per_round(reviewed_object_id)
  assignment = Assignment.find(reviewed_object_id)
  total_rounds = assignment.rounds_of_reviews
  review_max_scores = Array.new(total_rounds)
  # Map each round to its maximum possible score
  (0..total_rounds - 1).each do |round|
    review_max_scores[round] = SummaryHelper::Summary.new.max_score_of_assignment_per_round(assignment, round)
  end
  review_max_scores
end

# Calculate the upper tolerance limit (mean + 2 standard deviations, capped at the max score)
def calculate_upper_tolerance(question_answer, max_score)
  average = average_of_round(question_answer)
  std = std_of_round(average, question_answer)
  upper_tolerance = (average + (2 * std)).round(2)
  # Cap at the maximum possible score
  [upper_tolerance, max_score].min
end

# Calculate the lower tolerance limit (mean - 2 standard deviations, floored at zero)
def calculate_lower_tolerance(question_answer)
  average = average_of_round(question_answer)
  std = std_of_round(average, question_answer)
  lower_tolerance = (average - (2 * std)).round(2)
  # Return zero if negative
  [lower_tolerance, 0].max
end

# Pick the color that highlights a conflicting score in the graph
def get_conflict_color(is_graph, score, upper_tolerance, lower_tolerance)
  # Compare the score against the tolerance limits to decide the color
  if score > upper_tolerance
    '#ffa200'
  elsif score < lower_tolerance
    '#fff200'
  elsif is_graph
    '#A90201'
  else
    '#FFFFFF'
  end
end

# Generate the bar chart of reviewer scores for a particular round for the Review Conflict Report
def generate_score_chart(review_max_score, question_answer, by_raw_points)
  upper_tolerance = calculate_upper_tolerance(question_answer, review_max_score)
  lower_tolerance = calculate_lower_tolerance(question_answer)
  colors = []
  scores = []
  question_answer.each do |_reviewer, answer|
    scores << (by_raw_points ? answer.to_f : ((answer.to_f / review_max_score.to_f) * 100).round(2))
    # Conflicting bars get a different color
    colors << get_conflict_color(true, answer, upper_tolerance, lower_tolerance)
  end
  labels = (1..scores.length).to_a
  chart_size = 5 + 2 * scores.length
  # Initialize the chart data and options
  data = {
    labels: labels,
    datasets: [
      {
        label: "percentage score",
        backgroundColor: colors,
        data: scores,
        hoverBackgroundColor: "blue"
      }
    ]
  }
  options = {
    width: "125",
    height: chart_size,
    scales: {
      xAxes: [
        {
          ticks: {
            beginAtZero: true,
            max: 100
          }
        }
      ]
    }
  }
  # Relabel and rescale the axis when showing raw points instead of percentages
  if by_raw_points
    data[:datasets][0][:label] = "raw points score"
    options[:scales][:xAxes][0][:ticks][:max] = review_max_score.to_f
  end
  horizontal_bar_chart data, options
end
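
A minimal sketch of how a report view might render both chart variants so conflict_report.js can toggle between them; the variable names are placeholders, and horizontal_bar_chart is the charting helper already called above:

<div class="raw_points_graph">
  <%= generate_score_chart(review_max_score, question_answer, true) %>
</div>
<div class="percentage_points_graph" style="display: none">
  <%= generate_score_chart(review_max_score, question_answer, false) %>
</div>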

#
# gets the team name's color according to review and assignment submission status
#
15 changes: 15 additions & 0 deletions app/helpers/summary_helper.rb
@@ -159,6 +159,21 @@ def get_max_score_for_question(question)
question.type.eql?("Checkbox") ? 1 : Questionnaire.where(id: question.questionnaire_id).first.max_question_score
end

# Get the maximum possible score of the assignment for one round, for the Review Conflict Report
def max_score_of_assignment_per_round(assignment, round)
  # Get the questions used in the given round (or the single shared rubric)
  rubric = get_questions_by_assignment(assignment)
  rubric_questions_used = rubric[assignment.varying_rubrics_by_round? ? round : 0]
  assignment_max_score_per_round = 0
  # Sum the max score of every question to find the max score for the assignment
  rubric_questions_used.each do |q|
    next if q.type.eql?("SectionHeader")
    assignment_max_score_per_round += get_max_score_for_question(q)
  end
  assignment_max_score_per_round
end
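
A small usage sketch, assuming a rubric whose question max scores sum to 50 (all values invented). Note the round argument is zero-indexed here; callers in review_mapping_helper pass 0..rounds_of_reviews-1:

assignment = Assignment.find(42)  # hypothetical assignment id
SummaryHelper::Summary.new.max_score_of_assignment_per_round(assignment, 0)
# => 50 (sum of max scores of all non-SectionHeader questions in the round-1 rubric)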

def summarize_sentences(comments, summary_ws_url)
summary = ""
param = {sentences: comments}
1 change: 1 addition & 0 deletions app/mailers/mailer.rb
@@ -76,6 +76,7 @@ def notify_grade_conflict_message(defn)
@new_score = @body[:new_score]
@conflicting_response_url = @body[:conflicting_response_url]
@summary_url = @body[:summary_url]
@review_conflict_report_url = @body[:review_conflict_report_url]
@assignment_edit_url = @body[:assignment_edit_url]

defn[:to] = 'expertiza.development@gmail.com' if Rails.env.development? || Rails.env.test?
11 changes: 11 additions & 0 deletions app/models/answer.rb
@@ -122,6 +122,17 @@ def self.answers_by_question_for_reviewee(assignment_id, reviewee_id, q_id)
end
# end added by ferry, required for the summarization

# Get all answers given to a reviewee on an assignment for a particular round
def self.answers_by_round_for_reviewee(assignment_id, reviewee_id, round)
  Answer.select("answers.answer, response_maps.reviewer_id")
        .joins("join responses on responses.id = answers.response_id")
        .joins("join response_maps on responses.map_id = response_maps.id")
        .where("response_maps.reviewed_object_id = ? and
                response_maps.reviewee_id = ? and
                responses.round = ?", assignment_id, reviewee_id, round)
end
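
For reference, a sketch of the rows this scope returns (ids and values invented): each row pairs a raw answer score with the reviewer's participant id, and the score is nil when a question was left blank:

assignment_id = 42   # hypothetical ids
reviewee_id = 101
answers = Answer.answers_by_round_for_reviewee(assignment_id, reviewee_id, 1)
answers.map { |a| [a[:reviewer_id], a[:answer]] }
# => [[501, 4], [501, 5], [502, nil], [502, 3]]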

# start added by ferry for answer tagging
def get_reviewee_from_answer(answer)
resp = Response.find(answer.response_id)
9 changes: 5 additions & 4 deletions app/models/response.rb
@@ -188,7 +188,7 @@ def self.avg_scores_and_count_for_prev_reviews(existing_responses, current_response)
[scores_assigned.sum / scores_assigned.size.to_f, count]
end

def notify_instructor_on_difference
def notify_instructor_on_difference(base_url)
response_map = self.map
reviewer_participant_id = response_map.reviewer_id
reviewer_participant = AssignmentParticipant.find(reviewer_participant_id)
@@ -206,9 +206,10 @@ def notify_instructor_on_difference
reviewee_name: reviewee_name,
new_score: total_score.to_f / maximum_score,
assignment: assignment,
conflicting_response_url: 'https://expertiza.ncsu.edu/response/view?id=' + response_id.to_s,
summary_url: 'https://expertiza.ncsu.edu/grades/view_team?id=' + reviewee_participant.id.to_s,
assignment_edit_url: 'https://expertiza.ncsu.edu/assignments/' + assignment.id.to_s + '/edit'
conflicting_response_url: base_url + '/response/view?id=' + response_id.to_s,
summary_url: base_url + '/grades/view_team?id=' + reviewee_participant.id.to_s,
review_conflict_report_url: base_url + '/reports/response_report?class=form-inline&id=' + assignment.id.to_s,
assignment_edit_url: base_url + '/assignments/' + assignment.id.to_s + '/edit'
}
).deliver_now
end
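
With the base URL threaded through, the mail now links back to whichever host served the request. A sketch of the resulting URLs when base_url is "http://localhost:3000" (ids invented):

# conflicting_response_url   => "http://localhost:3000/response/view?id=321"
# summary_url                => "http://localhost:3000/grades/view_team?id=654"
# review_conflict_report_url => "http://localhost:3000/reports/response_report?class=form-inline&id=42"
# assignment_edit_url        => "http://localhost:3000/assignments/42/edit"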