
Evaluation Schema + a necessary revamp #118

Merged · 24 commits into master from feature/evalution-refactor · Feb 12, 2023

Conversation

@Udayraj123 Udayraj123 commented Jan 3, 2023

Fixes #89
This PR attempts to provide a generic solution for evaluation.

This is PR #1 of a chained series of PRs.

@Udayraj123 (Owner, Author) commented:

@Rohan-G you can start having a look in the main files. Starting with the schema. I'll push working changes soon.

@Udayraj123 Udayraj123 force-pushed the feature/evalution-refactor branch 4 times, most recently from 0bee168 to 603a4d3 Compare January 7, 2023 09:56
@Udayraj123 Udayraj123 changed the title Evaluation Schema + a small revamp Evaluation Schema + a necessary revamp Jan 7, 2023
Commits:

- fix: refactor; add sample evaluation.json
- fix: update gitignore and rename ignored folder
- fix: setup skeleton; refactor
- fix: refactor; connect evaluation schema
- fix: checked validation using evaluation schema …ther; move constants/configs to better places;
- fix: refactor; move instance ops to core; minor changes
- refactor: extract tuning config from instance
- fix: add screeninfo for simple window info
- fix: evaluation schema fixes
@Udayraj123 (Owner, Author) commented Jan 7, 2023:

The changes include (WIP list):

  • working evaluation code
  • support for 4+ types of marking schemes: default(+1, 0), negative, multi-weighted, streak-based
  • support for answer_key_omr.csv
  • (foundation) support for answer_key_omr.jpg
  • add range operator support
  • add validations for marking scheme and omr response
  • picking up window_width (using screeninfo)
  • pickup evaluation.json recursively;
  • added schema for all 3 jsons
  • use format strings in all logs
  • improved error messages
  • reduce hardcoded configs
  • support config.json per instance
  • pickup config.json recursively;
  • test threshold jump value
  • upgrade pip packages
  • load json error handling
  • separate out utils into ImageInstanceOps, ImageUtils(static methods) and InteractionUtils(static methods)
  • add upsc mock; refactor; create evaluation-schema
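To illustrate the marking-scheme idea in the list above, here is a minimal standalone sketch of per-question verdict scoring for a default/negative scheme. The `DEFAULT_MARKING` weights and `score_response` name are hypothetical illustrations, not the PR's actual implementation (which drives the weights from evaluation.json):

```python
from fractions import Fraction

# Hypothetical weights for a negative-marking scheme: +1 for a correct
# answer, -1/3 for an incorrect one, 0 when the question is left unmarked.
DEFAULT_MARKING = {
    "correct": Fraction(1),
    "incorrect": Fraction(-1, 3),
    "unmarked": Fraction(0),
}

def score_response(marked_answer, correct_answer, marking=DEFAULT_MARKING):
    """Return the verdict and its score for a single question."""
    if marked_answer == "":
        verdict = "unmarked"
    elif marked_answer == correct_answer:
        verdict = "correct"
    else:
        verdict = "incorrect"
    return verdict, float(marking[verdict])
```

A multi-weighted scheme would then just swap in a different `marking` dict per section; the streak-based variant additionally tracks consecutive verdicts.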

@Udayraj123 Udayraj123 marked this pull request as ready for review February 1, 2023 13:49
FORMAT = "%(message)s"

# TODO: set logging level from config.json dynamically


Env variable would be better.

Udayraj123 (Owner, Author) replied:


will add in a later PR
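A minimal sketch of the env-variable approach suggested above (the `OMR_LOG_LEVEL` variable name is an assumption for illustration, not part of this PR):

```python
import logging
import os

FORMAT = "%(message)s"

# Hypothetical: pick the logging level from an environment variable
# instead of config.json. OMR_LOG_LEVEL is an assumed variable name;
# it falls back to INFO when unset or unrecognized.
level_name = os.environ.get("OMR_LOG_LEVEL", "INFO").upper()
level = getattr(logging, level_name, logging.INFO)

logging.basicConfig(format=FORMAT, level=level)
logger = logging.getLogger("omr")
```

Running with `OMR_LOG_LEVEL=DEBUG` would then enable debug output without touching any config file.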

Comment on lines 1 to 18
import ast
import os
import re
from copy import deepcopy
from fractions import Fraction

import cv2
import pandas as pd
from rich.table import Table

from src.logger import console, logger
from src.schemas.evaluation_schema import (
BONUS_SECTION_PREFIX,
DEFAULT_SECTION_KEY,
MARKING_VERDICT_TYPES,
QUESTION_STRING_REGEX_GROUPS,
)
from src.utils.parsing import get_concatenated_response, open_evaluation_with_validation


Re-org imports

@Udayraj123 (Owner, Author) replied on Feb 12, 2023:


It seems like the isort hook and black standards are keeping the same order as earlier.
It should sort alphabetically, but oddly it's not.
Moving import cv2 to line 2 again moves it back to line 7.
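For context (not part of the PR): isort first splits imports into sections — stdlib, then third-party, then first-party — and only alphabetizes within each section, which is why `cv2` keeps returning below the stdlib block; within the third-party section, `cv2`, `pandas`, `rich.table` are already in alphabetical order. A minimal config, assuming the black-compatible profile is wanted (file name and location are illustrative; the repo may configure this via pre-commit instead):

```ini
# .isort.cfg (illustrative)
[settings]
profile = black
```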

Comment on lines 21 to 26
def parse_float_or_fraction(result):
    # Accept a plain number or a fraction string like "1/3"
    if isinstance(result, str) and "/" in result:
        result = float(Fraction(result))
    else:
        result = float(result)
    return result


move to appropriate place (maybe utils?)

Udayraj123 (Owner, Author) replied:


addressing this in PR#3 now
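For reference, the helper discussed above behaves like this — a minimal self-contained copy, assuming the fraction strings come from evaluation.json weights:

```python
from fractions import Fraction

def parse_float_or_fraction(result):
    # Accept a plain number or a fraction string like "1/3"
    if isinstance(result, str) and "/" in result:
        return float(Fraction(result))
    return float(result)
```

For example, `parse_float_or_fraction("-1/3")` yields the float value of −1/3, while plain numbers and numeric strings pass straight through `float()`.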

@@ -0,0 +1,598 @@
import ast


Udayraj123 (Owner, Author) replied:


Will do via PR#3

* fix: load answer key from image with working samples
* fix: pytests set up with snapshots for all samples
* fix: run sample1 on pre-commit; run all tests on pre-push
  * fix: add default_install_hook_types
  * fix: add explicit stages
* fix: update snapshots
* fix: bug fixes
* [Feature] Simplify template jsons (#127)
  * fix: update snapshots
  * feat: simplify template schema and block logic; consume it; use fields terminology; update all samples
  * fix: bug fixes
* fix: refactor template.py; minor template json cleanup
* fix: changes after updating wiki
* feat: setup tests for template validations; fixed few bugs
  * fix: renaming
  * fix: test
* fix: refactor
  * fix: refactor tests structure and add utils
* fix: minor fixes
* fix: reorder imports; rename question -> field
* fix: review changes
* fix: remove streak logic
@Udayraj123 Udayraj123 merged commit 080b360 into master Feb 12, 2023
@Udayraj123 Udayraj123 deleted the feature/evalution-refactor branch February 12, 2023 06:58
Successfully merging this pull request may close these issues:

* [Feature][Core] Implement a generalised evaluation script