Merge e4665c2 into 86f3907

aloukina committed May 7, 2020
2 parents 86f3907 + e4665c2 commit e7e34d4

Showing 1,107 changed files with 171,336 additions and 56,068 deletions.
41 changes: 38 additions & 3 deletions doc/contributing.rst
@@ -57,7 +57,7 @@ To write a new experiment test for RSMTool (or any of the other tools):

(a) Create a new directory under ``tests/data/experiments`` using a descriptive name.

(b) Create a JSON configuration file under that directory with the various fields appropriately set for what you want to test. Feel free to use multiple words separated by hyphens to come up with a name that describes the testing condition. The name of the configuration file should be the same as the value of the ``experiment_id`` field in your JSON file. By convention, that's usually the same as the name of the directory you created but with underscores instead of hyphens.
(b) Create a JSON configuration file under that directory with the various fields appropriately set for what you want to test. Feel free to use multiple words separated by hyphens to come up with a name that describes the testing condition. The name of the configuration file should be the same as the value of the ``experiment_id`` field in your JSON file. By convention, that's usually the same as the name of the directory you created but with underscores instead of hyphens. If you are creating a new test for RSMCompare or RSMSummarize, use one of the existing RSMTool or RSMEval experiments as input and give your new test the same name. This ensures that those inputs are updated regularly and remain consistent with the current outputs generated by these tools. If you must create a test for a situation not covered by an existing experiment, first create a new RSMTool/RSMEval test following the instructions on this page. (An illustrative configuration sketch is shown after this list.)

(c) Next, you need to add the test to the list of parameterized tests in the appropriate test file based on the tool for which you are adding the test, e.g., RSMEval tests should be added to ``tests/test_experiment_rsmeval.py``, RSMPredict tests to ``tests/test_experiment_rsmpredict.py``, and so on. RSMTool tests can be added to any of the four files. The arguments for the ``param()`` call can be found in :ref:`Table 1 <param_table>` below; a sketch of a new entry is also shown after this list.
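
For step (b), the following is a minimal sketch of what such a configuration might contain, written as a small Python script that generates the JSON file. The directory name ``lr-with-custom-feature``, the file paths, and the field values are invented for illustration; the exact fields you need depend on what you are testing.

.. code-block:: python

    import json

    # Hypothetical configuration for a test living under
    # tests/data/experiments/lr-with-custom-feature; the experiment_id uses
    # underscores where the directory name uses hyphens.
    config = {
        "experiment_id": "lr_with_custom_feature",
        "description": "Linear regression with a custom feature (illustrative only).",
        "model": "LinearRegression",
        "train_file": "train.csv",
        "test_file": "test.csv"
    }

    # Write the configuration to lr_with_custom_feature.json so that the file
    # name matches the experiment_id, as described in step (b).
    with open("lr_with_custom_feature.json", "w") as config_file:
        json.dump(config, config_file, indent=4)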
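
For step (c), the entry you add might look like the sketch below, assuming the test files use the ``parameterized`` package and that the first two arguments are the test directory name and the experiment ID; check the existing entries and :ref:`Table 1 <param_table>` for the exact arguments your test needs.

.. code-block:: python

    from parameterized import param

    # Hypothetical entry to append to the existing list of parameterized tests;
    # additional keyword arguments from Table 1 can be added as needed.
    new_test_entry = param('lr-with-custom-feature', 'lr_with_custom_feature')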

@@ -129,9 +129,44 @@ To do this, you should now run the following:
python tests/update_files.py --tests tests --outputs test_outputs
This will copy over the generated outputs for the newly added tests and show you a report of the files that it added. If run correctly, the report should *only* refer to model files (``*.model``/``*.ols``) and the files affected by the functionality you implemented. If you run ``nosetests`` again, your newly added tests should now pass.
This will copy over the generated outputs for the newly added tests and show you a report of the files that it added. It will also update the input files for the RSMCompare and RSMSummarize tests. If run correctly, the report should *only* refer to model files (``*.model``/``*.ols``) and the files affected by the functionality you implemented. If you run ``nosetests`` again, your newly added tests should now pass.

At this point, you should inspect all of the new test files added by the above command to make sure that the outputs are as expected. You can find these files under ``tests/data/experiments/<test>/output`` where ``<test>`` refers to the test(s) that you added.

If your changes resulted in updates to the input files for RSMSummarize or RSMCompare, you need to re-run the tests for these two tools and then re-run the file update script to refresh their outputs.

Once you are satisfied that the outputs are as expected, you can commit all of them.

Two examples below walk you through the process:

.. topic:: Example 1: No change to input files.
You made a code change to better handle an edge case that only affects one test.

1. Run ``nosetests --nologcapture tests/*.py``. The affected test should fail.

2. Run ``python tests/update_files.py --tests tests --outputs test_outputs`` to update the test outputs. You will see the total number of deleted, updated, and missing files. There should be no deleted or missing files; only the files for your new test should be updated, and there should be no warnings in the output.

3. If this is the case, you are now ready to commit your change.

.. topic:: Example 2: Change to input file.
You made a code change that changes the output of many tests. For example, you renamed one of the evaluation metrics.

1. Run ``nosetests --nologcapture tests/*.py``. Many tests should now fail since the outputs have changed.

2. Run ``python tests/update_files.py --tests tests --outputs test_outputs`` to update the test outputs. The files affected by your change are shown as added/deleted. You will also see the following warning:

.. code-block::

   WARNING: X input files for rsmcompare/rsmsummarize tests have been updated. You need to re-run these tests and update test outputs
3. This means that the input files for rsmsummarize/rsmcompare have changed and it is likely that the current test outputs no longer match the expected output. You need to re-run the tests for these two tools.

4. Run ``nosetests --nologcapture tests/*rsmsummarize*.py`` and ``nosetests --nologcapture tests/*rsmcompare*.py``. If any tests fail, make sure the failures are related to the changes you made.

5. Once you are satisfied, re-run ``python tests/update_files.py --tests tests --outputs test_outputs`` to update the test outputs. This should only update the outputs for the ``rsmsummarize``/``rsmcompare`` tests.

6. If this is the case, you are now ready to commit your changes.

At this point, you should inspect all of the new test files added by the above command to make sure that the outputs are as expected. You can find these files under ``tests/data/experiments/<test>/output`` where ``<test>`` refers to the test(s) that you added. Once you are satisfied that the outputs are as expected, you can commit all of them.

Advanced tips and tricks
------------------------
2 changes: 1 addition & 1 deletion rsmtool/notebooks/comparison/evaluation.ipynb
@@ -105,7 +105,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.7.6"
}
},
"nbformat": 4,
2 changes: 1 addition & 1 deletion rsmtool/notebooks/comparison/header.ipynb
@@ -434,7 +434,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.8"
"version": "3.7.6"
}
},
"nbformat": 4,
36 changes: 12 additions & 24 deletions rsmtool/notebooks/comparison/true_score_evaluation.ipynb
@@ -14,8 +14,7 @@
"outputs": [],
"source": [
"if not out_dfs['true_score_evaluations'].empty:\n",
" variance_columns = ['N','N_single','N_double','h1_var_single','h1_var_double', 'h2_var_double','true_var']\n",
" prmse_columns = ['N','N_single', 'N_double','sys_var_single','sys_var_double','mse_true','prmse_true']\n",
"\n",
" markdown_strs = []\n",
" markdown_strs.append(\"The tables in this section show how well system scores can \"\n",
" \"predict *true* scores. According to Test theory, a *true* score \"\n",
@@ -25,33 +24,22 @@
" \"human scores when multiple human ratings are available for a subset of \"\n",
" \"responses. In this notebook these are estimated using human scores for \"\n",
" \"responses in the evaluation set.\")\n",
" markdown_strs.append(\"#### Variance of human scores\")\n",
" markdown_strs.append(\"The table below shows variance of both sets of human scores \"\n",
" \"for the whole evaluation set and for the subset of responses \"\n",
" \"that were double-scored. Large differences in variance between \"\n",
" \"the two human scores require further investigation. The last column \"\n",
" \"shows estimated true score variance. \")\n",
" display(Markdown('\\n'.join(markdown_strs)))\n",
" pd.options.display.width=10\n",
" df_human_variance = out_dfs['true_score_evaluations'][variance_columns].copy()\n",
" # replace nans with \"-\"\n",
" df_human_variance.replace({np.nan: '-'}, inplace=True)\n",
" display(HTML('<span style=\"font-size:95%\">'+ df_human_variance.to_html(classes=['sortable'], \n",
" escape=False,\n",
" float_format=float_format_func) + '</span>'))\n",
" \n",
" markdown_strs = [\"#### Proportional reduction in mean squared error (PRMSE)\"]\n",
" markdown_strs.append(\"The table shows the variance of system scores for single-scored \"\n",
" \"and double-scored responses, and mean squared error (MSE) and \"\n",
" \"proportional reduction in mean squared error (PRMSE) for \"\n",
" \"predicting a true score with system score.\")\n",
" \n",
" markdown_strs.append(\"The table shows variance of human rater errors, \"\n",
" \"true score variance, mean squared error (MSE) and \"\n",
" \"proportional reduction in mean squared error (PRMSE) for \"\n",
" \"predicting a true score with system score.\")\n",
" display(Markdown('\\n'.join(markdown_strs)))\n",
" pd.options.display.width=10\n",
" prmse_columns = ['version', 'N','N raters', 'N single', 'N multiple', \n",
" 'Variance of errors', 'True score var',\n",
" 'MSE true', 'PRMSE true']\n",
" df_prmse = out_dfs['true_score_evaluations'][prmse_columns].copy()\n",
" df_prmse.replace({np.nan: '-'}, inplace=True)\n",
" display(HTML('<span style=\"font-size:95%\">'+ df_prmse.to_html(classes=['sortable'], \n",
" escape=False,\n",
" float_format=float_format_func) + '</span>'))\n",
" escape=False, index=False,\n",
" float_format=float_format_func) + '</span>'))\n",
"else:\n",
" display(Markdown(no_info_str))"
]
@@ -73,7 +61,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.7.6"
}
},
"nbformat": 4,
30 changes: 8 additions & 22 deletions rsmtool/notebooks/summary/true_score_evaluation.ipynb
@@ -20,8 +20,9 @@
"metadata": {},
"outputs": [],
"source": [
"variance_columns = ['N','N_single','N_double','h1_var_single','h1_var_double', 'h2_var_double','true_var']\n",
"prmse_columns = ['N','N_single', 'N_double', 'system score type', 'sys_var_single','sys_var_double','mse_true','prmse_true']\n",
"prmse_columns = ['N','N raters', 'N single', 'N multiple', \n",
" 'Variance of errors', 'True score var',\n",
" 'MSE true', 'PRMSE true']\n",
"\n",
"def read_true_score_evals(model_list, file_format_summarize):\n",
" true_score_evals = []\n",
Expand Down Expand Up @@ -58,26 +59,11 @@
"outputs": [],
"source": [
"if not df_true_score_eval.empty:\n",
" markdown_strs = [\"#### Variance of human scores\"]\n",
" markdown_strs.append(\"The table below shows variance of both sets of human scores \"\n",
" \"for the whole evaluation set and for the subset of responses \"\n",
" \"that were double-scored. Large differences in variance between \"\n",
" \"the two human scores require further investigation. The last column \"\n",
" \"shows estimated true score variance. \")\n",
" display(Markdown('\\n'.join(markdown_strs)))\n",
" pd.options.display.width=10\n",
" df_human_variance = df_true_score_eval[variance_columns].copy()\n",
" # replace nans with \"-\"\n",
" df_human_variance.replace({np.nan: '-'}, inplace=True)\n",
" display(HTML('<span style=\"font-size:95%\">'+ df_human_variance.to_html(classes=['sortable'], \n",
" escape=False,\n",
" float_format=float_format_func) + '</span>'))\n",
" \n",
" markdown_strs = [\"#### Proportional reduction in mean squared error (PRMSE)\"]\n",
" markdown_strs.append(\"The table shows the variance of system scores for single-scored \"\n",
" \"and double-scored responses, and mean squared error (MSE) and \"\n",
" \"proportional reduction in mean squared error (PRMSE) for \"\n",
" \"predicting a true score with system score.\")\n",
" markdown_strs.append(\"The table shows variance of human rater errors, \"\n",
" \"true score variance, mean squared error (MSE) and \"\n",
" \"proportional reduction in mean squared error (PRMSE) for \"\n",
" \"predicting a true score with system score.\")\n",
" display(Markdown('\\n'.join(markdown_strs)))\n",
" pd.options.display.width=10\n",
" df_prmse = df_true_score_eval[prmse_columns].copy()\n",
@@ -107,7 +93,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.7.6"
}
},
"nbformat": 4,