2020-02-10-exam
DavidLeoni committed Mar 4, 2020
1 parent 9336598 commit 82a5577
Showing 3 changed files with 35 additions and 19 deletions.
43 changes: 27 additions & 16 deletions exams/2020-02-10/exam-2020-02-10-solution.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "code",
"execution_count": 1,
"execution_count": 5,
"metadata": {
"nbsphinx": "hidden"
},
@@ -273,7 +273,7 @@
"\n",
"First, you will begin with parsing an excerpt of wordnet `data/dogs.noun`, which is a noun database shown here in its entirety.\n",
"\n",
"[According to documentation](https://wordnet.princeton.edu/documentation/wndb5wn), a noun database begins with several lines containing a copyright notice, version number, and license agreement: these lines all begin with two spaces and the line number like \n",
"[According to documentation](https://wordnet.princeton.edu/documentation/wndb5wn), a noun database begins with several lines containing a copyright notice, version number, and license agreement: these lines all begin with **two spaces** and the line number like \n",
"\n",
"```\n",
" 1 This software and database is being provided to you, the LICENSEE, by \n",
@@ -364,7 +364,7 @@
"While parsing, skip the copyright notice. Then, each name definition follows the following format:\n",
"\n",
"```\n",
"synset_offset lex_filenum ss_type w_cnt word lex_id [word lex_id...] p_cnt [ptr...] | gloss \n",
"synset_offset lex_filenum ss_type w_cnt word lex_id [word lex_id...] p_cnt [ptr...] | gloss \n",
"```\n",
"\n",
"* `synset_offset`: Number identifying the synset, for example `02112993`. **MUST be converted to a Python int**\n",
@@ -419,6 +419,22 @@
" - `pos`: just parse it as a string (we will not use it)\n",
" - `source/target`: just parse it as a string (we will not use it)\n",
"\n",
"<div class=\"alert alert-warning\">\n",
" \n",
"**WARNING: DO NOT** assume first pointer is an `@` (_IS A_) !!\n",
"\n",
"In the full database, the root synset _entity_ can't possibly have a parent synset:<br/><br/>\n",
"\n",
"\n",
"```\n",
"\n",
"\n",
"0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18\n",
"00001740 03 n 01 entity 0 003 ~ 00001930 n 0000 ~ 00002137 n 0000 ~ 04431553 n 0000 | that which is perceived or known or inferred to have its own distinct existence (living or nonliving)\n",
"\n",
"```\n",
"</div>\n",
"\n",
"* `gloss`: Each synset contains a gloss (that is, a description). A gloss is represented as a vertical bar (`|`), followed by a text string that continues until the end of the line. For example, `a large breed having a smooth white coat with black or brown spots; originated in Dalmatia`"
]
},
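For reference, a minimal sketch of how a single data line could be split into the fields described in the cell above. This is not the published exam solution: the helper name `parse_data_line`, the dictionary keys, and the int conversion of pointer offsets are illustrative choices.

```python
def parse_data_line(line):
    """Illustrative helper: split one synset data line into a dict.

    Layout: synset_offset lex_filenum ss_type w_cnt word lex_id
            [word lex_id...] p_cnt [ptr...] | gloss
    """
    head, _, gloss = line.partition('|')       # the gloss is everything after the bar
    tokens = head.split()
    synset = {
        'synset_offset': int(tokens[0]),       # e.g. '02112993' -> 2112993
        'lex_filenum': tokens[1],
        'ss_type': tokens[2],
        'words': [],
        'pointers': [],
    }
    w_cnt = int(tokens[3], 16)                 # word count (two-digit hex per wndb(5WN))
    i = 4
    for _ in range(w_cnt):                     # w_cnt pairs of: word lex_id
        synset['words'].append(tokens[i])
        i += 2                                 # skip the lex_id
    p_cnt = int(tokens[i])                     # pointer count (decimal)
    i += 1
    for _ in range(p_cnt):                     # each pointer: symbol offset pos source/target
        symbol, offset, pos, st = tokens[i:i + 4]
        synset['pointers'].append({'pointer_symbol': symbol,
                                   'synset_offset': int(offset),
                                   'pos': pos,
                                   'source/target': st})
        i += 4
    synset['gloss'] = gloss.strip()
    return synset
```

Applied to the `entity` line quoted in the warning above, this yields three `~` pointers and no `@` pointer, which is exactly why the parser must not assume the first pointer is an IS-A (`@`) link.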
@@ -431,7 +447,7 @@
},
{
"cell_type": "code",
"execution_count": 2,
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
@@ -446,17 +462,12 @@
" with open(filename, encoding='utf-8') as f:\n",
" line=f.readline()\n",
" r = 0\n",
" while r < 28:\n",
" while line.startswith(' '):\n",
" line=f.readline()\n",
" #print(line)\n",
" r += 1\n",
"\n",
"\n",
" # 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18\n",
" # 00001740 03 n 01 entity 0 003 ~ 00001930 n 0000 ~ 00002137 n 0000 ~ 04431553 n 0000 | that which is perceived or known or inferred to have its own distinct existence (living or nonliving)\n",
"\n",
" line=f.readline()\n",
"\n",
" while line != \"\":\n",
" i = 0\n",
"\n",
@@ -509,7 +520,7 @@
},
{
"cell_type": "code",
"execution_count": 3,
"execution_count": 11,
"metadata": {},
"outputs": [
{
@@ -942,7 +953,7 @@
},
{
"cell_type": "code",
"execution_count": 4,
"execution_count": 12,
"metadata": {},
"outputs": [
{
@@ -999,7 +1010,7 @@
},
{
"cell_type": "code",
"execution_count": 5,
"execution_count": 13,
"metadata": {},
"outputs": [
{
@@ -1036,7 +1047,7 @@
},
{
"cell_type": "code",
"execution_count": 6,
"execution_count": 14,
"metadata": {},
"outputs": [],
"source": [
@@ -1111,7 +1122,7 @@
},
{
"cell_type": "code",
"execution_count": 7,
"execution_count": 15,
"metadata": {},
"outputs": [
{
@@ -1860,7 +1871,7 @@
"toc_cell": false,
"toc_position": {},
"toc_section_display": true,
"toc_window_display": true
"toc_window_display": false
}
},
"nbformat": 4,
7 changes: 6 additions & 1 deletion index.ipynb
@@ -449,7 +449,12 @@
"source": [
"## News\n",
"\n",
"**10 February 2020 - Published 2020-02-10 exam** [solutions](exams/2020-02-10/exam-2020-02-10-solution.ipynb)\n",
"**4 March 2020 - Pulished 2020-02-10 exam results**\n",
"\n",
"- [detailed grades](http://davidleoni.it/etc/spex/exams/2020-02-10-public-grades.html)\n",
"- [corrections](https://drive.google.com/open?id=1CBVHvf2gBrB7tCym2iDgTDIRPO8dSV-r)\n",
"- [solutions](exams/2020-02-10/exam-2020-02-10-solution.ipynb)\n",
"\n",
"\n",
"31 January 2020 - Published 2020-01-23 exam results\n",
"\n",
4 changes: 2 additions & 2 deletions past-exams.ipynb
@@ -8,9 +8,9 @@
"\n",
"\n",
"\n",
"## 2018-19 (Data science)\n",
"## Data science\n",
"\n",
"**NOTE**: 19-20 exams will be very similar to these, the only difference being that **you will also get an exercise on Pandas.**"
"**NOTE**: 19-20 exams are very similar to 18-19, the only difference being that **you might also get an exercise on Pandas.**"
]
},
{
