Classy is a way of searching for classes based on departments, distribs, periods, and medians. It allows you to find the best fits for you based on your priorities in a course.
I kept having difficulty figuring out which classes to take, bouncing between separate sites to look up medians, the classes offered, and the class descriptions. I built a rough solution about a year ago, and I've finally finished and polished it.
- Search by median
- Search by department
- Search by distribs
- Search by periods
- Leverage powerful points system for searching
- View all information in helpful format
- Filter out previous classes
First, you'll need to set up the MySQL database. The `Create_Tables.sql` file in the `scrapers` folder contains the SQL code to create the tables. Then, create a file called `secret.php` in the `php` folder. That file contains a single function called `createConnection`, which creates a PDO connection to the database with the tables you just created. You'll also need to create a file called `secrets.py` in the `scrapers` folder. That file contains a single function called `mysql_connect`, which returns a `MySQLdb.connect` call with user/password information.
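As a sketch of what `secrets.py` might look like (the host, user, password, and database name below are placeholders, not the project's real values):

```python
# secrets.py -- minimal sketch; every credential below is a placeholder.
def mysql_connect():
    """Return a MySQLdb connection for the scrapers to use."""
    import MySQLdb  # provided by the mysqlclient package
    return MySQLdb.connect(
        host="localhost",
        user="classy",       # placeholder user
        passwd="hunter2",    # placeholder password
        db="classy",         # database holding the tables from Create_Tables.sql
    )
```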
Then you'll need to populate the tables with the scrapers; information on running them can be found under the 'scrapers' subsection. You can then upload the generated CSV files using the `Upload.sql` file, which contains the SQL code to truncate or update the existing tables from a local CSV. Finally, add the new term to the `util.php` file as the last option, which becomes the default.
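`Upload.sql` holds the real statements, but as an illustration only, the shape of a truncate-and-reload for one table can be sketched like this (the table name, file name, and CSV format options here are examples, not copied from the project):

```python
def load_csv_statements(table, csv_path):
    """Build a truncate + LOAD DATA pair for refreshing one table from a CSV.

    The table/path arguments and the FIELDS/LINES options are hypothetical;
    the project's Upload.sql may use different options.
    """
    return [
        f"TRUNCATE TABLE {table};",
        f"LOAD DATA LOCAL INFILE '{csv_path}' INTO TABLE {table} "
        "FIELDS TERMINATED BY ',' ENCLOSED BY '\"' "
        "LINES TERMINATED BY '\\n' IGNORE 1 LINES;",
    ]
```

Running these against the database (e.g. through the connection from `secrets.py`) replaces a table's contents with the scraper's latest CSV.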
The project relies on various scrapers to create a MySQL database that is queried against. These each have some oddities.
- scrape_orc.js - This JS code runs in your browser to download the current ORC information for all undergraduate courses. Navigate to http://dartmouth.smartcatalogiq.com/current/orc.aspx, and run the code in the browser console. When it finishes processing, it adds a link called 'Download CSV' to the main header, which you can Right Click > Save As to save a local .csv file of the ORC data for uploading.
- scrape_timetable.py - This Python script needs a minor modification before it can be run: edit the last line so the function call has the proper term identifiers. "201703", "17S" is the 2017 spring term, "201706", "17X" is the summer, etc. This generates a file named for the term, e.g. 'scrapeClass_201703.csv'.
- scrape_medians.py - This Python script was adapted from mattgmarcus's Median-Town project. It can be run with no parameters, and will generate a file 'medians.csv' containing the averaged median for each class it found in the data from 09W up to the present (it calculates the current year automatically).
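The long and short term identifiers above encode the same term two ways, and a small helper can convert between them. Note the month-code mapping below (01 = winter, 03 = spring, 06 = summer, 09 = fall) is my inference from the two examples given, so verify it against the timetable before relying on it:

```python
# Assumed mapping from the long code's month suffix to the short season letter:
# 01 = W(inter), 03 = S(pring), 06 = X (summer), 09 = F(all).
SEASONS = {"01": "W", "03": "S", "06": "X", "09": "F"}

def short_term(term):
    """Convert a long term id like '201703' into its short form '17S'."""
    return term[2:4] + SEASONS[term[4:6]]
```

For example, `short_term("201703")` returns `"17S"` and `short_term("201706")` returns `"17X"`, matching the pairs passed to scrape_timetable.py.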
The last two scrapers can be run automatically with Python 3 via `python3 update.py`. It automatically checks for new terms and uploads them accordingly, a check that runs on my server daily.
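update.py's actual term detection isn't reproduced here, but one way to guess the current long term code from a date looks like the following. The month boundaries are assumptions on my part, not taken from the script:

```python
import datetime

def guess_term(today=None):
    """Guess the long term code ('YYYYMM') for a given date.

    Assumed season boundaries: Jan-Mar -> 01 (winter), Apr-Jun -> 03 (spring),
    Jul-Aug -> 06 (summer), Sep-Dec -> 09 (fall). update.py's real logic
    may differ.
    """
    today = today or datetime.date.today()
    if today.month <= 3:
        code = "01"
    elif today.month <= 6:
        code = "03"
    elif today.month <= 8:
        code = "06"
    else:
        code = "09"
    return f"{today.year}{code}"
```

A daily cron job can compare this guess against the terms already in the database and trigger the scrapers when a new one appears.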
- CSS (Bootstrap Grid)
- JS (jQuery, Chosen)
Created by Alex Beals © 2017