

Style tweaks
joengelm committed Aug 26, 2016
1 parent 0644482 commit b8fd0e4
Showing 2 changed files with 4 additions and 2 deletions.
5 changes: 3 additions & 2 deletions api/meta.py
@@ -65,10 +65,11 @@ def github():
@app.route('/contribute', methods=['GET'])
def contribute():
# TODO: Get rid of inline documentation, this is very bad...
contents='The future of Brown APIs depends on you! All of our code is open source, and we rely heavily on contributions from the Brown community. You can view our code (along with open issues and future plans) [on Github](https://github.com/hackatbrown/apis).\r\n\r\n## How to Help\r\n\r\nThere are many ways to help further the development of Brown APIs. You can add new APIs, maintain and enhance current APIs, fix bugs, improve this website, or build better tools to help others contribute. Check the [issues](https://github.com/hackatbrown/apis/issues) on our Github for suggestions of what to do first. You don\'t need to be able to code to help either. Reach out to CIS and other university organizations to get easier and wider access to campus data.\r\n\r\n## General Development Information\r\n\r\nThe APIs are written in Python and run on a [Flask](http://flask.pocoo.org) server. This website is also served by the same server and uses [Jinja](http://jinja.pocoo.org) templates with the [Bootstrap](http://getbootstrap.com) framework.\r\n\r\nData is stored in a single [MongoDB](https://docs.mongodb.com/getting-started/python/introduction/) database hosted on [mLab.com](https://mlab.com/) (_Note: This was probably a bad decision that could really use some contributions to fix!_). Because there is only one copy of the database, developers must take care to avoid corrupting the data while testing fixes or new features.\r\n\r\n## Getting Started\r\n\r\nYou\'ll need the latest version of Python 3, along with `virtualenv` and `pip`. Go ahead and look up these programs if you aren\'t familiar with them. They\'re crucial to our development process.\r\n\r\n1. Clone this repository to your own machine:\r\n - `git clone https://github.com/hackatbrown/brown-apis.git`\r\n2. Open a terminal and navigate to the top level of the repository (_brown-apis/_).\r\n3. 
Create and activate a virtual environment (again, look up `virtualenv` online to understand what this does):\r\n - ``virtualenv -p `which python3` venv``\r\n - `source venv/bin/activate`\r\n4. Install all the required libraries in your virtual environment:\r\n - `pip install -r requirements.txt`\r\n5. Create a new branch for your changes. For example (while on the master branch):\r\n - `git checkout -b <descriptive-branch-name>`\r\n6. Make any changes you want to make.\r\n7. Commit your changes, push them to `origin/<branch-name>`, and open a new pull request.\r\n8. To test your code, you may merge your changes into the `stage` branch. These changes will be automatically reflected on our [staging server](http://brown-apis-staging.herokuapp.com/). You can merge changes from your branch into the `stage` branch with:\r\n - `git checkout stage`\r\n - `git fetch origin`\r\n - `git reset --hard origin/master`\r\n - `git rebase <your-branch-name>`\r\n - `git push --force`\r\n - Note: This won\'t work if multiple developers are doing this at the same time.\r\n9. Your code will be merged into `master` once your pull request is accepted.\r\n\r\n#### How to Run Scripts\r\n\r\n1. Navigate to the top-level directory (_brown-apis/_).\r\n2. Run the script from a package environment, allowing it to import the database from the _api_ package:\r\n - `python3 -m api.scripts.<scriptname>` where \'scriptname\' does NOT include the \'.py\' extension.\r\n3. You can include any script arguments after the command (just like you normally would).\r\n\r\n## Data Structures\r\n\r\nWe use MongoDB to store various menus and schedules, as well as client information. In MongoDB, all objects are stored as JSON, and there is no schema that forces all objects in a collection to share the same fields. Thus, we keep documentation of the different collections here (and in the API overviews below) to encourage an implicit schema. Objects added to the database should follow these templates. 
If you add a new collection to the database, remember to add a template here, too.\r\n\r\n#### db.clients ####\r\n\r\n- *username*: &lt;STRING&gt;,\r\n- *client_email*: &lt;STRING&gt;,\r\n- *client_id*: &lt;STRING&gt;,\r\n- *valid*: &lt;BOOLEAN&gt;, **<-- can this client make requests?**\r\n- *joined*: &lt;DATETIME&gt;, **<-- when did this client register?**\r\n- *requests*: &lt;INTEGER&gt; **<-- total number of requests made by this client (not included until this client makes their first request)**\r\n- *activity*: **list of activity objects which take the form:**\r\n * _timestamp_: &lt;DATETIME&gt;, **<-- time of request**\r\n * _endpoint_: &lt;STRING&gt; **<-- endpoint of request**\r\n- **DEPRECATED:** *client_name*: &lt;STRING&gt; **<-- replaced with _username_**\r\n\r\n#### db.api_documentations ####\r\n- *urlname*: &lt;STRING&gt;\r\n- *name*: &lt;STRING&gt;\r\n- *contents*: &lt;STRING&gt;\r\n- *imageurl*: &lt;IMAGE&gt;\r\n\r\n\r\n## High Level API Overviews\r\n\r\n### Dining\r\n\r\nThe Dining API is updated every day by a scraper that parses the menus from Brown Dining Services\' website. The hours for each eatery are entered manually inside of the scraper script before each semester. When the scraper is run, all this data is stored in the database. Calls to the API trigger various queries to the database and fetch the scraped data.\r\n\r\n#### db.dining\_menus\r\n\r\n- *eatery*: &lt;STRING&gt;,\r\n- *year*: &lt;INTEGER&gt;,\r\n- *month*: &lt;INTEGER&gt;,\r\n- *day*: &lt;INTEGER&gt;,\r\n- *start_hour*: &lt;INTEGER&gt;, **<-- these four lines describe a menu\'s start/end times**\r\n- *start_minute*: &lt;INTEGER&gt;, \r\n- *end_hour*: &lt;INTEGER&gt;, \r\n- *end_minute*: &lt;INTEGER&gt;,\r\n- *meal*: &lt;STRING&gt;,\r\n- *food*: [ &lt;STRING&gt;, &lt;STRING&gt;, ... ] **<-- list of all food items on menu**\r\n- *&lt;section&gt;*: [ &lt;STRING&gt;, &lt;STRING&gt;, ... ], **<-- category (e.g. "Bistro") mapped to list of food items**\r\n- ... 
(there can be multiple sections per menu)\r\n\r\n#### db.dining\_hours\r\n\r\n- *eatery*: &lt;STRING&gt;,\r\n- *year*: &lt;INTEGER&gt;,\r\n- *month*: &lt;INTEGER&gt;,\r\n- *day*: &lt;INTEGER&gt;,\r\n- *open_hour*: &lt;INTEGER&gt;,\r\n- *open_minute*: &lt;INTEGER&gt;, \r\n- *close_hour*: &lt;INTEGER&gt;, \r\n- *close_minute*: &lt;INTEGER&gt;\r\n\r\n#### db.dining\_all\_foods\r\n\r\n- *eatery*: &lt;STRING&gt;,\r\n- *food*: [ &lt;STRING&gt;, &lt;STRING&gt;, ... ]\r\n\r\n### WiFi\r\n\r\nThe WiFi API just forwards requests to another API run by Brown CIS. Their API is protected by a password (HTTP Basic Auth) and is nearly identical to the WiFi API that we expose. The response from the CIS API is returned back to the client.\r\n\r\n### Laundry\r\n\r\nThe Laundry API is updated manually with a scraper that pulls all the laundry rooms and stores them in the database. When a request is received, the API checks the request against the list of rooms in the database and optionally retrieves status information by scraping the laundry website in realtime.\r\n\r\n#### db.laundry\r\n- *room*\r\n - *name*: &lt;STRING&gt;\r\n - *id*: &lt;INT&gt;\r\n - *machines*: list of objects with:\r\n - *id*: &lt;INT&gt;\r\n - *type*: &lt;STRING&gt; (one of `washFL`, `washNdry`, `dry`)\r\n\r\n### Academic\r\n\r\nThe Academic API used to scrape course information from Banner and store it in the database. Since Banner has been deprecated for course selection, the Academic API scraper has stopped working, and we are no longer able to collect course data. Thus, the Academic API is unavailable for the foreseeable future. Contributions are especially welcome here.'
contents=Markup(markdown.markdown(contents))
  return render_template('documentation_template.html',
-     api_documentations=[], name='How to Contribute', contents=contents)
+     api_documentations=list(api_documentations.find().sort("_id",1)),
+     name='How to Contribute', contents=contents)
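
The TODO above flags the inline documentation string as something to get rid of. A minimal sketch of one way to resolve it, assuming a hypothetical `docs/contribute.md` file alongside the package (the file and helper name are assumptions, not part of this commit):

```python
import os

def load_contribute_docs(path=os.path.join('docs', 'contribute.md')):
    """Read the contribute page's markdown from a standalone file
    (hypothetical path) instead of an inline string literal."""
    with open(path, encoding='utf-8') as f:
        return f.read()
```

The route would then feed `load_contribute_docs()` to `markdown.markdown(...)` exactly as it does with the inline string today.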

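The `db.dining_hours` template in the embedded guide above implies date-keyed lookups against the collection. A sketch of such a query, assuming a pymongo-style `db` handle (the helper name is hypothetical, not the project's actual code):

```python
from datetime import datetime

def todays_hours(db, eatery):
    """Find today's open/close document for an eatery, following the
    db.dining_hours template (eatery/year/month/day keys). `db` can be
    a pymongo Database or any object exposing the same find_one API."""
    now = datetime.now()
    return db.dining_hours.find_one({
        "eatery": eatery,
        "year": now.year,
        "month": now.month,
        "day": now.day,
    })
```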
@app.route('/admin/add-documentation', methods=['GET', 'POST'])
@requires_auth
1 change: 1 addition & 0 deletions api/templates/signup.html
@@ -40,6 +40,7 @@ <h1>Sign up for a Client ID!</h1>
</div>

<div class="row">
<br/>
<p><em>Were you looking to contribute to our APIs, rather than use them? Check out how to <a href="https://api.students.brown.edu/contribute">contribute</a>!</em></p>
</div>
</div>
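
The WiFi section of the embedded guide above notes that the proxy attaches HTTP Basic Auth credentials before forwarding requests to the CIS API. A standard-library sketch of that credential handling (the function name is an assumption, not the project's actual code):

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value for HTTP Basic Auth, as a
    proxy like the WiFi API would when forwarding to a protected API."""
    creds = base64.b64encode(f"{username}:{password}".encode()).decode()
    return f"Basic {creds}"
```

The resulting value would be passed as `headers={"Authorization": basic_auth_header(user, pw)}` on the outbound request.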
