vikiru/restasaurus
RESTasaurus is a RESTful API built with Express, MongoDB, and Mongoose that delivers comprehensive data on nearly 1,200 dinosaurs!

Demo video: demo.mp4

Important

The data within the API is taken directly from Wikipedia via its API, as-is. Note that the underlying articles may have changed since the last retrieval. All images and text belong to their respective authors, and attribution is provided for both. After retrieval, the data is processed into a custom JSON object, referred to as MongooseData.

For a better understanding of the information provided by the API, please check out the models directory. The schemas used within the MongoDB database include:

  • Dinosaur: This is the main model, representing a dinosaur and its unique properties such as name, temporal range, diet, locomotion type, and a description. It also contains references to the sub-models below, which are populated with their relevant values when handling API requests.
  • ClassificationInfo: This model contains the classification information of a dinosaur, including details like its family, order, and genus.
  • DinosaurImage: This model stores the image data for a dinosaur, including the image source and attribution details.
  • DinosaurSource: This model represents the source of the dinosaur data, which is the Wikipedia article for that particular dinosaur. It includes information such as the title, author, last revision date, revision history URL, and more.

Additionally, if you would like to see an example of a response from the API and the model structure it uses, please see the Model Overview page.
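As a rough illustration, a populated Dinosaur document might look like the object below. This is a hypothetical sketch based on the model descriptions above; the field names and values are assumptions, not the actual API response, so consult the Model Overview page for the real structure.

```javascript
// Hypothetical sketch of a populated Dinosaur document, based on the
// model descriptions above. Field names and values are assumptions,
// not the actual API response.
const exampleDinosaur = {
    name: 'Tyrannosaurus',
    temporalRange: 'Late Cretaceous',
    diet: 'Carnivorous',
    locomotionType: 'Bipedal',
    description: 'A large theropod dinosaur...',
    classificationInfo: {
        family: 'Tyrannosauridae',
        order: 'Saurischia',
        genus: 'Tyrannosaurus',
    },
    image: {
        imageSrc: 'https://upload.wikimedia.org/...',
        attribution: 'Image author and license details',
    },
    source: {
        title: 'Tyrannosaurus',
        author: 'Wikipedia contributors',
        lastRevisionDate: '2024-01-01',
        revisionHistoryUrl: 'https://en.wikipedia.org/...',
    },
};
```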


📍 API Endpoints

Note

The API is currently configured to support GET requests only, and responses are returned in JSON format only. The current rate limit is 20 requests per hour.

A comprehensive overview of all available endpoints can be found in the documentation under the Endpoint Overview section. Each endpoint has a dedicated page detailing its URL, a general description, its parameters (if any), and a demonstration of the endpoint via Postman for clarity.
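As a minimal sketch of how a client might call the API: the base URL below is a placeholder (substitute the real Render deployment URL) and the `dinosaurs` endpoint name is an assumption, so check the Endpoint Overview for the actual routes.

```javascript
// Minimal client sketch. The base URL is a placeholder and the
// 'dinosaurs' endpoint is an assumption; see the Endpoint Overview
// for the actual routes.
const BASE_URL = 'https://your-deployment.onrender.com/api/v1';

// Build a full endpoint URL with optional query parameters.
function buildUrl(endpoint, params = {}) {
    const url = new URL(`${BASE_URL}/${endpoint}`);
    for (const [key, value] of Object.entries(params)) {
        url.searchParams.set(key, value);
    }
    return url.toString();
}

// Perform a GET request, surfacing the 20 requests/hour rate limit.
async function getJson(endpoint, params) {
    const res = await fetch(buildUrl(endpoint, params));
    if (res.status === 429) {
        throw new Error('Rate limit reached (20 requests per hour); try again later.');
    }
    return res.json();
}
```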

📄 General Endpoints

🦖 Dinosaur Endpoints

📷 Image Endpoints

OpenAPI Specification

To view more details about all endpoints such as the expected responses and status codes, please take a look at the OpenAPI Specification.

🛠️ Tech Stack

Backend:

Testing:

Documentation:

REST API

  • Render - the API can be accessed via the endpoint here

Please note that the API is hosted on Render's Free Tier and is therefore subject to that tier's constraints, such as spinning down on idle (no requests for 15 minutes) and a limit of 750 instance hours per month.
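Because a free-tier instance spins down on idle, the first request after a quiet period may fail or time out while the instance cold-starts. A small retry helper can smooth this over; the sketch below is a generic pattern, not part of the project itself.

```javascript
// Retry a request-producing function a few times with a fixed delay,
// to ride out free-tier cold starts. fetchFn is any function that
// returns a Promise, e.g. () => fetch(url).
async function fetchWithRetry(fetchFn, retries = 2, delayMs = 2000) {
    for (let attempt = 0; attempt <= retries; attempt++) {
        try {
            return await fetchFn();
        } catch (err) {
            if (attempt === retries) throw err;
            // Wait before the next attempt.
            await new Promise((resolve) => setTimeout(resolve, delayMs));
        }
    }
}
```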

CI:

Dev Tools:

📝 Prerequisites

Ensure that the following dependencies are installed on your machine before proceeding to the Setup Instructions.

⚡ Setup Instructions

Environment Setup

  1. Clone this repository to your local machine.
git clone https://github.com/vikiru/restasaurus.git
cd restasaurus
  2. Download and install all required dependencies.
npm install
  3. Set up your .env file with the required information.
PORT=YOUR-PORT-HERE
MONGODB_URI='YOUR-MONGODB-URI-HERE'
NODE_ENV='development'
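As a minimal sketch of how an app might consume these variables at startup: the loadConfig helper below and its default values are hypothetical, not the project's actual startup code.

```javascript
// Hypothetical config loader; the function name and the defaults are
// illustrative only, not the project's actual startup code.
function loadConfig(env = process.env) {
    if (!env.MONGODB_URI) {
        throw new Error('MONGODB_URI is required; set it in your .env file.');
    }
    return {
        port: Number(env.PORT) || 3000,
        mongodbUri: env.MONGODB_URI,
        nodeEnv: env.NODE_ENV || 'development',
    };
}
```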

Retrieving data from Wikipedia via its API

Run the retrieveData script to retrieve all dinosaur information.

npm run retrieveData

This script will retrieve information about dinosaurs from Wikipedia via its API and then process that data to construct a JSON object represented by MongooseData.

Please check the app/logs directory in the event of any errors. Specifically, you can check the errors.log or all.log to view the errors or all levels of logging, respectively.

Additionally, confirm that app/scripts contains the following JSON files:

  • allDinoNames.json: contains all dinosaur names (should be around 1427 names).
  • filteredNames.json: contains the names of the dinosaurs that passed the filtering process (should be around 1153 names).
  • htmlData.json: contains the raw HTML for each Wikipedia article as a String.
  • imageData.json: contains the image data for each Dinosaur.
  • pageData.json: contains the page data for each Wikipedia article.
  • dinosaurData.json: contains the processed data of all dinosaurs.

Saving the processed data to the MongoDB database

Once retrieveData has completed successfully, run the postData script to save all dinosaurs to your MongoDB database.

npm run postData

Please check your MongoDB database collections and ensure that the dinosaurs were saved successfully.

There should be 5 collections:

  1. classificationinfos: This collection contains all of the ClassificationInfo documents.
  2. counters: This collection is auto-created and allows for auto-indexing of documents.
  3. dinosaurimages: This collection contains all of the DinosaurImage documents.
  4. dinosaurs: This is the main collection which contains all of the Dinosaur documents.
  5. dinosaursources: This collection contains all of the DinosaurSource documents.

After completing these steps, the API should be ready for launch, with all endpoints fully operational. 🎉

🚀 Run

The API can be started via one of the following commands:

  1. Start the API in development env, with nodemon.
npm run dev
  2. Start the API in production env, without nodemon.
npm start

🔍 Testing


The comprehensive suite of tests for this project is housed within the test directory. These tests primarily verify the functionality and reliability of the API, as well as the scripts used to retrieve the data.

The tests can be run with the following command:

npm test

📜 Available Scripts

  1. Start the API in production env, without nodemon.
npm start
  2. Start the API in development env, with nodemon.
npm run dev
  3. Run all tests.
npm test
  4. Lint all files and check for any issues with ESLint.
npm run lint
  5. Fix all ESLint issues, then format the files with Prettier.
npm run prettier
  6. Retrieve all information needed for the API directly from Wikipedia via its API.
npm run retrieveData
  7. Save all dinosaur information to your MongoDB database.
npm run postData
  8. Create test coverage shield badges for the README using istanbul-badges-readme.
npm run make-badges

✨ Acknowledgments

This API would not be possible without the dinosaur information and image data retrieved from the Wikipedia articles accessed through the Wikipedia API. All images and text provided by this API belong to their respective authors.

©️ License

The contents of this repository are licensed under the terms and conditions of the MIT license.

MIT © 2024-present Visakan Kirubakaran.