morganhein/backend-takehome-telegraph

Morgan's thoughts on this solution:

  1. I used squirrel to learn the library a bit and treat this as an educational exercise. I like it, but scanning values into objects needs some love.
  2. I tried to use the standard library for muxing/route handling/route arguments/query params. I won't do that again. Next time, I'll use something like https://gin-gonic.com/
  3. Some of the patterns here would need to be fleshed out more for a real solution.
  4. Tests are obviously missing.
  5. None of the extra credit was completed.

Morgan's thoughts on the below:

  1. There is no sighting_data property. I piped through the ability to filter on an exact sighting_date, but further discovery on what would make that filter actually usable would be required before implementing a real system.
  2. It seems like duplicate Equipment records come back for a single Waybill, which doesn't make sense to me, but that's what gets returned.

To run:

  1. Follow the instructions below to get the Postgres database up.
  2. Then hydrate the database: go run cmd/hydrate/main.go
  3. Then start the server: go run cmd/server/main.go

Telegraph Backend Take-home

This repo has everything you need to complete the take-home assignment. Know that we are excited about you as a candidate, and can't wait to see what you build!

Requirements

The Falcon project scaffold is inspired by falcon-sqlalchemy-template

Getting Started

Installation and setup

  1. Fork and clone this repo onto your own computer
  2. Start your own database server, OR:
    1. Copy .env.sample to .env and set the values appropriately
    2. Run the database with the command docker-compose up -d
  3. Depending on the values you used in your .env file, set the SQLALCHEMY_DATABASE_URI environment variable to point to your database. For example,
export SQLALCHEMY_DATABASE_URI=postgresql://candidate:password123@localhost:5432/takehome
  4. Change directory to the webapp directory and run pip install -r requirements.txt to install required dependencies
  5. In the same directory, run gunicorn --reload api.wsgi:app to run the web application

The API will be exposed locally at http://127.0.0.1:8000

Run curl http://127.0.0.1:8000/health/ping to test your server. It should return the following JSON:

{"ping": "true"}

It is recommended you create a Python virtual environment for running your project

Migrations

Alembic example usage

Add new migrations with

alembic revision --autogenerate -m "migration name"

Upgrade your database with

alembic upgrade head
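
Should a migration need to be rolled back, the most recent revision can be reverted with

alembic downgrade -1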

Expectations

  • you provide clear documentation
  • any code you write is clear and well organized
  • you spend no more than 3-4 hours total on the project

BONUS: you provide tests

Data description

In the data/ directory are 4 files.

  • locations.csv - a list of locations. The id field is the internal, autogenerated ID for each location.
  • equipment.csv - a list of equipment (i.e., rail cars). The id field is the internal, autogenerated ID for each piece of equipment. The equipment_id field should be considered the primary key for creating relations to other files.
  • events.csv - a list of tracking events. The id field is the internal, autogenerated ID for each tracking event. The field waybill_id is a foreign key to the waybills file. The field location_id is a foreign key to the locations file. The field equipment_id is a foreign key to the equipment file.
  • waybills.csv - a list of waybills. A waybill is a list of goods being carried on a rail car. The origin_id and destination_id are foreign keys to the locations file. The field equipment_id is a foreign key to the equipment file. The id field is the internal, autogenerated ID for each waybill. The route and parties fields contain JSON arrays of objects. The route field details the rail stations (AKA "scacs") the train will pass through. The parties field defines the various companies involved in shipping the item from its origin to its destination (e.g., shippers, etc.).

NOTE: All dates are in UTC.
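
As a rough illustration of these relationships only (not the repo's actual schema; column types, class names, and the exact foreign-key targets are assumptions based on the description above), the four files might map to SQLAlchemy models along these lines:

from sqlalchemy import Column, DateTime, ForeignKey, Integer, String
from sqlalchemy.dialects.postgresql import JSONB
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Location(Base):
    __tablename__ = "locations"
    id = Column(Integer, primary_key=True)  # internal, autogenerated ID
    # ...plus the remaining columns from locations.csv

class Equipment(Base):
    __tablename__ = "equipment"
    id = Column(Integer, primary_key=True)      # internal, autogenerated ID
    equipment_id = Column(String, unique=True)  # the key used for relations to other files

class Waybill(Base):
    __tablename__ = "waybills"
    id = Column(Integer, primary_key=True)      # internal, autogenerated ID
    origin_id = Column(Integer, ForeignKey("locations.id"))
    destination_id = Column(Integer, ForeignKey("locations.id"))
    equipment_id = Column(String, ForeignKey("equipment.equipment_id"))
    route = Column(JSONB)    # JSON array of objects: the rail stations ("scacs") on the route
    parties = Column(JSONB)  # JSON array of objects: companies involved in the shipment

class Event(Base):
    __tablename__ = "events"
    id = Column(Integer, primary_key=True)      # internal, autogenerated ID
    waybill_id = Column(Integer, ForeignKey("waybills.id"))
    location_id = Column(Integer, ForeignKey("locations.id"))
    equipment_id = Column(String, ForeignKey("equipment.equipment_id"))
    sighting_date = Column(DateTime)            # assumption: UTC timestamp of the sighting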

User Stories

1. Ingestion pipeline

Implement a data ingestion pipeline that allows you to ingest the 4 CSV files into your database for use with your web application (see user story number 2). Provide clear documentation on how to invoke your pipeline (i.e., run this script, invoke this Makefile target, etc.). Assume that the pipeline can be run on demand and it should drop any existing data and reload it from the files.
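
For illustration, a minimal version of such a pipeline could look like the sketch below, assuming the models sketched above live in a hypothetical models module and SQLALCHEMY_DATABASE_URI is set. This is not the repo's actual cmd/hydrate program, and the file paths and load order are assumptions:

import csv
import os

from sqlalchemy import create_engine
from sqlalchemy.orm import Session

from models import Base, Location, Equipment, Waybill, Event  # hypothetical module (see sketch above)

engine = create_engine(os.environ["SQLALCHEMY_DATABASE_URI"])

def load(session, path, model):
    # NOTE: DictReader yields strings; a real pipeline would parse the JSON
    # columns (route, parties) and timestamps before inserting.
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            session.add(model(**row))

def main():
    # "drop any existing data and reload it from the files"
    Base.metadata.drop_all(engine)
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        # load in dependency order so foreign keys resolve
        load(session, "data/locations.csv", Location)
        load(session, "data/equipment.csv", Equipment)
        load(session, "data/waybills.csv", Waybill)
        load(session, "data/events.csv", Event)
        session.commit()

if __name__ == "__main__":
    main()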

2. Web application

Finish implementing the scaffold Falcon app to read data from your database and provide the following routes:

  • /equipment - data from equipment.csv
  • /events - data from events.csv
  • /locations - data from locations.csv
  • /waybills - data from waybills.csv
  • /waybills/{waybill id} - should return information about a specific waybill
  • /waybills/{waybill id}/equipment - should return the equipment associated with a specific waybill
  • /waybills/{waybill id}/events - should return the events associated with a specific waybill
  • /waybills/{waybill id}/locations - should return the locations associated with a specific waybill

All the routes should return JSON.

Any event route should allow for filtering by the sighting_data field
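
A minimal sketch of the waybill-events route with that filter, assuming the hypothetical models and engine modules from the sketches above and that the underlying column is actually sighting_date (see the notes at the top of this README); class names and route wiring are illustrative, not the scaffold's:

import falcon
from sqlalchemy import select
from sqlalchemy.orm import Session

from db import engine      # hypothetical engine setup
from models import Event   # hypothetical models module (see sketch above)

class WaybillEvents:
    def on_get(self, req, resp, waybill_id):
        query = select(Event).where(Event.waybill_id == int(waybill_id))
        # optional exact-match filter: GET /waybills/1/events?sighting_date=2021-01-01T00:00:00
        sighting_date = req.get_param("sighting_date")
        if sighting_date is not None:
            query = query.where(Event.sighting_date == sighting_date)
        with Session(engine) as session:
            events = session.scalars(query).all()
        resp.media = [
            {
                "id": e.id,
                "waybill_id": e.waybill_id,
                "location_id": e.location_id,
                "equipment_id": e.equipment_id,
                "sighting_date": e.sighting_date.isoformat() if e.sighting_date else None,
            }
            for e in events
        ]

app = falcon.App()
app.add_route("/waybills/{waybill_id}/events", WaybillEvents())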

3. BONUS: Route endpoint

Note: This user story is optional, and on an "if-you-have-time" basis.

Provide a /waybills/{waybill id}/route endpoint that returns information about the route associated with a specific waybill.
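
Since the route is already stored on the waybill row as a JSON array, a sketch of this endpoint (same hypothetical modules and names as above, registered with app.add_route as in the previous sketch) could simply look up the waybill and return that field:

import falcon
from sqlalchemy.orm import Session

from db import engine        # hypothetical engine setup
from models import Waybill   # hypothetical models module (see sketch above)

class WaybillRoute:
    def on_get(self, req, resp, waybill_id):
        with Session(engine) as session:
            waybill = session.get(Waybill, int(waybill_id))  # assumes integer internal IDs
            if waybill is None:
                raise falcon.HTTPNotFound()
            resp.media = waybill.route  # JSON array of rail stations ("scacs")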

4. BONUS: Parties endpoint

Note: This user story is optional, and on an "if-you-have-time" basis.

Provide a /waybills/{waybill id}/parties endpoint that returns information about the parties associated with a specific waybill.

About

Simple REST api hydrated by CSV files
