Log Ingestor & Query Interface

This is my solution to the SDE 1 assignment from Dyte.

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. List Of Features
  5. Further Thoughts
  6. Contact
  7. Acknowledgments

About The Project

System Design of The Project


About Each Component

  1. Nginx endpoint on port 3000 - functions as a load balancer, distributing incoming requests between the two Express servers (more can be added as required).
  2. Express web servers - their only job is to publish each log message to a message queue, in our case RabbitMQ (see the sketch after this list).
  3. RabbitMQ - buffers the log messages in a queue and gives consumers a steady stream of messages to consume.
  4. Message consumer & REST server - consumes the messages and writes them to a data source, in this case a Postgres RDBMS; here I am posting messages via a simple POST request.
  5. Postgres RDBMS - functions as the data source and provides a queryable interface to other consumers.
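
To make the publishing side concrete, here is a minimal sketch of what each Express server does, assuming the amqplib client and a queue named "logs" (the queue name, hostname, and port are illustrative assumptions, not necessarily what this repo uses):

    // express-publisher.js - minimal sketch, assuming amqplib and a "logs" queue
    const express = require('express');
    const amqp = require('amqplib');

    async function main() {
      // Hostname/port are assumptions for a docker-compose network.
      const conn = await amqp.connect('amqp://rabbitmq:5672');
      const channel = await conn.createChannel();
      await channel.assertQueue('logs', { durable: true });

      const app = express();
      app.use(express.json());

      // Each incoming log is pushed onto the queue; the consumer service
      // persists it to Postgres later.
      app.post('/logs', (req, res) => {
        channel.sendToQueue('logs', Buffer.from(JSON.stringify(req.body)), { persistent: true });
        res.status(202).send('queued');
      });

      app.listen(3001);
    }

    main().catch(console.error);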

(back to top)

Built With

  • Nginx
  • Node.js / Express
  • RabbitMQ
  • PostgreSQL (with Sequelize)
  • A JavaScript frontend (dev server on port 5173)

(back to top)

Getting Started

To get a local copy up and running, follow the steps below.

Prerequisites

First, clone the client-nginx folder.
Make sure you have Docker installed on your machine.
Then change into the folder you just cloned and run the following:

  • docker-compose
    docker-compose up --build

You should now see five containers running: 1 Nginx, 2 Node servers, 1 Postgres RDBMS, and 1 RabbitMQ.

Installation of Consumer-backend and front-end

  1. Clone the Consumer-backend & Consumer-frontend repos
  2. After cloning the Consumer-backend, run (inside the Consumer-backend):
     npm i
     Ensure that all dependencies are installed. After that we need to run some db migrations:
     npx sequelize-cli db:migrate
  3. After running the migrations we can start up our consumer-backend server:
     npm run dev
  4. To verify that the consumer-backend is working fine, check the following endpoints:
     http://localhost:8081/
     http://localhost:8081/hello

These endpoints confirm that your backend is up and connected to the database.
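
For context, here is a minimal sketch of what the consumer loop on the consumer-backend might look like, assuming amqplib, a queue named "logs", and a POST endpoint on the REST server at http://localhost:8081/logs (the queue name and endpoint path are assumptions, not necessarily what this repo uses):

    // consumer.js - minimal sketch of the message consumer, assuming amqplib
    const amqp = require('amqplib');

    async function consume() {
      const conn = await amqp.connect('amqp://localhost:5672');
      const channel = await conn.createChannel();
      await channel.assertQueue('logs', { durable: true });

      channel.consume('logs', async (msg) => {
        if (!msg) return;
        const log = JSON.parse(msg.content.toString());
        // Persist via the REST server, which writes to Postgres through Sequelize.
        // Node 18+ has fetch built in.
        await fetch('http://localhost:8081/logs', {
          method: 'POST',
          headers: { 'Content-Type': 'application/json' },
          body: JSON.stringify(log),
        });
        channel.ack(msg);
      });
    }

    consume().catch(console.error);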

  1. Now we can run the frontend. Clone the Consumer-frontend repo and simply run

     npm run dev
  2. Now you can view the frontend at http://localhost:5173/

  3. Hurray, your system setup is complete!

(back to top)

Usage

You can post log messages in the following format:

    {
      "level": "problem",
      "message": " a new message to abhi",
      "resourceId": "server-1434",
      "timestamp": "2023-09-15T08:00:00Z",
      "traceId": "abc-xyz-123",
      "spanId": "span-456",
      "commit": "5e5342f",
      "metadata": {
        "parentResourceId": "server-0987"
      }
    }

Use Postman to post these messages to

http://localhost:3000/logs
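
If you prefer a script over Postman, a quick sketch using Node 18+'s built-in fetch would look like this (the payload is just the sample above):

    // post-log.js - equivalent of the Postman request, using Node 18+ fetch
    const log = {
      level: 'problem',
      message: ' a new message to abhi',
      resourceId: 'server-1434',
      timestamp: '2023-09-15T08:00:00Z',
      traceId: 'abc-xyz-123',
      spanId: 'span-456',
      commit: '5e5342f',
      metadata: { parentResourceId: 'server-0987' },
    };

    fetch('http://localhost:3000/logs', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(log),
    }).then((res) => console.log(res.status));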

Now you can use the frontend to filter and search through the messages with the filters provided.
All filters combine on an AND basis.

For more examples, please refer to the Documentation

(back to top)

List Of Features Present

  • Find all logs with the level set to "error".
  • Search for logs with the message containing the term "Failed to connect".
  • Retrieve all logs related to resourceId "server-1234".
  • Filter logs between the timestamps "2023-09-10T00:00:00Z" and "2023-09-15T23:59:59Z". (Bonus)
  • Allow combining multiple filters (see the query sketch below).
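
As a rough illustration of how combined filters map to a single AND query, here is a sketch using Sequelize (the model and attribute names are assumptions, not necessarily the ones used in this repo):

    // search-logs.js - sketch of AND-combined filters with Sequelize
    const { Op } = require('sequelize');

    // `Log` is an assumed Sequelize model for the logs table.
    async function searchLogs(Log, filters) {
      const where = {};
      if (filters.level) where.level = filters.level;
      if (filters.resourceId) where.resourceId = filters.resourceId;
      if (filters.message) where.message = { [Op.iLike]: `%${filters.message}%` };
      if (filters.from && filters.to) {
        where.timestamp = { [Op.between]: [filters.from, filters.to] };
      }
      // Sequelize ANDs all keys on `where` by default.
      return Log.findAll({ where });
    }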

(back to top)

Further Thoughts

There are many improvements that could be made to the system which, given the time constraint, I could not explore and implement.

Some of the improvements/explorations that could be made:

  1. Implementing Postgres's full-text search (tsvector) feature for faster search (see the sketch after this list)
  2. Implementing a role-based login for the query UI
  3. Using a time-series DB, or something like https://www.mongodb.com/docs/manual/core/timeseries-collections/, as a datasource
  4. Exploring a cloud-based storage solution to archive old logs to something like S3
  5. Maybe using something other than HTTP calls
  6. Exploring possibilities with Elasticsearch
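
For improvement 1, a rough sketch of what Postgres full-text search could look like through Sequelize (table and column names are assumptions):

    // fts-search.js - rough sketch of Postgres full-text search via Sequelize
    const { QueryTypes } = require('sequelize');

    // A GIN index would make this fast, e.g.:
    //   CREATE INDEX logs_message_fts ON "Logs" USING GIN (to_tsvector('english', message));
    async function fullTextSearch(sequelize, term) {
      return sequelize.query(
        `SELECT * FROM "Logs"
         WHERE to_tsvector('english', message) @@ plainto_tsquery('english', :term)`,
        { replacements: { term }, type: QueryTypes.SELECT }
      );
    }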

(back to top)

Contact

Abhinav Singh - abhinav16197@gmail.com

(back to top)

Acknowledgments

  • Thanks to the Dyte team for providing this problem statement.
  • It was an amazing learning experience for me, building this system end to end.
  • I would love to know what you think of my solution/submission.
