:shipit: Domain-specific search engine service backed by Django, Apache Solr, and Scrapy

Sleuth 🔎


UBC's own search engine 🚀

Getting Started

Please see CONTRIBUTING for guidelines on how to contribute to this repo.

Useful Links

Getting Started: Docker, Django, Django Documentation, Apache Solr, Haystack

Installation

  • Clone this repository and the Sleuth front-end into the same directory.
  • Install Docker.
  • Build and start the containers:
$ docker-compose up --build
  • Once the containers have started, you can exec into a bash shell in the web container and create a Django admin user:
$ docker-compose exec web bash
# Create Django admin user
root@57d91373cdca:/home/sleuth# python3 manage.py createsuperuser

Accessing Solr

Once the containers are running, the Solr admin UI is available at http://localhost:8983/solr

Accessing Django

Accessing the Sleuth Front-end App

The Sleuth front-end repository is here

Adding Test Data

Once you have started your containers, you can populate the "test" core in Solr with some test data by running

$ bash scripts/populate.sh

For live data, you can currently run the BroadCrawler, which scrapes a few thousand pages and pipelines each one into the appropriate Solr core based on its type.

$ bash sleuth_crawler/run_crawlers.sh
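The core-routing idea behind that pipeline can be sketched as follows. This is a minimal illustration, not Sleuth's actual pipeline code: the mapping, core names, and function name here are assumptions.

```python
# Illustrative sketch: choose a Solr core for a scraped item based on
# its type. CORE_FOR_TYPE and pick_core are hypothetical names; the
# real crawler's routing lives in sleuth_crawler.
CORE_FOR_TYPE = {
    "course": "courses",
    "generic": "genericPage",
}

def pick_core(item_type, default="test"):
    """Return the Solr core an item of this type should be indexed into."""
    return CORE_FOR_TYPE.get(item_type, default)

print(pick_core("course"))   # a known type maps to its own core
print(pick_core("unknown"))  # anything else falls back to the default core
```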

To empty a core, go to:

http://localhost:8983/solr/[CORE_NAME_HERE]/update?stream.body=<delete><query>*:*</query></delete>&commit=true
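If the raw angle brackets in that URL give your HTTP client trouble, the same request can be built with a properly URL-encoded query string. A minimal Python sketch, assuming Solr on localhost:8983 as above (the helper name is ours, not part of Sleuth):

```python
from urllib.parse import urlencode

def empty_core_url(core):
    """Build the Solr update URL that deletes every document in a core."""
    params = urlencode({
        "stream.body": "<delete><query>*:*</query></delete>",
        "commit": "true",
    })
    return f"http://localhost:8983/solr/{core}/update?{params}"

print(empty_core_url("test"))
```

Requesting the resulting URL (e.g. with curl or a browser) empties the named core, so double-check the core name before you run it.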