GNARQL Audio Collection Aggregator

About

GNARQL creates an aggregation of structured web data focused on a user's audio collection. It exposes this aggregation through a SPARQL endpoint. See also Yves Raimond's PhD thesis "A Distributed Music Information System" for a description of this tool.

Install

Run

  • Launch start.pl (modify the header of the script if your Prolog installation is somewhere other than /usr/local)
  • That's it!
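
Once start.pl is running, a quick way to confirm that the server is up is to request its root URL over HTTP. The sketch below uses Python purely for illustration and assumes the default host and port (localhost:3021) used by the URIs later in this README.

    import urllib.request

    def gnarql_is_up(base="http://localhost:3021/"):
        """Return True if the GNARQL web interface answers on the given base URL."""
        try:
            with urllib.request.urlopen(base, timeout=5) as response:
                return response.status == 200
        except OSError:
            return False

    print("GNARQL is up" if gnarql_is_up() else "GNARQL is not reachable")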

Use

Now, here are the things that you can do. GNARQL exposes a set of URIs that can be used to drive it.

First, you'll need to load a GNAT'ed audio collection. GNAT is a tool that drops small RDF files throughout your music collection, linking its items to corresponding identifiers out on the Web; you can download it at http://sourceforge.net/projects/motools/.

  • http://localhost:3021/load?path=/path/to/your/collection&base=http://base_uri/

    The "path" parameter holds the path to your audio collection, and the optional "base" parameter holds a base URI (useful, for example, if your collection is also served over HTTP and you want HTTP identifiers to be used). A scripted example of these control URIs follows this list.

  • http://localhost:3021/reload

    If your collection has been modified, or if you have just GNAT'ed some new items, this will go through the previously loaded collections and look for modifications.

  • http://localhost:3021/make

    This will reload every previously loaded or crawled RDF file that has changed.
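
All of these control URIs are plain HTTP GETs, so they can be driven from a script as well as from a browser. The following Python sketch uses only the standard library; the endpoint paths and parameter names ("path", "base") come from the list above, while the collection path and base URI are the same placeholders used there.

    import urllib.parse
    import urllib.request

    GNARQL = "http://localhost:3021"

    def call(endpoint, **params):
        """GET a GNARQL control URI, URL-encoding any query parameters."""
        url = GNARQL + endpoint
        if params:
            url += "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as response:
            return response.read().decode("utf-8", errors="replace")

    # Load a GNAT'ed collection; the "base" parameter is optional.
    print(call("/load", path="/path/to/your/collection", base="http://base_uri/"))

    # Pick up modifications to already-loaded collections, then reload changed files.
    print(call("/reload"))
    print(call("/make"))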

Now that we have ingested some raw information about our audio collection, let's aggregate some data about it!

  • http://localhost:3021/crawl/init?n=10

    This will initialise 10 crawlers. The "n" parameter is the number of crawlers to be instantiated.

  • http://localhost:3021/crawl/start

    This will launch the crawling process. Note that crawling should resume cleanly if you stop GNARQL or if it crashes.
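
As with the loading URIs, these are plain HTTP GETs; a minimal Python sketch, assuming the default host and port:

    import urllib.request

    GNARQL = "http://localhost:3021"

    # Instantiate 10 crawlers, then start the crawl.
    urllib.request.urlopen(GNARQL + "/crawl/init?n=10").close()
    urllib.request.urlopen(GNARQL + "/crawl/start").close()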

Additional Notes

At any time, you can query the GNARQL instance through the SPARQL endpoint at:

  • http://localhost:3021/sparql/
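
Queries can be sent SPARQL-protocol style, as an HTTP GET with a URL-encoded "query" parameter; the exact result serialisations on offer depend on the underlying Prolog SPARQL server. A minimal Python sketch:

    import urllib.parse
    import urllib.request

    ENDPOINT = "http://localhost:3021/sparql/"

    # A generic query: list ten triples from the aggregated data.
    query = """
    SELECT ?s ?p ?o
    WHERE { ?s ?p ?o }
    LIMIT 10
    """

    url = ENDPOINT + "?" + urllib.parse.urlencode({"query": query})
    with urllib.request.urlopen(url) as response:
        print(response.read().decode("utf-8", errors="replace"))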

There is also a Web interface available at:

  • http://localhost:3021/

PS

The old project repository at SourceForge is now deprecated. All new development will be pushed to this GitHub repository.
