Running bigdata as an embedded database is great, but sometimes you need to share your data across multiple processes. One easy way to do this is with the OpenRDF Sesame HTTP Server. A few simple steps will get you up and running:
1. Install Sesame 2.3. We have tested with Sesame 2.3.0, but 2.3.1 should probably work just fine. Download and unpack the SDK, and make a note of where you’ve installed it. We’ll refer to this directory later as SESAME_DIR.
2. Locate the SESAME_DIR/war directory and unpack the openrdf-sesame.war web application into TOMCAT_HOME/webapps/openrdf-sesame. We’ll refer to this deployment directory later as SESAME_SERVER_DIR.
3. Launch the Sesame command line console once to allow Sesame to create its working data directory, known as ADUNA_DATA. Locate this directory; on our Windows machine it is created at: C:\Documents and Settings\mike\Application Data\Aduna\OpenRDF Sesame console. We’ll refer to it later as ADUNA_DATA_DIR.
4. Check out the bigdata source tree from SVN and locate the “build.properties” file in the root directory of the source tree. Search the file for “OpenRDF Sesame HTTP Server”. You will find three properties that need to be set: sesame.dir, sesame.server.dir, and aduna.data.dir. Set them to SESAME_DIR, SESAME_SERVER_DIR, and ADUNA_DATA_DIR from steps 1–3, respectively.
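Once set, the relevant section of build.properties should look something like the following. The paths here are illustrative only (they mirror our Windows setup from step 3); substitute your own directories, and note that properties files are happiest with forward slashes:

```properties
# OpenRDF Sesame HTTP Server (example values -- use your own paths)
sesame.dir=C:/sesame-2.3.0
sesame.server.dir=C:/tomcat/webapps/openrdf-sesame
aduna.data.dir=C:/Documents and Settings/mike/Application Data/Aduna/OpenRDF Sesame console
```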
5. You can now run the “install-sesame-server” ant task, which compiles the bigdata source, builds a jar file, and installs the various files necessary to get bigdata running behind the Sesame HTTP Server.
6. Start Tomcat to get the Sesame HTTP Server web application running.
7. Use the Sesame console application (located in SESAME_DIR/bin) to create a new bigdata repository instance. You will need to specify a repository ID (default is “bigdata”), a repository title, and the location of a bigdata properties file.
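A console session for this step might look roughly like the one below (the exact prompts vary by Sesame version, and the URL assumes Tomcat is running on its default port 8080). Note that Sesame console commands are terminated with a period; after the create command, the console prompts you for the repository ID, title, and properties file location described above:

```
> connect http://localhost:8080/openrdf-sesame.
> create bigdata.
```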
8. Once the repository is created, you can verify your installation by running the DemoSesameServer application located in bigdata-sails/src/samples/com/bigdata/samples/remoting. This application creates some statements and then performs a simple SPARQL query.
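You can also poke at the repository directly over HTTP: the Sesame REST protocol exposes each repository at SERVER_URL/repositories/{ID} and accepts SPARQL queries via a “query” parameter. Here is a minimal, stdlib-only Java sketch along those lines; the server URL and repository ID are assumptions matching the defaults used above, and the class name is ours:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.UnsupportedEncodingException;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class SparqlProbe {

    // Sesame REST protocol: each repository lives at <server>/repositories/<id>.
    static String repositoryUrl(String serverUrl, String repositoryId) {
        return serverUrl + "/repositories/" + repositoryId;
    }

    // Build the GET URL for a SPARQL query against that repository.
    static String queryUrl(String serverUrl, String repositoryId, String sparql) {
        try {
            return repositoryUrl(serverUrl, repositoryId)
                    + "?query=" + URLEncoder.encode(sparql, "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new RuntimeException(e); // UTF-8 is always available
        }
    }

    public static void main(String[] args) throws Exception {
        String server = "http://localhost:8080/openrdf-sesame"; // assumption: Tomcat defaults
        String repo = "bigdata";                                // default repository ID from step 7
        String sparql = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";

        String url = queryUrl(server, repo, sparql);
        System.out.println("GET " + url);

        // Only hit the network when explicitly asked, so the sketch
        // compiles and runs safely even without a server present.
        if (args.length > 0 && args[0].equals("--run")) {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setRequestProperty("Accept", "application/sparql-results+xml");
            try (BufferedReader in =
                    new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            }
        }
    }
}
```

If the GET returns a SPARQL results document containing the statements DemoSesameServer loaded, your installation is working end to end.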