Lucidworks Spark/Solr Integration

Tools for reading data from Solr as a Spark RDD and indexing objects from Spark into Solr using SolrJ.

Getting started

Import the jar file via spark-shell

cd $SPARK_HOME
./bin/spark-shell --packages "com.lucidworks.spark:spark-solr:2.0.1"
                          or
./bin/spark-shell --jars spark-solr-2.0.1.jar

Connect to your SolrCloud Instance

via DataFrame

val options = Map(
  "collection" -> "{solr_collection_name}",
  "zkhost" -> "{zk_connect_string}"
)
val df = sqlContext.read.format("solr")
  .options(options)
  .load
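
Once loaded, the DataFrame behaves like any other Spark DataFrame, so a quick sanity check might look like the following sketch:

df.printSchema()  // fields are inferred from the Solr schema
df.count()        // number of documents matching the (default *:*) query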

via RDD

import com.lucidworks.spark.rdd.SolrRDD
val solrRDD = new SolrRDD(zkHost, collectionName, sc)

SolrRDD is an RDD of SolrDocument

via RDD (Java)

import com.lucidworks.spark.rdd.SolrJavaRDD;
import org.apache.spark.api.java.JavaRDD;

SolrJavaRDD solrRDD = SolrJavaRDD.get(zkHost, collection, jsc.sc());
JavaRDD<SolrDocument> resultsRDD = solrRDD.queryShards(solrQuery);

Download/Build the jar Files

Maven Central

The released jar files (1.1.2, 2.0.0, etc.) can be downloaded from the Maven Central repository. Maven Central also holds the shaded, sources, and javadoc jars for each release.

<dependency>
   <groupId>com.lucidworks.spark</groupId>
   <artifactId>spark-solr</artifactId>
   <version>2.0.1</version>
</dependency>
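
For sbt-based projects, the equivalent dependency (using the same coordinates and 2.0.1 release shown above) would look like:

libraryDependencies += "com.lucidworks.spark" % "spark-solr" % "2.0.1"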

Snapshots

Snapshots of spark-solr are built for every commit on the master branch. The snapshots can be accessed from OSS Sonatype.

Build from Source

mvn clean package -DskipTests

This will build 2 jars in the target directory:

  • spark-solr-${VERSION}.jar

  • spark-solr-${VERSION}-shaded.jar

For development builds, ${VERSION} will be something like 2.1.0-SNAPSHOT.

The first .jar is what you’d want to use if you were using spark-solr in your own project. The second is what you’d use to submit one of the included example apps to Spark.

Features

  • Send objects from Spark (Streaming or DataFrames) into Solr.

  • Read the results from a Solr query as a Spark RDD or DataFrame.

  • Stream documents from Solr using the /export handler (only works for exporting fields that have docValues enabled).

  • Read large result sets from Solr using cursors or the /export handler.

  • Data locality. If Spark workers and Solr processes are co-located on the same nodes, the partitions are placed on the nodes where the replicas are located.

Querying

Cursors

Cursors are used by default to pull documents out of Solr. The number of tasks allocated defaults to the number of shards available for the collection.

If your Spark cluster has more available executor slots than the number of shards, then you can increase parallelism when reading from Solr by splitting each shard into sub-ranges using a split field. A good candidate for the split field is the _version_ field that is attached to every document by the shard leader during indexing. See the splits section to enable and configure intra-shard splitting.

Cursors won’t work if the index changes while the query is running. Constrain your query to a static view of the index by passing additional Solr parameters via the solr.params option, as in the sketch below.
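
A minimal sketch of a read pinned to a static snapshot; the events collection name and timestamp_tdt field are placeholders for illustration:

// Hypothetical collection/field names; the fq passed via solr.params excludes
// documents indexed after the start of the current day, so the cursor stays stable.
val staticOptions = Map(
  "zkhost" -> "{zk_connect_string}",
  "collection" -> "events",
  "solr.params" -> "fq=timestamp_tdt:[* TO NOW/DAY]"
)
val staticDF = sqlContext.read.format("solr").options(staticOptions).load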

Streaming API (/export)

If the fields being queried have docValues enabled, the Streaming API can be used to pull documents from Solr in a true streaming fashion. This method is 8-10x faster than cursors. The use_export_handler option enables the Streaming API for DataFrame reads.
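
A sketch of an /export-based read, assuming the listed fields all have docValues enabled (the field names are illustrative):

// /export requires an explicit field list; every field must have docValues.
val exportOptions = Map(
  "zkhost" -> "{zk_connect_string}",
  "collection" -> "{solr_collection_name}",
  "fields" -> "id,author_s,favorited_b",
  "use_export_handler" -> "true"
)
val exportDF = sqlContext.read.format("solr").options(exportOptions).load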

Indexing

Objects can be sent to Solr via Spark Streaming or DataFrames. The schema is inferred from the DataFrame, and any fields that do not exist in the Solr schema are added via the Schema API. See ManagedIndexSchemaFactory.

See Index parameters for configuration and tuning.
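
As a minimal write sketch via the DataFrame API (zkhost and collection are placeholders; SaveMode.Overwrite is one reasonable choice, not the only one):

import org.apache.spark.sql.SaveMode

// Index the rows of an existing DataFrame into Solr.
val writeOptions = Map(
  "zkhost" -> "{zk_connect_string}",
  "collection" -> "{solr_collection_name}"
)
df.write.format("solr").options(writeOptions).mode(SaveMode.Overwrite).save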

Configuration and Tuning

The Solr DataSource supports a number of optional parameters that allow you to optimize performance when reading data from Solr. The only required parameters for the DataSource are zkhost and collection.

Query Parameters

query

Probably the most obvious option is to specify a Solr query that limits the rows you want to load into Spark. For instance, if we only wanted to load documents that mention "solr", we would do:

Usage: option("query","body_t:solr")

Default: *:*

If you don’t specify the "query" option, then all rows are read using the "match all documents" query (*:*).

fields

You can use the fields option to specify a subset of fields to retrieve for each document in your results:

Usage: option("fields","id,author_s,favorited_b,…​")

By default, all fields for each document are pulled back from Solr.

rows

You can use the rows option to specify the number of rows to retrieve from Solr per request. Behind the scenes, the implementation uses either deep-paging cursors or the Streaming API with response streaming, so it is usually safe to specify a large number of rows.

By default, the implementation requests 1000 rows per fetch, but if your documents are small you can increase this to 10000. Using too large a value can put pressure on the Solr JVM’s garbage collector.

Usage: option("rows","10000") Default: 1000

use_export_handler

This option is disabled by default. When enabled, results are exported from Solr via the /export handler, which streams data out of Solr. See Exporting Result Sets for more information.

The /export handler requires fields to be explicitly specified. Use the fields option or specify the fields in the query.

Usage: option("use_export_handler", "true")

Default: false

Increase Read Parallelism using Intra-Shard Splits

If your Spark cluster has more available executor slots than the number of shards, then you can increase parallelism when reading from Solr by splitting each shard into sub-ranges using a split field. Sub-range splitting enables faster fetching from Solr by increasing the number of parallel read tasks. This should only be used if there are enough computing resources in the Spark cluster.

Shard splitting is disabled by default.

splits

Enables shard splitting on the default field _version_.

Usage: option("splits", "true")

Default: false

The above option is equivalent to option("split_field", "_version_")

split_field

The field to split on can be changed using the split_field option.

Usage: option("split_field", "id") Default: _version_

splits_per_shard

Behind the scenes, the DataSource implementation tries to split each shard into evenly sized splits using filter queries. You can also split on a string-based keyword field, but it should have sufficient variance in its values to allow for creating enough splits to be useful. In other words, if your Spark cluster can handle 10 splits per shard but there are only 3 unique values in the keyword field, then you will only get 3 splits.

Keep in mind that this is only a hint to the split calculator and you may end up with a slightly different number of splits than what was requested.

Usage: option("splits_per_shard", "30")

Default: 20
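
A sketch enabling intra-shard splitting with roughly 30 splits per shard on the default _version_ field (zkhost and collection are placeholders):

val splitDF = sqlContext.read.format("solr")
  .option("zkhost", "{zk_connect_string}")
  .option("collection", "{solr_collection_name}")
  .option("splits", "true")            // splits on _version_ by default
  .option("splits_per_shard", "30")    // a hint; the actual split count may differ
  .load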

flatten_multivalued

This option is enabled by default and flattens multi-valued fields from Solr.

Usage: option("flatten_multivalued", "false")

Default: true

dv

The dv option fetches docValues that are indexed but not stored by using function queries. This should be used for Solr versions lower than 5.5.0.

Usage: option("dv", "true")

Default: false

Index parameters

soft_commit_secs

If specified, the soft_commit_secs option will be set via the Solr Config API during indexing.

Usage: option("soft_commit_secs", "10")

Default: None

batch_size

The batch_size option determines the number of documents that are sent to Solr per HTTP request during indexing. Set this option higher if the documents are small and memory is available.

Usage: option("batch_size", "10000")

Default: 500

gen_uniq_key

If documents are missing the unique key (derived from the Solr schema), then the gen_uniq_key option will generate a unique value for each document before indexing to Solr. Instead of this option, the UUIDUpdateProcessorFactory can be used to generate UUID values for documents that are missing the unique key field.

Usage: option("gen_uniq_key", "true")

Default: false
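
An indexing sketch that combines the tuning options above (zkhost and collection are placeholders; batch_size should be tuned to your document size):

import org.apache.spark.sql.SaveMode

df.write.format("solr")
  .option("zkhost", "{zk_connect_string}")
  .option("collection", "{solr_collection_name}")
  .option("batch_size", "10000")      // larger batches for small documents
  .option("gen_uniq_key", "true")     // generate ids for docs missing the unique key
  .mode(SaveMode.Overwrite)
  .save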

solr.params

The solr.params option can be used to specify arbitrary Solr parameters as a query string.

Usage: option("solr.params", "fq=userId:[10 TO 1000]&sort=userId desc")

Developing a Spark Application

The com.lucidworks.spark.SparkApp provides a simple framework for implementing Spark applications in Java. The class saves you from having to duplicate boilerplate code needed to run a Spark application, giving you more time to focus on the business logic of your application.

To leverage this framework, you need to develop a concrete class that either implements RDDProcessor or extends StreamProcessor depending on the type of application you’re developing.

RDDProcessor

Implement the com.lucidworks.spark.SparkApp$RDDProcessor interface for building a Spark application that operates on a JavaRDD, such as one pulled from a Solr query (see SolrQueryProcessor as an example).

StreamProcessor

Extend the com.lucidworks.spark.SparkApp$StreamProcessor abstract class to build a Spark streaming application.

See com.lucidworks.spark.example.streaming.oneusagov.OneUsaGovStreamProcessor or com.lucidworks.spark.example.streaming.TwitterToSolrStreamProcessor for examples of how to write a StreamProcessor.

Authenticating with Kerberized Solr

For background on Solr security, see: https://cwiki.apache.org/confluence/display/solr/Security.

The SparkApp framework allows you to pass the path to a JAAS authentication configuration file using the -solrJaasAuthConfig option.

For example, if you need to authenticate using the "solr" Kerberos principal, you need to create a JAAS configuration file named jaas-client.conf that sets the location of your Kerberos keytab file, such as:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/keytabs/solr.keytab"
  storeKey=true
  useTicketCache=true
  debug=true
  principal="solr";
};

To use this configuration to authenticate to Solr, you simply need to pass the path to jaas-client.conf created above using the -solrJaasAuthConfig option, such as:

spark-submit --master yarn-server \
  --class com.lucidworks.spark.SparkApp \
  $SPARK_SOLR_PROJECT/target/spark-solr-${VERSION}-shaded.jar \
  hdfs-to-solr -zkHost $ZK -collection spark-hdfs \
  -hdfsPath /user/spark/testdata/syn_sample_50k \
  -solrJaasAuthConfig=/path/to/jaas-client.conf
