Set of JUnit Rules/Extensions to easily load data to test your spring-data elasticsearch-based projects



spring-esdata-loader is a Java testing library that helps you write integration tests for your spring-data-elasticsearch-based projects. It lets you easily load data into Elasticsearch using your entity mappings (i.e. domain classes annotated with @Document, @Field, etc.), via dedicated JUnit 4 Rules or JUnit Jupiter Extensions.

The library reads all the metadata it needs (index name, index type, etc.) from the entity classes, uses it to create/refresh the index on the ES server, and then feeds the index with data using the ElasticsearchOperations bean present in your test application context.
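As a point of reference, these entity classes are plain Spring Data Elasticsearch mappings. A minimal sketch (the class, index, and field names here are made up for illustration):

```java
import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldType;

// Hypothetical entity: the library derives the index name ("authors")
// and index type ("author") from @Document before loading data into it.
@Document(indexName = "authors", type = "author")
public class Author {

    @Id
    private String id;

    @Field(type = FieldType.Text)
    private String firstName;

    @Field(type = FieldType.Text)
    private String lastName;

    // getters and setters omitted for brevity
}
```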


  • Simple API and no configuration required
  • Support for JUnit 4 via LoadEsDataRule, DeleteEsDataRule
  • Support for JUnit Jupiter via @LoadEsDataConfig / @LoadEsDataExtension or @DeleteEsDataConfig / @DeleteEsDataExtension
  • Built-in support for gzipped data
  • Multiple data formats (dump, manual)
  • Written in Java 8
  • Based on Spring (Data, Test)


spring-esdata-loader relies on dependencies that you already have in your Spring (Boot) project if you are using Elasticsearch with Spring Data.

Installation & Usage

The library is split into two independent sub-modules, both available on JCenter and Maven Central:

  • spring-esdata-loader-junit4 for testing with JUnit 4
  • spring-esdata-loader-junit-jupiter for testing with JUnit Jupiter

To get started:

  1. add the appropriate dependency to your Gradle or Maven project
Gradle

JUnit 4:

    dependencies {
        testImplementation 'com.github.spring-esdata-loader:spring-esdata-loader-junit4:2.0.0'
    }

JUnit Jupiter:

    dependencies {
        testImplementation 'com.github.spring-esdata-loader:spring-esdata-loader-junit-jupiter:2.0.0'
    }
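For Maven users, the equivalent dependencies (derived from the same coordinates as the Gradle snippets above) would be:

```xml
<!-- JUnit 4 -->
<dependency>
  <groupId>com.github.spring-esdata-loader</groupId>
  <artifactId>spring-esdata-loader-junit4</artifactId>
  <version>2.0.0</version>
  <scope>test</scope>
</dependency>

<!-- JUnit Jupiter -->
<dependency>
  <groupId>com.github.spring-esdata-loader</groupId>
  <artifactId>spring-esdata-loader-junit-jupiter</artifactId>
  <version>2.0.0</version>
  <scope>test</scope>
</dependency>
```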
  2. write your test class

Supported Data Formats

spring-esdata-loader currently supports 2 formats to load data into Elasticsearch: DUMP and MANUAL.

Dump data format

Here is an example:
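In the dump format, each line of the file is a complete Elasticsearch hit, including metadata such as _index, _type, and _source. The index and field values below are purely illustrative:

```json
{"_index":"authors","_type":"author","_id":"1","_source":{"id":"1","firstName":"John","lastName":"Doe"}}
{"_index":"authors","_type":"author","_id":"2","_source":{"id":"2","firstName":"Jane","lastName":"Doe"}}
```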


You can use a tool like elasticdump (requires NodeJS) to extract existing data from your Elasticsearch server and dump it into a JSON file.

$ npx elasticdump --input=http://localhost:9200/my_index --output=my_index_data.json

The above command runs elasticdump to extract data from an index named my_index on an ES server located at http://localhost:9200, and saves the result into a file named my_index_data.json.

If you change the --output part above to --output=$ | gzip > my_index_data.json.gz, the data will be gzipped automatically.

Manual data format

In this format, you specify your target data directly (no metadata like _index, _source, ...) as an array of JSON objects.

This is more suitable when you create test data from scratch (as opposed to dumping existing data from an ES server), because it is easier to tweak later on to accommodate future changes in your tests. (Thanks to @DPorcheron for the idea 💡!)

Here is an example:
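A manual-format file is just the JSON array itself; the field names below are purely illustrative:

```json
[
  { "id": "1", "firstName": "John", "lastName": "Doe" },
  { "id": "2", "firstName": "Jane", "lastName": "Doe" }
]
```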



Contributions are always welcome! Just fork the project, work on your feature/bug fix, and submit it. You can also contribute by creating issues. Please read the contribution guidelines for more information.


Copyright (c) 2019 Tine Kondo. Licensed under the MIT License (MIT)
