Quick Reference

DuyHai DOAN edited this page Jan 15, 2017 · 26 revisions


Let's consider the following entity for all the below examples:

    @Table
    public class User
    {
        @PartitionKey
        private Long userId;

        @Column
        private String firstname;

        @Column
        private String lastname;

        public User(){}

        public User(Long userId, String firstname, String lastname){...}

        //Getters & Setters
    }


Inserting an entity

    manager
        .crud()
        .insert(new User(10L,"John","DOE"))
        .execute();

Inserting an entity as JSON (from Cassandra 2.2.x and after)

    manager
        .crud()
        .insertJSON("{\"userid\": 10, \"firstname\": \"John\", \"lastname\": \"DOE\"}")
        .execute();


Updating all non-null fields for an entity

    manager
        .crud()
        .update(entity)
        .execute();


Updating a property for an entity

    manager
        .dsl()
        .update()
        .fromBaseTable()
        .firstname().Set("Jonathan")
        .where()
        .userId().Eq(10L)
        .execute();


Updating a property for an entity using JSON (from Cassandra 2.2.x and after)

    manager
        .dsl()
        .update()
        .fromBaseTable()
        .firstname().Set_FromJSON("\"Jonathan\"")
        .where()
        .userId().Eq_FromJSON("10")
        .execute();


Deleting an entity

Deleting an entity instance

    User user = new User(10L, null, null);
    manager
        .crud()
        .delete(user)
        .execute();

Deleting an entity by its id

    manager
        .crud()
        .deleteById(10L)
        .execute();

Deleting a whole partition

    manager
        .crud()
        .deleteByPartitionKeys(10L)
        .execute();


Deleting a property for an entity

Using the delete DSL

    manager
        .dsl()
        .delete()
        .biography()
        .fromBaseTable()
        .where()
        .userId().Eq(10L)
        .execute();

Using the delete DSL with JSON (from Cassandra 2.2.x and after)

    manager
        .dsl()
        .delete()
        .biography()
        .fromBaseTable()
        .where()
        .userId().Eq_FromJSON("10")
        .execute();


Using the update DSL

    manager
        .dsl()
        .update()
        .fromBaseTable()
        .biography().Set(null)
        .where()
        .userId().Eq(10L)
        .execute();

Remember that in CQL semantics, setting a column to null means deleting it.


Finding entities with clustering columns

For all examples in this section, let's consider the following clustered entity representing a tweet line

    @Table(table = "lines")
    public class TweetLine
    {

        @PartitionKey
        @Column("user_id")
        private Long userId;

        @ClusteringColumn(1)
        @Enumerated
        private LineType type;

        @ClusteringColumn(value = 2, asc = false) // Sort by descending order
        @TimeUUID //Time uuid type in Cassandra
        @Column("tweet_id")
        private UUID tweetId;

        @Column
        private String content;

        //Getters & Setters

        public enum LineType
        { USERLINE, TIMELINE, FAVORITELINE, MENTIONLINE }
    }

Find by partition key and clustering columns

Get the last 10 tweets from the timeline, starting from the tweet with lastUUID

    // Generate SELECT * FROM lines WHERE user_id = ? AND (type, tweet_id) < (?,?) AND type >= ?  
    List<TweetLine> tweets = manager
        .dsl()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .userId().Eq(10L)
        .type_And_tweetId().type_And_tweetId_Lt_And_type_Gte(LineType.TIMELINE, lastUUID, LineType.TIMELINE) 
        .limit(10)
        .getList();

Indexed query using native secondary index

    @Table
    public class User {

        @PartitionKey
        private UUID user_id;

        ...

        @Index
        @Column
        private int age;

        ... 
    }

    manager
        .indexed()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .age().Eq(32)
        ....

Indexed query using SASI

    @Table
    public class User {

        @PartitionKey
        private UUID user_id;

        ...

        @SASI(indexMode = IndexMode.CONTAINS, analyzed = true, analyzerClass = Analyzer.NON_TOKENIZING_ANALYZER, normalization = Normalization.LOWERCASE)
        @Column
        private String name;

        @SASI(indexMode = IndexMode.PREFIX, analyzed = false)
        @Column        
        private String country;

        @SASI
        @Column
        private int age;

        ... 
    }

    manager
        .indexed()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .name().Contains("John")
        .age().Gte_And_Lte(25, 35)
        .country().Eq("USA")
        ....

Indexed query using DSE Search

    @Table
    public class User {

        @PartitionKey
        private UUID user_id;

        ...

        @DSE_Search(fullTextSearchEnabled = true)
        @Column
        private String name;

        @DSE_Search
        @Column        
        private String country;

        @DSE_Search
        @Column
        private int age;

        ... 
    }

    //Standard usage
    manager
        .indexed()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .name().Contains("John")
        .age().Gte_And_Lte(25, 35)
        .country().Eq("USA")
        ....


    //Raw Predicate    
    manager
        .indexed()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .name().RawPredicate("*Jo??y*")
        ....

    //Raw Solr query with OR predicate
    manager
        .indexed()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .rawSolrQuery("(name:*John* OR login:jdoe*) AND age:[25 TO 35]")
        ....        

Iterating through a large set of entities

Fetch all timeline tweets in batches of 100

    Iterator<TweetLine> iterator = manager
        .dsl()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .userId().Eq(10L)
        .type_And_tweetId().type_And_tweetId_Lt_And_type_Gte(LineType.TIMELINE, lastUUID, LineType.TIMELINE)
        .withFetchSize(100) // Fetch Size = 100 for each page
        .iterator();

    while(iterator.hasNext())
    {
        TweetLine timelineTweet = iterator.next();
        ...
    }       

Deleting entities with clustering columns

Deleting all timeline tweets

    // Generate DELETE FROM lines WHERE user_id = ? AND type = ?
    manager
        .dsl()
        .delete()
        .allColumns_FromBaseTable()
        .where()
        .userId().Eq(10L)
        .type().Eq(LineType.TIMELINE)
        .execute();

Deleting the whole partition using the CRUD API

    // Generate DELETE FROM lines WHERE user_id = ?
    manager
        .crud()
        .deleteByPartitionKeys(10L)
        .execute();


Mapping UDT

To declare a JavaBean as UDT

    @UDT(keyspace = "...", name = "user_udt")
    public class UserUDT
    {
        @Column
        private Long userId;

        @Column
        private String firstname;

        @Column
        private String lastname;

        //Getters & Setters
    }

Then you can re-use the UDT in another entity

    @Table
    public class Tweet
    {
        @PartitionKey
        @TimeUUID
        private UUID id;

        @Column
        private String content;

        @Column
        @Frozen
        private UserUDT author;

        //Getters & Setters
    }   

Please note that the @Frozen annotation is mandatory for UDTs; unfrozen UDTs are only available from Cassandra 3.6 onward.


Accessing Meta Classes for Encoding/Decoding functions

Achilles annotation processor will generate, for each entity:

  1. An EntityClassName_Manager class
  2. An EntityClassName_AchillesMeta class

The EntityClassName_AchillesMeta class provides the following methods for encoding/decoding:

  1. public T createEntityFrom(Row row): self-explanatory
  2. public ConsistencyLevel readConsistency(Optional<ConsistencyLevel> runtimeConsistency): retrieve read consistency from runtime value, static configuration and default consistency configuration in Achilles
  3. public ConsistencyLevel writeConsistency(Optional<ConsistencyLevel> runtimeConsistency): retrieve write consistency from runtime value, static configuration and default consistency configuration in Achilles
  4. public ConsistencyLevel serialConsistency(Optional<ConsistencyLevel> runtimeConsistency): retrieve serial consistency from runtime value, static configuration and default consistency configuration in Achilles
  5. public InsertStrategy insertStrategy(): determine insert strategy using static annotation and Achilles global configuration
  6. public void triggerInterceptorsForEvent(Event event, T instance) : trigger all registered interceptors for this entity type on the provided instance, given the event type

Each meta class contains a public static field for each property. For example, given the following entity:

    @Table
    public class User {

        @PartitionKey
        private Long userId;

        @Column
        private String firstname;

        @Column
        private String lastname;

        @Column
        private Set<String> favoriteTags;

        ... 
    }

The User_AchillesMeta class will expose the following static property metas:

  1. User_AchillesMeta.userId
  2. User_AchillesMeta.firstname
  3. User_AchillesMeta.lastname
  4. User_AchillesMeta.favoriteTags

Each property meta class will expose:

  1. public VALUETO encodeFromJava(VALUEFROM javaValue): encode the given Java value into CQL-compatible value using the Codec System
  2. public VALUEFROM decodeFromGettable(GettableData gettableData): decode the value of the current property from the GettableData object. The GettableData is the common interface for com.datastax.driver.core.Row, com.datastax.driver.core.UDTValue and com.datastax.driver.core.TupleValue
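To make this contract concrete: an @Enumerated property such as LineType is, by default, stored as its name() in Cassandra and decoded back with valueOf(). The sketch below is a self-contained, simplified stand-in for a property meta's codec pair, not Achilles' real classes (the real methods operate on generated metas and the driver's GettableData):

```java
// Toy stand-in for a property meta's encode/decode pair. Both sides are
// plain Java here so the round-trip can be seen in isolation.
public class LineTypeCodec {

    enum LineType { USERLINE, TIMELINE, FAVORITELINE, MENTIONLINE }

    // Mirrors encodeFromJava: Java enum -> CQL-compatible text value
    static String encodeFromJava(LineType javaValue) {
        return javaValue.name();
    }

    // Mirrors decodeFromGettable: CQL text value -> Java enum
    static LineType decodeFromCql(String cqlValue) {
        return LineType.valueOf(cqlValue);
    }

    public static void main(String[] args) {
        String encoded = encodeFromJava(LineType.TIMELINE);
        System.out.println(encoded);                // TIMELINE
        System.out.println(decodeFromCql(encoded)); // TIMELINE
    }
}
```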


Querying Cassandra

Native query using the RAW API

    final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");
    List<TypedMap> rows = userManager
        .raw()
        .nativeQuery(statement, 100)
        .getList();

    for(TypedMap row : rows)
    {
        String firstname = row.getTyped("firstname");
        String lastname = row.getTyped("lastname");
        ...
    }

Typed query using the RAW API

    final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");

    List<User> users = userManager
        .raw()
        .typedQueryForSelect(statement, 100)
        .getList();

    for(User user : users)
    {
        ...
    }


Asynchronous execution

Asynchronous for the CRUD API

    final CompletableFuture<Empty> futureInsert = userManager
        .crud()
        .insert(new User(...))
        .executeAsync();


    final CompletableFuture<User> futureUser = userManager
        .crud()
        .findById(10L)
        .executeAsync();    


    final CompletableFuture<Empty> futureDelete = userManager
        .crud()
        .deleteById(10L)
        .executeAsync();

Note: Empty is a singleton enum to avoid returning a CompletableFuture of null
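Because every asynchronous call returns a standard CompletableFuture, results compose with the regular java.util.concurrent API. The sketch below shows the chaining pattern only; completedFuture is a stand-in for a real executeAsync() call, and the greeting helper is purely illustrative:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncComposition {

    // Stand-in for userManager.crud().findById(10L).executeAsync();
    // a real call would hit Cassandra instead of completing immediately.
    static CompletableFuture<String> findUserAsync(long userId) {
        return CompletableFuture.completedFuture("John DOE");
    }

    // Chain a transformation onto the async result without blocking.
    static CompletableFuture<String> greetingFor(long userId) {
        return findUserAsync(userId)
                .thenApply(name -> "Hello " + name);
    }

    public static void main(String[] args) {
        // join() blocks only here, at the edge of the program.
        System.out.println(greetingFor(10L).join());
    }
}
```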

Asynchronous for the DSL API

    final CompletableFuture<List<TweetLine>> futureTweets = tweetManager
        .dsl()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .userId().Eq(10L)
        .type().Eq(LineType.TIMELINE)
        .limit(30)
        .getListAsync();


    final CompletableFuture<Empty> futureUpdate = userManager
        .dsl()
        .update()
        .fromBaseTable()
        .lastname_Set("new lastname")
        .where()
        .userId().Eq(10L)
        .executeAsync();    


    final CompletableFuture<Empty> futureDelete = tweetManager
        .dsl()
        .delete()
        .allColumns_FromBaseTable()
        .where()
        .userId().Eq(10L)
        .type().Eq(LineType.TIMELINE)
        .executeAsync();

Asynchronous for the RAW API

    final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");
    CompletableFuture<List<TypedMap>> futureTypedMaps = userManager
        .raw()
        .nativeQuery(statement, 100)
        .getListAsync();

    CompletableFuture<List<User>> futureUsers = userManager
        .raw()
        .typedQueryForSelect(statement, 100)
        .getListAsync();


Getting the ExecutionInfo back

For the CRUD API

    final ExecutionInfo executionInfo = userManager
        .crud()
        .insert(new User(...))
        .executeWithStats();

    final ExecutionInfo executionInfo = userManager
        .crud()
        .deleteById(10L)
        .executeWithStats();


    final Tuple2<User, ExecutionInfo> resultWithExecInfo = userManager
        .crud()
        .findById(10L)
        .getWithStats();

For the DSL API

    final Tuple2<List<TweetLine>, ExecutionInfo> tweetsWithStats = tweetManager
        .dsl()
        .select()
        .allColumns_FromBaseTable()
        .where()
        .userId().Eq(10L)
        .type().Eq(LineType.TIMELINE)
        .limit(30)
        .getListWithStats();


    final ExecutionInfo executionInfo = userManager
        .dsl()
        .update()
        .fromBaseTable()
        .lastname_Set("new lastname")
        .where()
        .userId().Eq(10L)
        .executeWithStats();    


    final ExecutionInfo executionInfo = tweetManager
        .dsl()
        .delete()
        .allColumns_FromBaseTable()
        .where()
        .userId().Eq(10L)
        .type().Eq(LineType.TIMELINE)
        .executeWithStats();

For the RAW API

    final Statement statement = session.newSimpleStatement("SELECT firstname,lastname FROM user LIMIT :lim");
    Tuple2<List<TypedMap>, ExecutionInfo> typedMapsWithStats = userManager
        .raw()
        .nativeQuery(statement, 100)
        .getListWithStats();

    Tuple2<List<User>, ExecutionInfo> usersWithStats = userManager
        .raw()
        .typedQueryForSelect(statement, 100)
        .getListWithStats();


Working with consistency level

Defining consistency statically

    @Table
    @Consistency(read=ConsistencyLevel.ONE, write=ConsistencyLevel.QUORUM, serial = ConsistencyLevel.SERIAL)
    public class User
    {
        ...
    }

Setting consistency level at runtime

    userManager
        .crud()
        ...
        .withConsistencyLevel(ConsistencyLevel.QUORUM)
        ...

    userManager
        .dsl()
        ...
        .withConsistencyLevel(ConsistencyLevel.QUORUM)
        ...         


Working with TTL

Defining TTL statically

    @Table
    @TTL(1000)
    public class User
    {
        ...
    }

Setting TTL at runtime

    userManager
        .crud()
        .insert(...)
        ...
        .usingTimeToLive(10)
        ...

    userManager
        .dsl()
        .update()
        ...
        .usingTimeToLive(10)
        ...


Working with Timestamp

    userManager
        .crud()
        .insert(...)
        ...
        .usingTimestamp(new Date().getTime())
        ...

    userManager
        .crud()
        .deleteById(...)
        ...
        .usingTimestamp(new Date().getTime())
        ...

    userManager
        .dsl()
        .update()
        ...
        .usingTimestamp(new Date().getTime())
        ...

    userManager
        .dsl()
        .delete()
        ...
        .usingTimestamp(new Date().getTime())
        ... 


Working with Lightweight Transaction

API

    userManager
        .crud()
        .insert(...)
        ...
        .ifNotExists()
        ...

    userManager
        .crud()
        .deleteById(...)
        ...
        .ifExists()
        ...

    userManager
        .dsl()
        .update()
        ...
        .ifExists()
        ...     

    userManager
        .dsl()
        .update()
        .fromBaseTable()
        .firstname().Set("new firstname")
        ...
        .if_Firstname().Eq("previous_firstname")
        ...     

    userManager
        .dsl()
        .delete()
        ...
        .ifExists()
        ...     

    userManager
        .dsl()
        .delete()
        ...
        .if_Firstname().Eq("previous_firstname")
        ...     

LWT Result Listener

For tighter control over LWT inserts, updates, and deletes, Achilles lets you inject a listener for the result of LWT operations.

    LWTResultListener lwtListener = new LWTResultListener() {

        @Override
        public void onSuccess() {
            // Do something on success
            // Default method does NOTHING
        }

        @Override
        public void onError(LWTResult lwtResult) {

            //Get the type of the LWT operation that failed
            LWTResult.Operation operation = lwtResult.operation();

            // Print out current values
            TypedMap currentValues = lwtResult.currentValues(); 
            for(Entry<String,Object> entry: currentValues.entrySet()) {
                System.out.println(String.format("%s = %s",entry.getKey(), entry.getValue()));          
            }
        }
    };

    userManager
        .crud()
        .insert(new User(...))
        .ifNotExists()
        .withLWTResultListener(lwtListener)
        .execute();

    //OR

    userManager
        .crud()
        .insert(new User(...))
        .ifNotExists()
        .withLWTResultListener(lwtResult -> logger.error("Error : " + lwtResult))
        .execute();


Using counter type

    @Table(table = "retweet_count")
    public class Retweets {

        @PartitionKey
        @Column("user_id")
        private Long userId;

        @ClusteringColumn(1)
        @Enumerated
        private LineType type;

        @ClusteringColumn(value = 2, asc = false)
        @TimeUUID
        @Column("tweet_id")
        private UUID tweetId;

        @Counter
        @Column("direct_retweets")      
        private Long directRetweets;

        @Counter
        @Column("total_retweets")       
        private Long totalRetweets;

        //Getters & Setters
    }

Once the entity mapping is defined, the CRUD API for counter tables is restricted to the deleteById() and deleteByPartitionKeys() methods (there is no insert()).


Using materialized views (from Cassandra 3.0.x and after)

To declare a materialized view, use the @MaterializedView annotation:

    @MaterializedView(baseEntity = EntitySensor.class, view = "sensor_by_type")
    public class ViewSensorByType {

        @PartitionKey
        @Enumerated
        private SensorType type;

        @ClusteringColumn(1)
        private Long sensorId;

        @ClusteringColumn(2)
        private Long date;

        @Column
        private Double value;

        ...
        //Getters & setters
    }

    @Table(table = "sensor")
    public class EntitySensor {

        @PartitionKey
        private Long sensorId;

        @ClusteringColumn
        private Long date;

        @Enumerated
        @Column
        private SensorType type;

        @Column
        private Double value;

        ...
        //Getters & setters
    }

The view must reference its base table via the baseEntity attribute, and it must re-use all the columns of the base table's primary key, possibly in a different order.

Achilles generates only SELECT APIs for those views; UPDATE and DELETE operations are not possible.

See Materialized View Mapping for more details


Function mapping (from Cassandra 2.2.x and after)

You can declare the signatures of your functions in a class or interface so that Achilles can generate a type-safe API to invoke them in the Select DSL.

For this, use the @FunctionRegistry annotation:

    @FunctionRegistry
    public interface MyFunctionRegistry {

        Long convertToLong(String longValue);
    }

Please note that you still need to declare your user-defined functions in Cassandra yourself; Achilles only uses the function signature for code generation, it does not create the function.

For more details, see Functions Mapping


Simple object mapping

You can use the Manager object for simple object mapping

    // Execution of custom query
    Row row = session.execute(...).one();

    User user = userManager.mapFromRow(row);


Getting native Session and Cluster object

You can retrieve the native Session and Cluster objects from the Manager


    Session session = userManager.getNativeSession();

    Cluster cluster = userManager.getNativeCluster();


Generating bound statements/query string from the APIs

Generating com.datastax.driver.core.BoundStatement

    BoundStatement bs = userManager
        .crud()
        ...
        .generateAndGetBoundStatement();

    BoundStatement bs = userManager
        .dsl()
        ...
        .generateAndGetBoundStatement();

Generating query string

    String statement = userManager
        .crud()
        ...
        .getStatementAsString();

    String statement = userManager
        .dsl()
        ...
        .getStatementAsString();


Extract bound values from the APIs

Extract raw bound values

     List<Object> boundValues = userManager
        .crud()
        ...
        .getBoundValues();

    List<Object> boundValues = userManager
        .dsl()
        ...
        .getBoundValues();

Extract encoded bound values; the encoding relies on the Achilles Codec System

     List<Object> encodedBoundValues = userManager
        .crud()
        ...
        .getEncodedBoundValues();

    List<Object> encodedBoundValues = userManager
        .dsl()
        ...
        .getEncodedBoundValues();


Injecting schema name at runtime

Normally you define the keyspace/table name statically using the @Table annotation. In a multi-tenant environment, however, the keyspace/table name is not known ahead of time but only at runtime. For this, Achilles defines the SchemaNameProvider interface:

    public interface SchemaNameProvider {

        /**
         * Provide keyspace name for entity class
         */
        <T> String keyspaceFor(Class<T> entityClass);

        /**
         * Provide table name for entity class
         */
        <T> String tableNameFor(Class<T> entityClass);
    }
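A minimal implementation could look like the sketch below. The interface is re-declared locally so the snippet compiles on its own, and the per-tenant naming convention (keyspace "tenant_<id>", table name = lower-cased simple class name) is purely an assumption for illustration:

```java
// Re-declared locally so the sketch is self-contained; in a real project
// you would implement the Achilles-provided SchemaNameProvider instead.
interface SchemaNameProvider {
    <T> String keyspaceFor(Class<T> entityClass);
    <T> String tableNameFor(Class<T> entityClass);
}

public class TenantSchemaNameProvider implements SchemaNameProvider {

    private final String tenantId;

    public TenantSchemaNameProvider(String tenantId) {
        this.tenantId = tenantId;
    }

    @Override
    public <T> String keyspaceFor(Class<T> entityClass) {
        // One keyspace per tenant: "tenant_<id>" (illustrative convention)
        return "tenant_" + tenantId;
    }

    @Override
    public <T> String tableNameFor(Class<T> entityClass) {
        // Keep a default-style table name: lower-cased simple class name
        return entityClass.getSimpleName().toLowerCase();
    }

    public static void main(String[] args) {
        SchemaNameProvider provider = new TenantSchemaNameProvider("acme");
        System.out.println(provider.keyspaceFor(Object.class)); // tenant_acme
        System.out.println(provider.tableNameFor(Object.class)); // object
    }
}
```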

You can implement this interface and inject the schema name provider at runtime. Both CRUD API and DSL API accept dynamic binding of schema name:

    final SchemaNameProvider dynamicProvider = ...;

    userManager
        .crud()
        .withSchemaNameProvider(dynamicProvider)
        ...
        .execute();

    userManager
        .dsl()
        .select()
        ...
        .from(dynamicProvider)
        .where()
        ...

    userManager
        .dsl()
        .update()
        .from(dynamicProvider)
        ...
        .where()
        ...

    userManager
        .dsl()
        .delete()
        ...
        .from(dynamicProvider)
        ...
        .where()
        ... 


Generating DDL scripts

Using DDL logs

Sometimes it is convenient to let Achilles generate the CREATE TABLE scripts for you. To do so, enable DEBUG level on the ACHILLES_DDL_SCRIPT logger:

    <logger name="ACHILLES_DDL_SCRIPT">
        <level value="DEBUG" />
    </logger>

Using the SchemaGenerator

Achilles provides a module achilles-schema-generator to help you generate CQL schema scripts for your entities. More details here


Generating DML statements

To debug Achilles behavior, you can enable DML statement logging by setting the DEBUG level on the logger ACHILLES_DML_STATEMENT:

    <logger name="ACHILLES_DML_STATEMENT">
        <level value="DEBUG" />
    </logger>