Feature/sasi docs (#701)
* Integrating tut typechecking of documentation and increasing Cassandra version

* Adding docs and more tests for SASI indexes.

* Removing more unused code.

* Removing numeric operations.

* Adding table markdown syntax.

* Updating links.

* Fixing one more doc link

* More docs
alexflav23 committed Jun 17, 2017
1 parent ab00eea commit 0ad1afd
Showing 17 changed files with 312 additions and 83 deletions.
11 changes: 8 additions & 3 deletions build.sbt
@@ -21,7 +21,7 @@ import sbt.Defaults._
lazy val Versions = new {
val logback = "1.2.3"
val sbt = "0.13.13"
-  val util = "0.34.0"
+  val util = "0.36.0"
val json4s = "3.5.1"
val datastax = "3.2.0"
val scalatest = "3.0.1"
@@ -145,10 +145,15 @@ lazy val phantom = (project in file("."))
).settings(
name := "phantom",
moduleName := "phantom",
-    pgpPassphrase := Publishing.pgpPass
+    pgpPassphrase := Publishing.pgpPass,
+    tutSourceDirectory := {
+      val directory = baseDirectory.value / "docs"
+      println(directory.getAbsolutePath.toString)
+      directory
+    }
).aggregate(
fullProjectList: _*
-).enablePlugins(CrossPerProjectPlugin)
+).enablePlugins(CrossPerProjectPlugin).enablePlugins(TutPlugin)

lazy val phantomDsl = (project in file("phantom-dsl"))
.settings(sharedSettings: _*)
2 changes: 1 addition & 1 deletion build/install_cassandra.sh
@@ -30,7 +30,7 @@ jdk_version_8_or_more=$(check_java_version)

if [ "$jdk_version_8_or_more" = true ];
then
-  cassandra_version="3.2"
+  cassandra_version="3.8"
else
cassandra_version="2.2.9"
fi
8 changes: 4 additions & 4 deletions docs/basics/batches.md
@@ -20,7 +20,7 @@ phantom also supports `COUNTER` batch updates and `UNLOGGED` batch updates.
<a id="logged-batch-statements">LOGGED batch statements</a>
===========================================================

-```scala
+```tut
import com.outworkers.phantom.dsl._
@@ -35,7 +35,7 @@ Batch.logged
============================================================
<a href="#table-of-contents">back to top</a>

-```scala
+```tut
import com.outworkers.phantom.dsl._
@@ -48,7 +48,7 @@ Batch.counter
Counter operations also offer a standard overloaded operator syntax, so instead of `increment` and `decrement`
you can also use `+=` and `-=` to achieve the same thing.

-```scala
+```tut
import com.outworkers.phantom.dsl._
@@ -61,7 +61,7 @@ Batch.counter
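The overloaded-operator pattern described above can be illustrated with a plain-Scala sketch — a hypothetical model of how `+=`/`-=` delegate to `increment`/`decrement`, not phantom's actual internals:

```scala
// Hypothetical model: operator aliases delegating to named methods.
// This is NOT phantom's implementation, just the delegation pattern.
case class CounterUpdate(column: String, delta: Long)

class CounterOps(column: String) {
  def increment(by: Long): CounterUpdate = CounterUpdate(column, by)
  def decrement(by: Long): CounterUpdate = CounterUpdate(column, -by)

  // The operators simply forward to the named methods.
  def +=(by: Long): CounterUpdate = increment(by)
  def -=(by: Long): CounterUpdate = decrement(by)
}
```

Both spellings produce the same update, so which one you use is purely a matter of style.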
<a id="unlogged-batch-statements">UNLOGGED batch statements</a>
============================================================

-```scala
+```tut
import com.outworkers.phantom.dsl._
31 changes: 23 additions & 8 deletions docs/basics/database.md
@@ -25,7 +25,8 @@ However, from an app or service consumer perspective, when pulling in dependenci

That's why phantom offers a clear separation between the various consumer levels. When we create a table, we mix in `RootConnector`.

-```scala
+```tut
case class Recipe(
url: String,
description: Option[String],
@@ -55,7 +56,7 @@ class Recipes extends CassandraTable[Recipes, Recipe] with RootConnector {
```
The purpose of `RootConnector` is simple: it says an implementor will specify the `session` and `keySpace` of choice. It looks like this, and it's available in phantom by default via the default import, `import com.outworkers.phantom.dsl._`.

-```scala
+```tut
import com.datastax.driver.core.Session
@@ -69,7 +70,10 @@ trait RootConnector {

Later on, when we start creating databases, we pass in a `ContactPoint` — or, in plainer terms, a `connector` — which fully encapsulates a Cassandra connection with all the details and settings required to run an application.

-```scala
+```tut
+import com.outworkers.phantom.dsl._
class RecipesDatabase(
override val connector: CassandraConnection
) extends Database[RecipesDatabase](connector) {
@@ -98,7 +102,9 @@ Sometimes developers can choose to wrap a `database` further, into specific data

And this is why we offer another native construct, namely the `DatabaseProvider` trait. This is a simple but powerful trait that's generally used cake-pattern style.

-```scala
+```tut
+import com.outworkers.phantom.dsl._
trait DatabaseProvider[T <: Database[T]] {
def database: T
@@ -107,7 +113,10 @@ trait DatabaseProvider[T <: Database[T]] {

Its design is straightforward: it provides a way of injecting a reference to a particular `database` into a consumer. For the sake of argument, let's say we are designing a `UserService` backed by Cassandra and phantom. Here's how it might look:

-```scala
+```tut
+import scala.concurrent.Future
+import com.outworkers.phantom.dsl._
class UserDatabase(
override val connector: CassandraConnection
@@ -158,7 +167,9 @@ Let's go ahead and create two complete examples. We are going to make some simpl

Let's look at the most basic example of defining a test connector, which will use all default settings plus a call to `noHeartbeat`, which disables heartbeats by setting a pooling option to 0 inside the `ClusterBuilder`. We will go through that in more detail in a second, to show how we can specify more complex options using `ContactPoint`.

-```scala
+```tut
+import com.outworkers.phantom.dsl._
object TestConnector {
val connector = ContactPoint.local
@@ -177,7 +188,9 @@ It may feel verbose or slightly too much at first, but the objects wrapping the

And this is how you would use that provider trait. We're going to assume ScalaTest is the testing framework in use, but of course any framework will do.

-```scala
+```tut
+import com.outworkers.phantom.dsl._
import org.scalatest.{BeforeAndAfterAll, OptionValues, Matchers, FlatSpec}
import org.scalatest.concurrent.ScalaFutures
@@ -245,7 +258,9 @@ To override the settings that will be used during schema auto-generation at `Dat

When you later call `database.create`, `database.createAsync`, or any other flavour of auto-generation on a `Database`, the `autocreate` overridden below will be respected.

-```scala
+```tut
+import com.outworkers.phantom.dsl._
class UserDatabase(
override val connector: CassandraConnection
22 changes: 10 additions & 12 deletions docs/basics/tables.md
@@ -24,7 +24,7 @@ case class Recipe(
uid: UUID
)

-abstract class Recipes extends CassandraTable[Recipes, Recipe] with RootConnector {
+abstract class Recipes extends Table[Recipes, Recipe] {

object url extends StringColumn with PartitionKey

@@ -72,13 +72,11 @@ implemented via `com.outworkers.phantom.NamingStrategy`. These control only the
not the columns or anything else.


-```
-| Strategy                    | Casing                        |
-| =========================== | ============================= |
-| NamingStrategy.CamelCase    | lowCamelCase                  |
-| NamingStrategy.SnakeCase    | low_snake_case                |
-| NamingStrategy.Default      | Preserves the user input      |
-```
+| Strategy                      | Casing                        |
+| ----------------------------- | ----------------------------- |
+| `NamingStrategy.CamelCase`    | lowCamelCase                  |
+| `NamingStrategy.SnakeCase`    | low_snake_case                |
+| `NamingStrategy.Default`      | Preserves the user input      |
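The two non-default casings in the table above can be illustrated with a plain-Scala sketch of the transformations — an illustrative model, not phantom's actual `NamingStrategy` code:

```scala
// Illustrative model of the casings in the table above.
// NOT phantom's implementation: just the two transformations.
object NamingDemo {

  // NamingStrategy.CamelCase: lower-case the first character only.
  def lowCamelCase(name: String): String =
    if (name.isEmpty) name else s"${name.head.toLower}${name.tail}"

  // NamingStrategy.SnakeCase: split on upper-case boundaries, join with '_'.
  def lowSnakeCase(name: String): String =
    name.flatMap(c => if (c.isUpper) s"_${c.toLower}" else c.toString)
      .stripPrefix("_")
}
```

For a table class named `ExampleRecord`, these would yield `exampleRecord` and `example_record` respectively, while `NamingStrategy.Default` would leave the name untouched.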

All available imports will have two flavours. It's important to note they only work
when imported in the scope where tables are defined. That's where the macro will evaluate
@@ -110,7 +108,7 @@ case class ExampleModel (
test: Option[Int]
)

-abstract class ExampleRecord extends CassandraTable[ExampleRecord, ExampleModel] with RootConnector {
+abstract class ExampleRecord extends Table[ExampleRecord, ExampleModel] {
object id extends UUIDColumn with PartitionKey
object timestamp extends DateTimeColumn with ClusteringOrder with Ascending
object name extends StringColumn
@@ -232,7 +230,7 @@ case class Record(
email: String
)

-abstract class MyTable extends CassandraTable[MyTable, Record] {
+abstract class MyTable extends Table[MyTable, Record] {

object id extends UUIDColumn with PartitionKey
object name extends StringColumn
@@ -296,7 +294,7 @@ case class Record(
email: String
)

-abstract class RecordsByCountry extends CassandraTable[RecordsByCountry, Record] {
+abstract class RecordsByCountry extends Table[RecordsByCountry, Record] {
object countryCode extends StringColumn with PartitionKey
object id extends UUIDColumn with PrimaryKey
object name extends StringColumn
@@ -343,7 +341,7 @@ case class Record(
email: String
)

-abstract class RecordsByCountryAndRegion extends CassandraTable[RecordsByCountryAndRegion, Record] {
+abstract class RecordsByCountryAndRegion extends Table[RecordsByCountryAndRegion, Record] {
object countryCode extends StringColumn with PartitionKey
object region extends StringColumn with PartitionKey
object id extends UUIDColumn with PrimaryKey
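To make the compound partition key above concrete, here is a hypothetical helper (not phantom's query builder) that renders the CQL `PRIMARY KEY` clause implied by a set of partition-key and clustering columns:

```scala
// Hypothetical helper: renders the CQL PRIMARY KEY clause implied by
// partition-key and clustering columns. Illustrative only; phantom
// generates the real schema internally.
object PrimaryKeySketch {
  def primaryKey(partition: Seq[String], clustering: Seq[String]): String = {
    // The partition columns are grouped in their own parentheses,
    // followed by any clustering columns.
    val parts = partition.mkString("(", ", ", ")") +: clustering
    s"PRIMARY KEY (${parts.mkString(", ")})"
  }
}
```

For `RecordsByCountryAndRegion` this yields `PRIMARY KEY ((country_code, region), id)` (assuming snake_case column names): both the country and the region are needed to locate a partition, and `id` distinguishes rows within it.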
26 changes: 19 additions & 7 deletions docs/migrate.md
@@ -18,6 +18,7 @@ Feedback and contributions are welcome, and we are happy to prioritise any cruci
- [x] Revert all Outworkers projects and all their dependencies to the Apache V2 License.
- [x] Publish `outworkers-util` and all sub modules to Maven Central.
- [x] Publish `outworkers-diesel` and all sub modules to Maven Central.
+- [x] Drop all dependencies outside of `shapeless` and `datastax-java-driver` from `phantom-dsl`.
- [x] Remove all non standard resolvers from Phantom, all dependencies should build from JCenter and Maven Central by default with no custom resolvers required.
- [x] Change all package names and resolvers to reflect our business name change from `Websudos` to `Outworkers`.
- [x] Create a `1.30.x` release that allows users to transition to a no custom resolver version of Phantom 1.0.x even before 2.0.0 is stable.
@@ -33,6 +34,16 @@ Feedback and contributions are welcome, and we are happy to prioritise any cruci
- [x] Generate the `fromRow` if the fields match, they are in arbitrary order, but there are no duplicate types.
- [x] Allow arbitrary inheritance and usage patterns for Cassandra tables, and resolve inheritance resolutions with macros to correctly identify desired table structures.

+#### Vast improvements
+
+- [x] Re-implement primitive types using native macro derived marshallers/unmarshallers.
+- [x] Re-implement prepared statement binds to use macro derived serializers.
+- [x] Add debug strings to `BatchQuery`.
+- [x] Use `AnyVal` in the `ImplicitMechanism` where possible.
+- [x] Enforce `store` method typechecking at compile time.
+- [x] Use `shapeless.HList` as the core primitive inside table store methods.
+- [x] Add advanced debugging to the macro API.

#### Tech debt

- [x] Correctly implement Cassandra pagination using iterators; currently, setting a `fetchSize` on a query does not correctly propagate or consume the resulting iterator, which leads to API inconsistencies and `PagingState` not being set on any `ResultSet`.
@@ -42,11 +53,12 @@ Feedback and contributions are welcome, and we are happy to prioritise any cruci
#### Features

- [ ] Native support for multi-tenanted environments via cached sessions.
-- [ ] Case sensitive CQL.
-- [ ] Materialized views.
-- [ ] SASI index support
+- [x] Case sensitive CQL.
+- [ ] Materialized views. (phantom pro)
+- [x] SASI index support
- [ ] Support for `PER PARTITION LIMIT` in `SelectQuery`.
- [ ] Support for `GROUP BY` in `SelectQuery`.
+- [x] Implement a compact table DSL that does not require passing in `this` to columns.

#### Scala 2.12 support

@@ -55,7 +67,7 @@ Feedback and contributions are welcome, and we are happy to prioritise any cruci
- [x] Add support for Scala 2.12 in `phantom-dsl`
- [x] Add support for Scala 2.12 in `phantom-connectors`
- [x] Add support for Scala 2.12 in `phantom-example`
-- [ ] Add support for Scala 2.12 in `phantom-streams`
+- [x] Add support for Scala 2.12 in `phantom-streams`
- [x] Add support for Scala 2.12 in `phantom-thrift`
- [x] Add support for Scala 2.12 in `phantom-finagle`

@@ -115,19 +127,19 @@ As of phantom 2.5.0, if you have a manually defined method to insert records int
For a full set of details on how the `store` method is generated, refer to [the store method](basics/tables#store-methods) docs.
This works because phantom auto-generates a basic store method like the one below.

-```scala
+```tut
import com.outworkers.phantom.dsl._
import scala.concurrent.duration._
case class Record(
-  id: java.util.UUID,
+  id: UUID,
name: String,
firstName: String,
email: String
)
-abstract class MyTable extends CassandraTable[MyTable, Record] {
+abstract class MyTable extends Table[MyTable, Record] {
object id extends UUIDColumn with PartitionKey
object name extends StringColumn
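Conceptually, the generated `store` maps each case class field to its column in declaration order. A simplified plain-Scala model of that mapping (not the actual macro output) looks like this:

```scala
import java.util.UUID

// Simplified model of the field-to-column mapping an auto-generated
// store performs. NOT the real macro output.
case class Record(id: UUID, name: String, firstName: String, email: String)

object StoreModel {
  // The column -> value pairs an INSERT would bind, in field declaration order.
  def bindings(r: Record): List[(String, Any)] = List(
    "id" -> r.id,
    "name" -> r.name,
    "firstName" -> r.firstName,
    "email" -> r.email
  )
}
```

This is why the macro only needs the fields of `Record` to line up with the columns of `MyTable`: once they do, the bind order is fully determined.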
