Commit

Updated documentation
seratch committed Mar 7, 2014
1 parent 279fd2d commit c285f73
Showing 9 changed files with 261 additions and 35 deletions.
6 changes: 6 additions & 0 deletions config.rb
@@ -58,6 +58,12 @@

I18n.enforce_available_locales = false

# Latest Skinny Framework version
@version = "1.7.4"
set :version, @version
@latest_version = "[1.7,)"
set :latest_version, @latest_version

# Build-specific configuration
configure :build do
# For example, change the Compass output style for deployment
23 changes: 23 additions & 0 deletions source/documentation/connection-pool.html.md
@@ -92,6 +92,29 @@ def finalize() = {
### Replacing ConnectionPool at Runtime
<hr/>
You can safely replace ConnectionPool settings at runtime.
The old pool won't be abandoned until all of its borrowed connections have been closed.

```java
def doSomething = {
ConnectionPool.singleton("jdbc:h2:mem:db1", "user", "pass")
DB localTx { implicit s =>
// long transaction...

// overwrite singleton CP
ConnectionPool.singleton("jdbc:h2:mem:db2", "user", "pass")

// The db1 connection pool is still available until this transaction is committed.
// Newly borrowed connections will access db2.
}
}
```

<hr/>
### Using Another ConnectionPool Implementation
<hr/>

If you want to use a connection provider other than Commons DBCP, you can specify your own `ConnectionPoolFactory` as follows:

```java
// (example elided in this diff view)
```
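As a minimal sketch of such a factory — the trait signature here is an assumption based on the ScalikeJDBC 1.x API, and `MyConnectionPool` is a hypothetical custom pool implementation, so verify against your version:

```java
// Hypothetical factory; verify ConnectionPoolFactory's exact signature
// against your ScalikeJDBC version.
object MyConnectionPoolFactory extends ConnectionPoolFactory {
  override def apply(url: String, user: String, password: String,
    settings: ConnectionPoolSettings = ConnectionPoolSettings()): ConnectionPool = {
    // Delegate to your own ConnectionPool subclass.
    new MyConnectionPool(url, user, password)
  }
}
```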
90 changes: 75 additions & 15 deletions source/documentation/operations.html.md
Expand Up @@ -8,12 +8,10 @@ title: Operations - ScalikeJDBC
### Query API
<hr/>

There are various query APIs. All of them (`single`, `first`, `list` and `foreach`) will execute `java.sql.PreparedStatement#executeQuery()`.

<hr/>
#### Single / Optional Result for Query
<hr/>

`single` returns the matched single row as an `Option` value. If more than one row matches, an exception will be thrown.
@@ -25,56 +23,86 @@ val id = 123

// simple example
val name: Option[String] = DB readOnly { implicit session =>
sql"select name from emp where id = ${id}".map(rs => rs.string("name")).single.apply()
}

// defined mapper as a function
val nameOnly = (rs: WrappedResultSet) => rs.string("name")
val name: Option[String] = DB readOnly { implicit session =>
sql"select name from emp where id = ${id}".map(nameOnly).single.apply()
}

// define a class to map the result
case class Emp(id: String, name: String)
val emp: Option[Emp] = DB readOnly { implicit session =>
sql"select id, name from emp where id = ${id}"
.map(rs => Emp(rs.string("id"), rs.string("name"))).single.apply()
}

// QueryDSL
object Emp extends SQLSyntaxSupport[Emp] {
def apply(e: ResultName[Emp])(rs: WrappedResultSet): Emp =
new Emp(id = rs.get(e.id), name = rs.get(e.name))
}
val e = Emp.syntax("e")
val emp: Option[Emp] = DB readOnly { implicit session =>
withSQL { select.from(Emp as e).where.eq(e.id, id) }.map(Emp(e)).single.apply()
}
```

You can learn about QueryDSL in detail here:

[/documentation/query-dsl](/documentation/query-dsl.html)


<hr/>
#### First Result from Multiple Results
<hr/>

`first` returns the first row of matched rows as an `Option` value.

```java
val name: Option[String] = DB readOnly { implicit session =>
sql"select name from emp".map(rs => rs.string("name")).first.apply()
}

val e = Emp.syntax("e")
val name: Option[String] = DB readOnly { implicit session =>
withSQL { select(e.result.name).from(Emp as e) }.map(_.string(e.name)).first.apply()
}
```

<hr/>
#### List Results
<hr/>

`list` returns the matched rows as a `scala.collection.immutable.List`.

```java
val name: List[String] = DB readOnly { implicit session =>
sql"select name from emp".map(rs => rs.string("name")).list.apply()
}

val e = Emp.syntax("e")
val name: List[String] = DB readOnly { implicit session =>
withSQL { select(e.result.name).from(Emp as e) }.map(_.string(e.name)).list.apply()
}
```

<hr/>
#### Foreach Operation
<hr/>

`foreach` allows you to perform side effects while iterating over rows. This API is useful for handling a large `ResultSet` without loading it all into memory.

```java
DB readOnly { implicit session =>
sql"select name from emp" foreach { rs => out.write(rs.string("name")) }
}

val e = Emp.syntax("e")
DB readOnly { implicit session =>
withSQL { select(e.result.name).from(Emp as e) }.foreach { rs => out.write(rs.string(e.name)) }
}
```

@@ -85,6 +113,8 @@ DB readOnly { implicit session =>
`update` executes `java.sql.PreparedStatement#executeUpdate()`.

```java
import scalikejdbc._, SQLInterpolation._

DB localTx { implicit session =>
sql"""insert into emp (id, name, created_at) values (${id}, ${name}, ${DateTime.now})"""
.update.apply()
@@ -93,6 +123,24 @@ DB localTx { implicit session =>
sql"update emp set name = ${newName} where id = ${id}".update.apply()
  sql"delete from emp where id = ${id}".update.apply()
}

val column = Emp.column
DB localTx { implicit s =>
withSQL {
insert.into(Emp).namedValues(
column.id -> id,
column.name -> name,
column.createdAt -> DateTime.now)
}.update.apply()

val id: Long = withSQL {
  insert.into(Emp).namedValues(column.name -> name, column.createdAt -> sqls.currentTimestamp)
}.updateAndReturnGeneratedKey.apply()

withSQL { update(Emp).set(column.name -> newName).where.eq(column.id, id) }.update.apply()
withSQL { delete.from(Emp).where.eq(column.id, id) }.update.apply()
}
```

<hr/>
@@ -105,6 +153,8 @@ DB localTx { implicit session =>
DB autoCommit { implicit session =>
sql"create table emp (id integer primary key, name varchar(30))".execute.apply()
}

// QueryDSL doesn't support DDL yet.
```

<hr/>
@@ -114,8 +164,18 @@ DB autoCommit { implicit session =>
`batch` and `batchByName` execute `java.sql.PreparedStatement#executeBatch()`.

```java
import scalikejdbc._, SQLInterpolation._

DB localTx { implicit session =>
val batchParams: Seq[Seq[Any]] = (2001 to 3000).map(i => Seq(i, "name" + i))
sql"insert into emp (id, name) values (?, ?)".batch(batchParams: _*).apply()
}

val column = Emp.column
DB localTx { implicit session =>
val batchParams: Seq[Seq[Any]] = (2001 to 3000).map(i => Seq(i, "name" + i))
withSQL {
insert.into(Emp).namedValues(column.id -> sqls.?, column.name -> sqls.?)
}.batch(batchParams: _*).apply()
}
```
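`batchByName` is mentioned above but not demonstrated. A minimal sketch, assuming the 1.x API where each parameter set is a sequence of `Symbol -> value` pairs bound to `{name}`-style placeholders:

```java
DB localTx { implicit session =>
  // Each inner Seq maps placeholder names to values for one batched row.
  val batchParams: Seq[Seq[(Symbol, Any)]] =
    (2001 to 3000).map(i => Seq('id -> i, 'name -> ("name" + i)))
  sql"insert into emp (id, name) values ({id}, {name})"
    .batchByName(batchParams: _*).apply()
}
```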
131 changes: 131 additions & 0 deletions source/documentation/playframework-support.html.md
@@ -1 +1,132 @@
---
title: Play Framework Support - ScalikeJDBC
---

## Play Framework Support

<hr/>
### How to setup

See [/documentation/setup](/documentation/setup.html).

<hr/>
### Configuration

Here are some configuration examples. Basically, it's very simple:

#### conf/application.conf

```sh
# Database configuration
# ~~~~~
# You can declare as many datasources as you want.
# By convention, the default datasource is named `default`
db.default.driver=org.h2.Driver
db.default.url="jdbc:h2:mem:play"
db.default.user="sa"
db.default.password="sa"

db.secondary.driver=org.h2.Driver
db.secondary.url="jdbc:h2:mem:play2"
db.secondary.user="sa"
db.secondary.password="sa"

# ScalikeJDBC original configuration

#db.default.poolInitialSize=10
#db.default.poolMaxSize=10
#db.default.poolValidationQuery=

scalikejdbc.global.loggingSQLAndTime.enabled=true
scalikejdbc.global.loggingSQLAndTime.logLevel=debug
scalikejdbc.global.loggingSQLAndTime.warningEnabled=true
scalikejdbc.global.loggingSQLAndTime.warningThresholdMillis=1000
scalikejdbc.global.loggingSQLAndTime.warningLogLevel=warn

#scalikejdbc.play.closeAllOnStop.enabled=true

# You can disable the default DB plugin
dbplugin=disabled
evolutionplugin=disabled

# scalikejdbc logging
logger.scalikejdbc=DEBUG
```

#### Fixtures

Fixtures are optional. If you don't need them, there's no need to use them.

##### conf/application.conf

```sh
db.default.fixtures.test=[ "project.sql", "project_member.sql" ]
```

##### conf/db/fixtures/default/project.sql

```sql
# --- !Ups

INSERT INTO project (id, name, folder) VALUES (1, 'Play 2.0', 'Play framework');
INSERT INTO project (id, name, folder) VALUES (2, 'Play 1.2.4', 'Play framework');
INSERT INTO project (id, name, folder) VALUES (3, 'Website', 'Play framework');
INSERT INTO project (id, name, folder) VALUES (4, 'Secret project', 'Zenexity');
INSERT INTO project (id, name, folder) VALUES (5, 'Playmate', 'Zenexity');
INSERT INTO project (id, name, folder) VALUES (6, 'Things to do', 'Personal');
INSERT INTO project (id, name, folder) VALUES (7, 'Play samples', 'Zenexity');
INSERT INTO project (id, name, folder) VALUES (8, 'Private', 'Personal');
INSERT INTO project (id, name, folder) VALUES (9, 'Private', 'Personal');
INSERT INTO project (id, name, folder) VALUES (10, 'Private', 'Personal');
INSERT INTO project (id, name, folder) VALUES (11, 'Private', 'Personal');
ALTER SEQUENCE project_seq RESTART WITH 12;

# --- !Downs
ALTER SEQUENCE project_seq RESTART WITH 1;
DELETE FROM project;
```

##### conf/db/fixtures/default/project_member.sql

```sql
# --- !Ups

INSERT INTO project_member (project_id, user_email) VALUES (1, 'guillaume@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (1, 'maxime@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (1, 'sadek@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (1, 'erwan@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (2, 'guillaume@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (2, 'erwan@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (3, 'guillaume@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (3, 'maxime@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (4, 'guillaume@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (4, 'maxime@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (4, 'sadek@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (4, 'erwan@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (5, 'maxime@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (6, 'guillaume@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (7, 'guillaume@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (7, 'maxime@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (8, 'maxime@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (9, 'guillaume@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (10, 'erwan@sample.com');
INSERT INTO project_member (project_id, user_email) VALUES (11, 'sadek@sample.com');

# --- !Downs

DELETE FROM project_member;
```

<hr/>
### More Examples

Take a look at the Typesafe Activator template:

![Typesafe](images/typesafe.png)

You can try a [Play framework](http://www.playframework.com/) sample app that uses ScalikeJDBC on [Typesafe Activator](http://typesafe.com/activator).

Activator page: [Hello ScalikeJDBC!](http://typesafe.com/activator/template/scalikejdbc-activator-template)

See on GitHub: [scalikejdbc/hello-scalikejdbc](https://github.com/scalikejdbc/hello-scalikejdbc)

2 changes: 2 additions & 0 deletions source/documentation/query-inspector.html.md
@@ -83,6 +83,8 @@ In this case, logging as follows:
You can use hooks such as `GlobalSettings.queryCompletionListener` and `GlobalSettings.queryFailureListener`.
For instance, the following example will send information about slow queries to Fluentd.
```java
import org.fluentd.logger.scala._
val logger = FluentLoggerFactory.getLogger("scalikejdbc")
```
