Commit 5e7c3cc: Docs
adamw committed Feb 18, 2019
1 parent a6b18c1
Showing 9 changed files with 53 additions and 38 deletions.
29 changes: 16 additions & 13 deletions doc/endpoint/codecs.md
@@ -6,9 +6,11 @@ supported by client/server interpreters, include `String`s, byte arrays, `File`s
There are built-in codecs for most common types such as `String`, `Int` etc. Codecs are usually defined as implicit
values and resolved implicitly when they are referenced.

For example, a `query[Int]("quantity")` specifies an input parameter which corresponds to the `quantity` query
parameter and will be mapped as an `Int`. There's an implicit `Codec[Int]` value that is referenced by the `query`
method (which is defined in the `tapir` package).

In a server setting, if the value cannot be parsed as an int, a decoding failure is reported, and the endpoint
won't match the request, or a `400 Bad Request` response is returned (depending on configuration).
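To make the decode-or-fail step concrete, here is a simplified, stand-alone model of a plain-text codec. This is not tapir's actual `Codec` API; `PlainCodec` and its members are invented for the illustration:

```scala
import scala.util.Try

// Toy stand-in for a plain-text codec (tapir's real Codec carries more information,
// such as the schema and media type).
trait PlainCodec[T] {
  def decode(raw: String): Either[String, T]
  def encode(t: T): String
}

val intCodec: PlainCodec[Int] = new PlainCodec[Int] {
  def decode(raw: String): Either[String, Int] =
    Try(raw.toInt).toEither.left.map(_ => s"cannot parse '$raw' as an int")
  def encode(t: Int): String = t.toString
}

// decoding "10" succeeds; decoding "ten" is a decoding failure
val ok  = intCodec.decode("10")
val bad = intCodec.decode("ten")
```

In a server interpreter, the `Left` case is what triggers the `400 Bad Request` / "no match" handling described above.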

## Optional and multiple parameters
@@ -41,15 +43,16 @@ For primitive types, the schema values are built-in, and include values such as
`Schema.SBinary` etc.

For complex types, it is possible to define the schema by hand and apply it to a codec (using the `codec.schema`
method); usually, however, codecs look up the schema by requiring an implicit value of type
`SchemaFor[T]`. A schema-for value contains a single `schema: Schema` field.

`SchemaFor[T]` values are automatically derived for case classes using [Magnolia](https://propensive.com/opensource/magnolia/).
It is possible to configure the automatic derivation to use snake-case, kebab-case or a custom field naming policy,
by providing an implicit `tapir.generic.Configuration` value:

```scala
implicit val customConfiguration: Configuration =
Configuration.defaults.snakeCaseTransformation
```
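As an illustration of what such a naming policy does, here is a plain-Scala sketch of a snake_case transformation (this is not tapir's implementation, just the effect it has on field names):

```scala
// Turns camelCase field names into snake_case: inserts an underscore before
// each upper-case letter that follows a letter or digit, then lower-cases.
def snakeCaseTransformation(fieldName: String): String =
  fieldName.replaceAll("([a-z0-9])([A-Z])", "$1_$2").toLowerCase

val transformed = snakeCaseTransformation("firstName") // "first_name"
```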

## Media types
@@ -63,8 +66,8 @@ specifies how to serialize a case class to plain text, and a different `Codec[My
how to serialize a case class to json. Both can be implicitly available without implicit resolution conflicts.

Different media types can be used in different contexts. When defining a path, query or header parameter, only a codec
with the `TextPlain` media type can be used. However, for bodies, any media type is allowed. For example, the
input/output described by `jsonBody[T]` requires a json codec.
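The reason the two codecs don't conflict is that the media type is part of the codec's type, so the implicit values have different types. A stand-alone sketch of the idea (not tapir's actual `Codec` signature; `Encode`, `TextPlain`, `Json` and `MyCaseClass` are invented here):

```scala
// Phantom types standing in for media types (tapir has its own MediaType hierarchy)
sealed trait TextPlain
sealed trait Json

final case class MyCaseClass(name: String, age: Int)

// The media type M is part of the encoder's type, so values for different
// media types have different types and never clash during implicit search.
trait Encode[T, M] { def apply(t: T): String }

val asText: Encode[MyCaseClass, TextPlain] =
  (t: MyCaseClass) => s"${t.name},${t.age}"

val asJson: Encode[MyCaseClass, Json] =
  (t: MyCaseClass) => s"""{"name":"${t.name}","age":${t.age}}"""
```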

## Custom types

@@ -94,13 +97,13 @@ On the other hand, when building composite types out of many values, or when an
## Validation

While codecs support reporting decoding failures, this is not meant as a validation solution, as it only works on single
values, while validation often involves multiple combined values.

Decoding failures should be reported when the input is in an incorrect low-level format, when parsing a "raw value"
fails. In other words, decoding failures should be reported for format failures, not business validation errors.

Any validation should be done as part of the "business logic" methods provided to the server interpreters. In case
validation fails, the result can be an error, which is one of the mappings defined in an endpoint
(the `E` in `Endpoint[I, E, O, S]`).
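A sketch of this split, with a made-up `AgeError` playing the role of `E`: parsing the raw value into an `Int` is the codec's job, while the range checks below are business validation done in the logic function:

```scala
// Hypothetical business-error type: the E of Endpoint[I, E, O, S]
final case class AgeError(message: String)

// The input already decoded successfully (we have an Int); business
// validation happens here and uses the error channel on failure.
def registerUser(age: Int): Either[AgeError, String] =
  if (age < 0) Left(AgeError("age must be non-negative"))
  else if (age < 18) Left(AgeError("must be an adult"))
  else Right(s"registered, age $age")
```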

## Next
6 changes: 4 additions & 2 deletions doc/endpoint/forms.md
@@ -3,11 +3,12 @@
## URL-encoded forms

A URL-encoded form input/output can be specified in two ways. First, it is possible to map all form fields as a
`Seq[(String, String)]`, or `Map[String, String]` (which is more convenient if fields can't have multiple values):

```scala
formBody[Seq[(String, String)]]: EndpointIO[Seq[(String, String)],
MediaType.XWwwFormUrlencoded, _]

formBody[Map[String, String]]: EndpointIO[Map[String, String],
MediaType.XWwwFormUrlencoded, _]
```
@@ -34,7 +35,8 @@ multipartBody[Seq[AnyPart]]: EndpointIO[Seq[AnyPart], MediaType.MultipartFormData
```

where `type AnyPart = Part[_]`. `Part` is a case class containing the `name` of the part, disposition parameters,
headers, and the body. The bodies will be mapped as byte arrays (`Array[Byte]`), unless a custom multipart codec
is defined using the `Codec.multipartCodec` method.

As with URL-encoded forms, multipart bodies can be mapped directly to case classes, however without the restriction
on codecs for individual fields. Given a field of type `T`, first a plain text codec is looked up, and if one isn't
27 changes: 17 additions & 10 deletions doc/endpoint/ios.md
@@ -1,9 +1,9 @@
# Defining endpoint's input/output

An input is described by an instance of the `EndpointInput` trait, and an output by an instance of the `EndpointIO`
trait, as all outputs can also be used as inputs. Each input or output can yield/accept a value. For example,
`query[Int]("age"): EndpointInput[Int]` describes an input, which is the `age` query parameter, and which should be
mapped (using the string-to-integer [codec](codecs.html)) as an `Int`.

The `tapir` package contains a number of convenience methods to define an input or an output for an endpoint.
These are:
@@ -36,7 +36,8 @@ endpoint input/output descriptions are immutable. For example, an input specifyi
(mandatory) and `limit` (optional) can be written down as:

```scala
val paging: EndpointInput[(UUID, Option[Int])] =
query[UUID]("start").and(query[Option[Int]]("limit"))

// we can now use the value in multiple endpoints, e.g.:
val listUsersEndpoint: Endpoint[(UUID, Option[Int]), Unit, List[User], Nothing] =
@@ -49,13 +50,15 @@ parameters, but also to define template-endpoints, which can then be further spe
base endpoint for our API, where all paths always start with `/api/v1.0`, and errors are always returned as a json:

```scala
val baseEndpoint: Endpoint[Unit, ErrorInfo, Unit, Nothing] =
endpoint.in("api" / "v1.0").errorOut(jsonBody[ErrorInfo])
```

Thanks to the fact that inputs/outputs accumulate, we can use the base endpoint to define more inputs, for example:

```scala
val statusEndpoint: Endpoint[Unit, ErrorInfo, Status, Nothing] =
baseEndpoint.in("status").out(jsonBody[Status])
```

The above endpoint will correspond to the `api/v1.0/status` path.
@@ -70,17 +73,21 @@ which accepts functions which provide the mapping in both directions. For examp

```scala
case class Paging(from: UUID, limit: Option[Int])

val paging: EndpointInput[Paging] =
query[UUID]("start").and(query[Option[Int]]("limit"))
.map((from, limit) => Paging(from, limit))(paging => (paging.from, paging.limit))
```

Creating a mapping between a tuple and a case class is a common operation, hence there's also a
`mapTo(CaseClassCompanion)` method, which automatically provides the mapping functions:

```scala
case class Paging(from: UUID, limit: Option[Int])

val paging: EndpointInput[Paging] =
query[UUID]("start").and(query[Option[Int]]("limit"))
.mapTo(Paging)
```

Mapping methods can also be called on an endpoint (which is useful if inputs/outputs are accumulated, for example).
2 changes: 1 addition & 1 deletion doc/endpoint/json.md
@@ -15,7 +15,7 @@ import tapir.json.circe._
```

This will bring into scope `Codec`s which, given an in-scope circe `Encoder`/`Decoder`, will create a codec using the
json media type. Circe includes a couple of approaches to generating encoders/decoders (manual, semi-auto and auto), so you may choose
whatever suits you.

For example, to automatically generate a JSON codec for a case class:
2 changes: 1 addition & 1 deletion doc/mytapir.md
@@ -5,7 +5,7 @@ of this data (turning endpoints into a server or a client). Importing these pack
may be tedious, that's why each package object inherits all of its functionality from a trait.

Hence, it is possible to create your own object which combines all of the required functionalities and provides
a single-import whenever you want to use tapir. For example:

```scala
object MyTapir extends Tapir
6 changes: 3 additions & 3 deletions doc/openapi.md
@@ -36,8 +36,8 @@ println(docs.toYaml)
Exposing the OpenAPI documentation can be very application-specific. For example, to expose the docs using the
Swagger UI and akka-http:

* add `libraryDependencies += "org.webjars" % "swagger-ui" % "3.20.5"` to `build.sbt` (or newer)
* generate the yaml content to serve as a `String` using tapir:

```scala
import tapir.docs.openapi._
@@ -46,7 +46,7 @@ import tapir.openapi.circe.yaml._
val docsAsYaml: String = myEndpoints.toOpenAPI("My App", "1.0").toYaml
```

* add the following routes to your server:

```scala
val SwaggerYml = "swagger.yml"
2 changes: 1 addition & 1 deletion doc/server/akkahttp.md
@@ -38,7 +38,7 @@ val countCharactersEndpoint: Endpoint[String, Unit, Int, Nothing] =
val countCharactersRoute: Route = countCharactersEndpoint.toRoute(countCharacters _)
```

The created `Route`/`Directive` can then be further combined with other akka-http directives, for example nested within
other routes. The Tapir-generated `Route`/`Directive` captures from the request only what is described by the endpoint.

It's completely feasible that some part of the input is read using akka-http directives, and the rest
14 changes: 8 additions & 6 deletions doc/server/common.md
@@ -6,7 +6,8 @@ By default, successful responses are returned with the `200 OK` status code, and
this can be customised when interpreting an endpoint as a directive/route, by providing implicit values of
`type StatusMapper[T] = T => StatusCode`, where `type StatusCode = Int`.

This can be especially useful for error responses: for an `Endpoint[I, E, O, S]`, you'd provide an implicit
`StatusMapper[E]`.
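For instance, a self-contained sketch using the aliases above (`ErrorInfo` and its cases are invented for the example; in real code the mapper would be declared implicit so the interpreter can pick it up):

```scala
type StatusCode = Int
type StatusMapper[T] = T => StatusCode

// Invented error hierarchy for the example
sealed trait ErrorInfo
final case class NotFound(what: String) extends ErrorInfo
final case class BadInput(msg: String) extends ErrorInfo

// Maps each business error to an HTTP status code; would be implicit in real code
val errorInfoStatus: StatusMapper[ErrorInfo] = {
  case NotFound(_) => 404
  case BadInput(_) => 400
}
```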

## Server options

@@ -22,14 +23,15 @@ Quite often user input will be malformed and decoding will fail. Should the requ
`400 Bad Request` response, or should the request be forwarded to another endpoint? By default, tapir follows OpenAPI
conventions, that an endpoint is uniquely identified by the method and served path. That's why:

* an "endpoint doesn't match" result is returned if the request method or path doesn't match. The http library should
attempt to serve this request with the next endpoint.
* otherwise, we assume that this is the correct endpoint to serve the request, but the parameters are somehow
malformed. A `400 Bad Request` response is returned if a query parameter, header or body is missing / decoding fails,
or if decoding a path capture fails with an error (but not with a "missing" decode result).

This can be customised by providing an implicit instance of `tapir.server.DecodeFailureHandler`, which, based on
the request, the failing input and the failure description, can decide whether to return a "no match", an
endpoint-specific error value, or a specific response.

Only the first failure is passed to the `DecodeFailureHandler`. Inputs are decoded in the following order: method,
path, query, header, body.
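The selection of that first failure can be sketched in plain Scala (the decode results and error messages below are invented, not tapir's internals):

```scala
// Decode results in decoding order; Left marks a decode failure.
val decodeResults: List[(String, Either[String, Unit])] = List(
  "method" -> Right(()),
  "path"   -> Right(()),
  "query"  -> Left("'ten' is not a valid Int"),
  "header" -> Left("missing required header"),
  "body"   -> Right(())
)

// Only the first failure (here: the query one) reaches the handler.
val firstFailure: Option[(String, String)] =
  decodeResults.collectFirst { case (input, Left(error)) => (input, error) }
```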
3 changes: 2 additions & 1 deletion doc/server/http4s.md
@@ -31,7 +31,8 @@ import org.http4s.HttpRoutes
import cats.effect.ContextShift

// will probably come from somewhere else
implicit val cs: ContextShift[IO] =
IO.contextShift(scala.concurrent.ExecutionContext.global)

def countCharacters(s: String): IO[Either[Unit, Int]] =
IO.pure(Right[Unit, Int](s.length))
