diff --git a/antora.yml b/antora.yml
index ef1d7e27d..e0236e975 100644
--- a/antora.yml
+++ b/antora.yml
@@ -1,6 +1,5 @@
 name: sync-gateway
 title: Sync Gateway
 version: '2.1'
-start_page: ROOT:index.adoc
 nav:
-- modules/ROOT/nav.adoc
\ No newline at end of file
+- modules/ROOT/nav.adoc
diff --git a/modules/ROOT/_attributes.adoc b/modules/ROOT/_attributes.adoc
deleted file mode 100644
index 3b366f9e8..000000000
--- a/modules/ROOT/_attributes.adoc
+++ /dev/null
@@ -1,2 +0,0 @@
-:moduledir: ..
-include::{moduledir}/_attributes.adoc[]
\ No newline at end of file
diff --git a/modules/ROOT/assets/images/images.003.png b/modules/ROOT/assets/images/client-auth.png
similarity index 100%
rename from modules/ROOT/assets/images/images.003.png
rename to modules/ROOT/assets/images/client-auth.png
diff --git a/modules/ROOT/assets/images/export-update.gif b/modules/ROOT/assets/images/export-update.gif
new file mode 100644
index 000000000..99d924d39
Binary files /dev/null and b/modules/ROOT/assets/images/export-update.gif differ
diff --git a/modules/ROOT/pages/_attributes.adoc b/modules/ROOT/pages/_attributes.adoc
deleted file mode 100644
index d1c411f18..000000000
--- a/modules/ROOT/pages/_attributes.adoc
+++ /dev/null
@@ -1,3 +0,0 @@
-:version: 2.1
-:idprefix:
-:idseparator: -
\ No newline at end of file
diff --git a/modules/ROOT/pages/admin-rest-api.adoc b/modules/ROOT/pages/admin-rest-api.adoc
index f46da39ee..a06232062 100644
--- a/modules/ROOT/pages/admin-rest-api.adoc
+++ b/modules/ROOT/pages/admin-rest-api.adoc
@@ -1,7 +1,9 @@
-include::_attributes.adoc[]
+= Admin REST API
+:idprefix:
+:idseparator: -

 The API explorer below groups all the endpoints by functionality.
-You can click on a label to expand the list of endpoints and also generate a curl request for each endpoint.
+You can click on a label to expand the list of endpoints and also generate a curl request for each endpoint.

 == API Explorer

diff --git a/modules/ROOT/pages/authentication.adoc b/modules/ROOT/pages/authentication.adoc
index 4f45ad64f..e68262226 100644
--- a/modules/ROOT/pages/authentication.adoc
+++ b/modules/ROOT/pages/authentication.adoc
@@ -1,34 +1,39 @@
 = Authentication
+:idprefix:
+:idseparator: -
+:url-openid: https://openid.net/specs/openid-connect-core-1_0.html

-Sync Gateway supports the following authentication methods:
+Sync Gateway supports the following authentication methods:

-* link:authentication.html#basic-authentication[Basic Authentication]: provide a username and password to authenticate users.
-* link:authentication.html#custom-authentication[Custom Authentication]: use an App Server to handle the authentication yourself and create user sessions on the Sync Gateway Admin REST API.
-* link:authentication.html#openid-connect[OpenID Connect Authentication]: use OpenID Connect providers (Google+, Paypal, etc.) to authenticate users. Static providers: Sync Gateway currently supports authentication endpoints for Facebook, Google+ and OpenID Connect providers
+<<basic-authentication,Basic Authentication>>::
+Provide a username and password to authenticate users.
+<<custom-authentication,Custom Authentication>>::
+Use an App Server to handle the authentication yourself and create user sessions on the Sync Gateway Admin REST API.
+<<openid-connect,OpenID Connect Authentication>>::
+Use OpenID Connect providers (Google+, PayPal, etc.) to authenticate users.
+Static providers: Sync Gateway currently supports authentication endpoints for Facebook, Google+ and OpenID Connect providers.

-[[_user_registration]]
 == User Registration

 The user must be created on Sync Gateway before it can be used for authentication.
-Users can be created through the Sync Gateway Admin REST API or configuration file. +Users can be created through the Sync Gateway Admin REST API or configuration file. -* *Admin REST API:* the user credentials (**username**/**password**) are passed in the request body to the link:admin-rest-api.html#/user/post\__db___user_[+POST /{db}/_user+] endpoint. +Admin REST API:: +The user credentials (**username**/**password**) are passed in the request body to the xref:admin-rest-api.adoc#/user/post\__db___user_[POST /+{db}+/_user] endpoint. + - [source,bash] ---- - $ curl -vX POST "http://localhost:4985/justdoit/_user/" -H "accept: application/json" -H "Content-Type: application/json" -d '{"name": "john", "password": "pass"}' ---- + Note that the Admin REST API is *not* accessible from the clients directly. -To allow users to sign up, it is recommended to have an app server sitting alongside Sync Gateway that performs the user validation, creates a new user on the Sync Gateway admin port and then returns the response to the application. -* *Configuration file:* user credentials are hardcoded in the configuration. This method is convenient for testing but we generally recommend to use the *Admin REST API* in a Sync Gateway deployment. -+ +To allow users to sign up, it is recommended to have an app server sitting alongside Sync Gateway that performs the user validation, creates a new user on the Sync Gateway admin port and then returns the response to the application. + +Configuration file:: +User credentials are hardcoded in the configuration. This method is convenient for testing but we generally recommend to use the *Admin REST API* in a Sync Gateway deployment. [source,javascript] ---- - { "databases": { "mydatabase": { @@ -41,37 +46,34 @@ To allow users to sign up, it is recommended to have an app server sitting along } ---- - == Basic Authentication Once the user has been created on Sync Gateway, you can provide the same **username**/**password** to the `BasicAuthenticator` class of Couchbase Lite. Under the hood, the replicator will send the credentials in the first request to retrieve a `SyncGatewaySession` cookie and use it for all subsequent requests during the replication. -This is the recommended way of using basic authentication. - -Example: +This is the recommended way of using basic authentication. -* xref:2.1@couchbase-lite:ROOT::swift.adoc#basic-authentication[Swift] -* xref:2.1@couchbase-lite:ROOT::java.adoc#basic-authentication[Java] -* xref:2.1@couchbase-lite:ROOT::csharp.adoc#basic-authentication[C#] -* xref:2.1@couchbase-lite:ROOT::objc.adoc#basic-authentication[Objective-C] +Example: +* xref:2.1@couchbase-lite::swift.adoc#basic-authentication[Swift] +* xref:2.1@couchbase-lite::java.adoc#basic-authentication[Java] +* xref:2.1@couchbase-lite::csharp.adoc#basic-authentication[C#] +* xref:2.1@couchbase-lite::objc.adoc#basic-authentication[Objective-C] == Custom Authentication It's possible for an application server associated with a remote Couchbase Sync Gateway to provide its own custom form of authentication. -Generally this will involve a particular URL that the app needs to post some form of credentials to; the App Server will verify those, then tell the Sync Gateway to create a new session for the corresponding user, and return session credentials in its response to the client app. 
+Generally this will involve a particular URL that the app needs to post some form of credentials to; +the App Server will verify those, then tell the Sync Gateway to create a new session for the corresponding user, and return session credentials in its response to the client app. The following diagram shows an example architecture to support Google SignIn in a Couchbase Mobile application, the client sends an access token to the App Server where a server side validation is done with the Google API and a corresponding Sync Gateway user is then created if it's the first time the user logs in. -The last request creates a session: - +The last request creates a session: image::custom-auth-flow.png[] -Given a user has already been created, the request to create a new session for that user name is the following: +Given a user has already been created, the request to create a new session for that user name is the following: [source,bash] ---- - $ curl -vX POST -H 'Content-Type: application/json' \ -d '{"name": "john", "ttl": 180}' \ http://localhost:4985/sync_gateway/_session @@ -83,53 +85,55 @@ $ curl -vX POST -H 'Content-Type: application/json' \ } ---- -The HTTP response body contains the credentials of the session. +The HTTP response body contains the credentials of the session. * *name* corresponds to the `cookie_name` * *value* corresponds to the `session_id` -* *path* is the hostname of the Sync Gateway +* *path* is the hostname of the Sync Gateway * *expirationDate* corresponds to the `expires` -* *secure* Whether the cookie should only be sent using a secure protocol (e.g. HTTPS) -* *httpOnly* Whether the cookie should only be used when transmitting HTTP, or HTTPS, requests thus restricting access from other, non-HTTP APIs +* *secure* Whether the cookie should only be sent using a secure protocol (e.g. HTTPS) +* *httpOnly* Whether the cookie should only be used when transmitting HTTP, or HTTPS, requests thus restricting access from other, non-HTTP APIs -It is recommended to return the session details to the client application in the same form and to use the `SessionAuthenticator` class to authenticate with that session id. +It is recommended to return the session details to the client application in the same form and to use the `SessionAuthenticator` class to authenticate with that session id. -Example: - -* xref:2.1@couchbase-lite:ROOT::swift.adoc#session-authentication[Swift] -* xref:2.1@couchbase-lite:ROOT::java.adoc#session-authentication[Java] -* xref:2.1@couchbase-lite:ROOT::csharp.adoc#session-authentication[C#] -* xref:2.1@couchbase-lite:ROOT::objc.adoc#session-authentication[Objective-C] +Example: +* xref:2.1@couchbase-lite::swift.adoc#session-authentication[Swift] +* xref:2.1@couchbase-lite::java.adoc#session-authentication[Java] +* xref:2.1@couchbase-lite::csharp.adoc#session-authentication[C#] +* xref:2.1@couchbase-lite::objc.adoc#session-authentication[Objective-C] == OpenID Connect Sync Gateway supports OpenID Connect. -This allows your application to use Couchbase for data synchronization and delegate the authentication to a 3rd party server (known as the Provider). There are two implementation methods available with OpenID Connect: - -* link:authentication.html#implicit-flow[Implicit Flow]: with this method, the retrieval of the ID token takes place on the device. You can then create a user session using the POST `+/{db}/_session+` endpoint on the Public REST API with the ID token. 
-* link:authentication.html#authorization-code-flow[Authorization Code Flow]: this method relies on Sync Gateway to retrieve the ID token.
+This allows your application to use Couchbase for data synchronization and delegate the authentication to a 3rd party server (known as the Provider).
+There are two implementation methods available with OpenID Connect:
+<<implicit-flow,Implicit Flow>>::
+With this method, the retrieval of the ID token takes place on the device.
+You can then create a user session using the POST `/+{db}+/_session` endpoint on the Public REST API with the ID token.
+<<authorization-code-flow,Authorization Code Flow>>::
+This method relies on Sync Gateway to retrieve the ID token.

-[[_implicit_flow]]
 === Implicit Flow

-http://openid.net/specs/openid-connect-core-1_0.html#ImplicitFlowAuth[Implicit Flow] has the key feature of allowing clients to obtain their own Open ID token and use it to authenticate against Sync Gateway.
-The implicit flow with Sync Gateway is as follows:
+{url-openid}#ImplicitFlowAuth[Implicit Flow] has the key feature of allowing clients to obtain their own Open ID token and use it to authenticate against Sync Gateway.
+The implicit flow with Sync Gateway is as follows:

-. The client obtains a *signed* Open ID token directly from an OpenID Connect provider. Note that only signed tokens are supported. To verify that the Open ID token being sent is indeed signed, you can use the https://jwt.io/#debugger-io[jwt.io Debugger]. In the algorithm dropdown, make sure to select `RS256` as the signing algorithm (other options such as `HS256` are not yet supported by Sync Gateway).
-. The client includes the Open ID token as an `Authorization: Bearer ` header on requests made against the Sync Gateway REST API.
-. Sync Gateway matches the token to a provider in its configuration file based on the issuer and audience in the token.
-. Sync Gateway validates the token, based on the provider definition.
-. Upon successful validation, Sync Gateway authenticates the user based on the subject and issuer in the token.
+. The client obtains a *signed* Open ID token directly from an OpenID Connect provider. Note that only signed tokens are supported.
+To verify that the Open ID token being sent is indeed signed, you can use the https://jwt.io/#debugger-io[jwt.io Debugger].
+In the algorithm dropdown, make sure to select `RS256` as the signing algorithm (other options such as `HS256` are not yet supported by Sync Gateway).
+. The client includes the Open ID token in an `Authorization: Bearer` header on requests made against the Sync Gateway REST API.
+. Sync Gateway matches the token to a provider in its configuration file based on the issuer and audience in the token.
+. Sync Gateway validates the token, based on the provider definition.
+. Upon successful validation, Sync Gateway authenticates the user based on the subject and issuer in the token.

-Since Open ID tokens are typically large, the usual approach is to use the Open ID token to obtain a Sync Gateway session id (using the link:rest-api.html#!/session/post_db_session[POST /db/_session] endpoint), and then use the returned session id for subsequent authentication requests.
+Since Open ID tokens are typically large, the usual approach is to use the Open ID token to obtain a Sync Gateway session id (using the xref:rest-api.adoc#!/session/post_db_session[POST /db/_session] endpoint), and then use the returned session id for subsequent authentication requests.

-Here is a sample Sync Gateway config file, configured to use the Implicit Flow.
+Here is a sample Sync Gateway config file, configured to use the Implicit Flow.

 [source,javascript]
 ----
-
 {
     "interface":":4984",
     "log":["*"],
@@ -155,23 +159,22 @@ Here is a sample Sync Gateway config file, configured to use the Implicit Flow.
 With the implicit flow, the mechanism to refresh the token and Sync Gateway session must be handled in the application code.
 On launch, the application should check if the token has expired.
-If it has then you must request a new token (Google SignIn for iOS has a method called `signInSilently` for this purpose). By doing this, the application can then use the token to create a Sync Gateway session.
-
-
-image::images.003.png[]
+If it has, you must request a new token (Google SignIn for iOS has a method called `signInSilently` for this purpose).
+By doing this, the application can then use the token to create a Sync Gateway session.
+image::client-auth.png[]

-. The Google SignIn SDK prompts the user to login and if successful it returns an ID token to the application.
-. The ID token is used to create a Sync Gateway session by sending a POST `+/{db}/_session+` request.
-. Sync Gateway returns a cookie session in the response header.
-. The Sync Gateway cookie session is used on the replicator object.
+. The Google SignIn SDK prompts the user to log in and if successful it returns an ID token to the application.
+. The ID token is used to create a Sync Gateway session by sending a POST `/+{db}+/_session` request.
+. Sync Gateway returns a cookie session in the response header.
+. The Sync Gateway cookie session is used on the replicator object.

 Sync Gateway sessions also have an expiration date.
-The replication `lastError` property will return a *401 Unauthorized* when it's the case and then the application must retrieve create a new Sync Gateway session and set the new cookie on the replicator.
+When the session expires, the replication `lastError` property will return a *401 Unauthorized* error; the application must then create a new Sync Gateway session and set the new cookie on the replicator.

-You can configure your application for Google SignIn by following https://developers.google.com/identity/[this guide].
+You can configure your application for Google SignIn by following https://developers.google.com/identity/[this guide].

 === Authorization Code Flow

-Whilst Sync Gateway supports http://openid.net/specs/openid-connect-core-1_0.html#CodeFlowAuth[Authorization Code Flow], there is considerable work involved to implement the *Authorization Code Flow* on the client side.
-Couchbase Lite 1.x has an API to hide this complexity called `OpenIDConnectAuthenticator` but since it is not available in the 2.0 API we recommend to use the **Implicit Flow**.
+Whilst Sync Gateway supports {url-openid}#CodeFlowAuth[Authorization Code Flow], there is considerable work involved in implementing the *Authorization Code Flow* on the client side.
+Couchbase Lite 1.x has an API to hide this complexity called `OpenIDConnectAuthenticator` but since it is not available in the 2.0 API we recommend using the *Implicit Flow*.
diff --git a/modules/ROOT/pages/authorizing-users.adoc b/modules/ROOT/pages/authorizing-users.adoc
index 63e90882a..05a87de45 100644
--- a/modules/ROOT/pages/authorizing-users.adoc
+++ b/modules/ROOT/pages/authorizing-users.adoc
@@ -1,87 +1,109 @@
 = Authorizing Users

-Sync Gateway provides the following REST APIs:
+Sync Gateway provides the following REST APIs:

-* The Public REST API is used for client replication.
The default port for the Public REST API is 4984. -* The Admin REST API is used to administer user accounts and roles. It can also be used to look at the contents of databases in superuser mode. The default port for the Admin REST API is 4985. By default, the Admin REST API is reachable only from localhost for safety reasons. +* The Public REST API is used for client replication. +The default port for the Public REST API is 4984. +* The Admin REST API is used to administer user accounts and roles. +It can also be used to look at the contents of databases in superuser mode. +The default port for the Admin REST API is 4985. +By default, the Admin REST API is reachable only from localhost for safety reasons. -More information regarding the APIs themselves can be found in the API Reference section. +More information regarding the APIs themselves can be found in the API Reference section. == Managing API Access -The APIs are accessed on different TCP ports, which makes it easy to expose the Public REST API on port 4984 to endpoints while keeping the Admin REST API on port 4985 secure behind your firewall. +The APIs are accessed on different TCP ports, which makes it easy to expose the Public REST API on port 4984 to endpoints while keeping the Admin REST API on port 4985 secure behind your firewall. -If you want to change the ports, you can do that in the configuration file. +If you want to change the ports, you can do that in the configuration file. -* To change the Public REST API port, set the `interface` property in the configuration file. -* To change the Admin REST API port, set the `adminInterface` property in the configuration file. +* To change the Public REST API port, set the `interface` property in the configuration file. +* To change the Admin REST API port, set the `adminInterface` property in the configuration file. -The value of the property is a string consisting of a colon followed by a port number (for example, ``:4985``). You can also prepend a host name or numeric IP address before the colon to bind only to the network interface with that address. +The value of the property is a string consisting of a colon followed by a port number (for example, `:4985`). +You can also prepend a host name or numeric IP address before the colon to bind only to the network interface with that address. As a useful special case, the IP address 127.0.0.1 binds to the loopback interface, making the port unreachable from any other host. -This is the default setting for the admin interface. +This is the default setting for the admin interface. == Managing Guest Access Sync Gateway does not allow anonymous or guest access by default. A new server is accessible through the Public REST API only after you enable guest access or create some user accounts. You can do this either by editing the configuration file before starting the server or by using the Admin REST API. -For more information, see Anonymous Access. +For more information, see Anonymous Access. == Authorizing Users You can authorize users and control their access to your database by creating user accounts and assigning roles to users. -This article focuses on how to authorize users to be able to access the Sync Gateway and their remote databases. +This article focuses on how to authorize users to be able to access the Sync Gateway and their remote databases. === Accounts You manage accounts by using the Admin REST API.This interface is privileged and for administrator use only. 
-To manage accounts, you need to have some other server-side mechanism that calls through to this API. - -The URL for a user account is ``/databasename/_user/name``, where databasename is the configured name of the database and name is the user name. -The content of the resource is a JSON document with the following properties: - -* ``admin_channels``: Lists the channels that the user is granted access to by the administrator. The value is an array of channel name strings. -* ``admin_roles``: The roles that the user is explicitly granted access to through the Admin REST API. It contains an array of role name strings. -* ``all_channels``: Like the `admin_channels` property, but also includes channels the user is given access to by other documents via a sync function. This is a derived property and changes to it are ignored. -* ``disabled``: This property is usually not included. if the value is set to true, access for the account is disabled. -* ``email``: The user's email address. This property is optional. -* ``name``: The user name (the same name used in the URL path). The valid characters for a user name are alphanumeric ASCII characters and the underscore character. The name property is required in a POST request. You don't need to include it in a PUT request because the user name is specified in the URL. -* ``password``: In a PUT or POST request, you can set the user's password with this property. It is not returned by a GET request. -* ``roles``: Like the `admin_roles` property, but also includes roles the user is given access to by other documents via a sync function. This is a derived property and changes to it are ignored. It contains an array of role name strings. - -You can create a new user by sending a PUT request to its URL or by sending a POST request to ``/$DB/_user/``. +To manage accounts, you need to have some other server-side mechanism that calls through to this API. + +The URL for a user account is `/databasename/_user/name`, where databasename is the configured name of the database and name is the user name. +The content of the resource is a JSON document with the following properties: + +`admin_channels`:: +Lists the channels that the user is granted access to by the administrator. +The value is an array of channel name strings. +`admin_roles`:: +The roles that the user is explicitly granted access to through the Admin REST API. +It contains an array of role name strings. +`all_channels`:: +Like the `admin_channels` property, but also includes channels the user is given access to by other documents via a sync function. +This is a derived property and changes to it are ignored. +`disabled`:: +This property is usually not included. +If the value is set to true, access for the account is disabled. +`email`:: +The user's email address. +This property is optional. +`name`:: +The user name (the same name used in the URL path). +The valid characters for a user name are alphanumeric ASCII characters and the underscore character. +The name property is required in a POST request. +You don't need to include it in a PUT request because the user name is specified in the URL. +`password`:: +In a PUT or POST request, you can set the user's password with this property. It is not returned by a GET request. +`roles`:: +Like the `admin_roles` property, but also includes roles the user is given access to by other documents via a sync function. +This is a derived property and changes to it are ignored. +It contains an array of role name strings. 
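+
+For illustration, a user resource that combines several of these properties (the user name, password, email, channel, and role values below are only placeholders) might look like this:
+
+[source,javascript]
+----
+{
+  "name": "jens",
+  "password": "letmein",
+  "email": "jens@example.com",
+  "admin_channels": ["jens-inbox"],
+  "admin_roles": ["editor"],
+  "disabled": false
+}
+----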
+ +You can create a new user by sending a PUT request to its URL or by sending a POST request to `/$DB/_user/`. === Anonymous Access A special user account named `GUEST` applies to unauthenticated requests. Any request to the Public REST API that does not have an `Authorization` header or a session cookie is treated as coming from the `GUEST` account. -This account and all anonymous access is disabled by default. +This account and all anonymous access is disabled by default. To enable the GUEST account, set its `disabled` property to false. You might also want to give it access to some channels. If you don't assign some channels to the GUEST account, anonymous requests won't be able to access any documents. -The following sample command enables the GUEST account and allows it access to a channel named public. +The following sample command enables the GUEST account and allows it access to a channel named public. [source,bash] ---- - $ curl -X PUT localhost:4985/$DB/_user/GUEST --data \ '{"disabled":false, "admin_channels":["public"]}' ---- === Admin Access -When sending a change to Sync Gateway through the Admin REST API, the Sync Function is executed with admin privileges: calls to ``requireUser``, `requireAccess` and `requireRole` are no-ops (i.e will always be successful). +When sending a change to Sync Gateway through the Admin REST API, the Sync Function is executed with admin privileges: calls to `requireUser`, `requireAccess` and `requireRole` are no-ops (i.e will always be successful). === Roles Roles are named collections of channels. A user account can be assigned to zero or more roles. A user inherits the channel access of all roles it belongs to. -This is very much like Unix groups, except that roles do not form a hierarchy. +This is very much like Unix groups, except that roles do not form a hierarchy. -You access roles through the Admin REST API much like users are accessed, through URLs of the form ``/dbname/_role/name``. -Role resources have a subset of the properties that users do: ``name``, ``admin_channels``, ``all_channels``. +You access roles through the Admin REST API much like users are accessed, through URLs of the form `/dbname/_role/name`. +Role resources have a subset of the properties that users do: `name`, `admin_channels`, `all_channels`. -Roles have a separate namespace from users, so it's legal to have a user and a role with the same name. \ No newline at end of file +Roles have a separate namespace from users, so it's legal to have a user and a role with the same name. diff --git a/modules/ROOT/pages/command-line-options.adoc b/modules/ROOT/pages/command-line-options.adoc index 567782021..c20fa5bf9 100644 --- a/modules/ROOT/pages/command-line-options.adoc +++ b/modules/ROOT/pages/command-line-options.adoc @@ -1,105 +1,88 @@ -= Command line options += Command Line Options To configure Sync Gateway, you can specify command-line options when you start Sync Gateway. Command-line options can only specify a limited set of configuration properties, and cannot be used to configure multiple databases. For more comprehensive configuration, use a JSON configuration file. -For information about using a configuration file, see the configuration file guide. +For information about using a configuration file, see the configuration file guide. -Configuration determines the runtime behavior of Sync Gateway, including server configuration and the database or set of databases with which a Sync Gateway instance can interact. 
+Configuration determines the runtime behavior of Sync Gateway, including server configuration and the database or set of databases with which a Sync Gateway instance can interact. -[quote] -*Note:* Note:You can only specify a small subset of configuration properties as command-line options. -Two command-line options do not have corresponding configuration properties: `-help` and ``-verbose``. +NOTE: You can only specify a small subset of configuration properties as command-line options. +Two command-line options do not have corresponding configuration properties: `-help` and `-verbose`. -When specifying command-line options, the format of the `sync_gateway` command is: +When specifying command-line options, the format of the `sync_gateway` command is: [source,bash] ---- - sync_gateway [command-line options] ---- == Command-line options -You can prefix command-line options with one hyphen (-) or with two hyphens (--). For command-line options that take an argument, you specify the argument after an equal sign (=), for example, ``-bucket=db``, or as a following item on the command line, for example, ``-bucket db``. +You can prefix command-line options with one hyphen (-) or with two hyphens (--). For command-line options that take an argument, you specify the argument after an equal sign (=), for example, `-bucket=db`, or as a following item on the command line, for example, `-bucket db`. Command-line options are case-insensitive. -Here we use lower camel case. +Here we use lower camel case. -Following are the command-line options that you can specify when starting Sync Gateway. +Following are the command-line options that you can specify when starting Sync Gateway. -[cols="1,1,1", options="header"] +[cols="1,1,2"] |=== -| - Option - -| - Default - -| - Description - - - -|``‑adminInterface`` -|``127.0.0.1:4985`` -| - Port or TCP network address (IP address and the port) that the Admin REST API listens on. - -|``-bucket`` -|``sync_gateway`` -| - Name of the Couchbase Server bucket. - -|``-dbname`` -|``sync_gateway`` -| - Name of the Couchbase Server database to serve through the Public REST API. +|Option |Default |Description + +|`‑adminInterface` +|`127.0.0.1:4985` +|Port or TCP network address (IP address and the port) that the Admin REST API listens on. + +|`-bucket` +|`sync_gateway` +|Name of the Couchbase Server bucket. + +|`-dbname` +|`sync_gateway` +|Name of the Couchbase Server database to serve through the Public REST API. |`-defaultLogFilePath` |none -|Path to log files, as a fallback default value when `logFilePath` is not specified. This option is generally used in service scripts. -|``-help`` -| - none -| - Lists the available options and exits. - -|``-interface`` -|``:4984`` -| - Port or TCP network address (IP address and the port) that the Public REST API listens on. - -|``-log`` -|``HTTP`` -| - Comma-separated list of log keywords to enable. The log keyword `HTTP` is enabled by default, which means that HTTP requests and error responses are always logged. Omitting `HTTP` from your list does not disable HTTP logging. HTTP logging can be disabled through the Admin API. - -|``-logFilePath`` -| - none -| - Path to log files. - -|``-pool`` -|``default`` -| - Name of the Couchbase Server pool in which to find buckets. - -|``-pretty`` -|``false`` -| - Pretty-print JSON responses to improve readability. This is useful for debugging, but reduces performance. - -|``-url`` -|``walrus:`` -| - URL of the database server. An HTTP URL implies Couchbase Server. 
A `walrus:` URL implies the built-in Walrus database. A combination of a Walrus URL and a file-style URI (for example, ``walrus:///tmp/walrus``) implies the built-in Walrus database and persisting the database to a file. - -|``-verbose`` -| - Non-verbose logging -| - Logs more information about requests. +|Path to log files, as a fallback default value when `logFilePath` is not specified. +This option is generally used in service scripts. + +|`-help` +|none +|Lists the available options and exits. + +|`-interface` +|`:4984` +|Port or TCP network address (IP address and the port) that the Public REST API listens on. + +|`-log` +|`HTTP` +|Comma-separated list of log keywords to enable. +The log keyword `HTTP` is enabled by default, which means that HTTP requests and error responses are always logged. +Omitting `HTTP` from your list does not disable HTTP logging. HTTP logging can be disabled through the Admin API. + +|`-logFilePath` +|none +|Path to log files. + +|`-pool` +|`default` +|Name of the Couchbase Server pool in which to find buckets. + +|`-pretty` +|`false` +|Pretty-print JSON responses to improve readability. +This is useful for debugging, but reduces performance. + +|`-url` +|`walrus:` +|URL of the database server. +An HTTP URL implies Couchbase Server. +A `walrus:` URL implies the built-in Walrus database. +A combination of a Walrus URL and a file-style URI (for example, `walrus:///tmp/walrus`) implies the built-in Walrus database and persisting the database to a file. + +|`-verbose` +|Non-verbose logging +|Logs more information about requests. |=== [[_examples]] @@ -107,35 +90,31 @@ Following are the command-line options that you can specify when starting Sync G The following command does not include any parameters and just uses the default values. It connects to the bucket named `sync_gateway` in the pool named `default` of the built-in Walrus database. -It is served from port 4984, with the Admin interface on port 4985. +It is served from port 4984, with the Admin interface on port 4985. [source,bash] ---- - $ sync_gateway ---- -The following command creates an ephemeral, in-memory Walrus database, served as ``db``, and specifies use of pretty-printed JSON responses. -Because Walrus is the default database, the option `-url` could be omitted. +The following command creates an ephemeral, in-memory Walrus database, served as `db`, and specifies use of pretty-printed JSON responses. +Because Walrus is the default database, the option `-url` could be omitted. [source,bash] ---- - $ sync_gateway -url=walrus: -bucket=db -pretty ---- -The following command starts Sync Gateway and specifies the address of a Couchbase Server instance (instead of using the default database server, which is Walrus): +The following command starts Sync Gateway and specifies the address of a Couchbase Server instance (instead of using the default database server, which is Walrus): [source,bash] ---- - $ ./sync_gateway -url http://cbserver:8091 ---- -The following command accomplishes the same things as the prior command, persists the Walrus database to a file named ``/tmp/walrus/db.walrus``, and turns on additional logging for the log keys `HTTP+` and ``CRUD``. +The following command accomplishes the same things as the prior command, persists the Walrus database to a file named `/tmp/walrus/db.walrus`, and turns on additional logging for the log keys `HTTP+` and `CRUD`. 
[source,bash] ---- - $ sync_gateway -url=walrus:///tmp/walrus -bucket=db -log=HTTP+,CRUD ----- \ No newline at end of file +---- diff --git a/modules/ROOT/pages/compatibility-matrix.adoc b/modules/ROOT/pages/compatibility-matrix.adoc index 7be503dfb..748e29fce 100644 --- a/modules/ROOT/pages/compatibility-matrix.adoc +++ b/modules/ROOT/pages/compatibility-matrix.adoc @@ -1,8 +1,8 @@ -== Compatibility Matrix += Compatibility Matrix The table below summarizes the compatible versions of Sync Gateway with Couchbase Server. -[cols="4,1,1,1,1,1,1,1", options="header"] +[cols="4,1,1,1,1,1,1,1",options="header"] |=== | 7+|Couchbase Server → @@ -118,4 +118,5 @@ The table below summarizes the compatible versions of Sync Gateway with Couchbas |✔ |=== -For all of the above, the Couchbase Server https://developer.couchbase.com/documentation/server/current/architecture/core-data-access-buckets.html[bucket type] must be *Couchbase*. Usage of *Ephemeral* and *Memcached* buckets with Couchbase Mobile is not supported. \ No newline at end of file +For all of the above, the Couchbase Server xref:server:understanding-couchbase:buckets-memory-and-storage/buckets.adoc[bucket type] must be *Couchbase*. +Usage of *Ephemeral* and *Memcached* buckets with Couchbase Mobile is not supported. diff --git a/modules/ROOT/pages/config-properties.adoc b/modules/ROOT/pages/config-properties.adoc index 70211e24f..7235dc811 100644 --- a/modules/ROOT/pages/config-properties.adoc +++ b/modules/ROOT/pages/config-properties.adoc @@ -1,21 +1,20 @@ -include::_attributes.adoc[] - == Configuration File +:idprefix: +:idseparator: - -A configuration file determines the runtime behavior of Sync Gateway, including server configuration and the database or set of databases with which a Sync Gateway instance can interact. +A configuration file determines the runtime behavior of Sync Gateway, including server configuration and the database or set of databases with which a Sync Gateway instance can interact. -Using a configuration file is the recommended approach for configuring Sync Gateway, because you can provide values for all configuration properties and you can define configurations for multiple Couchbase Server databases. +Using a configuration file is the recommended approach for configuring Sync Gateway, because you can provide values for all configuration properties and you can define configurations for multiple Couchbase Server databases. -When specifying a configuration file, the format to run Sync Gateway is: +When specifying a configuration file, the format to run Sync Gateway is: [source] ---- - $ sync_gateway sync-gateway-config.json ---- -Configuration files have one syntactic feature that is not standard JSON: any text between back ticks (`) is treated as a string, even if it spans multiple lines or contains double-quotes. -This makes it easy to embed JavaScript code, such as the sync and filter functions. +Configuration files have one syntactic feature that is not standard JSON: any text between back ticks (`++`++`) is treated as a string, even if it spans multiple lines or contains double-quotes. +This makes it easy to embed JavaScript code, such as the sync and filter functions. == Configuration Reference diff --git a/modules/ROOT/pages/configuring-ssl.adoc b/modules/ROOT/pages/configuring-ssl.adoc index 0ef48ac4c..707072157 100644 --- a/modules/ROOT/pages/configuring-ssl.adoc +++ b/modules/ROOT/pages/configuring-ssl.adoc @@ -1,51 +1,51 @@ = Configuring SSL Sync Gateway supports serving SSL. 
-To enable SSL, you need to add two properties to the config file: +To enable SSL, you need to add two properties to the config file: -* ``"SSLCert"``: A path to a PEM-format file containing an X.509 certificate or a certificate chain. -* ``"SSLKey"``: A path to a PEM-format file containing the certificate's matching private key. +`"SSLCert"`:: +A path to a PEM-format file containing an X.509 certificate or a certificate chain. +`"SSLKey"`:: +A path to a PEM-format file containing the certificate's matching private key. -If both properties are present, the server will respond to SSL (and only SSL) over both the public and admin ports. +If both properties are present, the server will respond to SSL (and only SSL) over both the public and admin ports. == How to create an SSL certificate Certificates are a complex topic. -There are basically two routes you can go: request a certificate from a Certificate Authority (CA), or create your own "self-signed" certificate. +There are basically two routes you can go: request a certificate from a Certificate Authority (CA), or create your own "self-signed" certificate. === Requesting a certificate from a CA You can obtain a certificate from a trusted https://en.wikipedia.org/wiki/Certificate_authority[Certificate Authority] (CA). Examples of trusted CAs include https://letsencrypt.org/[Let's Encrypt], Thawte or GoDaddy. -What this means is that their own root certificates are known and trusted by operating systems, so any certificate that they sign will also be trusted. +What this means is that their own root certificates are known and trusted by operating systems, so any certificate that they sign will also be trusted. -Hence, the benefit of a certificate obtained from a trusted CA is that it will be trusted by any SSL client. +Hence, the benefit of a certificate obtained from a trusted CA is that it will be trusted by any SSL client. === Creating your own self-signed certificate Unlike a CA-signed cert, a self-signed cert isn't intrinsically trustworthy: a client can't tell who you are by examining the cert, because no recognized authority has vouched for it. -But a self-signed cert is still unique (only you, as the holder of the private key, can operate a server using that cert), and it still allows the connection to be encrypted. +But a self-signed cert is still unique (only you, as the holder of the private key, can operate a server using that cert), and it still allows the connection to be encrypted. It's easy to create a self-signed certificate using the openssl command-line tool and these directions. -In a nutshell, you just need to run these commands: +In a nutshell, you just need to run these commands: [source,bash] ---- - $ openssl genrsa -out privkey.pem 2048 $ openssl req -new -x509 -sha256 -key privkey.pem -out cert.pem -days 1095 ---- The second command is interactive and will ask you for information like country and city name that goes into the X.509 certificate. You can put whatever you want there; the only important part is the field `Common Name (e.g. server FQDN or YOUR name)` which needs to be the exact _hostname_ that clients will reach your server at. -The client will verify that this name matches the hostname in the URL it's trying to access, and will reject the connection if it doesn't. +The client will verify that this name matches the hostname in the URL it's trying to access, and will reject the connection if it doesn't. 
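+
+If you want to double-check which Common Name ended up in a certificate (for example, the `cert.pem` file created below), one way is to print its subject:
+
+[source,bash]
+----
+$ openssl x509 -in cert.pem -noout -subject
+----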
-The tool will then create two files: `privkey.pem` (the private key) and `cert.pem` (the public certificate.) +The tool will then create two files: `privkey.pem` (the private key) and `cert.pem` (the public certificate.) -To create a copy of the cert in binary DER format (often stored in a ".cer" file), do this: +To create a copy of the cert in binary DER format (often stored in a ".cer" file), do this: [source,bash] ---- - $ openssl x509 -inform PEM -in cert.pem -outform DER -out cert.cer ---- @@ -53,13 +53,12 @@ $ openssl x509 -inform PEM -in cert.pem -outform DER -out cert.cer Whichever way you obtained the certificate, you will now have a private key and an X.509 certificate. Ensure that they're in separate files and in PEM format, and put them in a directory that's readable by the Sync Gateway process. -The private key is very sensitive (it's not encrypted) so make sure the file isn't readable by unauthorized processes. +The private key is very sensitive (it's not encrypted) so make sure the file isn't readable by unauthorized processes. -Then just add the `"SSLCert"` and `"SSLKey"` properties to your Sync Gateway configuration file. +Then just add the `"SSLCert"` and `"SSLKey"` properties to your Sync Gateway configuration file. [source,javascript] ---- - { "SSLCert": "cert.pem", "SSLKey": "privkey.pem", @@ -72,4 +71,4 @@ Then just add the `"SSLCert"` and `"SSLKey"` properties to your Sync Gateway con } ---- -Start Sync Gateway and access the public port over `https` on https://localhost:4984. \ No newline at end of file +Start Sync Gateway and access the public port over `https` on \https://localhost:4984. diff --git a/modules/ROOT/pages/data-routing.adoc b/modules/ROOT/pages/data-routing.adoc index a3a6d9e0a..9aef773f6 100644 --- a/modules/ROOT/pages/data-routing.adoc +++ b/modules/ROOT/pages/data-routing.adoc @@ -1,129 +1,136 @@ -= Data routing += Data Routing +:idprefix: +:idseparator: - +:url-httpie: https://github.com/jakubroztocil/httpie Sync Gateway uses channels to make it easy to share a database between a large number of users and control access to the database. Channels are the intermediaries between documents and users. Every document in the database belongs to a set of channels, and every user is allowed to access a set of channels. -You use channels to: +You use channels to: -* Partition the data set. -* Authorize users to access documents. -* Minimize the amount of data synced down to devices. +* Partition the data set. +* Authorize users to access documents. +* Minimize the amount of data synced down to devices. - -[[_introduction_to_channels]] == Introduction to channels In Sync Gateway, a *channel* is like a combination of a tag and a message-queue. -Channels have relationships to both documents and users: - -* Every document is associated with a set of channels. From the document's perspectives, the channels are like tags that can be used to identify its type, purpose, accessibility, etc. The app-defined sync function is responsible for assigning every incoming document revision a set of channels as it's saved to the database. -* Every user and role has a set of channels that they're allowed to read. A user can only read documents that are in at least one of the user's channels (or the channels of roles that user has.) User/role channel access can be assigned directly through the admin API, or via the sync function when a document is updated. -* A Couchbase Lite "pull" replication can optionally specify what channels it wants to receive documents from. 
(If it doesn't, it gets all channels the user has access to.) Client apps can use this ability to intelligently sync with a subset of the available documents from the database. +Channels have relationships to both documents and users: + +* Every document is associated with a set of channels. +From the document's perspectives, the channels are like tags that can be used to identify its type, purpose, accessibility, etc. +The app-defined sync function is responsible for assigning every incoming document revision a set of channels as it's saved to the database. +* Every user and role has a set of channels that they're allowed to read. +A user can only read documents that are in at least one of the user's channels (or the channels of roles that user has.) +User/role channel access can be assigned directly through the admin API, or via the sync function when a document is updated. +* A Couchbase Lite "pull" replication can optionally specify what channels it wants to receive documents from. +(If it doesn't, it gets all channels the user has access to.) +Client apps can use this ability to intelligently sync with a subset of the available documents from the database. There's not much to a channel besides its name. Channels come into existence as documents are assigned to them. -Channels with no documents assigned to them are empty. +Channels with no documents assigned to them are empty. -Valid channel names consist of text letters ``[A–Z, a–z]``, digits ``[0–9]``, and a few special characters ``[= + / . , _ @]``. -Channel names are case-sensitive. +Valid channel names consist of text letters `[A–Z, a–z]`, digits `[0–9]`, and a few special characters `[= + / . , _ @]`. +Channel names are case-sensitive. == Developing channels -Now that you've learned what a channel is as a concept, let's put it to practice and see how a channel definition is applied and how it affects the system. +Now that you've learned what a channel is as a concept, let's put it to practice and see how a channel definition is applied and how it affects the system. === Mapping documents to channels You assign documents to `channels` either by adding a channels property to the document or by using a sync function. -No matter which option you choose, the channel assignment is implicit--the content of the document determines what channels it belongs to. +No matter which option you choose, the channel assignment is implicit--the content of the document determines what channels it belongs to. ==== Using a channels property Adding a `channels` property to each document is the easiest way to map documents to channels. The `channels` property is an array of strings that contains the names of the channels to which the document belongs. -If you do not include a `channels` property in a document, the document does not appear in any channels. +If you do not include a `channels` property in a document, the document does not appear in any channels. ==== Using a sync function Creating a sync function is a more flexible way to map documents to channels. A sync function is a JavaScript function that takes a document body as input and, based on the document content, decides what channels to assign the document to. -The sync function cannot reference any external state and must return the same results every time it's called on the same input. +The sync function cannot reference any external state and must return the same results every time it's called on the same input. You specify the sync function in the configuration file for the database. 
-Each sync function applies to one database.
+Each sync function applies to one database.

-To add the current document to a channel, the sync function calls the special function ``channel``, which takes one or more channel names (or arrays of channel names) as arguments.
-For convenience, `channel` ignores `null` or `undefined` argument values.
+To add the current document to a channel, the sync function calls the special function `channel`, which takes one or more channel names (or arrays of channel names) as arguments.
+For convenience, `channel` ignores `null` or `undefined` argument values.

-Defining a sync function overrides the default channel mapping mechanism (the document's `channels` property is ignored). The default mechanism is equivalent to the following simple sync function:
+Defining a sync function overrides the default channel mapping mechanism (the document's `channels` property is ignored).
+The default mechanism is equivalent to the following simple sync function:

 [source,javascript]
 ----
-
 function (doc) {
   channel (doc.channels);
 }
 ----

-Learn more about defining a sync function in our link:sync-function-api.html[Sync Function API guide].
+Learn more about defining a sync function in our xref:sync-function-api.adoc[Sync Function API guide].

 === Replicating channels to Couchbase Lite

 If Couchbase Lite doesn't specify any channels to replicate, it gets all the channels to which its user account has access.
-Due to this behavior, most apps do not have to specify a channels filter--instead they can just do the default sync configuration on the specify the Sync Gateway database URL with no filter in order to replicate the channels of interest.
+Due to this behavior, most apps do not have to specify a channels filter--instead they can just use the default sync configuration, specifying the Sync Gateway database URL with no filter, in order to replicate the channels of interest.

-To replicate channels to Couchbase Lite, you configure the replication to use a filter named `sync_gateway/bychannel` with a filter parameter named ``channels``.
+To replicate channels to Couchbase Lite, you configure the replication to use a filter named `sync_gateway/bychannel` with a filter parameter named `channels`.
 The value of the `channels` parameter is a comma-separated list of channels to fetch.
-The replication from Sync Gateway now pulls only documents tagged with those channels.
+The replication from Sync Gateway now pulls only documents tagged with those channels.

 === Removing a replicated document

 A document can be removed from a channel without being deleted.
-For example, this can happen when a new revision is not added to one or more channels that the previous revision was in.
+For example, this can happen when a new revision is not added to one or more channels that the previous revision was in.

 Subscribers (downstream databases pulling from this database) automatically handle the change.
-Sync Gateway's `\_changes` feed includes one more revision of a document after it stops matching a channel.
-Couchbase Lite creates a special tombstone revision, adding `\_removed:true` property to the entry when this happens.
+Sync Gateway's `_changes` feed includes one more revision of a document after it stops matching a channel.
+Couchbase Lite creates a special tombstone revision, adding a `_removed:true` property to the entry when this happens.
 This algorithm ensures that any views running in Couchbase Lite do not include an obsolete revision.
-The app code should use views to filter the results rather than just assuming that all documents in its local database are relevant.
+The app code should use views to filter the results rather than just assuming that all documents in its local database are relevant.

-If a user's access to a channel is revoked or Couchbase Lite stops syncing with a channel, documents that have already been synced are not removed from the user's device.
+If a user's access to a channel is revoked or Couchbase Lite stops syncing with a channel, documents that have already been synced are not removed from the user's device.

 === Authorizing user access

 The `all_channels` property of a user account determines which channels the user can access.
-Its value is derived from the union of:
+Its value is derived from the union of:

-* The user's `admin_channels` property, which is settable via the admin REST API. [//]: # "TODO: Link to Programmatic Authorization section."
-* The channels that user has been given access to by `access()` calls from sync functions invoked for current revisions of documents (see Programmatic Authorization).
-* The `all_channels` properties of all roles the user belongs to, which are themselves computed according to the above two rules.
+* The user's `admin_channels` property, which is settable via the admin REST API.
+* The channels that the user has been given access to by `access()` calls from sync functions invoked for current revisions of documents (see <<programmatic-authorization>>).
+* The `all_channels` properties of all roles the user belongs to, which are themselves computed according to the above two rules.

-The only documents a user can access are those whose current revisions are assigned to one or more channels the user has access to:
+The only documents a user can access are those whose current revisions are assigned to one or more channels the user has access to:

-* A GET request to a document not assigned to one or more of the user's available channels fails with a 403 error.
-* The `\_all_docs` property is filtered to return only documents that are visible to the user.
-* The `\_changes` property ignores requests (via the `channels` parameter) for channels not visible to the user.
+* A GET request to a document not assigned to one or more of the user's available channels fails with a 403 error.
+* The `_all_docs` property is filtered to return only documents that are visible to the user.
+* The `_changes` property ignores requests (via the `channels` parameter) for channels not visible to the user.

-Write protection -- access control of document PUT or DELETE requests--is done by document validation.
-This is handled in the sync function rather than a separate validation function.
+Write protection--access control of document PUT or DELETE requests--is done by document validation.
+This is handled in the sync function rather than a separate validation function.

 After a user is granted access to a new channel, the changes feed incorporates all existing documents in that channel, even those from earlier sequences than the current request's `since` parameter.
-That way the next pull request retrieves all documents to which the user now has access.
+That way the next pull request retrieves all documents to which the user now has access.

 ==== Programmatic Authorization

 Documents can grant users access to channels.
-This is done by writing a sync function that recognizes such documents and calls a special `access()` function to grant access.
+This is done by writing a sync function that recognizes such documents and calls a special `access()` function to grant access. The `access()` function takes the following parameters: a user name or array of user names and a channel name or array of channel names. -For convenience, null values are ignored (treated as empty arrays). +For convenience, null values are ignored (treated as empty arrays). -A typical example is a document that represents a shared resource (like a chat room or photo gallery). The document has a `members` property that lists the users who can access the resource. -If the documents belonging to the resource are all tagged with a specific channel, then the following sync function can be used to detect the membership property and assign access to the users listed in it: +A typical example is a document that represents a shared resource (like a chat room or photo gallery). +The document has a `members` property that lists the users who can access the resource. +If the documents belonging to the resource are all tagged with a specific channel, then the following sync function can be used to detect the membership property and assign access to the users listed in it: [source,javascript] ---- - function(doc) { if (doc.type == "chatroom") { access (doc.members, doc.channel_id) @@ -131,64 +138,60 @@ function(doc) { } ---- -In the example, a chat room is represented by a document with a `type` property set to ``chatroom``. -The `channel_id` property names the associated channel (with which the actual chat messages are tagged) and the `members` property lists the users who have access. +In the example, a chat room is represented by a document with a `type` property set to `chatroom`. +The `channel_id` property names the associated channel (with which the actual chat messages are tagged) and the `members` property lists the users who have access. The `access()` function can also operate on roles. If a user name string begins with `role:` then the remainder of the string is interpreted as a role name. -There's no ambiguity here, because ":" is an illegal character in a user or role name. +There's no ambiguity here, because ":" is an illegal character in a user or role name. -Because anonymous requests are authenticated as the user "GUEST", you can make a channel and its documents public by calling `access` with a username of ``GUEST``. +Because anonymous requests are authenticated as the user "GUEST", you can make a channel and its documents public by calling `access` with a username of `GUEST`. ==== Authorizing Document Updates Sync functions can also authorize document updates. -A sync function can reject the document by throwing an exception: +A sync function can reject the document by throwing an exception: [source,javascript] ---- - throw ({forbidden: "error message"}) ---- -A 403 Forbidden status and the given error string is returned to the client. +A 403 Forbidden status and the given error string is returned to the client. To validate a document you often need to know which user is changing it, and sometimes you need to compare the old and new revisions. -To get access to the old revision, declare the sync function like this: +To get access to the old revision, declare the sync function like this: [source,javascript] ---- - function(doc, oldDoc) { ... } ---- -`oldDoc` is the old revision of the document (or empty if this is a new document). +`oldDoc` is the old revision of the document (or empty if this is a new document). 
-You can validate user privileges by using the helper functions: ``requireUser``, ``requireRole``, or ``requireAccess``. -Here's some examples of how you can use the helper functions: +You can validate user privileges by using the helper functions: `requireUser`, `requireRole`, or `requireAccess`. +Here are some examples of how you can use the helper functions: [source,javascript] ---- - // throw an error if username is not "snej" requireUser("snej") // throw if username is not in the list -requireUser(["snej", "jchris", "tleyden"]) +requireUser(["snej", "jchris", "tleyden"]) // throw an error unless the user has the "admin" role -requireRole("admin") +requireRole("admin") // throw an error unless the user has one of those roles -requireRole(["admin", "old-timer"]) +requireRole(["admin", "old-timer"]) // throw an error unless the user has access to read the "events" channel -requireAccess("events") +requireAccess("events") // throw an error unless the can read one of these channels requireAccess(["events", "messages"]) ---- -Here's a simple sync function that validates whether the user is modifying a document in the old document's `owner` list: +Here's a simple sync function that validates whether the user is modifying a document in the old document's `owner` list: [source,javascript] ---- - function (doc, oldDoc) { if (oldDoc) { requireUser(oldDoc.owner); // may throw({forbidden: "wrong user"}) @@ -198,18 +201,25 @@ function (doc, oldDoc) { == Special channels -There are a two special channels that are automatically created when Sync Gateway starts: +There are two special channels that are automatically created when Sync Gateway starts: -* The **public channel**, written as `!` in the sync function. Documents added to this channel are visible to any user (i.e all users are automatically granted access to the `!` channel). This channel can be used as a public distribution channel. -* The *star* channel, written as `\*` in the sync function. All documents are added to this channel. So any user that is granted access to the `\*` channel can access all the documents in the database. A user can be given access to the *star* channel through the sync function or in the link:config-properties.html#foo_user[configuration file]. -** *Note 1:* Sync Gateway automatically assigns documents to the all docs channel. Explicitly assigning a document to it in the Sync Function (i.e ``channel('*')``) will result in unexpected behavior such as receiving the document twice on the client side. -** *Note 2:* The *star* channel doesn't mean that the user is granted access to all channels. It is only being granted access to 1 channel which contains **all documents**. This distinction is important when using the link:sync-function-api.html#requireaccesschannels[requireAccess()] Sync Function method. +* The *public channel*, written as `!` in the sync function. +Documents added to this channel are visible to any user (i.e. all users are automatically granted access to the `!` channel). +This channel can be used as a public distribution channel. +* The *star* channel, written as `+*+` in the sync function. +All documents are added to this channel. +So any user that is granted access to the `+*+` channel can access all the documents in the database. +A user can be given access to the *star* channel through the sync function or in the xref:config-properties.adoc#databases-foo_db-users-foo_user[configuration file]. +** *Note 1:* Sync Gateway automatically assigns documents to the all docs channel.
+Explicitly assigning a document to it in the Sync Function (i.e `channel('*')`) will result in unexpected behavior such as receiving the document twice on the client side. +** *Note 2:* The *star* channel doesn't mean that the user is granted access to all channels. +It is only being granted access to 1 channel which contains *all documents*. +This distinction is important when using the xref:sync-function-api.html#requireaccess-channels[`requireAccess()`] Sync Function method. -The following Sync Function maps the document to the public channel if it contains an `isPublic` property set to true and grants users with the 'admin' role access to the all docs channel. +The following Sync Function maps the document to the public channel if it contains an `isPublic` property set to true and grants users with the 'admin' role access to the all docs channel. [source,javascript] ---- - function (doc, oldDoc) { if (doc.isPublic) { channel('!'); @@ -226,12 +236,11 @@ function (doc, oldDoc) { === Inspecting document channels You can use the admin REST API to see the channels that documents are assigned to. -Issue an `\_all_docs` request, and add the query parameter `?channels=true` to the URL. -Here's a command-line example that uses the https://github.com/jkbrzt/httpie[HTTPie] tool (like a souped-up curl) to look at the channels of the document ``foo``: +Issue an `_all_docs` request, and add the query parameter `?channels=true` to the URL. +Here's a command-line example that uses the {url-httpie}[HTTPie] tool (like a souped-up curl) to look at the channels of the document `foo`: [source,bash] ---- - $ http POST 'http://localhost:4985/db/_all_docs?channels=true' keys:='["foo"]' HTTP/1.1 200 OK @@ -243,32 +252,30 @@ Server: Couchbase Sync Gateway/1.00 { "rows": [ { - "id": "foo", - "key": "foo", + "id": "foo", + "key": "foo", "value": { "channels": [ - "short", + "short", "word" - ], + ], "rev": "1-86effb929acbf953905dd0e3974f6051" } } - ], - "total_rows": 16, + ], + "total_rows": 16, "update_seq": 26 } ---- -The output shows that the document is in the channels `short` and ``word``. +The output shows that the document is in the channels `short` and `word`. -[[_inspecting_userrole_channels]] === Inspecting user/role channels -You can use the admin REST API to see what channels a user or role has access to: +You can use the admin REST API to see what channels a user or role has access to: [source,javascript] ---- - $ curl http://localhost:4985/db/_user/pupshaw { @@ -289,4 +296,4 @@ $ curl http://localhost:4985/db/_user/pupshaw } ---- -The output shows that the user `pupshaw` has access to channels `all` and ``hoopy``. \ No newline at end of file +The output shows that the user `pupshaw` has access to channels `all` and `hoopy`. diff --git a/modules/ROOT/pages/database-offline.adoc b/modules/ROOT/pages/database-offline.adoc index cc21066cf..bb3b2e078 100644 --- a/modules/ROOT/pages/database-offline.adoc +++ b/modules/ROOT/pages/database-offline.adoc @@ -1,52 +1,48 @@ -= Taking databases offline and online += Taking Databases Offline and Online -Sync Gateway 1.2 introduces functionality that permits a specific database to be taken offline and brought back online, without requiring that the Sync Gateway instance be stopped and without affecting other databases that are served by the instance. 
+Sync Gateway 1.2 introduces functionality that permits a specific database to be taken offline and brought back online, without requiring that the Sync Gateway instance be stopped and without affecting other databases that are served by the instance. This online/offline status for a database is with respect to a specific Sync Gateway instance. -The status does not apply to other Sync Gateway instances, unless coordinated operations there have brought the databases on those instances into the same state. +The status does not apply to other Sync Gateway instances, unless coordinated operations there have brought the databases on those instances into the same state. == Motivation and use cases -Specific uses for the database offline/online functionality include: - -* Taking a database offline, without affecting other databases. -* Changing configuration properties for a database (while it is offline), without needing to restart Sync Gateway. -* Resynchronizing a database while it is offline. -* Detecting a lost DCP or TAP feed, and taking the database offline automatically. -* Creating a database in an offline state, so that the start of service delivery for the database can be postponed or coordinated across Sync Gateway instances. -* Performing a Couchbase Server upgrade. +Specific uses for the database offline/online functionality include: +* Taking a database offline, without affecting other databases. +* Changing configuration properties for a database (while it is offline), without needing to restart Sync Gateway. +* Resynchronizing a database while it is offline. +* Detecting a lost DCP or TAP feed, and taking the database offline automatically. +* Creating a database in an offline state, so that the start of service delivery for the database can be postponed or coordinated across Sync Gateway instances. +* Performing a Couchbase Server upgrade. == Sync Gateway -* Taking a database offline: link:admin-rest-api.html#!/database/post_db_offline[+POST /{db}/_offline+] -* Taking a database online: link:admin-rest-api.html#!/database/post_db_online[+POST /{db}/_online+] +* Taking a database offline: xref:admin-rest-api.adoc#/database/post\__db___offline[POST /+{db}+/_offline] +* Taking a database online: xref:admin-rest-api.adoc#/database/post\__db___online[POST /+{db}+/_online] By default, when Sync Gateway starts, it brings all databases that are defined in the configuration file online. -To keep a database offline when Sync Gateway starts, you can add the `offline` configuration property to the database configuration properties for the database, with the value `true` (see link:config-properties.html#foo_db[database properties]). +To keep a database offline when Sync Gateway starts, you can add the `offline` configuration property to the database configuration properties for the database, with the value `true` (see xref:config-properties.adoc#databases-foo_db[database properties]). -Later, to bring the database online, you can use the `+POST /{db}/_online+` Admin REST API request. +Later, to bring the database online, you can use the `POST /+{db}+/_online` Admin REST API request. === Offline state triggers Sync Gateway will take a database offline automatically if specific conditions occur. Specifically, if Sync Gateway detects that the DCP feed or TAP feed for a database has been lost, then Sync Gateway takes the database offline automatically, so that the problem can be investigated. 
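+For reference, the `offline` startup property described earlier in this section sits inside the database's configuration block.
+A minimal sketch follows; the database name, server address and bucket are placeholders, and only the `offline` property itself is taken from the documentation above:
+
+[source,javascript]
+----
+{
+  "databases": {
+    "mydatabase": {
+      "server": "http://localhost:8091",
+      "bucket": "mydatabase",
+      "offline": true
+    }
+  }
+}
+----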
-When the cause is known and has been corrected, you can use an Admin REST API request to bring the database back online. - -{% if site.version == '1.4' %} +When the cause is known and has been corrected, you can use an Admin REST API request to bring the database back online. -The exception to this behavior is when running with a link:accelerator.html[Sync Gateway Accelerator] architecture, in which case the Sync Gateway will not open a DCP or TAP feed, and therefore not go into the offline state. - -{% endif %} +// TODO locate migrated accelerator file +The exception to this behavior is when running with a https://developer.couchbase.com/documentation/mobile/current/guides/sync-gateway/accelerator.html[Sync Gateway Accelerator] architecture, in which case the Sync Gateway will not open a DCP or TAP feed, and therefore not go into the offline state. === State diagram -This state diagram represents the states for Sync Gateway and for the connection between Sync Gateway and a Couchbase Server database. - +This state diagram represents the states for Sync Gateway and for the connection between Sync Gateway and a Couchbase Server database. image::state-diagram-offline-12.png[] -In the state diagram: +In the state diagram: -* To the left of the gray dashed line, starting or stopping a Sync Gateway instance affects the connections to all of the databases that the instance serves. -* To the right of the gray dashed line, you perform operations on specific databases. For example, two databases could be online, while a third database could be taken offline, resynchronized, and then brought back online. +* To the left of the gray dashed line, starting or stopping a Sync Gateway instance affects the connections to all of the databases that the instance serves. +* To the right of the gray dashed line, you perform operations on specific databases. +For example, two databases could be online, while a third database could be taken offline, resynchronized, and then brought back online. diff --git a/modules/ROOT/pages/deployment-considerations.adoc b/modules/ROOT/pages/deployment-considerations.adoc index 7cf1ad7d7..a90856800 100644 --- a/modules/ROOT/pages/deployment-considerations.adoc +++ b/modules/ROOT/pages/deployment-considerations.adoc @@ -1,84 +1,88 @@ -= Deployment considerations += Deployment Considerations +:url-curl: https://curl.haxx.se/ +:url-httpie: https://github.com/jakubroztocil/httpie -The following guide describes a set of recommended best practices for a production deployment of Couchbase Mobile. +The following guide describes a set of recommended best practices for a production deployment of Couchbase Mobile. == Security *Authentication* In a Couchbase Mobile production deployment, administrators typically perform operations on the Admin REST API. -If Sync Gateway is deployed on an internal network, you can bind the link:config-properties.html#server[adminInterface] of Sync Gateway to the internal network. -In this case, the firewall should also be configured to allow external connections to the public link:config-properties.html#server[interface] port. +If Sync Gateway is deployed on an internal network, you can bind the xref:config-properties.adoc#server[adminInterface] of Sync Gateway to the internal network. +In this case, the firewall should also be configured to allow external connections to the public xref:config-properties.adoc#server[interface] port. 
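+For example, a server-level configuration that keeps the Admin REST API bound to the loopback (or an internal) address while leaving the public interface reachable might look like the following sketch.
+The `interface` and `adminInterface` names are the documented properties; the addresses and the database block are illustrative placeholders:
+
+[source,javascript]
+----
+{
+  "interface": ":4984",
+  "adminInterface": "127.0.0.1:4985",
+  "databases": {
+    "mydatabase": {
+      "server": "http://localhost:8091",
+      "bucket": "mydatabase"
+    }
+  }
+}
+----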
-To access the Admin REST API from an entirely different network or from a remote desktop we recommend to use https://whatbox.ca/wiki/SSH_Tunneling[SSH tunnelling]. +To access the Admin REST API from an entirely different network or from a remote desktop, we recommend using https://whatbox.ca/wiki/SSH_Tunneling[SSH tunneling]. *Authorization* In addition to the Admin REST API, a user can be assigned to a role with additional privileges. The role and the user assigned to it can be created in the configuration file. -Then, the Sync Function's link:sync-function-api.html#requirerolerolename[requireRole()] method can be used to allow certain operations only if the user has that role. +Then, the Sync Function's xref:sync-function-api.adoc#requirerole-rolename[requireRole()] method can be used to allow certain operations only if the user has that role. *Data Model Validation* In a NoSQL database, it is the application's responsibility to ensure that the documents are created in accordance with the data model adopted throughout the system. -As an additional check, the Sync Function's link:sync-function-api.html#throw[throw()] method can be used to reject documents that do not follow the pre-defined data model. +As an additional check, the Sync Function's xref:sync-function-api.adoc#throw[throw()] method can be used to reject documents that do not follow the pre-defined data model. *HTTPS* You can run Sync Gateway behind a reverse proxy, such as NGINX, which supports HTTPS connections and route internal traffic to Sync Gateway over HTTP. -The advantage of this approach is that NGINX can proxy both HTTP and HTTPS connections to a single Sync Gateway instance. +The advantage of this approach is that NGINX can proxy both HTTP and HTTPS connections to a single Sync Gateway instance. -Alternatively, Sync Gateway can be link:configuring-ssl.html[configured] to only allow secure HTTPS connections, if you want to support both HTTP and HTTPS connections you will need to run two separate instances of Sync Gateway. +Alternatively, Sync Gateway can be xref:configuring-ssl.adoc[configured] to only allow secure HTTPS connections; if you want to support both HTTP and HTTPS connections, you will need to run two separate instances of Sync Gateway. *Database Encryption* Database Encryption is not currently supported in 2.0. -This feature will be available in the next version of Couchbase Lite. +This feature will be available in the next version of Couchbase Lite. == Managing Tombstones -By design, when a document is deleted in Couchbase Mobile, they are not actually deleted from the database, but simply marked as deleted (by setting the `\_deleted` property). The reason that documents are not immediately removed is to allow all devices to see that they have been deleted - particularly in the case of devices that may not be online continuously and therefore not syncing regularly. +By design, when a document is deleted in Couchbase Mobile, it is not actually deleted from the database, but simply marked as deleted (by setting the `_deleted` property). The reason that documents are not immediately removed is to allow all devices to see that they have been deleted - particularly in the case of devices that may not be online continuously and therefore not syncing regularly. To actually remove the documents permanently, you need to _purge_ them. -This can be done in both Couchbase Lite and via the link:admin-rest-api.html#!/document/post_db_purge[Sync Gateway REST API].
+This can be done in both Couchbase Lite and via the xref:admin-rest-api.adoc#!/document/post_db_purge[Sync Gateway REST API]. Beginning in Couchbase Mobile 1.3, documents can have a Time To Live (TTL) value - when the TTL expires the document will be purged from the local database. Note that this does not affect the copy of the document on any other device. -This concept is covered in more detail in the link:../../couchbase-lite/native-api/document/index.html#document-expiration-ttl[Document] guide. +This concept is covered in more detail in the +// This is the original link -> link:../../couchbase-lite/native-api/document/index.html#document-expiration-ttl[Document] guide. It redirects to the URL used below. (Unless you're in version 1.3, then the link does take you to an exact location with a matching section title) +// TODO: determine correct target or migrate page if missing +https://developer.couchbase.com/documentation/mobile/2.0/couchbase-lite/index.html[Document] guide. Depending on the use case, data model and many more variables, there can be a need to proactively manage these tombstones as they are created. For example, you might decide that if a document is deleted on a Couchbase Lite client, that you want to purge the document (on that device) as soon as the delete has been successfully replicated out to Sync Gateway. Then later on Sync Gateway, set an expiration so that they are automatically purged after a set period (perhaps a week, or a month, to allow for all other devices to sync and receive the delete notifications) - more on this later. -If a document is deleted on the Sync Gateway itself (say by a batch process or REST API client), you may similarly want to set a TTL, and on the Couchbase Lite devices you can monitor the link:../../couchbase-lite/native-api/database/index.html#database-notifications[Database Change Notifications] and purge locally whenever a document is marked as deleted. +If a document is deleted on the Sync Gateway itself (say by a batch process or REST API client), you may similarly want to set a TTL, and on the Couchbase Lite devices you can monitor the +//This is the original link -> link:../../couchbase-lite/native-api/database/index.html#database-notifications[Database Change Notifications]. It redirects to the URL used below. (Unless you're in version 1.3, then the link does take you to an exact location with a matching section title) +// TODO: determine correct target or migrate page if missing +https://developer.couchbase.com/documentation/mobile/2.0/couchbase-lite/index.html[Database Change Notifications] and purge locally whenever a document is marked as deleted. == Log Rotation -{% if site.version == '1.5' %} - === Built-in log rotation By default, Sync Gateway outputs the logs to standard out with the "HTTP" log key and can also output logs to a file. -Prior to 1.4, the two main configuration options were `log` and `logFilePath` at the root of the configuration file. +Prior to 1.4, the two main configuration options were `log` and `logFilePath` at the root of the configuration file. [source,javascript] ---- - { "log": ["*"], "logFilePath": "/var/log/sync_gateway/sglogfile.log" } ---- -In Couchbase Mobile 1.4, Sync Gateway can now be configured to perform log rotation in order to minimize disk space usage. +In Couchbase Mobile 1.4, Sync Gateway can now be configured to perform log rotation in order to minimize disk space usage. ==== Log rotation configuration The log rotation configuration is specified under the `logging` key. 
-The following example demonstrates where the log rotation properties reside in the configuration file. +The following example demonstrates where the log rotation properties reside in the configuration file. [source,javascript] ---- - { "logging": { "default": { @@ -103,18 +107,17 @@ The following example demonstrates where the log rotation properties reside in t } ---- -As shown above, the `logging` property must contain a single named logging appender called ``default``. -Note that if the "logging" property is specified, it will override the top level `log` and `logFilePath` properties. +As shown above, the `logging` property must contain a single named logging appender called `default`. +Note that if the "logging" property is specified, it will override the top level `log` and `logFilePath` properties. -The descriptions and default values for each logging property can be found on the link:config-properties.html[Sync Gateway configuration] page. +The descriptions and default values for each logging property can be found on the xref:config-properties.adoc[Sync Gateway configuration] page. ==== Example Output -If Sync Gateway is running with the configuration shown above, after a total of 3.5 MB of log data, the contents of the `/var/log/sync_gateway` directory would have 3 files because `maxsize` is set to 1 MB. +If Sync Gateway is running with the configuration shown above, after a total of 3.5 MB of log data, the contents of the `/var/log/sync_gateway` directory would have 3 files because `maxsize` is set to 1 MB. [source,bash] ---- - /var/log/sync_gateway ├── sglogfile.log ├── sglogfile-2017-01-25T23-35-23.671.log @@ -123,33 +126,30 @@ If Sync Gateway is running with the configuration shown above, after a total of ==== Windows Configuration -On MS Windows `logFilePath` supports the following path formats. +On MS Windows `logFilePath` supports the following path formats. [source,javascript] ---- - "C:/var/tmp/sglogfile.log" `C:\var\tmp\sglogfile.log` `/var/tmp/sglogfile.log` "/var/tmp/sglogfile.log" ---- -Log rotation will not work if `logFilePath` is set to the path below as it is reserved for use by the Sync Gateway Windows service wrapper. +Log rotation will not work if `logFilePath` is set to the path below as it is reserved for use by the Sync Gateway Windows service wrapper. [source,bash] ---- - C:\Program Files (x86)\Couchbase\var\lib\couchbase\logs\sync_gateway_error.log ---- ==== Deprecation notice The current proposal is to remove the top level `log` and `logFilePath` properties in Sync Gateway 2.0. -For users that want to migrate to the new logging config to write to a log file but do not need log rotation they should use a default logger similar to the following: +For users that want to migrate to the new logging config to write to a log file but do not need log rotation they should use a default logger similar to the following: [source,javascript] ---- - { "logging": { "default": { @@ -161,91 +161,83 @@ For users that want to migrate to the new logging config to write to a log file } ---- -{% endif %} - === OS log rotation -In production environments it is common to rotate log files to prevent them from taking too much disk space, and to support log file archival. +In production environments it is common to rotate log files to prevent them from taking too much disk space, and to support log file archival. 
-By default Sync gateway will write log statements to stderr, normally stderr is redirected to a log file by starting Sync Gateway with a command similar to the following: +By default, Sync Gateway writes log statements to stderr. Normally, stderr is redirected to a log file by starting Sync Gateway with a command similar to the following: [source,bash] ---- - sync_gateway sync_gateway.json 2>> sg_error.log ---- On Linux the logrotate tool can be used to monitor log files and rotate them at fixed time intervals or when they reach a certain size. -Below is an example of a logrotate configuration that will rotate the Sync Gateway log file once a day or if it reaches 10M in size. +Below is an example of a logrotate configuration that will rotate the Sync Gateway log file once a day or if it reaches 10M in size. [source] ---- - -/home/sync_gateway/logs/*.log { - daily - rotate 1 - size 10M - delaycompress - compress - notifempty +/home/sync_gateway/logs/*.log { + daily + rotate 1 + size 10M + delaycompress + compress + notifempty missingok ---- The log rotation is achieved by renaming the log file with an appended timestamp. The idea is that Sync Gateway should recreate the default log file and start writing to it again. The problem is Sync Gateway will follow the renamed file and keep writing to it until Sync gateway is restarted. -By adding the copy truncate option to the logrotate configuration, the log file will be rotated by making a copy of the log file, and then truncating the original log file to zero bytes. +By adding the `copytruncate` option to the logrotate configuration, the log file will be rotated by making a copy of the log file, and then truncating the original log file to zero bytes. [source] ---- - -/home/sync_gateway/logs/*.log { - daily - rotate 1 +/home/sync_gateway/logs/*.log { + daily + rotate 1 size 10M copytruncate - delaycompress - compress - notifempty - missingok + delaycompress + compress + notifempty + missingok } ---- -Using this approach there is a possibility of loosing log entries between the copy and the truncate, on a busy Sync Gateway instance or when verbose logging is configured the number of lost entries could be large. +Using this approach, there is a possibility of losing log entries between the copy and the truncate; on a busy Sync Gateway instance, or when verbose logging is configured, the number of lost entries could be large. -In Sync Gateway 1.1.0 a new configuration option has been added that gives Sync Gateway control over the log file rather than relying on **stderr**. -To use this option call Sync Gateway as follows: +In Sync Gateway 1.1.0, a new configuration option has been added that gives Sync Gateway control over the log file rather than relying on *stderr*. +To use this option, call Sync Gateway as follows: [source,bash] ---- - sync_gateway -logFilePath=sg_error.log sync_gateway.json ---- -The *logFilePath* property can also be set in the configuration file at the link:config-properties.html#server-configuration[server level]. +The *logFilePath* property can also be set in the configuration file at the xref:config-properties.adoc#server-configuration[server level]. -If the option is not used then Sync Gateway uses the existing stderr logging behaviour. +If the option is not used, Sync Gateway uses the existing stderr logging behavior. When the option is passed Sync Gateway will attempt to open and write to a log file at the path provided.
-If a Sync Gateway process is sent the `SIGHUP` signal it will close the open log file and then reopen it, on Linux the `SIGHUP` signal can be manually sent using the following command: +If a Sync Gateway process is sent the `SIGHUP` signal, it will close the open log file and then reopen it. On Linux, the `SIGHUP` signal can be manually sent using the following command: [source,bash] ---- - pkill -HUP sync_gateway ---- -This command can be added to the logrotate configuration using the 'postrotate' option: +This command can be added to the logrotate configuration using the 'postrotate' option: [source] ---- - /home/sync_gateway/logs/*.log { - daily - rotate 1 + daily + rotate 1 size 10M - delaycompress - compress - notifempty + delaycompress + compress + notifempty missingok postrotate /usr/bin/pkill -HUP sync_gateway > /dev/null @@ -253,97 +245,92 @@ This command can be added to the logrotate configuration using the 'postrotate' } ---- -After renaming the log file logrotate will send the `SIGHUP` signal to the `sync_gateway` process, Sync Gateway will close the existing log file and open a new file at the original path, no log entries will be lost. +After renaming the log file, logrotate will send the `SIGHUP` signal to the `sync_gateway` process; Sync Gateway will close the existing log file and open a new file at the original path, so no log entries will be lost. == Troubleshooting === Troubleshooting and Fine-Tuning -In general, https://curl.haxx.se/[curl], a command-line HTTP client, is your friend. -You might also want to try https://github.com/jkbrzt/httpie[httpie], a human-friendly command-line HTTP client. -By using these tools, you can inspect databases and documents via the Public REST API, and look at user and role access privileges via the Admin REST API. +In general, {url-curl}[curl], a command-line HTTP client, is your friend. +You might also want to try {url-httpie}[HTTPie], a human-friendly command-line HTTP client. +By using these tools, you can inspect databases and documents via the Public REST API, and look at user and role access privileges via the Admin REST API. -An additional useful tool is the admin-port URL **/databasename/_dump/channels**, which returns an HTML table that lists all active channels and the documents assigned to them. Similarly,**/databasename/_dump/access** shows which documents are granting access to which users and channels. +An additional useful tool is the admin-port URL */databasename/_dump/channels*, which returns an HTML table that lists all active channels and the documents assigned to them. +Similarly, */databasename/_dump/access* shows which documents are granting access to which users and channels. We encourage Sync Gateway users to also reach back out to our engineering team and growing developer community for help and guidance. -You can get in touch with us on our mailing list at our https://forums.couchbase.com/c/mobile[Couchbase Mobile forum]. +You can get in touch with us on the https://forums.couchbase.com/c/mobile[Couchbase Mobile forum]. === How to file a bug -If you're pretty sure you've found a bug, please https://github.com/couchbase/sync_gateway/issues?q=is%3Aopen[file a bug report] at our GitHub repository and we can follow-up accordingly. +If you're pretty sure you've found a bug, please https://github.com/couchbase/sync_gateway/issues?q=is%3Aopen[file a bug report] at our GitHub repository and we can follow up accordingly.
== Enterprise Customer Support === Couchbase Technical Support -Support email: support@couchbase.com +Support email: support@couchbase.com -Support phone number: +1-650-417-7500, option #1 +Support phone number: +1-650-417-7500, option #1 -Support portal: http://support.couchbase.com +Support portal: https://support.couchbase.com To speed up the resolution of your issue, we will need some information to troubleshoot what is going on. -The more information you can provide in the questions below the faster we will be able to identify your issue and propose a fix: +The more information you can provide in the questions below the faster we will be able to identify your issue and propose a fix: -* Priority and impact of the issue (P1 and production impacting versus a P2 question) -* What versions of the software are you running - Membase/Couchbase Server, moxi, and client drivers? -* Operating system version, architecture (32-bit or 64-bit) and deployment (physical hardware, Amazon EC2, RightScale, etc.) -* Number of nodes in the cluster, how much physical RAM in each node, and per-node RAM allocated to Couchbase Server -* What steps led to the failure or error? -* Information around whether this is something that has worked successfully in the past and if so what has changed in the environment since the last successful operation? -* Provide us with a current snapshot of logs taken from each node of the system and uploaded to our support system via the instructions below +* Priority and impact of the issue (P1 and production impacting versus a P2 question) +* What versions of the software are you running - Membase/Couchbase Server, moxi, and client drivers? +* Operating system version, architecture (32-bit or 64-bit) and deployment (physical hardware, Amazon EC2, RightScale, etc.) +* Number of nodes in the cluster, how much physical RAM in each node, and per-node RAM allocated to Couchbase Server +* What steps led to the failure or error? +* Information around whether this is something that has worked successfully in the past and if so what has changed in the environment since the last successful operation? +* Provide us with a current snapshot of logs taken from each node of the system and uploaded to our support system via the instructions below If your issue is urgent, please make a phone call as well as send an e-mail. -The phone call will ensure that an on-call engineer is notified. +The phone call will ensure that an on-call engineer is notified. === Sync Gateway Logs -The Sync Gateway logs will give us further detail around the issue itself and the health of your environment. +The Sync Gateway logs will give us further detail around the issue itself and the health of your environment. Sync Gateway 1.3.x includes a command line utility `sgcollect_info` that provides us with detailed statistics for a specific node. -Run `sgcollect_info` on each node individually, not on all simultaneously. +Run `sgcollect_info` on each node individually, not on all simultaneously. -Example usage: +Example usage: -Linux (run as root or use sudo as below) +Linux (run as root or use sudo as below) [source,bash] ---- - sudo /opt/couchbase/bin/sgcollect_info .zip ---- -Windows (run as an administrator) +Windows (run as an administrator) [source] ---- - C:\Program Files\Couchbase\Server\bin\sgcollect_info .zip ---- -Run `sgcollect_info` on all nodes in the cluster, and upload all of the resulting files to us. +Run `sgcollect_info` on all nodes in the cluster, and upload all of the resulting files to us. 
=== Sharing Files with Us The `sgcollect_info` tool can result in large files. -Simply run the command below, replacing `` and ````, to upload a file to our cloud storage on Amazon AWS. -Make sure you include the last slash (``/``) character after the company name. +Simply run the command below, replacing `` and ``, to upload a file to our cloud storage on Amazon AWS. +Make sure you include the last slash (`/`) character after the company name. [source,bash] ---- - curl --upload-file FILE NAME https://s3.amazonaws.com/customers.couchbase.com// ---- -[quote] -*Note:* we ship `curl` with Couchbase Server, on Linux this is located in `/opt/couchbase/bin/` +NOTE: We ship `curl` with Couchbase Server, on Linux this is located in `/opt/couchbase/bin/`. -Firewalled Sync Gateway Nodes +==== Firewalled Sync Gateway Nodes If your Sync Gateway nodes do not have internet access, the best way to provide the logs to us is to copy the files then run `curl` from a machine with internet access. We ship a Windows `curl` binary as part of Couchbase Server, so if you have Couchbase Server installed on a laptop or other system which has an Internet connection you can upload from there. -Alternatively you can download standalone Curl for Windows: - -http://curl.haxx.se/download.html +Alternatively you can download standalone Curl for Windows: {url-curl}download.html. -Once uploaded, please send an e-mail to support@couchbase.com letting us know what files have been uploaded. \ No newline at end of file +Once uploaded, please send an e-mail to support@couchbase.com letting us know what files have been uploaded. diff --git a/modules/ROOT/pages/getting-started.adoc b/modules/ROOT/pages/getting-started.adoc index 9a223f371..fa3f647f9 100644 --- a/modules/ROOT/pages/getting-started.adoc +++ b/modules/ROOT/pages/getting-started.adoc @@ -1,19 +1,21 @@ -:sg_download_link: http://packages.couchbase.com/releases/couchbase-sync-gateway/2.0.0/ += Getting Started +:idprefix: +:idseparator: - +:url-downloads: https://www.couchbase.com/downloads +:sg_download_link: https://packages.couchbase.com/releases/couchbase-sync-gateway/2.0.0/ :sg_package_name: couchbase-sync-gateway-community_2.0.0_x86_64 :sg_accel_package_name: couchbase-sg-accel-centos_enterprise_2.0.0-beta1_x86_64 -= Sync Gateway - == Installation -Install Sync Gateway on the operating system of your choice: +Install Sync Gateway on the operating system of your choice: [.tabs] ===== .Ubuntu [.tab] ==== -Download Sync Gateway from the http://www.couchbase.com/nosql-databases/downloads#couchbase-mobile[Couchbase downloads page] or using `wget`. +Download Sync Gateway from the {url-downloads}#couchbase-mobile[Couchbase downloads page] or using `wget`. [source,bash,subs="attributes"] ---- @@ -39,11 +41,12 @@ sudo service sync_gateway stop The config file and logs are located in `/home/sync_gateway`. -You can also run the *sync_gateway* binary directly from the command line. The binary is installed at `/opt/couchbase-sync-gateway/bin/sync_gateway`. +You can also run the *sync_gateway* binary directly from the command line. +The binary is installed at `/opt/couchbase-sync-gateway/bin/sync_gateway`. ==== .Red Hat/CentOS ==== -Download Sync Gateway from the http://www.couchbase.com/nosql-databases/downloads#couchbase-mobile[Couchbase downloads page] or using the `wget`. +Download Sync Gateway from the {url-downloads}#couchbase-mobile[Couchbase downloads page] or using the `wget`. 
[source,bash,subs="attributes"] ---- @@ -90,7 +93,7 @@ The config file and logs are located in `/home/sync_gateway`. ==== .Debian ==== -Download Sync Gateway from the http://www.couchbase.com/nosql-databases/downloads#couchbase-mobile[Couchbase downloads page] or using the ``wget``. +Download Sync Gateway from the {url-downloads}#couchbase-mobile[Couchbase downloads page] or using the `wget`. [source,bash,subs="attributes"] ---- @@ -116,20 +119,20 @@ The config file and logs are located in `/home/sync_gateway`. ==== .Windows ==== -Download Sync Gateway from the http://www.couchbase.com/nosql-databases/downloads#couchbase-mobile[Couchbase downloads page]. +Download Sync Gateway from the {url-downloads}#couchbase-mobile[Couchbase downloads page]. Open the installer and follow the instructions. If the installation was successful you will see the following. image::windows-installation-complete.png[] -Sync Gateway runs as a service (reachable on http://localhost:4985/). To stop/start the service, you can use the Services application (**Control Panel --> Admin Tools --> Services**). +Sync Gateway runs as a service (reachable on `+http://localhost:4985+`). To stop/start the service, you can use the Services application (*Control Panel -> Admin Tools -> Services*). -* The configuration file is located under **C:FilesGateway.json**. -* Logs are located under **C:FilesGateway*. +* The configuration file is located under *C:FilesGateway.json*. +* Logs are located under *C:FilesGateway*. ==== .macOS ==== -Download Sync Gateway from the http://www.couchbase.com/nosql-databases/downloads#couchbase-mobile[Couchbase downloads page] or using the `wget`. +Download Sync Gateway from the {url-downloads}#couchbase-mobile[Couchbase downloads page] or using the `wget`. [source,bash,subs="attributes"] ---- @@ -172,49 +175,42 @@ The config file and logs are located in `/Users/sync_gateway`. ==== ===== -The following sections describe how to install and configure Sync Gateway to run with Couchbase Server. +The following sections describe how to install and configure Sync Gateway to run with Couchbase Server. === Network configuration Sync Gateway uses specific ports for communication with the outside world, mostly Couchbase Lite databases replicating to and from Sync Gateway. -The following table lists the ports used for different types of Sync Gateway network communication: +The following table lists the ports used for different types of Sync Gateway network communication: -[cols="1,1", options="header"] +[cols="1,3"] |=== -| - Port - -| - Description - - - -| - 4984 -| - Public port. External HTTP port used for replication with Couchbase Lite databases and other applications accessing the REST API on the Internet. - -| - 4985 -| - Admin port. Internal HTTP port for unrestricted access to the database and to run administrative tasks. +|Port |Description + +|4984 +|Public port. +External HTTP port used for replication with Couchbase Lite databases and other applications accessing the REST API on the Internet. + +|4985 +|Admin port. +Internal HTTP port for unrestricted access to the database and to run administrative tasks. |=== == Configure Couchbase Server -To configure Couchbase Server before connecting Sync Gateway, run through the following. +To configure Couchbase Server before connecting Sync Gateway, run through the following. -. https://www.couchbase.com/nosql-databases/downloads[Download] and install Couchbase Server. -. 
Open the Couchbase Server Admin Console on http://localhost:8091 and log on using your administrator credentials. +. {url-downloads}[Download] and install Couchbase Server. +. Open the Couchbase Server Admin Console on `+http://localhost:8091+` and log on using your administrator credentials. . In the toolbar, select the *Buckets* tab and click the *Add Bucket* button. + image::cb-create-bucket.png[] -+ -. Provide a bucket name, for example **staging**, and leave the other options to their defaults. -. Next, we must create an RBAC user with specific privileges for Sync Gateway to connect to Couchbase Server. Open the *Security* tab and click the *Add User* button. + +. Provide a bucket name, for example *staging*, and leave the other options to their defaults. +. Next, we must create an RBAC user with specific privileges for Sync Gateway to connect to Couchbase Server. +Open the *Security* tab and click the *Add User* button. + image::create-user.png[] -+ + . The steps to create the RBAC user differ slightly depending on the version of Couchbase Server that you have installed. We explain the differences below. + [.tabs] @@ -222,7 +218,9 @@ image::create-user.png[] .Couchbase Server 5.1 [.tab] ==== -In the pop-up window, provide a *Username* and **Password**, those credentials will be used by Sync Gateway to connect later on. Next, you must grant RBAC roles to that user. If you are using Couchbase Server 5.1, you must enable the *Bucket Full Access* and *Read Only Admin* roles. +In the pop-up window, provide a *Username* and *Password*, those credentials will be used by Sync Gateway to connect later on. +Next, you must grant RBAC roles to that user. +If you are using Couchbase Server 5.1, you must enable the *Bucket Full Access* and *Read Only Admin* roles. image::user-settings.png[] @@ -230,26 +228,29 @@ image::user-settings.png[] .Couchbase Server 5.5 [.tab] ==== -In the pop-up window, provide a *Username* and **Password**, those credentials will be used by Sync Gateway to connect later on. Next, you must grant RBAC roles to that user. If you are using Couchbase Server 5.5, you must enable the *Application Access* and *Read Only Admin* roles. +In the pop-up window, provide a *Username* and *Password*, those credentials will be used by Sync Gateway to connect later on. +Next, you must grant RBAC roles to that user. +If you are using Couchbase Server 5.5, you must enable the *Application Access* and *Read Only Admin* roles. image::user-settings-5-5.png[] ==== ===== -+ -. If you're installing Couchbase Server on the cloud, make sure that network permissions (or firewall settings) allow incoming connections to Couchbase Server ports. In a typical mobile deployment on premise or in the cloud (AWS, RedHat etc), the following ports must be opened on the host for Couchbase Server to operate correctly: 8091, 8092, 8093, 8094, 11207, 11210, 11211, 18091, 18092, 18093. You must verify that any firewall configuration allows communication on the specified ports. If this is not done, the Couchbase Server node can experience difficulty joining a cluster. You can refer to the http://developer.couchbase.com/documentation/server/current/install/install-ports.html[Couchbase Server Network Configuration] guide to see the full list of available ports and their associated services. +. If you're installing Couchbase Server on the cloud, make sure that network permissions (or firewall settings) allow incoming connections to Couchbase Server ports. 
+In a typical mobile deployment on premise or in the cloud (AWS, RedHat, etc.), the following ports must be opened on the host for Couchbase Server to operate correctly: 8091, 8092, 8093, 8094, 11207, 11210, 11211, 18091, 18092, 18093. +You must verify that any firewall configuration allows communication on the specified ports. +If this is not done, the Couchbase Server node can experience difficulty joining a cluster. +You can refer to the xref:server:install:install-ports.adoc[Couchbase Server Network Configuration] guide to see the full list of available ports and their associated services. == Start Sync Gateway -The following steps explain how to connect Sync Gateway to the Couchbase Server instance that was configured in the previous section. +The following steps explain how to connect Sync Gateway to the Couchbase Server instance that was configured in the previous section. -* Open a new file called *sync-gateway-config.json* with the following. +* Open a new file called *sync-gateway-config.json* with the following. + - [source,javascript] ---- - { log: [*], databases: { @@ -272,54 +273,31 @@ The following steps explain how to connect Sync Gateway to the Couchbase Server ---- + This configuration contains the user credentials of the *sync_gateway* user you created previously. -It also enables link:shared-bucket-access.html[shared bucket access]; this feature was introduced in Sync Gateway 1.5 to allow Couchbase Server SDKs to also perform operation on this bucket. -* Start Sync Gateway from the command line, or if Sync Gateway is running in a service replace the configuration file and restart the service. -+ +It also enables xref:shared-bucket-access.adoc[shared bucket access]; this feature was introduced in Sync Gateway 1.5 to allow Couchbase Server SDKs to also perform operation on this bucket. +* Start Sync Gateway from the command line, or if Sync Gateway is running in a service replace the configuration file and restart the service. ++ [source,bash] ---- - ~/Downloads/couchbase-sync-gateway/bin/sync_gateway ~/path/to/sync-gateway-config.json ---- -* Run the application where Couchbase Lite is installed. You should then see the documents that were replicated on the Sync Gateway admin UI at http://localhost:4985/_admin/. - -{% include experimental-label.html %} +* Run the application where Couchbase Lite is installed. You should then see the documents that were replicated on the Sync Gateway admin UI at `+http://localhost:4985/_admin/+`. +//{% include experimental-label.html %} // -center-image /> == Supported Platforms -Sync Gateway is supported on the following operating systems: +Sync Gateway is supported on the following operating systems: + +[cols="1,1,1,1,1"] +|=== +|Ubuntu |CentOS/RedHat |Debian |Windows |macOS -[cols="1,1,1,1,1", options="header"] +|12, 14, 16 +|5, 6, 7 +|8 +|Windows 8, Windows 10, Windows Server 2012 +|Yosemite, El Capitan |=== -| - Ubuntu - -| - CentOS/RedHat - -| - Debian - -| - Windows - -| - macOS - - - -| - 12, 14, 16 -| - 5, 6, 7 -| - 8 -| - Windows 8, Windows 10, Windows Server 2012 -| - Yosemite, El Capitan -|=== \ No newline at end of file diff --git a/modules/ROOT/pages/index.adoc b/modules/ROOT/pages/index.adoc index dcc437b60..8bc856993 100644 --- a/modules/ROOT/pages/index.adoc +++ b/modules/ROOT/pages/index.adoc @@ -1,6 +1,8 @@ -== What's New += What's New +:idprefix: +:idseparator: - -=== Views to GSI +== Views to GSI Sync Gateway has been using system views for a variety of internal functions, including authentication and replication. 
Starting in 2.1, Sync Gateway will use GSI and N1QL to perform those tasks. @@ -10,75 +12,74 @@ Note that this only impacts system views. Users can continue to define views through the xref:admin-rest-api.adoc#/query[Sync Gateway Admin REST API]. This capability is enabled by default and there are 2 properties in the configuration file which can be adjusted: -* link:config-properties.html#databases-foo_db-use_views[databases.$db.use_views] -* link:config-properties.html#databases-foo_db-num_index_replicas[databases.$db.num_index_replicas] +* xref:config-properties.adoc#databases-foo_db-use_views[`databases.$db.use_views`] +* xref:config-properties.adoc#databases-foo_db-num_index_replicas[`databases.$db.num_index_replicas`] -=== X.509 Authentication against Couchbase Server +== X.509 Authentication against Couchbase Server Sync Gateway adds the ability to use x.509 certificates to authenticate against Couchbase Server 5.5 or higher. Certificate based authentication provides an additional layer of security; it relies on a certificate authority, CA, to validate identities and issue certificates. To enable certificate based authentication on Sync Gateway, paths to the client certificate, private key and root CA must be set in the configuration file: -* link:config-properties.html#databases-foo_db-certpath[databases.$db.certpath] -* link:config-properties.html#databases-foo_db-keypath[databases.$db.keypath] -* link:config-properties.html#databases-foo_db-cacertpath[databases.$db.cacertpath] +* xref:config-properties.adoc#databases-foo_db-certpath[databases.$db.certpath] +* xref:config-properties.adoc#databases-foo_db-keypath[databases.$db.keypath] +* xref:config-properties.adoc#databases-foo_db-cacertpath[databases.$db.cacertpath] -Instructions to generate the client certificate, private key and root CA can be found in https://developer.couchbase.com/documentation/server/current/security/security-x509certsintro.html[x.509 for TLS]. +Instructions to generate the client certificate, private key and root CA can be found in xref:5.5@server:security:security-x509certsintro.adoc[x.509 for TLS]. This functionality provides an alternative method of authentication to the existing password-based authentication method. If the **username**/**password** properties are also specified in the configuration file then Sync Gateway will use password-based authentication and also include the client certificate in the TLS handshake. -=== Continuous Logging +== Continuous Logging Continuous logging is a new feature in Sync Gateway 2.1 that allows the console log output to be separated from log files collected by Couchbase Support. This allows system administrators running Sync Gateway to tweak log level, and log keys for the console output to suit their needs, whilst maintaining the level of logging required by Couchbase Support for investigation of issues. -The previous logging configuration (``logging.default``) is being deprecated, and Sync Gateway 2.1 will display warnings on startup of what is required to update your configuration. -Detailed information about continuous logging can be found in the link:logging.html[Logging guide]. +The previous logging configuration (`logging.default`) is being deprecated, and Sync Gateway 2.1 will display warnings on startup of what is required to update your configuration. +Detailed information about continuous logging can be found in the xref:logging.adoc[Logging guide]. 
-==== SGCollect Info +=== SGCollect Info -link:sgcollect-info.html[sgcollect_info] has been updated to use the continuous logging feature introduced in 2.1, and collects the four levelled files (**sg_error.log**, **sg_warn.log**, *sg_info.log* and **sg_debug.log**). +xref:sgcollect-info.adoc[`sgcollect_info`] has been updated to use the continuous logging feature introduced in 2.1, and collects the four leveled files (*sg_error.log*, *sg_warn.log*, *sg_info.log* and *sg_debug.log*). These new log files are rotated and compressed by Sync Gateway, so `sgcollect_info` decompresses these rotated logs, and concatenates them back into a single file upon collection. -For example, if you have **sg_debug.log**, and *sg_debug-2018-04-23T16-57-13.218.log.gz* and then run `sgcollect_info` as normal, both of these files get put into a *sg_debug.log* file inside the zip output folder. +For example, if you have *sg_debug.log*, and *sg_debug-2018-04-23T16-57-13.218.log.gz* and then run `sgcollect_info` as normal, both of these files get put into a *sg_debug.log* file inside the zip output folder. -=== Log Redaction +== Log Redaction All log outputs can be redacted, this means that user-data, considered to be private, is removed. -This feature is optional and can be enabled in the configuration with the link:config-properties.html#logging-redaction_level[logging.redaction_level] property. +This feature is optional and can be enabled in the configuration with the xref:config-properties.adoc#logging-redaction_level[`logging.redaction_level`] property. -==== SGCollect Info +=== SGCollect Info `sgcollect_info` now supports log redaction post-processing. -In order to utilise this, Sync Gateway needs to be run with the `logging.redaction_level` property set to "partial". +In order to utilize this, Sync Gateway needs to be run with the `logging.redaction_level` property set to "partial". -Two new command line options have been added to ``sgcollect_info``: +Two new command line options have been added to `sgcollect_info`: -* ``--log-redaction-level=REDACT_LEVEL``: redaction level for the logs collected, `none` and `partial` supported. Defaults to ``none``. +* `--log-redaction-level=REDACT_LEVEL`: redaction level for the logs collected, `none` and `partial` supported. Defaults to `none`. + -When `--log-redaction-level` is set to partial, two zip files are produced, and tagged contents in the redacted one should be hashed in the same way as ``cbcollect_info``: +When `--log-redaction-level` is set to partial, two zip files are produced, and tagged contents in the redacted one should be hashed in the same way as `cbcollect_info`: + - [source,bash] ---- - $ ./sgcollect_info --log-redaction-level=partial sgout.zip ... Zipfile built: sgout-redacted.zip Zipfile built: sgout.zip ---- -* ``--log-redaction-salt=SALT_VALUE``: salt used in the hashing of tagged data when enabling redaction. Defaults to a random uuid. -=== Bucket operation timeout +* `--log-redaction-salt=SALT_VALUE`: salt used in the hashing of tagged data when enabling redaction. Defaults to a random uuid. + +== Bucket operation timeout -The link:config-properties.html#databases-foo_db-bucket_op_timeout_ms[databases.$db.bucket_op_timeout_ms] property to override the default timeout used by Sync Gateway to query Couchbase Server. +The xref:config-properties.adoc#databases-foo_db-bucket_op_timeout_ms[`databases.$db.bucket_op_timeout_ms`] property to override the default timeout used by Sync Gateway to query Couchbase Server. 
It's generally not necessary to change this property unless there is a particularly heavy load on Couchbase Server which would increase the response time. -=== Support for IPv6 +== Support for IPv6 Sync Gateway now officially supports IPv6. @@ -87,4 +88,4 @@ Sync Gateway now officially supports IPv6. This release contains a number of bug fixes and enhancements for Couchbase Lite. Find out more in the release notes. -xref:release-notes.adoc[Release Notes] \ No newline at end of file +xref:release-notes.adoc[Release Notes] diff --git a/modules/ROOT/pages/integrating-external-stores.adoc b/modules/ROOT/pages/integrating-external-stores.adoc index 472e2c2ce..4563deef7 100644 --- a/modules/ROOT/pages/integrating-external-stores.adoc +++ b/modules/ROOT/pages/integrating-external-stores.adoc @@ -1,64 +1,60 @@ = Integrating External Stores +:url-downloads: https://www.couchbase.com/downloads The Sync Gateway REST API is divided in two categories: the Public REST API available on port 4984 and the Admin REST API accessible on port 4985. -Those are the default ports and they can be changed in the configuration file of Sync Gateway. +Those are the default ports and they can be changed in the configuration file of Sync Gateway. -In this guide, you will learn how to run the following operations on the Admin REST API: +In this guide, you will learn how to run the following operations on the Admin REST API: -* Bulk importing of documents. -* Exporting via the changes feed. -* Importing attachments. +* Bulk importing of documents. +* Exporting via the changes feed. +* Importing attachments. - -[[_external_store]] == External Store In this guide, you will use a simple movies API as the external data store. https://cl.ly/140P313l0p23/external-store.zip[Download the stub data and API server] and unzip the content into a new directory. -To start the server of the external store run the following commands: +To start the server of the external store run the following commands: [source,bash] ---- - cd external-store npm install node server.js ---- -You can open a browser window at http://localhost:8000/movies to view the JSON data. - -The external store supports two endpoints: +You can open a browser window at `+http://localhost:8000/movies+` to view the JSON data. -* GET ``/movies``: Retrieves all movies (from **movies.json**). -* POST ``/movies``: Takes one movie as the request body and updates the item with the same `\_id` in **movies.json**. +The external store supports two endpoints: +GET `/movies`:: +Retrieves all movies (from *movies.json*). +POST `/movies`:: +Takes one movie as the request body and updates the item with the same `_id` in *movies.json*. == Importing Documents The Sync Gateway Swagger JS library is a handy tool to send HTTP requests without having to write your own HTTP API wrapper. It relies on the Couchbase Mobile Swagger specs. -In this case, you will use the http://developer.couchbase.com/mobile/swagger/sync-gateway-admin/[Admin REST API spec]. -In the same directory, install the following dependencies. +In this case, you will use the xref:admin-rest-api.adoc[Admin REST API spec]. +In the same directory, install the following dependencies. [source,bash] ---- - npm install swagger-client && request-promise ---- -http://www.couchbase.com/nosql-databases/downloads#couchbase-mobile[Download Sync Gateway] and start it from the command line with a database called ``movies_lister``. 
+{url-downloads}#couchbase-mobile[Download Sync Gateway] and start it from the command line with a database called `movies_lister`. [source,bash] ---- - ~/Downloads/couchbase-sync-gateway/bin/sync_gateway -dbname movies_lister ---- -The Sync Gateway database is now available at http://localhost:4985/movies_lister/. -Create a new file called `import.js` with the following to retrieve the movies and insert them in the Sync Gateway database. +The Sync Gateway database is now available at `+http://localhost:4985/movies_lister/+`. +Create a new file called `import.js` with the following to retrieve the movies and insert them in the Sync Gateway database. [source,javascript] ---- - var request = require('request-promise') , Swagger = require('swagger-client'); @@ -103,62 +99,57 @@ var client = new Swagger({ }); ---- -Here's what the code above is doing: - -. Use the https://github.com/request/request-promise[request-promise] library to retrieve the movies from the external store. -. Save the movies to Sync Gateway. The `post_db_bulk_docs` method takes a db name (``movies_lister``) and the documents to save in the request body. Notice that the response from the external store is an array and must be wrapped in a JSON object of the form ``{docs: movies}``. -. The response of the `+/{db}/_bulk_docs+` request contains the generated revision numbers which are written back to the external store. +Here's what the code above is doing: +. Use the https://github.com/request/request-promise[request-promise] library to retrieve the movies from the external store. +. Save the movies to Sync Gateway. +The `post_db_bulk_docs` method takes a db name (`movies_lister`) and the documents to save in the request body. +Notice that the response from the external store is an array and must be wrapped in a JSON object of the form `{docs: movies}`. +. The response of the `+/{db}/_bulk_docs+` request contains the generated revision numbers which are written back to the external store. -[quote] -*Tip:* The Admin REST API Swagger spec is dynamically loaded. +TIP: The Admin REST API Swagger spec is dynamically loaded. You can use the `$$.$$help()` method to query the available object and methods. -This method is very helpful during development as it offers the documentation on the fly in the console. +This method is very helpful during development as it offers the documentation on the fly in the console. [source,javascript] ---- - client.help(); // prints all the tags to the console client.database.help(); // prints all the database methods to the console client.database.post_db_bulk_docs.help(); // prints all the parameters (querystring and request body) ---- -Start the program from the command line: +Start the program from the command line: [source,bash] ---- - node import.js ---- -Open the Sync Gateway Admin UI and you should see all the movies there. - +Open the Sync Gateway Admin UI and you should see all the movies there. image::admin-ui-movies-lister.png[] -Notice that the `\_rev` property is also stored on each record on the external store, ``movies.json``. +Notice that the `_rev` property is also stored on each record on the external store, `movies.json`. Run the program again, the same number of documents are visible in Sync Gateway. This time with a 2nd generation revision number. -This update operation was succesful because the parent revision number was sent as part of the request body. +This update operation was successful because the parent revision number was sent as part of the request body. 
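For a rough sense of why the second run is treated as an update rather than a conflicting insert, the body of that second `+/{db}/_bulk_docs+` request would look something like the sketch below: each movie now carries the `_rev` that was written back to *movies.json* on the first run (the ID, revision value and fields are illustrative, not taken from the stub data).

[source,javascript]
----
{
  "docs": [
    {
      "_id": "movie-1",         // hypothetical movie ID
      "_rev": "1-abc123",       // revision returned by the first import and stored in movies.json
      "title": "Example Movie"  // the rest of the movie properties are sent unchanged
    }
  ]
}
----

Because the parent revision is supplied for each document, Sync Gateway accepts the write and assigns a 2nd generation revision instead of reporting a conflict.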
== Exporting Documents -To export documents from Couchbase Mobile to the external system you will use a changes feed request to subscribe to changes and persist them to the external store. +To export documents from Couchbase Mobile to the external system you will use a changes feed request to subscribe to changes and persist them to the external store. -Install the following modules: +Install the following modules: [source,bash] ---- - npm install swagger-client && request ---- -Create a new file called `export.js` with the following: +Create a new file called `export.js` with the following: [source,javascript] ---- - var request = require('request') , Swagger = require('swagger-client'); @@ -210,48 +201,44 @@ var client = new Swagger({ }); ---- -Here's what the code above is doing: +Here's what the code above is doing: -. Gets the last sequence number of the database. -. Calls the `getChanges` method with the last sequence number. -. Sends changes request to Sync Gateway with the following parameters: -** *feed=longpoll* -** *include_docs=true* -** *since=X* (where X is the sequence number) -. Write the document to the external store. +. Gets the last sequence number of the database. +. Calls the `getChanges` method with the last sequence number. +. Sends changes request to Sync Gateway with the following parameters: +** `feed=longpoll` +** `include_docs=true` +** `since=X` (where X is the sequence number) +. Write the document to the external store. -Run the program from the command line: +Run the program from the command line: [source,bash] ---- - node export.js ---- -Open the Admin UI on http://localhost:4985/_admin/db/movies_lister and make changes to a document. -Notice that the change is also updated in the external store. - - -image::https://cl.ly/1s2Q0t1i3W2w/export-update.gif[] +Open the Admin UI on `+http://localhost:4985/_admin/db/movies_lister+` and make changes to a document. +Notice that the change is also updated in the external store. +image::export-update.gif[] == Importing Attachments -Every movie in the stub API has a link to a thumbnail (in the `posters.thumbnail` property). Before sending the `\_bulk_docs` request, you will fetch the thumbnail for each movie and embed it as a base64 string under the `\_attachments` property. +Every movie in the stub API has a link to a thumbnail (in the `posters.thumbnail` property). +Before sending the `_bulk_docs` request, you will fetch the thumbnail for each movie and embed it as a base64 string under the `_attachments` property. -Install the following dependencies: +Install the following dependencies: [source,bash] ---- - npm install request-promise && swagger-client ---- -Create a new file called `attachments.js` with the following to retrieve the movies, their thumbnails and insert them in the Sync Gateway database. +Create a new file called `attachments.js` with the following to retrieve the movies, their thumbnails and insert them in the Sync Gateway database. [source,javascript] ---- - var request = require('request-promise') , Swagger = require('swagger-client'); @@ -302,12 +289,10 @@ var client = new Swagger({ ---- Restart Sync Gateway to have an empty database and run the program. -The documents are saved with the attachment metadata. - +The documents are saved with the attachment metadata. image::admin-ui-attachment.png[] -You can view the thumbnail at `+http://localhost:4984/movies_lister/{db}/{doc}/{attachment}/+` (note it's on the public port 4984). 
- +You can view the thumbnail at `+http://localhost:4984/movies_lister/{db}/{doc}/{attachment}/+` (note it's on the public port 4984). image::sg-attachment.png[] diff --git a/modules/ROOT/pages/load-balancer.adoc b/modules/ROOT/pages/load-balancer.adoc index f23acf577..6227a4d4f 100644 --- a/modules/ROOT/pages/load-balancer.adoc +++ b/modules/ROOT/pages/load-balancer.adoc @@ -1,60 +1,61 @@ = Load Balancer This guide covers various aspects to consider when using a Load Balancer in a Couchbase Mobile deployment. -In particular, when using NGINX or AWS Elastic Load Balancer (ELB). For an architectural walkthrough, you can refer to the link:../../../training/deploy/install/index.html[Deploy] lessons of the Tutorial. +In particular, when using NGINX or AWS Elastic Load Balancer (ELB). For an architectural walkthrough, you can refer to the +// This is the original link -> link:../../../training/deploy/install/index.html[Deploy]. It redirects to the URL used below. +// TODO: determine correct target or migrate page if missing +https://developer.couchbase.com/documentation/mobile/2.0/couchbase-lite/index.html[Deploy] lessons of the Tutorial. == When to use a reverse proxy -* A reverse proxy can hide the existence of a Sync Gateway server or servers. This can help to secure the Sync gateway instances when your service is exposed to the internet. -* A reverse proxy can provide application firewall features that protect against common web-based attacks. -* A reverse proxy can offload ssl termination from the Sync Gateway instances, this can be a significant overhead when supporting large numbers of mobile devices. -* A reverse proxy can distribute the load from incoming requests to several Sync Gateway instances. -* A reverse proxy may rewrite the URL of each incoming request in order to match the relevant internal location of the requested resource. For Sync Gateway the reverse proxy may map the Internet facing port 80 to the standard Sync Gateway public REST API port 4984. - +* A reverse proxy can hide the existence of a Sync Gateway server or servers. +This can help to secure the Sync Gateway instances when your service is exposed to the internet. +* A reverse proxy can provide application firewall features that protect against common web-based attacks. +* A reverse proxy can offload SSL termination from the Sync Gateway instances; this can be a significant overhead when supporting large numbers of mobile devices. +* A reverse proxy can distribute the load from incoming requests to several Sync Gateway instances. +* A reverse proxy may rewrite the URL of each incoming request in order to match the relevant internal location of the requested resource. +For Sync Gateway, the reverse proxy may map the Internet-facing port 80 to the standard Sync Gateway public REST API port 4984. == WebSocket Connection -To keep a WebSocket connection open, the replicator sends a WebSocket PING message (also known as heartbeat) every 300 seconds (5 minutes). The keep alive timeout value of the load balancer must be configured to a higher value than the heartbeat interval. +To keep a WebSocket connection open, the replicator sends a WebSocket PING message (also known as heartbeat) every 300 seconds (5 minutes). +The keep alive timeout value of the load balancer must be configured to a higher value than the heartbeat interval. For example, 360 seconds. -The following section demonstrates how to do that with NGINX. +The following section demonstrates how to do that with NGINX.
== NGINX -Connect to the server running Sync Gateway and install the nginx server: +Connect to the server running Sync Gateway and install the nginx server: [source,bash] ---- - sudo apt-get install nginx ---- Make sure that the NGINX version installed is 1.3 or higher. -Earlier versions do not support WebSockets, and will cause connection problems with pull replications from Couchbase Lite. +Earlier versions do not support WebSockets, and will cause connection problems with pull replications from Couchbase Lite. [source,bash] ---- - nginx -v ---- -Once the installation is completed, you can access the NGINX welcome page from your browser. +Once the installation is completed, you can access the NGINX welcome page from your browser. [source,bash] ---- - http://127.0.0.1/ ---- -Note: Replace 127.0.0.1 with the IP address of your server. +Note: Replace 127.0.0.1 with the IP address of your server. === Basic nginx configuration for Sync Gateway -If you installed nginx using the instructions above, then you will create your sync_gateway configuration file in **/etc/nginx/sites-available**. -Create a file in that directory called sync_gateway with the following content: +If you installed nginx using the instructions above, then you will create your sync_gateway configuration file in */etc/nginx/sites-available*. +Create a file in that directory called sync_gateway with the following content: [source] ---- - upstream sync_gateway { server 127.0.0.1:4984; } @@ -77,11 +78,10 @@ server { ---- This `upstream` block specifies the server and port nginx will forward traffic to, in this example it would be sync_gateway running on the same server as nginx, listening on the default public REST API port of 4984. -Change these values if your sync_gateway is configured differently. +Change these values if your sync_gateway is configured differently. [source] ---- - # HTTP server # server { @@ -90,16 +90,14 @@ server { client_max_body_size 21m; ---- -The first section of the 'server' block defines common directives. - -* The `listen` directive instructs nginx to listen on port 80 for incoming traffic. -* The `server_name` directive instructs nginx to check that the HTTP `Host:` header value matches `myservice.example.org` (change this value to your domain). -* The `client_max_body_size` directive instructs nginx to accept request bodies up to 21MBytes, this is necessary to support attachments being sync'd to Sync Gateway. +The first section of the 'server' block defines common directives. +* The `listen` directive instructs nginx to listen on port 80 for incoming traffic. +* The `server_name` directive instructs nginx to check that the HTTP `Host:` header value matches `myservice.example.org` (change this value to your domain). +* The `client_max_body_size` directive instructs nginx to accept request bodies up to 21MBytes, this is necessary to support attachments being sync'd to Sync Gateway. [source] ---- - location / { proxy_pass http://sync_gateway; proxy_pass_header Accept; @@ -113,61 +111,56 @@ location / { } ---- -* The `location` block specifies directives for all URL paths below the root path '/'. -* The `proxy_pass` directive instructs nginx to forward all incoming traffic to servers defined in the sync_gateway `upstream` block. -* The two `proxy_pass_header` directives instruct nginx to pass `Accept:` and `Server:` headers on inbound and outbound traffic, these headers allow CouchbaseLite and sync_gateway to optimise data transfer, e.g. 
by using gzip compression and multipart/mixed if it is supported. -* The `keepalive_requests` directive instructs nginx to allow up to one thousand requests on the same connection, this is useful when getting a `\_changes` feed using longpoll. -* The `keepalive_timeout` directive instructs nginx to keep connection open for 360 seconds from the last request, this value is longer than the default (300 seconds) value for the heartbeat on the _changes feed using longpoll. -* The `proxy_read_timeout` directive instructs nginx to keep connection open for 360 seconds from the last server response, this value is longer than the default (300 seconds) value for the heartbeat on the _changes feed using longpoll. -* The two `proxy_set_header` directives enable support for WebSocket connections, which are used by Couchbase Lite for a pull replication's _changes feed. +* The `location` block specifies directives for all URL paths below the root path `/`. +* The `proxy_pass` directive instructs nginx to forward all incoming traffic to servers defined in the sync_gateway `upstream` block. +* The two `proxy_pass_header` directives instruct nginx to pass `Accept:` and `Server:` headers on inbound and outbound traffic, these headers allow CouchbaseLite and sync_gateway to optimize data transfer, e.g. by using gzip compression and multipart/mixed if it is supported. +* The `keepalive_requests` directive instructs nginx to allow up to one thousand requests on the same connection, this is useful when getting a `_changes` feed using longpoll. +* The `keepalive_timeout` directive instructs nginx to keep connection open for 360 seconds from the last request, this value is longer than the default (300 seconds) value for the heartbeat on the `_changes` feed using longpoll. +* The `proxy_read_timeout` directive instructs nginx to keep connection open for 360 seconds from the last server response, this value is longer than the default (300 seconds) value for the heartbeat on the `_changes` feed using longpoll. +* The two `proxy_set_header` directives enable support for WebSocket connections, which are used by Couchbase Lite for a pull replication's `_changes` feed. -We now need to enable the `sync_gateway` site, in the sites-enabled directory you need to make a symbolic link to the `sync_gateway` file you just created: +We now need to enable the `sync_gateway` site, in the sites-enabled directory you need to make a symbolic link to the `sync_gateway` file you just created: [source,bash] ---- - ln -s /etc/nginx/sites-available/sync_gateway /etc/nginx/sites-enabled/sync_gateway ---- -and then restart nginx: +and then restart nginx: [source,bash] ---- - sudo service nginx restart ---- -Take a look at the site in your web browser (or use a command line option like curl or wget), specifying the virtual host name you created above, and you should see that your request is proxied through to the Sync Gateway, but your traffic is going over port 80: +Take a look at the site in your web browser (or use a command line option like curl or wget), specifying the virtual host name you created above, and you should see that your request is proxied through to the Sync Gateway, but your traffic is going over port 80: [source,bash] ---- - curl http://myservice.example.org/ {“couchdb”:”Welcome”,”vendor”:{“name”:”Couchbase Sync Gateway”,”version”:1},”version”:”Couchbase Sync Gateway/1.0.3(81;fa9a6e7)”} ---- -If you access your server using its IP address, e.g. 
`http://127.0.0.1/` (so that no `Host:` header is sent), you should see the standard `Welcome to nginx!` page. +If you access your server using its IP address, e.g. `+http://127.0.0.1/+` (so that no `Host:` header is sent), you should see the standard `Welcome to nginx!` page. [source,bash] ---- - http://127.0.0.1/ ---- -Note: Replace 127.0.0.1 with the IP address of your server. +Note: Replace 127.0.0.1 with the IP address of your server. -You should see the standard Welcome to nginx! page. +You should see the standard Welcome to nginx! page. === Load-balancing requests across multiple Sync Gateway instances Sync Gateway instances have a "shared nothing" architecture: this means that you can scale out by simply deploying additional Sync Gateway instances. But incoming traffic needs to be distributed across all the instances. -Ngingx can easily accommodate this and balance the incoming traffic load across all your Sync Gateway instances. -Simply add the additional instances' IP addresses to the `upstream` block; for example: +Nginx can easily accommodate this and balance the incoming traffic load across all your Sync Gateway instances. +Simply add the additional instances' IP addresses to the `upstream` block; for example: [source,bash] ---- - upstream sync_gateway { server 192.168.1.10:4984; server 192.168.1.11:4984; @@ -177,47 +170,46 @@ upstream sync_gateway { === Transport Layer Security (HTTPS, SSL) -To secure the connection between clients and Sync Gateway in production, you will want to use Transport Layer Security (TLS, also known as HTTPS or SSL.) This not only encrypts data from eavesdroppers (including passwords and login tokens), it also protects against Man-In-The-Middle attacks by verifying to the client that it's connecting to the real server, not an impostor. +To secure the connection between clients and Sync Gateway in production, you will want to use Transport Layer Security (TLS, also known as HTTPS or SSL.) +This not only encrypts data from eavesdroppers (including passwords and login tokens), it also protects against Man-In-The-Middle attacks by verifying to the client that it's connecting to the real server, not an impostor. To enable TLS you will need an X.509 certificate. For production, you should get a certificate from a reputable Certificate Authority, which will be signed by that authority. This allows the client to verify that your certificate is trustworthy. You will end up with two files: a private key, and a public certificate. -Both must be stored on a filesystem accessible to the nginx process. +Both must be stored on a filesystem accessible to the nginx process. Treat the private key file as highly confidential data, since anyone with the key can impersonate your site in a Man-In-The-Middle attack. -Read access should be limited to the nginx process(es) and no others. +Read access should be limited to the nginx process(es) and no others. -For testing, you can easily create your own self-signed certificate using the `openssl` command-line tool: +For testing, you can easily create your own self-signed certificate using the `openssl` command-line tool: [source,bash] ---- - sudo mkdir -p /etc/nginx/ssl sudo openssl req -x509 -nodes -days 1095 -newkey rsa:2048 -keyout /etc/nginx/ssl/nginx.key -out /etc/nginx/ssl/nginx.crt ---- Whichever way you generated the certificate, you should now have two files, a certificate and a private key. -We will assume they are at */etc/nginx/ssl/nginx.crt* and **/etc/nginx/ssl/nginx.key**. 
+We will assume they are at */etc/nginx/ssl/nginx.crt* and */etc/nginx/ssl/nginx.key*. -Now add a new server section to the nginx configuration file to support SSL termination: +Now add a new server section to the nginx configuration file to support SSL termination: [source] ---- - server { listen 443 ssl; server_name myservice.example.org; client_max_body_size 21m; - + ssl on; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; - + ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols TLSv1; - + location / { proxy_pass http://sync_gateway; proxy_pass_header Accept; @@ -230,28 +222,26 @@ server { } ---- -Restart nginx to enable the new server: +Restart nginx to enable the new server: [source,bash] ---- - sudo service nginx restart ---- -Test using curl: +Test using curl: [source,bash] ---- - curl -k https://myservice.example.org/ {“couchdb”:”Welcome”,”vendor”:{“name”:”Couchbase Sync Gateway”,”version”:1},”version”:”Couchbase Sync Gateway/1.0.3(81;fa9a6e7)”} ---- If you are using a self-signed cert, add a `-k` flag before the URL. -This tells curl to accept an untrusted certificate; without this, the command will fail because your cert is not signed by a trusted Certificate Authority. +This tells curl to accept an untrusted certificate; without this, the command will fail because your cert is not signed by a trusted Certificate Authority. == AWS Elastic Load Balancer (ELB) -Since Sync Gateway and Couchbase Lite can have long running connections for changes feeds, you should set the *Idle Timeout* setting of the ELB to the maximum value of 3600 seconds (1 hour). +Since Sync Gateway and Couchbase Lite can have long running connections for changes feeds, you should set the *Idle Timeout* setting of the ELB to the maximum value of 3600 seconds (1 hour). -See the http://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html[ELB instructions] for more information on how to change this setting. \ No newline at end of file +See the https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/config-idle-timeout.html[ELB instructions] for more information on how to change this setting. diff --git a/modules/ROOT/pages/logging.adoc b/modules/ROOT/pages/logging.adoc index 18b355a9d..525b48f21 100644 --- a/modules/ROOT/pages/logging.adoc +++ b/modules/ROOT/pages/logging.adoc @@ -1,88 +1,73 @@ = Logging +:idprefix: +:idseparator: - -Continuous logging is a new feature in Sync Gateway 2.1 that allows the console log output to be separated from log files collected by Couchbase Support. +Continuous logging is a new feature in Sync Gateway 2.1 that allows the console log output to be separated from log files collected by Couchbase Support. -This allows system administrators running Sync Gateway to tweak the log level, and log keys for the console output to suit their needs, whilst maintaining the level of logging required by Couchbase Support for investigation of issues. +This allows system administrators running Sync Gateway to tweak the log level, and log keys for the console output to suit their needs, whilst maintaining the level of logging required by Couchbase Support for investigation of issues. == Console Log Output -The console output of Sync Gateway can be filtered down via log levels and log keys, and you can tweak this as much as you like without impacting Support's ability to analyze the log files described in <>. 
+The console output of Sync Gateway can be filtered down via log levels and log keys, and you can tweak this as much as you like without impacting Support's ability to analyze the log files described in <>. === Log Levels The console log output can be configured with the following log levels, ordered from least verbose, to most. Note that log levels are additive, so if you enable `info` level, `warn` and `error` logs are also enabled. -By default, the log level is set to ``info``. +By default, the log level is set to `info`. -[cols="1,1,1", options="header"] +[cols="1,1,2"] |=== -| - Log Level - -| - Appearance - -| - Description - - - -|``none`` -| - - -| - Disables log output - -|``error`` -|``[ERR]`` -| - Displays errors that need urgent attention - -|``warn`` -|``[WRN]`` -| - Displays warnings that need some attention - -|``info`` -|``[INF]`` -| - Displays information about normal operations that don't need attention - -|``debug`` -|``[DBG]`` -| - Displays verbose output that might be useful when debugging - -|``trace`` -|``[TRC]`` -| - Displays extremely verbose output that might be useful when debugging +|Log Level |Appearance |Description + +|`none` +| - +|Disables log output + +|`error` +|`[ERR]` +|Displays errors that need urgent attention + +|`warn` +|`[WRN]` +|Displays warnings that need some attention + +|`info` +|`[INF]` +|Displays information about normal operations that don't need attention + +|`debug` +|`[DBG]` +|Displays verbose output that might be useful when debugging + +|`trace` +|`[TRC]` +|Displays extremely verbose output that might be useful when debugging |=== -Log levels can be set in the configuration file (see the link:config-properties.html#logging-$level[logging.$level] reference). +Log levels can be set in the configuration file (see the xref:config-properties.adoc#logging-$level[`logging.$level`] reference). === Log Keys Log keys are used to provide finer-grained control over what is logged. -By default, only `HTTP` is enabled. +By default, only `HTTP` is enabled. -All log keys and descriptions are described in the link:config-properties.html#logging-console-log_keys[logging.console.log_level] property reference. +All log keys and descriptions are described in the xref:config-properties.adoc#logging-console-log_keys[`logging.console.log_level`] property reference. === Console Output Color -There is an option to color log output based on log level if link:config-properties.html#logging-console-color_enabled[logging.console.color_enabled] is set to ``true``. -Note: this setting is always disabled on Windows for compatibility reasons. +There is an option to color log output based on log level if xref:config-properties.adoc#logging-console-color_enabled[`logging.console.color_enabled`] is set to `true`. +Note: this setting is always disabled on Windows for compatibility reasons. === Console Output Redirection -The log files described below are intended for Couchbase Support, and users are urged not to rely on these. +The log files described below are intended for Couchbase Support, and users are urged not to rely on these. If you have special requirements for logs, such as centralized logging, you can redirect the console output to a file, and apply your own log collection mechanism to feed that data elsewhere. 
-For example: +For example: [source] ---- - # Start Sync Gateway and redirect console output to a file ./sync-gateway > my_sg_logs.txt 2>&1 @@ -93,63 +78,45 @@ logcollector my_sg_logs.txt == Log File Outputs These are 4 log files split by log level, with a guaranteed retention period for each. -The log files can be collected with link:sgcollect-info.html[SGCollect Info], and can be analyzed by Couchbase Support for diagnosing issues in Sync Gateway. -As described above, it is recommended to use link:index.html#console-output-redirection[Console Output redirection] if you require special handling of log files from Sync Gateway, as these files are intended for Couchbase Support. +The log files can be collected with xref:sgcollect-info.adoc[SGCollect Info], and can be analyzed by Couchbase Support for diagnosing issues in Sync Gateway. +As described above, it is recommended to use +//can't find this fragment target on index.adoc +link:index.html#console-output-redirection[Console Output redirection] if you require special handling of log files from Sync Gateway, as these files are intended for Couchbase Support. -[cols="1,1,1,1", options="header"] +[cols="1,1,1,1"] |=== -| - Log File - -| - Default enabled - -| - Default max_age - -| - Minimum max_age - - - -|``sg_error.log`` -|``true`` -| - 360 Days -| - 180 Days - -|``sg_warn.log`` -|``true`` -| - 180 Days -| - 90 Days - -|``sg_info.log`` -|``true`` -| - 6 Days -| - 3 Days - -|``sg_debug.log`` -|``false`` -| - 2 Days -| - 1 Day +|Log File |Default enabled |Default max_age |Minimum max_age + +|`sg_error.log` +|`true` +|360 Days +|180 Days + +|`sg_warn.log` +|`true` +|180 Days +|90 Days + +|`sg_info.log` +|`true` +|6 Days +|3 Days + +|`sg_debug.log` +|`false` +|2 Days +|1 Day |=== -Each log level and its parameters are described in the link:config-properties.html#logging-$level[logging.$level] property reference. +Each log level and its parameters are described in the xref:config-properties.adoc#logging-$level[logging.$level] property reference. === Log File Rotation These four log files will be rotated once each exceeds a threshold size, defined by `max_size` in megabytes. Once rotated, the log files will be compressed with gzip, to reduce the disk space taken up by older logs. -These old logs will then be cleaned up once the age exceeds `max_age` in days. +These old logs will then be cleaned up once the age exceeds `max_age` in days. == Log Redaction All log outputs can be redacted, this means that user-data, considered to be private, is removed. -This feature is optional and can be enabled in the configuration with the link:config-properties.html#logging-redaction_level[logging.redaction_level] property. \ No newline at end of file +This feature is optional and can be enabled in the configuration with the xref:config-properties.adoc#logging-redaction_level[`logging.redaction_level`] property. diff --git a/modules/ROOT/pages/os-level-tuning.adoc b/modules/ROOT/pages/os-level-tuning.adoc index 681988eba..7691631be 100644 --- a/modules/ROOT/pages/os-level-tuning.adoc +++ b/modules/ROOT/pages/os-level-tuning.adoc @@ -1,43 +1,40 @@ = OS Level Tuning +:url-keepalive: https://tldp.org/HOWTO/TCP-Keepalive-HOWTO -To get the most out of Sync Gateway, it may be necessary to tune a few parameters of the OS. +To get the most out of Sync Gateway, it may be necessary to tune a few parameters of the OS. -[[_tuning_the_max_no._of_file_descriptors]] == Tuning the max no. 
of file descriptors -Raising the maximum number of file descriptors available to Sync Gateway is important because it directly affects the maximum number of *sockets* the Sync Gateway can have open, and therefore the maximum number of endpoints that the Sync Gateway can support. +Raising the maximum number of file descriptors available to Sync Gateway is important because it directly affects the maximum number of *sockets* the Sync Gateway can have open, and therefore the maximum number of endpoints that the Sync Gateway can support. === Linux Instructions (CentOS) -The following instructions are geared towards CentOS. +The following instructions are geared towards CentOS. ==== Global limits -Increase the max number of file descriptors available to **all processes**. -To specify the number of system wide file descriptors allowed, open up the `/etc/sysctl.conf` file and add the following line. +Increase the max number of file descriptors available to *all processes*. +To specify the number of system wide file descriptors allowed, open up the `/etc/sysctl.conf` file and add the following line. [source,bash] ---- - fs.file-max = 500000 ---- -Apply the changes and persist them (this will last across reboots) by running the following command. +Apply the changes and persist them (this will last across reboots) by running the following command. [source,bash] ---- - $ sysctl -p ---- ==== Sync Gateway Config Limits The maximum number of open files descriptors should also be configured in the Sync Gateway configuration file. -Refer to the link:config-properties.html#server-configuration[server section] of the configuration guide and to the example below. +Refer to the xref:config-properties.adoc#server-configuration[server section] of the configuration guide and to the example below. [source,javascript] ---- - { "maxFileDescriptors": 250000, "databases": { @@ -48,43 +45,40 @@ Refer to the link:config-properties.html#server-configuration[server section] of ==== Ulimits systemd config -The `/usr/lib/systemd/system/sync_gateway.service` has a hardcoded limit specified by ``LimitNOFILE=65535``. -To increase that, edit the `/sync_gateway.service` file to your desired value and restart the service. +The `/usr/lib/systemd/system/sync_gateway.service` has a hardcoded limit specified by `LimitNOFILE=65535`. +To increase that, edit the `/sync_gateway.service` file to your desired value and restart the service. ==== Ulimits CLI If you are running Sync Gateway outside of systemd, use the following instructions. -If you are using systemd, you can skip this section. +If you are using systemd, you can skip this section. Increase the *ulimit* setting for max number of file descriptors available to a single process. For example, setting it to 250K will allow the Sync Gateway to have 250K connections open at any given time, and leave 250K remaining file descriptors available for the rest of the processes on the machine. -These settings are just an example, you will probably want to tune them for your own particular use case. +These settings are just an example, you will probably want to tune them for your own particular use case. [source,bash] ---- - $ ulimit -n 250000 ---- -In order to persist the ulimit change across reboots, add the following lines to ``/etc/security/limits.conf``. +In order to persist the ulimit change across reboots, add the following lines to `/etc/security/limits.conf`. [source,bash] ---- - * soft nofile 250000 * hard nofile 250000 ---- -Verify your changes by running the following commands. 
+Verify your changes by running the following commands. [source,bash] ---- - $ cat /proc/sys/fs/file-max $ ulimit -n ---- -The value of both commands above should be ``250000``. +The value of both commands above should be `250000`. === References @@ -93,71 +87,66 @@ The value of both commands above should be ``250000``. == Tuning the TCP Keepalive parameters -If you have already raised the maximum number of file descriptors available to Sync Gateway, but you are still seeing "too many open files" errors, you may need to tune the TCP Keepalive parameters. +If you have already raised the maximum number of file descriptors available to Sync Gateway, but you are still seeing "too many open files" errors, you may need to tune the TCP Keepalive parameters. === Understanding the problem -Mobile endpoints tend to abruptly disconnect from the network without closing their side of the connection, as described in http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html[Section 2.3. (Checking for dead peers)] of the TCP-Keepalive-HOWTO. +Mobile endpoints tend to abruptly disconnect from the network without closing their side of the connection, as described in {url-keepalive}/overview.html[Section 2.3. (Checking for dead peers)] of the TCP-Keepalive-HOWTO. By default, these connections will hang around for approximately 7200 seconds (2 hours) before they are detected to be dead and cleaned up by the tcp/ip stack of the Sync Gateway process. -If enough of these connections accumulate, you can end up seeing "too many open files" errors on Sync Gateway. +If enough of these connections accumulate, you can end up seeing "too many open files" errors on Sync Gateway. -If you are seeing "too many open files" errors, you can count the number of established connections coming into your sync gateway with the following command: +If you are seeing "too many open files" errors, you can count the number of established connections coming into your sync gateway with the following command: [source,bash] ---- - $ lsof -p | grep -i established | wc -l ---- -If the value returned is near your max file descriptor limit, then you can either try increasing the max file descriptor limit even higher, or tuning the TCP Keepalive parameters to reduce the amount of time that dead peers will cause a socket to be held open on their behalf. +If the value returned is near your max file descriptor limit, then you can either try increasing the max file descriptor limit even higher, or tuning the TCP Keepalive parameters to reduce the amount of time that dead peers will cause a socket to be held open on their behalf. [[_linux_instructions_centos_1]] === Linux Instructions (CentOS) -Tuning the TCP Keepalive settings is not without its downsides -- it will increase the amount of overall network traffic on your system, because the tcp/ip stack will be sending more frequent Keepalive packets in order to detect dead peers faster. +Tuning the TCP Keepalive settings is not without its downsides -- it will increase the amount of overall network traffic on your system, because the tcp/ip stack will be sending more frequent Keepalive packets in order to detect dead peers faster. The following settings will reduce the amount of time that dead peer connections hang around from approximately 2 hours down to approximately 30 minutes. 
-Add the following lines to your `/etc/sysctl.conf` file: +Add the following lines to your `/etc/sysctl.conf` file: [source,bash] ---- - net.ipv4.tcp_keepalive_time = 600 net.ipv4.tcp_keepalive_intvl = 60 net.ipv4.tcp_keepalive_probes = 20 ---- -This translates to: +This translates to: -. The keepalive routines wait initially for 10 minutes (600 secs) before sending the first keepalive probe -. Resend the probe every minute (60 seconds). -. If no ACK response is received for 20 consecutive times, the connection is marked as broken. +. The keepalive routines wait initially for 10 minutes (600 secs) before sending the first keepalive probe +. Resend the probe every minute (60 seconds). +. If no ACK response is received for 20 consecutive times, the connection is marked as broken. To reduce the amount of time even further, you can reduce the `tcp_retries2` value. -Add the following line to your `/etc/sysctl.conf` file: +Add the following line to your `/etc/sysctl.conf` file: [source,bash] ---- - net.ipv4.tcp_retries2 = 8 ---- -To activate the changes and persist them across reboots, run: +To activate the changes and persist them across reboots, run: [source,bash] ---- - $ sysctl -p ---- -See http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/usingkeepalive.html[Using TCP keepalive under Linux] for more details on setting these parameters. +See {url-keepalive}/usingkeepalive.html[Using TCP keepalive under Linux] for more details on setting these parameters. -[[_references_1]] === References -* http://tldp.org/HOWTO/TCP-Keepalive-HOWTO/overview.html[TCP Keepalive HOWTO] -* http://stackoverflow.com/questions/5907527/application-control-of-tcp-retransmission-on-linux[Application control of TCP retransmission on Linux] +* {url-keepalive}/overview.html[TCP Keepalive HOWTO] +* https://stackoverflow.com/questions/5907527/application-control-of-tcp-retransmission-on-linux[Application control of TCP retransmission on Linux] * https://groups.google.com/forum/#!msg/golang-nuts/rRu6ibLNdeI/0bjSmO5fN_8J[Proactively closing longpoll connections for endpoints that disappear from the network] -* http://linux.die.net/man/7/tcp[TCP man page] +* https://linux.die.net/man/7/tcp[TCP man page] * https://github.com/couchbase/sync_gateway/issues/742[Sync Gateway Issue 742] diff --git a/modules/ROOT/pages/release-notes.adoc b/modules/ROOT/pages/release-notes.adoc index 1caa0346a..1ce6ab575 100644 --- a/modules/ROOT/pages/release-notes.adoc +++ b/modules/ROOT/pages/release-notes.adoc @@ -1,4 +1,5 @@ = Release Notes +:url-issues-sync: https://github.com/couchbase/sync_gateway/issues == 2.1 @@ -6,45 +7,46 @@ The following features are being deprecated and will be unsupported in an upcoming version of Sync Gateway. -* *Bucket shadowing* has been deprecated since 1.4 and has now become unsupported. The recommended approach to perform operations on a bucket dedicated to Couchbase Mobile is to enable link:shared-bucket-access.html[shared bucket access]. +* *Bucket shadowing* has been deprecated since 1.4 and has now become unsupported. +The recommended approach to perform operations on a bucket dedicated to Couchbase Mobile is to enable xref:shared-bucket-access.adoc[shared bucket access]. 
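For reference, enabling shared bucket access is a per-database setting in the Sync Gateway configuration file. A minimal sketch is shown below; the bucket and server values are placeholders, and the xref:shared-bucket-access.adoc[shared bucket access] page remains the authoritative reference for these properties.

[source,javascript]
----
{
  "databases": {
    "db": {
      "bucket": "my-bucket",
      "server": "http://localhost:8091",
      "enable_shared_bucket_access": true,
      "import_docs": "continuous"
    }
  }
}
----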
*Performance Improvements* -- https://github.com/couchbase/sync_gateway/issues/716[*#716*] Use sync.atomic to synchronize logging logLevel -- https://github.com/couchbase/sync_gateway/issues/2396[*#2396*] SyncGateway restart after node automatic failover is taking more time -- https://github.com/couchbase/sync_gateway/issues/2536[*#2536*] SG-Replicator throughput is not increasing with SG-Accelerator -- https://github.com/couchbase/sync_gateway/issues/2563[*#2563*] Allow callers to provide current value to WriteUpdateWithXattr +- {url-issues-sync}/716[*#716*] Use sync.atomic to synchronize logging logLevel +- {url-issues-sync}/2396[*#2396*] SyncGateway restart after node automatic failover is taking more time +- {url-issues-sync}/2536[*#2536*] SG-Replicator throughput is not increasing with SG-Accelerator +- {url-issues-sync}/2563[*#2563*] Allow callers to provide current value to WriteUpdateWithXattr *Enhancements* -- https://github.com/couchbase/sync_gateway/issues/145[*#145*] Switch from base/logging.go to 'clog' package -- https://github.com/couchbase/sync_gateway/issues/341[*#341*] Sync_Gateway config file only supports one URL in connection string -- https://github.com/couchbase/sync_gateway/issues/716[*#716*] Use sync.atomic to synchronize logging logLevel -- https://github.com/couchbase/sync_gateway/issues/939[*#939*] Use W3C log format for HTTP logging -- https://github.com/couchbase/sync_gateway/issues/1021[*#1021*] Enable log level to be set via SG config -- https://github.com/couchbase/sync_gateway/issues/1652[*#1652*] Differentiate logging between standard out and standard error -- https://github.com/couchbase/sync_gateway/issues/1964[*#1964*] Add DCP support to Walrus w/ rollback ability -- https://github.com/couchbase/sync_gateway/issues/3119[*#3119*] Avoid design doc/view creation when already present -- https://github.com/couchbase/sync_gateway/issues/3526[*#3526*] 2.1.0- sg collect info - Show message if sync gateway is not running -- https://github.com/couchbase/sync_gateway/issues/3584[*#3584*] Sync-gateway does not show any error on logs when used wrong name/value pairs for logging +- {url-issues-sync}/145[*#145*] Switch from base/logging.go to 'clog' package +- {url-issues-sync}/341[*#341*] Sync_Gateway config file only supports one URL in connection string +- {url-issues-sync}/716[*#716*] Use sync.atomic to synchronize logging logLevel +- {url-issues-sync}/939[*#939*] Use W3C log format for HTTP logging +- {url-issues-sync}/1021[*#1021*] Enable log level to be set via SG config +- {url-issues-sync}/1652[*#1652*] Differentiate logging between standard out and standard error +- {url-issues-sync}/1964[*#1964*] Add DCP support to Walrus w/ rollback ability +- {url-issues-sync}/3119[*#3119*] Avoid design doc/view creation when already present +- {url-issues-sync}/3526[*#3526*] 2.1.0- sg collect info - Show message if sync gateway is not running +- {url-issues-sync}/3584[*#3584*] Sync-gateway does not show any error on logs when used wrong name/value pairs for logging *Bugs* -- https://github.com/couchbase/sync_gateway/issues/1574[*#1574*] Windows installer does not start and stop service wrapper -- https://github.com/couchbase/sync_gateway/issues/2173[*#2173*] Go-couchbase 500 errors when rebalancing -- https://github.com/couchbase/sync_gateway/issues/3548[*#3548*] Windows logs are written to "Program Files (x86)" when running from "Program Files" -- https://github.com/couchbase/sync_gateway/issues/3549[*#3549*] Incompatible Windows filename from _sgcollect_info endpoint 
-- https://github.com/couchbase/sync_gateway/issues/3555[*#3555*] _sgcollect_info endpoint fails with 500 error on build 78 -- https://github.com/couchbase/sync_gateway/issues/3559[*#3559*] Output_directory parameter is ignored on sg_collectinfo rest end point -- https://github.com/couchbase/sync_gateway/issues/3561[*#3561*] Uploadhost is ignored when upload parameter is not given to _sgcollect_info end point -- https://github.com/couchbase/sync_gateway/issues/3572[*#3572*] _sg_collect_info rest end point : Throwing bad request error for a requirement for upload host with upload enabled to false -- https://github.com/couchbase/sync_gateway/issues/3583[*#3583*] Sgcollect : No sgcollect info zip for sg accel -- https://github.com/couchbase/sync_gateway/issues/3632[*#3632*] Sgcollect rest api fails with no write access to "/opt/couchbase-sync-gateway/tools" -- https://github.com/couchbase/sync_gateway/issues/3655[*#3655*] Sg collect rest API does not create zip files under /home/sync-gateway +- {url-issues-sync}/1574[*#1574*] Windows installer does not start and stop service wrapper +- {url-issues-sync}/2173[*#2173*] Go-couchbase 500 errors when rebalancing +- {url-issues-sync}/3548[*#3548*] Windows logs are written to "Program Files (x86)" when running from "Program Files" +- {url-issues-sync}/3549[*#3549*] Incompatible Windows filename from _sgcollect_info endpoint +- {url-issues-sync}/3555[*#3555*] _sgcollect_info endpoint fails with 500 error on build 78 +- {url-issues-sync}/3559[*#3559*] Output_directory parameter is ignored on sg_collectinfo rest end point +- {url-issues-sync}/3561[*#3561*] Uploadhost is ignored when upload parameter is not given to _sgcollect_info end point +- {url-issues-sync}/3572[*#3572*] _sg_collect_info rest end point : Throwing bad request error for a requirement for upload host with upload enabled to false +- {url-issues-sync}/3583[*#3583*] Sgcollect : No sgcollect info zip for sg accel +- {url-issues-sync}/3632[*#3632*] Sgcollect rest API fails with no write access to "/opt/couchbase-sync-gateway/tools" +- {url-issues-sync}/3655[*#3655*] Sgcollect rest API does not create zip files under /home/sync-gateway *Known Issues* -- https://github.com/couchbase/sync_gateway/issues/3562[*#3562*] Sync Gateway requires Couchbase Server nodes to use the same SSL memcached port +- {url-issues-sync}/3562[*#3562*] Sync Gateway requires Couchbase Server nodes to use the same SSL memcached port == Upgrading @@ -52,17 +54,19 @@ The upgrade from views to GSI (N1QL) happens automatically when starting a Sync Installation will follow the same approach implemented in 2.0 for view changes. On startup, Sync Gateway will check for the existence of the GSI indexes, and only attempt to create them if they do not already exist. -As part of the existence check, Sync Gateway will also check if link:config-properties.html#databases-foo_db-num_index_replicas[databases.$db.num_index_replicas] for the existing indexes matches the value specified in the configuration file. +As part of the existence check, Sync Gateway will also check if xref:config-properties.adoc#databases-foo_db-num_index_replicas[databases.$db.num_index_replicas] for the existing indexes matches the value specified in the configuration file. If not, Sync Gateway will drop and recreate the index. Then, Sync Gateway will wait until indexes are available before starting to serve requests. Sync Gateway 2.1 will *not* automatically remove the previously used design documents. 
-Removal of the obsolete design documents is done via a call to the new link:admin-rest-api.html#/server/post__post_upgrade[+/{db}/_post_upgrade+] endpoint in Sync Gateway`'s Admin REST API. -This endpoint can be run in preview mode (``?preview=true``) to see which design documents would be removed. +Removal of the obsolete design documents is done via a call to the new xref:admin-rest-api.adoc#/server/post\__post_upgrade[+/{db}/_post_upgrade+] endpoint in Sync Gateway`'s Admin REST API. +This endpoint can be run in preview mode (`?preview=true`) to see which design documents would be removed. To summarize, the steps to perform an upgrade to Sync Gateway 2.1 are: -. Upgrade one node in the cluster to 2.1, and wait for it to be reachable via the REST API (for example at http://localhost:4985/). +. Upgrade one node in the cluster to 2.1, and wait for it to be reachable via the REST API (for example at \http://localhost:4985/). . Upgrade the rest of the nodes in the cluster. . Clean up obsolete views: -** *Optional* Issue a call to `/_post_upgrade?preview=true` on any node to preview which design documents will be removed. To upgrade to 2.1, expect to see "sync_gateway" and "sync_housekeeping" listed. -** Issue a call to `/post_upgrade` to remove the obsolete design documents. The response should indicate that "sync_gateway" and "sync_housekeeping" were removed. \ No newline at end of file +** *Optional* Issue a call to `_post_upgrade?preview=true` on any node to preview which design documents will be removed. +To upgrade to 2.1, expect to see "sync_gateway" and "sync_housekeeping" listed. +** Issue a call to `_post_upgrade` to remove the obsolete design documents. +The response should indicate that "sync_gateway" and "sync_housekeeping" were removed. diff --git a/modules/ROOT/pages/resolving-conflicts.adoc b/modules/ROOT/pages/resolving-conflicts.adoc index 5a97eb3bc..604d45d6c 100644 --- a/modules/ROOT/pages/resolving-conflicts.adoc +++ b/modules/ROOT/pages/resolving-conflicts.adoc @@ -1,51 +1,52 @@ = Resolving Conflicts +:idprefix: +:idseparator: - Since Couchbase Lite 2.0, conflicts are automatically resolved. -This functionality aims to simplify the default behavior of conflict handling and save disk space (conflicting revisions will no longer be stored in the database). The Couchbase Lite SDK guides describe how the automatic conflict resolution works: +This functionality aims to simplify the default behavior of conflict handling and save disk space (conflicting revisions will no longer be stored in the database). +The Couchbase Lite SDK guides describe how the automatic conflict resolution works: -* xref:2.1@couchbase-lite:ROOT::swift.adoc#handling-conflicts[Swift] -* xref:2.1@couchbase-lite:ROOT::java.adoc#handling-conflicts[Java/Android] -* xref:2.1@couchbase-lite:ROOT::csharp.adoc#handling-conflicts[C#] -* xref:2.1@couchbase-lite:ROOT::objc.adoc#handling-conflicts[Objective-C] +* xref:2.1@couchbase-lite::swift.adoc#handling-conflicts[Swift] +* xref:2.1@couchbase-lite::java.adoc#handling-conflicts[Java/Android] +* xref:2.1@couchbase-lite::csharp.adoc#handling-conflicts[C#] +* xref:2.1@couchbase-lite::objc.adoc#handling-conflicts[Objective-C] The following guide describes how to handle conflicts that are created by Couchbase Lite 1.x clients. -This content isn't necessary for applications that use Couchbase Mobile 2.0 only. +This content isn't necessary for applications that use Couchbase Mobile 2.0 only.
In Couchbase Lite 1.x, a conflict usually occurs when two writers are offline and save a different revision of the same document. Couchbase Mobile provides features to resolve these conflicts, the resolution rules are written in the application to keep full control over which edit (also called a revision) should be picked. -The http://developer.couchbase.com/documentation/mobile/1.5/guides/couchbase-lite/native-api/revision/index.html[revision guide] and http://developer.couchbase.com/documentation/mobile/1.5/guides/couchbase-lite/native-api/document/index.html#document-conflict-faq[documents conflicts FAQ] are good resources to learn how to resolve conflicts on the devices with Couchbase Lite. -This guide describes how to handle the conflict resolution on the server-side using the Sync Gateway Admin REST API. +The https://developer.couchbase.com/documentation/mobile/1.5/guides/couchbase-lite/native-api/revision/index.html[revision guide] and https://developer.couchbase.com/documentation/mobile/1.5/guides/couchbase-lite/native-api/document/index.html#document-conflict-faq[documents conflicts FAQ] are good resources to learn how to resolve conflicts on the devices with Couchbase Lite. +This guide describes how to handle the conflict resolution on the server-side using the Sync Gateway Admin REST API. == Creating a conflict -During development, the *new_edits* flag can be used to allow conflicts to be created on demand. +During development, the *new_edits* flag can be used to allow conflicts to be created on demand. [source,bash] ---- - // Persist three revisions of user foo with different statuses // and updated_at dates curl -X POST http://localhost:4985/sync_gateway/_bulk_docs \ -H "Content-Type: application/json" \ -d '{"new_edits": false, "docs": [{"_id": "foo", "type": "user", "updated_at": "2016-06-24T17:37:49.715Z", "status": "online", "_rev": "1-123"}, {"_id": "foo", "type": "user", "updated_at": "2016-06-26T17:37:49.715Z", "status": "offline", "_rev": "1-456"}, {"_id": "foo", "type": "user", "updated_at": "2016-06-25T17:37:49.715Z", "status": "offline", "_rev": "1-789"}]}' - + // Persist three revisions of task bar with different names curl -X POST http://localhost:4985/sync_gateway/_bulk_docs \ -H "Content-Type: application/json" \ -d '{"new_edits": false, "docs": [{"_id": "bar", "type": "task", "name": "aaa", "_rev": "1-123"}, {"_id": "bar", "type": "task", "name": "ccc", "_rev": "1-456"}, {"_id": "bar", "type": "task", "name": "bbb", "_rev": "1-789"}]}' ---- -It can be set in the request body of the POST `+/{db}/_bulk_docs+` endpoint. +It can be set in the request body of the POST `+/{db}/_bulk_docs+` endpoint. == Detecting a conflict -Conflicts are detected on the changes feed with the following query string options. +Conflicts are detected on the changes feed with the following query string options. [source,bash] ---- - curl -X GET 'http://localhost:4985/sync_gateway/_changes?active_only=true&style=all_docs' - + { "results": [ {"seq":1,"id":"_user/","changes":[{"rev":""}]}, @@ -56,29 +57,27 @@ curl -X GET 'http://localhost:4985/sync_gateway/_changes?active_only=true&style= } ---- -With `active_only=true` and `style=all_docs` set, the changes feed excludes the deletions (also known as tombstones) and channel access removals which are not needed for resolving conflicts. +With `active_only=true` and `style=all_docs` set, the changes feed excludes the deletions (also known as tombstones) and channel access removals which are not needed for resolving conflicts. 
In this guide, we will write a program in node.js to connect to the changes feed and use the https://github.com/request/request[request] library to perform operations on the Sync Gateway Admin REST API. The concepts covered below should also apply to other server-side languages, the implementation will differ but the sequence of operations is the same. -In a new directory, install the library with npm. +In a new directory, install the library with npm. [source,bash] ---- - npm install request ---- -Create a new file called index.js with the following. +Create a new file called index.js with the following. [source,javascript] ---- - var request = require('request'); var sync_gateway_url = 'http://localhost:4985/sync_gateway/'; var seq = process.argv[2]; - + getChanges(seq); - + function getChanges(seq) { var querystring = 'style=all_docs&active_only=true&include_docs=true&feed=longpoll&since=' + seq; var options = { @@ -108,42 +107,39 @@ function getChanges(seq) { } ---- -Let's go through this step by step: - -. GET request to the _changes endpoint. With the following options: +Let's go through this step by step: +. GET request to the `_changes` endpoint. +With the following options: +** `feed=longpoll&since=`: The response will contain all the changes since the specified `seq`. +If `seq` is the last sequence number (the most recent one) then the connection will remain open until a new document is processed by Sync Gateway and the change event is sent. +The `getChanges` method is called recursively to always have the latest changes. +** `include_docs:` The response will contain the document body (i.e. the current revision for that document). +** `all_docs&active_only=true:` The response will exclude changes that are deletions and channel access removals. -* *feed=longpoll&since=* -+ -// : - The response will contain all the changes since the specified seq. If seq is the last sequence number (the most recent one) then the connection will remain open until a new document is processed by Sync Gateway and the change event is sent. The getChanges method is called recursively to always have the latest changes. -+* *include_docs:* The response will contain the document body (i.e. the current revision for that document). +. *Detect and resolve conflicts.* +If there is more than one revision, then it's a conflict. +Resolve the conflict and return (stop processing this response). +Once the conflict is resolved, get the next change(s). +. *There were no conflicts in this response, get the next change(s).* -+* *all_docsactive_only=true:* The response will exclude changes that are deletions and channel access removals. +The program won't run yet because the `resolveConflicts` method isn't defined; read the next section to learn how to resolve conflicts once they are detected. -.. *Detect and resolve conflicts.* If there are more than one revision then it's a conflict. Resolve the conflict and return (stop processing this response). Once the conflict is resolve, get the next change(s). -.. *There were no conflicts in this response, get the next change(s).* - -The program won`'t run yet because the resolveConflicts method isn`'t defined, read the next section to learn how to resolve conflicts once they are detected. -[[_resolving_conflicts]] === Resolving conflicts -To resolve conflicts, the open_revs=all option on the document endpoint returns all the revisions of a given document.
-The *Accept: application/json* header is used to have a single JSON object in the response (otherwise the response is in multipart format). +To resolve conflicts, the `open_revs=all` option on the document endpoint returns all the revisions of a given document. +The *Accept: application/json* header is used to have a single JSON object in the response (otherwise the response is in multipart format). [source,bash] ---- - curl -X GET -H 'Accept: application/json' 'http://localhost:4984/sync_gateway/foo?open_revs=all' ---- From there, the App Server decides how to merge the data and/or elect the winning update operation. -Add the following in `index.js` below the *getChanges* method. +Add the following in `index.js` below the *getChanges* method. [source,javascript] ---- - function chooseLatest(revisions) { var winning_rev = null; var latest_time = 0; @@ -156,7 +152,7 @@ function chooseLatest(revisions) { } return {revisions: revisions, winning_rev: winning_rev}; } - + function resolveConflicts(current_rev, callback) { var options = { url: sync_gateway_url + current_rev._id + '?open_revs=all', @@ -185,7 +181,7 @@ function resolveConflicts(current_rev, callback) { // revisions must be removed even in this scenario. resolved = {revisions: revisions, winning_rev: current_rev}; } - + // 3. Prepare the changes for the _bulk_docs request. var bulk_docs = revisions.map(function (revision) { if (revision._rev == current_rev._rev) { @@ -196,7 +192,7 @@ function resolveConflicts(current_rev, callback) { } return revision }); - + // 4. Write each change (deletion or update) to the database. var options = {url: sync_gateway_url + '_bulk_docs', body: JSON.stringify({docs: bulk_docs})}; request.post(options, function (error, response, body) { @@ -210,26 +206,30 @@ function resolveConflicts(current_rev, callback) { } ---- -So what is this code doing? +So what is this code doing? -. *Use open_revs=all to get the properties in each revision.* -. *Resolve the conflict.* For user documents, the revision with the latest updated_at value wins. For other document types, the current revision (the one that got picked deterministically by the system) remains the winner. Note that non-current revisions must still be removed otherwise they may be promoted as the current revision at a later time. The resolution logic may be different for each document type. -. *Prepare the changes for the _bulk_docs request.* All non-current revision are marked for deletion with the `\_deleted: true` property. The current revision properties are replaced with the properties of the winning revision. +. *Use `open_revs=all` to get the properties in each revision.* +. *Resolve the conflict.* +For user documents, the revision with the latest `updated_at` value wins. +For other document types, the current revision (the one that got picked deterministically by the system) remains the winner. +Note that non-current revisions must still be removed otherwise they may be promoted as the current revision at a later time. +The resolution logic may be different for each document type. +. *Prepare the changes for the `_bulk_docs` request.* +All non-current revision are marked for deletion with the `_deleted: true` property. +The current revision properties are replaced with the properties of the winning revision. . *Write each change (deletion or update) to the database.* -Start the program from sequence 0, the first sequence number in any Couchbase Mobile database. 
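To make steps 3 and 4 concrete, here is a simplified, hand-written equivalent of the `_bulk_docs` request the program ends up making for the `foo` document created at the beginning of this guide, assuming `1-789` is the revision Sync Gateway picked as current (the revision IDs and winning body are illustrative and will differ in practice):

[source,bash]
----
# Tombstone the losing branches and write the winning properties onto the current branch.
curl -X POST http://localhost:4985/sync_gateway/_bulk_docs \
  -H "Content-Type: application/json" \
  -d '{"docs": [
        {"_id": "foo", "_rev": "1-123", "_deleted": true},
        {"_id": "foo", "_rev": "1-456", "_deleted": true},
        {"_id": "foo", "_rev": "1-789", "type": "user", "status": "offline", "updated_at": "2016-06-26T17:37:49.715Z"}
      ]}'
----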
+Start the program from sequence 0, the first sequence number in any Couchbase Mobile database. [source,bash] ---- - node index.js 0 ---- -The conflicts that were added at the beginning of the guide are detected and resolved. +The conflicts that were added at the beginning of the guide are detected and resolved. [source] ---- - Document with ID _user/ has 1 revisions. Document with ID foo has 3 revisions. Conflicts exist. Resolving... @@ -241,4 +241,5 @@ Document with ID foo has 1 revisions. Document with ID bar has 1 revisions. ---- -Add more conflicting revisions from the command-line with a different document ID (baz for example). The conflict is resolved and the program continues to listen for the next change(s). +Add more conflicting revisions from the command-line with a different document ID (baz for example). +The conflict is resolved and the program continues to listen for the next change(s). diff --git a/modules/ROOT/pages/rest-api-client.adoc b/modules/ROOT/pages/rest-api-client.adoc index 5b597c87c..e1a760656 100644 --- a/modules/ROOT/pages/rest-api-client.adoc +++ b/modules/ROOT/pages/rest-api-client.adoc @@ -1,18 +1,21 @@ = REST API Client +:idprefix: +:idseparator: - +:url-downloads: https://www.couchbase.com/downloads Whether you're developing a web application getting data from the Sync Gateway API or integrating it with another system you will almost certainly need an HTTP library to consume the Public and Admin Sync Gateway REST APIs. The documentation for the Sync Gateway REST APIs is using Swagger which is a great toolkit for writing REST API documentation, and also to generate HTTP libraries. -This guide will walk you through how to start using those libraries to display documents stored in Sync Gateway on a web page +This guide will walk you through how to start using those libraries to display documents stored in Sync Gateway on a web page -Follow the steps below to get Sync Gateway up and running. +Follow the steps below to get Sync Gateway up and running. -. http://www.couchbase.com/nosql-databases/downloads#couchbase-mobile[Download Sync Gateway] -. In a new working directory, open a new file called `sync-gateway-config.json` with the following -+ +. {url-downloads}#couchbase-mobile[Download Sync Gateway] +. In a new working directory, open a new file called `sync-gateway-config.json` with the following ++ +-- [source,javascript] ---- - { "log": ["HTTP+"], "CORS": { @@ -29,37 +32,33 @@ Follow the steps below to get Sync Gateway up and running. } } ---- -+ -Here, you're enabling CORS on ``http://localhost:8000``, the hostname of the web server that will serve the web application. -. Start Sync Gateway from the command line with the configuration file -+ +Here, you're enabling CORS on `+http://localhost:8000+`, the hostname of the web server that will serve the web application. +-- + +. Start Sync Gateway from the command line with the configuration file ++ [source,bash] ---- - ~/Downloads/couchbase-sync-gateway/bin/sync_gateway sync-gateway-config.json ---- -. Insert a few documents using the POST `+/{db}/_bulk_docs+` endpoint -+ +. 
Insert a few documents using the POST `+/{db}/_bulk_docs+` endpoint ++ [source,bash] ---- - curl -X POST http://localhost:4985/todo/_bulk_docs \ -H "Content-Type: application/json" \ -d '{"docs": [{"task": "avocados", "type": "task"}, {"task": "oranges", "type": "task"}, {"task": "tomatoes", "type": "task"}]}' ---- - -[[_a_simple_web_application]] == A Simple Web Application In this section you will use Swagger JS in the browser to insert a few documents and display them in a list. -Create a new file called *index.html* with the following. +Create a new file called *index.html* with the following. [source,html] ---- - @@ -77,11 +76,10 @@ Create a new file called *index.html* with the following. Install the https://github.com/swagger-api/swagger-js[swagger-js] library in your working project. -Next, create a new file called *index.js* to start sending requests to Sync Gateway. +Next, create a new file called *index.js* to start sending requests to Sync Gateway. [source,javascript] ---- - // initialize swagger client, point to a swagger spec window.client = new SwaggerClient({ url: 'http://developer.couchbase.com/mobile/swagger/sync-gateway-public/spec.json', @@ -93,35 +91,31 @@ window.client = new SwaggerClient({ ---- Here you're initializing the Swagger library with the Sync Gateway public REST API spec and promises enabled. -Promises are great because you can chain HTTP operations in a readable style. +Promises are great because you can chain HTTP operations in a readable style. -[quote] -*Note:* Keep in mind that in this example the Swagger client is pointing to the spec hosted on developer.couchbase.com. +NOTE: Keep in mind that in this example the Swagger client is pointing to the spec hosted on developer.couchbase.com. We often publish changes to those specs for documentation purposes; if it's a breaking change then it will modify the request and parameter names in the Swagger client and break your code. You can refer to the https://github.com/couchbaselabs/couchbase-mobile-portal/blob/master/swagger/CHANGELOG.md[changelog of the specs] to find the list of methods and parameters that changed. -In production, we highly encourage you to download the spec as a `$$.$$json` file and pass it to the Swagger client using the `{spec: }` option. - -In this working directory, start a web server with the command `python -m SimpleHTTPServer 8000` and navigate to http://localhost:8000/index.html in a browser. -Open the dev tools to access the console and you should see the list of operations available on the `client` object. +In production, we highly encourage you to download the spec as a `$$.$$json` file and pass it to the Swagger client using the `{spec: }` option. +In this working directory, start a web server with the command `python -m SimpleHTTPServer 8000` and navigate to \http://localhost:8000/index.html in a browser. +Open the dev tools to access the console and you should see the list of operations available on the `client` object. image::swagger-browser.png[] All the endpoints are grouped by tag. -A tag represents a certain functionality of the API (i.e database, query, authentication). +A tag represents a certain functionality of the API (i.e database, query, authentication). The `client.help()` method is a helper function that prints all the tags available. In this case we'd like to query all documents in the database so we'll use the `get_db_all_docs` method on the database tag to perform this operation. 
-The helper function is available on any node of the API, so you can write `client.database.get_db_all_docs.help()` to print the documentation for that endpoint as shown below. - +The helper function is available on any node of the API, so you can write `client.database.get_db_all_docs.help()` to print the documentation for that endpoint as shown below. image::swagger-all-docs.png[] -Copy the following below the existing code in *index.js* to query all the documents in the database and display them in the list. +Copy the following below the existing code in *index.js* to query all the documents in the database and display them in the list. [source,javascript] ---- - client.query.get_db_all_docs({db: 'todo', include_docs: true}) .then(function (res) { var rows = res.obj.rows; @@ -137,7 +131,8 @@ client.query.get_db_all_docs({db: 'todo', include_docs: true}) }) ---- -The *include_docs* option is used to retrieve the document properties (the text to display on the screen is located on the `doc.task` field). A promise can either be fulfilled with a value (the successful response) or rejected with a reason (the error response). Reload the browser and you should see the list of tasks. - +The *include_docs* option is used to retrieve the document properties (the text to display on the screen is located on the `doc.task` field). +A promise can either be fulfilled with a value (the successful response) or rejected with a reason (the error response). +Reload the browser and you should see the list of tasks. image::task-list.png[] diff --git a/modules/ROOT/pages/rest-api.adoc b/modules/ROOT/pages/rest-api.adoc index 93a42e60b..10ea5c047 100644 --- a/modules/ROOT/pages/rest-api.adoc +++ b/modules/ROOT/pages/rest-api.adoc @@ -1,15 +1,16 @@ -include::_attributes.adoc[] += Public REST API +:idprefix: +:idseparator: - The API explorer below groups all the endpoints by functionality. -You can click on a label to expand the list of endpoints. +You can click on a label to expand the list of endpoints. You can also send a request to each endpoint against an instance of Sync Gateway. To do so, you must enable CORS with the following in the configuration file. -Refer to the Sync Gateway link:getting-started.html[installation guide] for more information on starting a new instance of Sync Gateway. +Refer to the Sync Gateway xref:getting-started.adoc[installation guide] for more information on starting a new instance of Sync Gateway. [source,javascript] ---- - { ... "CORS": { diff --git a/modules/ROOT/pages/running-replications.adoc b/modules/ROOT/pages/running-replications.adoc index 5183f7032..94597adbc 100644 --- a/modules/ROOT/pages/running-replications.adoc +++ b/modules/ROOT/pages/running-replications.adoc @@ -1,43 +1,42 @@ -= Running replications += Running Replications Sync Gateway has the ability to run active one way replications between two Sync Gateway databases. Documents go through the Sync Function on the target Sync Gateway instance which ensures that access permissions are updated. -On the architecture diagram below, any changes that users/systems make on either Sync Gateway instance will be replicated to the other Sync Gateway instance. +On the architecture diagram below, any changes that users/systems make on either Sync Gateway instance will be replicated to the other Sync Gateway instance. image::running-replications.png[] -*Note:* A _Sync Gateway database_ can also be referred to as a namespace for documents, the data is *always* stored in Couchbase Server. 
+NOTE: A _Sync Gateway database_ can also be referred to as a namespace for documents, the data is *always* stored in Couchbase Server. -Features: +Features: -* JSON configuration to specify replications -* Supports multiple replications running concurrently -* Can run both OneShot and Continuous replications -* Does not store anything persistently -* Stateless -- can be interrupted/restarted anytime without negative side effects -* Filter replications using channels +* JSON configuration to specify replications +* Supports multiple replications running concurrently +* Can run both OneShot and Continuous replications +* Does not store anything persistently +* Stateless -- can be interrupted/restarted anytime without negative side effects +* Filter replications using channels -Limitations: +Limitations: -* Can only replicates SG databases that are hosted on recent versions of Sync Gateway (after commit 50d30eb3d on March 7, 2014) -* In deployments with multiple Sync Gateway nodes, only _one_ of the Sync Gateways should be configured for replications. If multiple Sync Gateways are configured for replications, it could substantially increase the amount of duplicate work, and therefore should be avoided. The limitation is that the system is not guaranteed to be Highly Available: if the Sync Gateway that is chosen to drive the replication goes down or is otherwise removed from the system, then the replications will stop. +* Can only replicates SG databases that are hosted on recent versions of Sync Gateway (after commit 50d30eb3d on March 7, 2014) +* In deployments with multiple Sync Gateway nodes, only _one_ of the Sync Gateways should be configured for replications. +If multiple Sync Gateways are configured for replications, it could substantially increase the amount of duplicate work, and therefore should be avoided. +The limitation is that the system is not guaranteed to be Highly Available: if the Sync Gateway that is chosen to drive the replication goes down or is otherwise removed from the system, then the replications will stop. - -[[_running_replications_via_the_rest_api]] == Running replications via the REST API -A replication is run by sending a POST request to the server endpoint /_replicate, with a JSON object defining the replication parameters. +A replication is run by sending a POST request to the server endpoint `_replicate`, with a JSON object defining the replication parameters. Both one-shot and continuous replications can be run. Each replication is one-way between two local or remote Sync Gateway databases. Multiple replications can run simultaneously, supporting bi-directional replications and different replication topologies. -Be aware that both databases being synchronized should have the same sync function, otherwise it could lead to unexpected behaviour. +Be aware that both databases being synchronized should have the same sync function, otherwise it could lead to unexpected behavior. These parameters start a one-shot replication between two databases on the local Sync Gateway instance. -The request will block until the replication has completed. +The request will block until the replication has completed. [source,javascript] ---- - { "source": "db", "target": "db-copy" @@ -45,11 +44,10 @@ The request will block until the replication has completed. ---- These parameters start a one-shot replication between one database on the local Sync Gateway instance and one on a remote Sync Gateway instance. 
-The request will return immediately and the replication will run asynchronously. +The request will return immediately and the replication will run asynchronously. [source,javascript] ---- - { "source": "db", "target": "http://example.com:4985/db-copy", @@ -57,12 +55,11 @@ The request will return immediately and the replication will run asynchronously. } ---- -These parameters start a continuous replication between one database on the local Sync Gateway instance and one on a remote Sync Gateway instance with the user provided replication_id. -The request will return immediately and the replication will run asynchronously. +These parameters start a continuous replication between one database on the local Sync Gateway instance and one on a remote Sync Gateway instance with the user provided `replication_id`. +The request will return immediately and the replication will run asynchronously. [source,javascript] ---- - { "replication_id":"my-named-replication", "source": "db", @@ -73,11 +70,10 @@ The request will return immediately and the replication will run asynchronously. These parameters start a continuous replication between one database on the local Sync Gateway instance and one on a remote Sync Gateway instance. The replicator will batch up to 1000 revisions at a time, this will improve replication performance but will use more memory resources. -Source database documents will be filtered so that only those tagged with the channel names "channel1" or "channel2" are replicated. +Source database documents will be filtered so that only those tagged with the channel names "channel1" or "channel2" are replicated. [source,javascript] ---- - { "source": "db", "target": "http://example.com:4985/db-copy", @@ -90,103 +86,76 @@ Source database documents will be filtered so that only those tagged with the ch == Configuration Properties -The _replicate JSON Object supports the following properties. +The `_replicate` JSON Object supports the following properties. -[cols="1,1,1,1", options="header"] +[cols="1,1,3,1"] |=== -| - Name - -| - Type - -| - Description - -| - Default - - - -|``source`` -| - URL -|__Required.__ A URL pointing to the source database for the replication, the URL may be relative i.e. just the name of a local database on the Sync Gateway instance. The URL may point to the Admin REST API which will replicate all documents in the DB, or it may point to the public REST API which will only copy documents in the users assigned channels. -| - none - -|``target`` -| - URL -|__Required.__ A URL pointing to the target database for the replication, the URL may be relative i.e. just the name of a local database on the Sync Gateway instance. The URL may point to the Admin REST API or it may point to the public REST API, this will impact the behaviour of the target database sync function. -| - none - -|``continuous`` -| - Boolean -|__Optional.__ Indicates whether the replication should be a one-shot or continuous replication. -| - false - -|``filter`` -| - String -|__Optional.__ Passes the name of filter to apply to the source documents, currently the only supported filter is "sync_gateway/bychannel", this will replicate documents only from the set of named channels. -| - none - -|``query_params`` -| - Object -|``Optional.`` Passes parameters to the filter, for the "sync_gateway/bychannel" filter the value should be an array or channel names (JSON strings). 
-| - none - -|``cancel`` -| - Boolean -|__Optional.__ Indicates that a running replication task should be cancelled, the running task is identified by passing its replication_id or by passing the original source and target values. -| - false - -|``replication_id`` -| - String -|__Optional.__ If the cancel parameter is true then this is the id of the active replication task to be cancelled, otherwise this is the replication_id to be used for the new replication. If no replication_id is given for a new replication it will be assigned a random UUID. -| - false - -|``async`` -| - Boolean -|__Optional.__ Indicates that a one-shot replication should be run asynchronously and the request should return immediately. Replication progress can be monitored by using the _active_tasks resource. -| - false - -|``changes_feed_limit`` -| - Number -|``Optional.`` The maximum number of change entries to pull in each loop of a continuous changes feed. -| - 50 +|Name |Type |Description |Default + +|`source` +|URL +|_Required._ A URL pointing to the source database for the replication, the URL may be relative i.e. just the name of a local database on the Sync Gateway instance. +The URL may point to the Admin REST API which will replicate all documents in the DB, or it may point to the public REST API which will only copy documents in the users assigned channels. +|none + +|`target` +|URL +|_Required._ A URL pointing to the target database for the replication, the URL may be relative i.e. just the name of a local database on the Sync Gateway instance. +The URL may point to the Admin REST API or it may point to the public REST API, this will impact the behavior of the target database sync function. +|none + +|`continuous` +|Boolean +|_Optional._ Indicates whether the replication should be a one-shot or continuous replication. +|false + +|`filter` +|String +|_Optional._ Passes the name of filter to apply to the source documents, currently the only supported filter is "sync_gateway/bychannel", this will replicate documents only from the set of named channels. +|none + +|`query_params` +|Object +|_Optional._ Passes parameters to the filter, for the "sync_gateway/bychannel" filter the value should be an array or channel names (JSON strings). +|none + +|`cancel` +|Boolean +|_Optional._ Indicates that a running replication task should be canceled, the running task is identified by passing its `replication_id` or by passing the original source and target values. +|false + +|`replication_id` +|String +|_Optional._ If the cancel parameter is true then this is the id of the active replication task to be canceled, otherwise this is the `replication_id` to be used for the new replication. +If no replication_id is given for a new replication it will be assigned a random UUID. +|false + +|`async` +|Boolean +|_Optional._ Indicates that a one-shot replication should be run asynchronously and the request should return immediately. +Replication progress can be monitored by using the `_active_tasks` resource. +|false + +|`changes_feed_limit` +|Number +|_Optional._ The maximum number of change entries to pull in each loop of a continuous changes feed. 
+|50 |=== == Running replication on startup -If you want to run replications as soon as Sync Gateway starts, you can define replications in the top level "replications" property of the Sync Gateway configuration, the "replications" value is an array of objects, each object defines a single replication, the object properties are the same as those for the _replicate end-point on the Admin REST API. +If you want to run replications as soon as Sync Gateway starts, you can define replications in the top level "replications" property of the Sync Gateway configuration, the "replications" value is an array of objects, each object defines a single replication, the object properties are the same as those for the `_replicate` end-point on the Admin REST API. -One-shot replications are always run asynchronously even if the "async" property is not set to true. +One-shot replications are always run asynchronously even if the `async` property is not set to true. A One-shot replication that references a local database for either source or target, will be run after a short delay (5 seconds) in order to allow the local REST API's to come up. -Replications may be given a user defined "replication_id" otherwise Sync Gateway will generate a random UUID. -Replications defined in config may not contain the "cancel" property. +Replications may be given a user defined `replication_id` otherwise Sync Gateway will generate a random UUID. +Replications defined in config may not contain the `cancel` property. [source,javascript] ---- - { - "log":["*"], + "log":["*"], "replications":[ { "source": "db", @@ -227,23 +196,22 @@ Replications defined in config may not contain the "cancel" property. "GUEST": {"disabled": false, "admin_channels": ["*"]} } } - } + } } ---- == Monitoring replications By default a simple one-shot replication blocks until it is complete and returns the stats for the completed task. -Async one-shot and continuous replications return immediately with the in flight task stats. +Async one-shot and continuous replications return immediately with the in flight task stats. -You can get a list of active replication tasks by sending a GET request to the `/_active_tasks` endpoint, this will return a list of all running one-shot and continuous replications for the current Sync Gateway instance. +You can get a list of active replication tasks by sending a GET request to the `_active_tasks` endpoint, this will return a list of all running one-shot and continuous replications for the current Sync Gateway instance. The response is a JSON array of active task objects, each object contains the original request parameters for the replication, a unique `replication_id` and some stats for the replication instance. -The list of returned stats and their meaning can be found on the API reference of the link:admin-rest-api.html#/server/get__active_tasks[/_active_tasks] endpoint. +The list of returned stats and their meaning can be found on the API reference of the xref:admin-rest-api.adoc#/server/get\__active_tasks[`_active_tasks`] endpoint. [source,javascript] ---- - [ { "type":"replication", @@ -270,16 +238,15 @@ The list of returned stats and their meaning can be found on the API reference o ] ---- -== Cancelling replications +== Canceling replications -An active replication task is canceled by sending a POST request to the server endpoint /_replicate, with a JSON object. 
-The JSON object must contain the "cancel" property set to true and either a valid "replication_id" or the identical source, target and continuous values used to start the replication. +An active replication task is canceled by sending a POST request to the server endpoint `_replicate`, with a JSON object. +The JSON object must contain the `cancel` property set to true and either a valid `replication_id` or the identical source, target and continuous values used to start the replication. -This will cancel an active replication with a "replication_id" of "my-one-shot-replication", the "replication_id" value can be obtained by sending a request to _active_tasks. +This will cancel an active replication with a `replication_id` of "my-one-shot-replication", the `replication_id` value can be obtained by sending a request to `_active_tasks`. [source,javascript] ---- - { "cancel": true, "replication_id": "my-one-shot-replication" @@ -287,23 +254,21 @@ This will cancel an active replication with a "replication_id" of "my-one-shot-r ---- This will cancel a replication that was started with same "source" and "target" values as those in the cancel request. -By ommitting the "continuous" property it's value will default to **false**, a replication must also have been started as a one-shot to match. +By omitting the "continuous" property it's value will default to *false*, a replication must also have been started as a one-shot to match. [source,javascript] ---- - { - "cancel":true, + "cancel":true, "source": "db", "target": "db-copy" } ---- -When an active task is cancelled, the response returns the stats of the replication up to the point when it was stopped. +When an active task is canceled, the response returns the stats of the replication up to the point when it was stopped. [source,javascript] ---- - { "type":"replication", "replication_id":"3791d562153505408e0b2730603ed7c1", @@ -323,8 +288,9 @@ When an active task is cancelled, the response returns the stats of the replicat XDCR (cross data centre replication) is the Couchbase Server API to replicate between Couchbase Server clusters. Both XDCR and SG-Replicate can be used to keep clusters in different data centres in sync. However, SG-Replicate was designed specifically for a Couchbase Mobile deployment. -The diagram below describes the notable differences between SG-Replicate and XDCR. +The diagram below describes the notable differences between SG-Replicate and XDCR. image::xdcr-sg-replicate.png[] -NOTE: Sync Gateway is not compatible with XDCR in Active - Active mode (also known as bi-direction XDCR). If you intend to use XDCR between clusters that use Sync Gateway, make sure that XDCR is configured to replicate documents one-way only (Active - Passive). \ No newline at end of file +NOTE: Sync Gateway is not compatible with XDCR in Active - Active mode (also known as bi-direction XDCR). +If you intend to use XDCR between clusters that use Sync Gateway, make sure that XDCR is configured to replicate documents one-way only (Active - Passive). diff --git a/modules/ROOT/pages/server-integration.adoc b/modules/ROOT/pages/server-integration.adoc index 70f66f50c..8b8679213 100644 --- a/modules/ROOT/pages/server-integration.adoc +++ b/modules/ROOT/pages/server-integration.adoc @@ -1,4 +1,7 @@ -include::_attributes.adoc[] += Webhooks and Changes Feed +:idprefix: +:idseparator: - +:url-couchdb: http://guide.couchdb.org/draft/notifications.html This guide describes two approaches for integrating Sync Gateway with other servers. 
These approaches can be used to build services that react to changes in documents. @@ -9,67 +12,61 @@ Examples of use cases include: The integration approaches are: -* xref:#changes-feed[*Changes Feed*]: The changes feed returns a sorted list of changes made to documents in the database. -* xref:#webhooks[*Webhooks*]: Sync Gateway can detect document updates and post the updated documents to one or more external URLs. +<>:: +The changes feed returns a sorted list of changes made to documents in the database. +<>:: +Sync Gateway can detect document updates and post the updated documents to one or more external URLs. Here's a table that compares each API in different scenarios: -[cols="1,1,1", options="header"] -|=== -| - Scenario - -| - Changes feed (pull) - -| - Webhooks (push) - - - -| - Sequence/Ordered -| - Yes -| - No - -| - User Access Control -| - Fine Grain -| - Limited - -| - Scalable -| - Yes -| - No - -| - Data Stream replay on Failure -| - Yes -| - No +[cols="1,1,1",width="80%"] |=== +|Scenario |Changes feed (pull) |Webhooks (push) -== Changes Feed +|Sequence/Ordered +|Yes +|No + +|User Access Control +|Fine Grain +|Limited + +|Scalable +|Yes +|No -This article describes how to use the changes feed API to integrate Sync Gateway with other backend processes. For instance if you have a channel called "needs-email" you could have a bot that sends an email and then saves the document back with a flag to keep it out of the "needs-email" channel. +|Data Stream replay on Failure +|Yes +|No +|=== -The changes feed API is a REST API endpoint (xref:sync-gateway-public.adoc#/database/get\__db___changes[`+/{db}/_changes+`]) that returns a sorted list of changes made to documents in the database. It permits applications to implement business logic that reacts to changes in documents. There are several methods of connecting to the changes feed (also know as the feed type). The first 3 methods (`polling`, `longpoll` and `continuous`) are based on the CouchDB API. The last method (`websocket`) is specific to Sync Gateway. +== Changes Feed -- link:http://guide.couchdb.org/draft/notifications.html#polling[polling] (default): returns the list of changes immediately. A new request must be sent to get the next set of changes. -- link:http://guide.couchdb.org/draft/notifications.html#long[longpolling]: in addition to regular polling, if the request is sent with a special `last_seq` parameter, it will stay open until a new change occurs and is posted. -- link:http://guide.couchdb.org/draft/notifications.html#continuous[continuous]: the continuous changes API allows you to receive change notifications as they come, in a single HTTP connection. You make a request to the continuous changes API and both you and Sync Gateway will hold the connection open “forever.” -- xref:#websockets[websockets]: the WebSocket mode is conceptually the same as continuous mode but it should avoid issues with proxy servers and gateways that cause continuous mode to fail in many real-world mobile use cases. +This article describes how to use the changes feed API to integrate Sync Gateway with other backend processes. +For instance if you have a channel called "needs-email" you could have a bot that sends an email and then saves the document back with a flag to keep it out of the "needs-email" channel. + +The changes feed API is a REST API endpoint (xref:sync-gateway-public.adoc#/database/get\__db___changes[`+/{db}/_changes+`]) that returns a sorted list of changes made to documents in the database. 
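For orientation, the example below shows a hypothetical request to this endpoint for a local database named `db` on the Admin port (the database name and `since` value are placeholders):

[source,bash]
----
# Returns a JSON object whose "results" array lists every change since sequence 0.
curl 'http://localhost:4985/db/_changes?since=0'
----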
+It permits applications to implement business logic that reacts to changes in documents. +There are several methods of connecting to the changes feed (also know as the feed type). +The first 3 methods (`polling`, `longpoll` and `continuous`) are based on the CouchDB API. +The last method (`websocket`) is specific to Sync Gateway. + +{url-couchdb}#polling[polling] (default):: +Returns the list of changes immediately. +A new request must be sent to get the next set of changes. +{url-couchdb}#long[longpolling]:: +In addition to regular polling, if the request is sent with a special `last_seq` parameter, it will stay open until a new change occurs and is posted. +{url-couchdb}#continuous[continuous]:: +The continuous changes API allows you to receive change notifications as they come, in a single HTTP connection. +You make a request to the continuous changes API and both you and Sync Gateway will hold the connection open “forever.” +<>:: +The WebSocket mode is conceptually the same as continuous mode but it should avoid issues with proxy servers and gateways that cause continuous mode to fail in many real-world mobile use cases. === WebSockets -The primary problem with the continuous mode is buggy HTTP chunked-mode body parsing that buffers up the entire response before sending any of it on; since the continuous feed response never ends, nothing gets through to the client. This can often be a problem with proxy servers but can be avoided by using the WebSocket method. +The primary problem with the continuous mode is buggy HTTP chunked-mode body parsing that buffers up the entire response before sending any of it on; +since the continuous feed response never ends, nothing gets through to the client. +This can often be a problem with proxy servers but can be avoided by using the WebSocket method. The client requests WebSockets by setting the `_changes` URL's feed query parameter to `websocket`, and opening a WebSocket connection to that URL: @@ -83,11 +80,14 @@ Upgrade: websocket ==== Specifying Options -After the connection opens, the client MUST send a single textual message to the server, specifying the feed options. This message is identical to the body of a regular HTTP POST to `_changes`, i.e. it's a JSON object whose keys are the parameters (for example, `{"since": 112233, "include_docs": true}`). Depending on which client you use, make sure that options are sent as binary. +After the connection opens, the client MUST send a single textual message to the server, specifying the feed options. +This message is identical to the body of a regular HTTP POST to `_changes`, i.e. it's a JSON object whose keys are the parameters (for example, `{"since": 112233, "include_docs": true}`). +Depending on which client you use, make sure that options are sent as binary. ==== Messages -Once the server receives the options, it will begin to send text-format messages. The messages are JSON; each contains one or more change notifications (in the same format as the regular feed) wrapped in an array: +Once the server receives the options, it will begin to send text-format messages. The messages are JSON; +each contains one or more change notifications (in the same format as the regular feed) wrapped in an array: [source] ---- @@ -96,9 +96,11 @@ Once the server receives the options, it will begin to send text-format messages ] ---- -(The current server implementation sends at most one notification per message, but this could change. Clients should accept any number.) 
+(The current server implementation sends at most one notification per message, but this could change. +Clients should accept any number.) -An empty array is a special case: it denotes that at this point the feed has finished sending the backlog of existing revisions, and will now wait until new revisions are created. It thus indicates that the client has "caught up" with the current state of the database. +An empty array is a special case: it denotes that at this point the feed has finished sending the backlog of existing revisions, and will now wait until new revisions are created. +It thus indicates that the client has "caught up" with the current state of the database. The `websocket` mode behaves like the `continuous` mode: after the backlog of notifications (if any) is sent, the connection remains open and new notifications are sent as they occur. @@ -108,41 +110,45 @@ For efficiency, the feed can be sent in compressed form; this greatly reduces th To signal that it accepts a compressed feed, the client adds `"accept_encoding":"gzip"` to the feed options in the initial message it sends. -Compressed messages are sent from the server as binary. This is of course necessary as they contain gzip data, and it also lets the client distinguish them from uncompressed messages. (The server will only ever send one kind.) +Compressed messages are sent from the server as binary. +This is of course necessary as they contain gzip data, and it also lets the client distinguish them from uncompressed messages. +(The server will only ever send one kind.) -The compressed messages sent from the server constitute a single stream of gzip-compressed data. They cannot be decompressed individually! Instead, the client should open a gzip decompression session when the feed opens, and write each binary message to it as input as it arrives. The output from the decompressor consists of a sequence of JSON arrays, each of which has the same interpretation as a text message (above). +The compressed messages sent from the server constitute a single stream of gzip-compressed data. +They cannot be decompressed individually! +Instead, the client should open a gzip decompression session when the feed opens, and write each binary message to it as input as it arrives. +The output from the decompressor consists of a sequence of JSON arrays, each of which has the same interpretation as a text message (above). == Webhooks Since Sync Gateway 1.1, you can configure webhooks to detect document changes and to post changed documents to URLs that you specify. -In more detail, the steps for a single webhook event handler are: +In more detail, the steps for a single webhook event handler are: -. **Raise and listen for events**: Document changes (creations, updates, and deletions) that are made through Sync Gateway's Public REST API, including document changes that result from Couchbase Lite push replications, raise events that webhook event handlers listen for. -. **Filter**: You can define a `filter` function to examine the contents of the changed documents, and to decide which ones to post. -. **Post**: Sync Gateway uses asynchronous HTTP or HTTPS POSTs to post the changed documents identified by the `filter` function to the specified URL. Without a `filter` function, Sync Gateway posts all changed documents. +. 
*Raise and listen for events*: Document changes (creations, updates, and deletions) that are made through Sync Gateway's Public REST API, including document changes that result from Couchbase Lite push replications, raise events that webhook event handlers listen for. +. *Filter*: You can define a `filter` function to examine the contents of the changed documents, and to decide which ones to post. +. *Post*: Sync Gateway uses asynchronous HTTP or HTTPS POSTs to post the changed documents identified by the `filter` function to the specified URL. +Without a `filter` function, Sync Gateway posts all changed documents. You can define multiple webhook event handlers. -For example, you could define webhooks with different filtering criteria and that post changed documents to different URLs. +For example, you could define webhooks with different filtering criteria and that post changed documents to different URLs. -[quote] -*Caution:* Webhooks post your application's data, which might include user data, to URLs. -Consider the security implications. +CAUTION: Webhooks post your application's data, which might include user data, to URLs. +Consider the security implications. === When events are raised -Sync Gateway raises a `document_changed` event every time it writes a document to a Couchbase Server bucket, such as during a Couchbase Lite push replication session. +Sync Gateway raises a `document_changed` event every time it writes a document to a Couchbase Server bucket, such as during a Couchbase Lite push replication session. -You can configure event handlers for webhooks with the link:config-properties.html#event_handlers[event_handlers] property in the database configuration section of the JSON configuration file. +You can configure event handlers for webhooks with the xref:config-properties.adoc#databases-foo_db-event_handlers[event_handlers] property in the database configuration section of the JSON configuration file. ==== Examples Following is a simple example of a `webhook` event handler. -In this case, a single instance of a `webhook` event handler is defined for the event ``document_changed``. -Every time a document changes, the document is sent to the URL ``http://someurl.com``. +In this case, a single instance of a `webhook` event handler is defined for the event `document_changed`. +Every time a document changes, the document is sent to the URL `+http://someurl.com+`. [source,javascript] ---- - "event_handlers": { "document_changed": [ { @@ -154,12 +160,11 @@ Every time a document changes, the document is sent to the URL ``http://someurl. ---- Following is an example that defines two `webhook` event handlers. -The `filter` function in the first handler recognizes documents with `doc.type` equal to `A` and posts the documents to the URL ``http://someurl.com/type_A``. -The `filter` function in the second handler recognizes documents with `doc.type` equal to B and posts the documents to the URL ``http://someurl.com/type_B``. +The `filter` function in the first handler recognizes documents with `doc.type` equal to `A` and posts the documents to the URL `+http://someurl.com/type_A+`. +The `filter` function in the second handler recognizes documents with `doc.type` equal to B and posts the documents to the URL `+http://someurl.com/type_B+`. 
[source,javascript] ---- - "event_handlers": { "document_changed": [ {"handler": "webhook", diff --git a/modules/ROOT/pages/sgcollect-info.adoc b/modules/ROOT/pages/sgcollect-info.adoc index a20f91d57..87eaadeab 100644 --- a/modules/ROOT/pages/sgcollect-info.adoc +++ b/modules/ROOT/pages/sgcollect-info.adoc @@ -1,43 +1,38 @@ = SG Collect Info `sgcollect_info` is command line utility that provides detailed statistics for a specific Sync Gateway node. -This tool must be run on each node individually, not on all simultaneously. +This tool must be run on each node individually, not on all simultaneously. -`sgcollect_info` outputs the following statistics in a zip file: +`sgcollect_info` outputs the following statistics in a zip file: -. Logs -. Configuration -. Expvars (exported variables) that contain important stats -. System Level OS stats -. Golang profile output (runtime memory and cpu profiling info) +. Logs +. Configuration +. Expvars (exported variables) that contain important stats +. System Level OS stats +. Golang profile output (runtime memory and cpu profiling info) - -[[_cli_command_and_parameters]] == CLI command and parameters -To see the CLI command line parameters, run: +To see the CLI command line parameters, run: [source,bash] ---- - ./sgcollect_info --help ---- == Examples -Collect Sync Gateway diagnostics and save locally: +Collect Sync Gateway diagnostics and save locally: [source,bash] ---- - ./sgcollect_info /tmp/sgcollect_info.zip ---- -Collect Sync Gateway diagnostics and upload them to the Couchbase Support AWS S3 bucket: +Collect Sync Gateway diagnostics and upload them to the Couchbase Support AWS S3 bucket: [source,bash] ---- - ./sgcollect_info \ --sync-gateway-config=/path/to/config.json \ --sync-gateway-executable=/usr/bin/sync_gateway \ @@ -49,134 +44,96 @@ Collect Sync Gateway diagnostics and upload them to the Couchbase Support AWS S3 == REST Endpoint -`sgcollect_info` can now be run from the Admin REST API as of Sync Gateway 2.1 using the link:admin-rest-api.html?v=2.1#/server/post__sgcollect_info[/_sgcollect_info] endpoint. +`sgcollect_info` can now be run from the Admin REST API as of Sync Gateway 2.1 using the xref:admin-rest-api.adoc#/server/post\__sgcollect_info[_sgcollect_info] endpoint. == Zipfile contents -The tool creates the following log files in the ouput file. +The tool creates the following log files in the ouput file. 
-[cols="1,1", options="header"] +[cols="1,2"] |=== -| - Log file - -| - Description - +|Log file |Description +|`sync_gateway_access.log` +|The http access log for sync gateway (i.e which GETs and PUTs it has received and from which IPs) -|``sync_gateway_access.log`` -| - The http access log for sync gateway (i.e which GETs and PUTs it has received and from which IPs) +|`sg_accel_access.log` +|The http access log for sg_accel (i.e which GETs and PUTs it has received and from which IPs) -|``sg_accel_access.log`` -| - The http access log for sg_accel (i.e which GETs and PUTs it has received and from which IPs) +|`sg_accel_error.log` +|The error log (all logging sent to stderr by sg_accel) for the sg_accel process -|``sg_accel_error.log`` -| - The error log (all logging sent to stderr by sg_accel) for the sg_accel process +|`sync_gateway_error.log` +|The error log (all logging sent to stderr by sync_gateway) for the sync_gateway process -|``sync_gateway_error.log`` -| - The error log (all logging sent to stderr by sync_gateway) for the sync_gateway process +|`server_status.log` +|The output of \http://localhost:4895 for the running sync gateway -|``server_status.log`` -| - The output of http://localhost:4895 for the running sync gateway +|`db_db_name_status.log` +|The output of \http://localhost:4895/db_name for the running sync gateway -|``db_db_name_status.log`` -| - The output of http://localhost:4895/db_name for the running sync gateway +|`sync_gateway.json` +|The on-disk configuration file used by sync_gateway when it was launched -|``sync_gateway.json`` -| - The on-disk configuration file used by sync_gateway when it was launched +|`sg_accel.json` +|The on-disk configuration file used by sg_accel when it was launched -|``sg_accel.json`` -| - The on-disk configuration file used by sg_accel when it was launched +|`running_server_config.log` +|The configuration used by sync gateway as it is running (may not match the on-disk config as it can be changed on-the-fly) -|``running_server_config.log`` -| - The configuration used by sync gateway as it is running (may not match the on-disk config as it can be changed on-the-fly) +|`running_db_db_name_config.log` +|The config used by sync gateway for the database specified by db_name -|``running_db_db_name_config.log`` -| - The config used by sync gateway for the database specified by db_name +|`expvars_json.log` +|The expvars (global exposed variables - see https://www.mikeperham.com/2014/12/17/expvar-metrics-for-golang/ for the running sync gateway instance) -|``expvars_json.log`` -| - The expvars (global exposed variables - see http://www.mikeperham.com/2014/12/17/expvar-metrics-for-golang/ for the running sync gateway instance) +|`sgcollect_info_options.log` +|The command line arguments passed to sgcollect_info for this particular output -|``sgcollect_info_options.log`` -| - The command line arguments passed to sgcollect_info for this particular output +|`sync_gateway.log` +|OS-level System Stats -|``sync_gateway.log`` -| - OS-level System Stats +|`expvars_json.log` +|Exported Variables (expvars) from Sync Gateway which show runtime stats -|``expvars_json.log`` -| - Exported Variables (expvars) from Sync Gateway which show runtime stats +|`goroutine.pdf/raw/txt` +|Goroutine pprof profile output -|``goroutine.pdf/raw/txt`` -| - Goroutine pprof profile output +|`heap.pdf/raw/txt` +|Heap pprof profile output -|``heap.pdf/raw/txt`` -| - Heap pprof profile output +|`profile.pdf/raw/txt` +|CPU profile pprof profile output -|``profile.pdf/raw/txt`` -| 
- CPU profile pprof profile output +|`syslog.tar.gz` +|System level logs like /var/log/dmesg on Linux -|``syslog.tar.gz`` -| - System level logs like /var/log/dmesg on Linux +|`sync_gateway` +|The Sync Gateway binary executable -|``sync_gateway`` -| - The Sync Gateway binary executable - -|``pprof_http_*.log`` -| - The pprof output that collects directly via an http client rather than using go tool, in case Go is not installed +|`pprof_http_*.log` +|The pprof output that collects directly via an http client rather than using go tool, in case Go is not installed |=== == Installing Optional Dependencies -`sgcollect_info` will be able to collect more information if the following tools are installed: - -* https://golang.org/doc/install[Golang] -- this should be the same version that Sync Gateway was built with. +`sgcollect_info` will be able to collect more information if the following tools are installed: +* https://golang.org/doc/install[Golang] -- this should be the same version that Sync Gateway was built with. -[cols="1,1", options="header"] +[cols="1,1",width="50%"] |=== -| - SG Version - -| - Go build version - - - -| +|Version |Go build version - 1.3.0 -| - 1.5.3 +|1.3.0 +|1.5.3 -| - 1.3.1 -| - 1.6.3 +|1.3.1 +|1.6.3 |=== -If go is not installed, sgcollect_info will print the following error message, you can ignore this message and there is no need to report it. +If go is not installed, `sgcollect_info` will print the following error message, you can ignore this message and there is no need to report it. -`Exception during compression: [Error 2] The system cannot find the file specified IMPORTANT: Compression using gozip failed. Falling back to python implementation. Please let us know about this and provide console output.` +`Exception during compression: [Error 2] The system cannot find the file specified IMPORTANT:Compression using gozip failed. Falling back to python implementation. Please let us know about this and provide console output.` -* http://www.graphviz.org/Download..php[Graphviz] -- this is used to render PDFs of the https://golang.org/pkg/net/http/pprof/[go pprof] output. +* https://www.graphviz.org/Download..php[Graphviz] -- this is used to render PDFs of the https://golang.org/pkg/net/http/pprof/[go pprof] output. diff --git a/modules/ROOT/pages/shared-bucket-access.adoc b/modules/ROOT/pages/shared-bucket-access.adoc index a3bbc8e11..2f02c5eaf 100644 --- a/modules/ROOT/pages/shared-bucket-access.adoc +++ b/modules/ROOT/pages/shared-bucket-access.adoc @@ -1,31 +1,30 @@ -= += Bucket Access +:url-downloads: https://www.couchbase.com/downloads -{% include landing.html %} +With Sync Gateway 1.5, you can seamlessly extend an existing Couchbase Server deployment to connect with remote edge devices that are occasionally disconnected or connected. -With Sync Gateway 1.5, you can seamlessly extend an existing Couchbase Server deployment to connect with remote edge devices that are occasionally disconnected or connected. - -In previous releases, you either had to ensure all writes happened through Sync Gateway, or had to set up bucket shadowing to ensure that the security and replication metadata needed by mobile applications was preserved. +In previous releases, you either had to ensure all writes happened through Sync Gateway, or had to set up bucket shadowing to ensure that the security and replication metadata needed by mobile applications was preserved. 
In this release, the metadata created by the Sync Gateway is abstracted from applications reading and writing data directly to Couchbase Server. -Sync Gateway 1.5 utilizes a new feature of Couchbase Server 5.0 called XATTRs (e*X*tended *ATTR*ibutes) to store that metadata into an external document fragment. -Mobile, web and desktop applications can therefore write to the same bucket in a Couchbase cluster. +Sync Gateway 1.5 utilizes a new feature of Couchbase Server 5.0 called XATTRs (e**X**tended **ATTR**ibutes) to store that metadata into an external document fragment. +Mobile, web and desktop applications can therefore write to the same bucket in a Couchbase cluster. == How to enable it This new feature was made opt-in primarily out of consideration for existing customers upgrading from Sync Gateway 1.4. It ensures that their existing configs will continue to work as-is, and supports upgrade without bringing down the entire Sync Gateway cluster. -The steps below walk through how to enable this new feature. - -. https://www.couchbase.com/downloads[Download Couchbase Server 5.0]. -. https://www.couchbase.com/downloads?family=Mobile&product=Couchbase%20Sync%20Gateway&edition=Enterprise%20Edition[Download Sync Gateway 1.5]. -. Create a new bucket in the Couchbase Server Admin Console. -. With Role Based Access Control (RBAC) newly introduced in Couchbase Server 5.0, you'll need to create a new user with authorized access to the bucket. Choose the *Security > Add User* option in the Couchbase Server Admin and select the *Bucket Full Access* and *Read Only Admin* roles. -. Start Sync Gateway with the following configuration file. +The steps below walk through how to enable this new feature. + +. {url-downloads}[Download Couchbase Server 5.0]. +. {url-downloads}#couchbase-mobile[Download Sync Gateway 1.5]. +. Create a new bucket in the Couchbase Server Admin Console. +. With Role Based Access Control (RBAC) newly introduced in Couchbase Server 5.0, you'll need to create a new user with authorized access to the bucket. +Choose the *Security > Add User* option in the Couchbase Server Admin and select the *Bucket Full Access* and *Read Only Admin* roles. +. Start Sync Gateway with the following configuration file. + - +-- [source,json] ---- - { "databases": { "db": { @@ -39,38 +38,40 @@ The steps below walk through how to enable this new feature. } } ---- -+ + There are two properties to keep in mind. -The `enable_shared_bucket_access` property is used to disable the default behaviour. +The `enable_shared_bucket_access` property is used to disable the default behavior. And the `import_docs` property to specify that this Sync Gateway node should perform import processing of incoming documents. -Note that in a clustered environment, only 1 node should use the `import_docs` property. -. On start-up, Sync Gateway will generate the mobile-specific metadata for all the pre-existing documents in the Couchbase Server bucket. From then on, documents can be inserted on the Server directly (SDKs) or through the Sync Gateway REST API. The mobile metadata is no longer kept in the document, but in a system extended attribute in Couchbase Server. +Note that in a clustered environment, only 1 node should use the `import_docs` property. +-- -The reference to the configuration API changes can be found below. +. On start-up, Sync Gateway will generate the mobile-specific metadata for all the pre-existing documents in the Couchbase Server bucket. 
+From then on, documents can be inserted on the Server directly (SDKs) or through the Sync Gateway REST API. +The mobile metadata is no longer kept in the document, but in a system extended attribute in Couchbase Server. -* link:config-properties.html#1.5/databases-foo_db-enable_shared_bucket_access[$dbname.enable_shared_bucket_access] to enable convergence for a given database. -* link:config-properties.html#1.5/databases-foo_db-import_docs[$dbname.import_docs] to give a particular Sync Gateway node the role of importing the documents. -* link:config-properties.html#1.5/databases-foo_db-import_filter[$dbname.import_filter] to select which document(s) to make aware to mobile clients. +The reference to the configuration API changes can be found below. -When this feature is enabled, the REST API will include the following changes. +* link:config-properties.html#databases-foo_db-enable_shared_bucket_access[$dbname.enable_shared_bucket_access] to enable convergence for a given database. +* link:config-properties.html#databases-foo_db-import_docs[$dbname.import_docs] to give a particular Sync Gateway node the role of importing the documents. +* link:config-properties.html#databases-foo_db-import_filter[$dbname.import_filter] to select which document(s) to make aware to mobile clients. -* Sync Gateway purging (link:admin-rest-api.html?v=1.5#/document/post\__db___purge[+/{db}/_purge+]) removes the document and its associated extended attributes. -* Sync Gateway document expiry (PUT link:admin-rest-api.html?v=1.5#/document/put\__db___doc_[+/{db}/{docid}+]) will tombstone the active revision. +When this feature is enabled, the REST API will include the following changes. +* Sync Gateway purging (xref:admin-rest-api.adoc#/document/post\__db___purge[/+{db}+/_purge]) removes the document and its associated extended attributes. +* Sync Gateway document expiry (PUT xref:admin-rest-api.adoc#/document/put\__db___doc_[+/{db}/{docid}+]) will tombstone the active revision. == Tombstones When this feature is enabled, mobile tombstones are not retained indefinitely. They will be purged based on the server's metadata purge interval. -To ensure tombstones are replicated to clients, you should set the server's metadata purge interval based on your expected replication frequency (see the link:config-properties.html#1.5/databases-foo_db-enable_shared_bucket_access[$dbname.enable_shared_bucket_access] reference). +To ensure tombstones are replicated to clients, you should set the server's metadata purge interval based on your expected replication frequency (see the link:config-properties.html#databases-foo_db-enable_shared_bucket_access[$dbname.enable_shared_bucket_access] reference). == Sample App -The following tutorial demonstrates the extended attributes support introduced in Sync Gateway 1.5. +The following tutorial demonstrates the extended attributes support introduced in Sync Gateway 1.5. [source] ---- -
@@ -84,24 +85,17 @@ The following tutorial demonstrates the extended attributes support introduced i
---- - -//
- -//
- - == Migrating from Bucket Shadowing As of Sync Gateway 1.5, the Bucket Shadowing feature is deprecated and no longer supported. -The following steps outline a recommended method for migrating from Bucket Shadowing to the latest version with interoperability between Couchbase Server SDKs and Couchbase Mobile. +The following steps outline a recommended method for migrating from Bucket Shadowing to the latest version with interoperability between Couchbase Server SDKs and Couchbase Mobile. -. Follow the recommendations in the https://developer.couchbase.com/documentation/server/current/install/upgrade-online.html[Couchbase Server documentation] to upgrade all instances to 5.0. -. Create a new bucket on Couchbase Server (**bucket 2**). -. Install Sync Gateway 1.5 on a separate node with shared access enabled and connect it to the new bucket (**bucket 2**). +. Follow the recommendations in the xref:server:install:upgrade-online.adoc[Couchbase Server documentation] to upgrade all instances to 5.0. +. Create a new bucket on Couchbase Server (*bucket 2*). +. Install Sync Gateway 1.5 on a separate node with shared access enabled and connect it to the new bucket (*bucket 2*). . Setup a link:running-replications.html[push replication] from the Sync Gateway instance used for Bucket Shadowing to the Sync Gateway 1.5 instance. -. Once the replication has completed, test your application is performing as expected. -. Update the load balancer to direct incoming traffic to the Sync Gateway 1.5 instance when you are ready to upgrade. -. Delete the first bucket (**bucket 1**). - +. Once the replication has completed, test your application is performing as expected. +. Update the load balancer to direct incoming traffic to the Sync Gateway 1.5 instance when you are ready to upgrade. +. Delete the first bucket (*bucket 1*). // diff --git a/modules/ROOT/pages/sync-function-api.adoc b/modules/ROOT/pages/sync-function-api.adoc index e416b2978..0b2ee4c61 100644 --- a/modules/ROOT/pages/sync-function-api.adoc +++ b/modules/ROOT/pages/sync-function-api.adoc @@ -1,89 +1,105 @@ = Sync Function API +:idprefix: +:idseparator: - The sync function is the core API you interact with on Sync Gateway. -This article explains its functionality, and how you write and configure it. +This article explains its functionality, and how you write and configure it. For simple applications it might be the only server-side code you need to write. -For more complex applications it is still a primary touchpoint for managing data routing and access control. +For more complex applications it is still a primary touchpoint for managing data routing and access control. The sync function is a JavaScript function whose source code is stored in the Sync Gateway's database configuration file. -The sync function is called every time a new revision/update is made to a document, and the changes to channels and access made by the sync function are __tied to that revision__. -If the document is later updated, the sync function will be called again on the new revision, and the new channel assignments and user/channel access _replace_ the ones from the first call. - -It can do the following things: - -* *Validate the document:* If the document has invalid contents, the sync function can throw an exception to reject it. 
The document won't be added to the database, and the client request will get an error response -* *Authorize the change:* The sync function can call `requireUser()` or `requireRole()` to specify what user(s) are allowed to modify the document. If the user making the change isn't in that list, an exception is thrown and the update is rejected with an error. Similarly, `requireAccess()` requires that the user making the change have access to any of the listed channels. -* *Assign the document to channels:* Based on the contents of the document, the sync function can call *channel()* to add the document to one or more channels. This makes it accessible to users who have access to those channels, and will cause the document to be pulled by users that are subscribed to those channels. -* *Grant users access to channels:* Calling `access(user, channel)` grants a user access to a channel. This allows documents to act as membership lists or access-control lists. - -*The sync function is crucial to the security of your application.* It's in charge of data validation, and of authorizing both read and write access to documents. +The sync function is called every time a new revision/update is made to a document, and the changes to channels and access made by the sync function are _tied to that revision_. +If the document is later updated, the sync function will be called again on the new revision, and the new channel assignments and user/channel access _replace_ the ones from the first call. + +It can do the following things: + +Validate the document:: +If the document has invalid contents, the sync function can throw an exception to reject it. +The document won't be added to the database, and the client request will get an error response +Authorize the change:: +The sync function can call `requireUser()` or `requireRole()` to specify what user(s) are allowed to modify the document. +If the user making the change isn't in that list, an exception is thrown and the update is rejected with an error. +Similarly, `requireAccess()` requires that the user making the change have access to any of the listed channels. +Assign the document to channels:: +Based on the contents of the document, the sync function can call *channel()* to add the document to one or more channels. +This makes it accessible to users who have access to those channels, and will cause the document to be pulled by users that are subscribed to those channels. +Grant users access to channels:: +Calling `access(user, channel)` grants a user access to a channel. +This allows documents to act as membership lists or access-control lists. + +*The sync function is crucial to the security of your application.* +It's in charge of data validation, and of authorizing both read and write access to documents. The API is high-level and lets you do some powerful things very simply, but you do need to remain vigilant and review the function carefully to make sure that it detects threats and prevents all illegal access. -The sync function should be a focus of any security review of your application. +The sync function should be a focus of any security review of your application. You write your sync function in JavaScript. -The basic structure of the sync function looks like this: +The basic structure of the sync function looks like this: [source,javascript] ---- - function (doc, oldDoc) { // Your code here } ---- -The sync function arguments are: +The sync function arguments are: -* ``doc``: An object, the content of the document that is being saved. 
This matches the JSON that was saved by the Couchbase Lite and replicated to Sync Gateway. The `\_id` property contains the document ID, and the `\_rev` property the new revision ID. If the document is being deleted, there will be a `\_deleted` property with the value true. -* ``oldDoc``: If the document has been saved before, the revision that is being replaced is available in this argument. Otherwise it's ``null``. (In the case of a document with conflicts, the current provisional winning revision is passed in ``oldDoc``.) Your implementation of the sync function can omit the `oldDoc` parameter if you do not need it (JavaScript ignores extra parameters passed to a function). +`doc`:: +An object, the content of the document that is being saved. +This matches the JSON that was saved by the Couchbase Lite and replicated to Sync Gateway. +The `_id` property contains the document ID, and the `_rev` property the new revision ID. +If the document is being deleted, there will be a `_deleted` property with the value true. +`oldDoc`:: +If the document has been saved before, the revision that is being replaced is available in this argument. +Otherwise it's `null`. +(In the case of a document with conflicts, the current provisional winning revision is passed in `oldDoc`.) +Your implementation of the sync function can omit the `oldDoc` parameter if you do not need it (JavaScript ignores extra parameters passed to a function). -The default Sync Function is documented in the API reference link:config-properties.html#1.5/databases-foo_db-sync[databases.$dbname.sync]. +The default Sync Function is documented in the API reference xref:config-properties.adoc#1.5/databases-foo_db-sync[databases.$dbname.sync]. == Validation and Authorization The sync function API provides several methods that you can use to validate document creation, updates and deletions. For error conditions, you can simply call the built-in JavaScript `throw()` function. -You can also enforce user access privileges by calling ``requireUser()``, ``requireRole()``, or ``requireAccess()``. +You can also enforce user access privileges by calling `requireUser()`, `requireRole()`, or `requireAccess()`. What happens to rejected documents? Firstly, they aren't saved to the Sync Gateway's database, so no access changes take effect. -Instead an error code (usually 403 Forbidden) is returned to Couchbase Lite's replicator. +Instead an error code (usually 403 Forbidden) is returned to Couchbase Lite's replicator. -Any other exception (including implicit ones thrown by the JavaScript runtime, like array bounds exceptions) will also prevent the document update, but will cause the gateway to return an HTTP 500 "Internal Error" status. +Any other exception (including implicit ones thrown by the JavaScript runtime, like array bounds exceptions) will also prevent the document update, but will cause the gateway to return an HTTP 500 "Internal Error" status. === throw() -At the most basic level, the sync function can prevent a document from persisting or syncing to any other users by calling `throw()` with an error object containing a ``forbidden``: property. -You enforce the validity of document structure by checking the necessary constraints and throwing an exception if they're not met. +At the most basic level, the sync function can prevent a document from persisting or syncing to any other users by calling `throw()` with an error object containing a `forbidden`: property. 
+You enforce the validity of document structure by checking the necessary constraints and throwing an exception if they're not met. -Here is an example sync function that disallows all writes to the database it is in. +Here is an example sync function that disallows all writes to the database it is in. [source,javascript] ---- - function(doc) { throw({forbidden: "read only!"}) } ---- The document update will be rejected with an HTTP 403 "Forbidden" error code, with the value of the `forbidden:` property being the HTTP status message. -This is the preferred way to reject an update. +This is the preferred way to reject an update. In validating a document, you'll often need to compare the new revision to the old one, to check for illegal changes in state. For example, some properties may be immutable after the document is created, or may be changeable only by certain users, or may only be allowed to change in certain ways. -That's why the current document contents are given to the sync function, as the `oldDoc` parameter. +That's why the current document contents are given to the sync function, as the `oldDoc` parameter. We recommend that you not create invalid documents in the first place. As much as possible, your app logic and validation function should prevent invalid documents from being created locally. -The server-side sync function validation should be seen as a fail-safe and a guard against malicious access. +The server-side sync function validation should be seen as a fail-safe and a guard against malicious access. -[[_requireuserusername]] === requireUser(username) -The `requireUser()` function authorizes a document update by rejecting it unless it's made by a specific user or users, as shown in the following example: +The `requireUser()` function authorizes a document update by rejecting it unless it's made by a specific user or users, as shown in the following example: [source,javascript] ---- - // Throw an error if username is not "snej": requireUser("snej"); @@ -92,18 +108,16 @@ requireUser(["snej", "jchris", "tleyden"]); ---- The function signals rejection by throwing an exception, so the rest of the sync function will not be run. -All properties of the `doc` parameter should be considered __untrusted__, since this is after all the object that you're validating. -This may sound obvious, but it can be easy to make mistakes, like calling `requireUser(doc.owners)` instead of ``requireUser(oldDoc.owners)``. -When using one document property to validate another, look up that property in ``oldDoc``, not ``doc``! +All properties of the `doc` parameter should be considered _untrusted_, since this is after all the object that you're validating. +This may sound obvious, but it can be easy to make mistakes, like calling `requireUser(doc.owners)` instead of `requireUser(oldDoc.owners)`. +When using one document property to validate another, look up that property in `oldDoc`, not `doc`! 
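+
+The following sketch applies that guidance; the `owners` property is an illustrative field name taken from the example above, not part of any default configuration:
+
+[source,javascript]
+----
+function (doc, oldDoc) {
+  if (oldDoc) {
+    // Validate against the trusted, previously saved revision,
+    // not against the untrusted incoming document:
+    requireUser(oldDoc.owners);
+  }
+}
+----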
-[[_requirerolerolename]] === requireRole(rolename) -The `requireRole()` function authorizes a document update by rejecting it unless the user making it has a specific role or roles, as shown in the following example: +The `requireRole()` function authorizes a document update by rejecting it unless the user making it has a specific role or roles, as shown in the following example: [source,javascript] ---- - // Throw an error unless the user has the "admin" role: requireRole("admin"); @@ -112,18 +126,16 @@ requireRole(["admin", "old-timer"]); ---- The argument may be a single role name, or an array of role names. -In the latter case, the user making the change must have one or more of the given roles. +In the latter case, the user making the change must have one or more of the given roles. -The function signals rejection by throwing an exception, so the rest of the sync function will not be run. +The function signals rejection by throwing an exception, so the rest of the sync function will not be run. -[[_requireaccesschannels]] === requireAccess(channels) -The `requireAccess()` function authorizes a document update by rejecting it unless the user making it has access to at least one of the given channels, as shown in the following example: +The `requireAccess()` function authorizes a document update by rejecting it unless the user making it has access to at least one of the given channels, as shown in the following example: [source,javascript] ---- - // Throw an exception unless the user has access to read the "events" channel: requireAccess("events"); @@ -134,26 +146,25 @@ if (oldDoc) { } ---- -The function signals rejection by throwing an exception, so the rest of the sync function will not be run. +The function signals rejection by throwing an exception, so the rest of the sync function will not be run. -If a user was granted access to the link:data-routing.html#special-channels[star channel] (noted ``\*``), a call to `requireAccess('any channel name')'` will fail because the user wasn't granted access to that channel (only to the `\*` channel). To allow a user to perform a document update in this case, you can specify multiple channel names (``requireAccess('any channel name', '*')'``) +If a user was granted access to the xref:data-routing.adoc#special-channels[star channel] (noted `+*+`), a call to `requireAccess('any channel name')` will fail because the user wasn't granted access to that channel (only to the `+*+` channel). To allow a user to perform a document update in this case, you can specify multiple channel names (`requireAccess('any channel name', '*')`). == Routing The sync function API provides several functions that you can use to route documents. -The routing functions assign documents to channels, and enable user access to channels (which will route documents in those channels to those users.) +The routing functions assign documents to channels, and enable user access to channels (which will route documents in those channels to those users). -Routing changes have no effect until the document is actually saved in the database, so if the sync function first calls `channel()` or ``access()``, but then rejects the update, the channel and access changes will not occur. +Routing changes have no effect until the document is actually saved in the database, so if the sync function first calls `channel()` or `access()`, but then rejects the update, the channel and access changes will not occur.
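+
+For example, in the following sketch (the `status` property and `drafts` channel are purely illustrative), the `channel()` call has no effect whenever the exception is thrown, because the rejected revision is never saved:
+
+[source,javascript]
+----
+function (doc, oldDoc) {
+  channel("drafts");
+  if (doc.status == "invalid") {
+    // The update is rejected, so the "drafts" channel assignment above is discarded.
+    throw({forbidden: "invalid status"});
+  }
+}
+----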
=== channel (name) The `channel()` function routes the document to the named channel(s). It accepts one or more arguments, each of which must be a channel name string, or an array of strings. The channel function can be called zero or more times from the sync function, for any document. -Here is an example that routes all "published" documents to the "public" channel: +Here is an example that routes all "published" documents to the "public" channel: [source,javascript] ---- - function (doc, oldDoc) { if (doc.published) { channel ("public"); @@ -161,45 +172,41 @@ function (doc, oldDoc) { } ---- -[quote] -*Tip:* As a convenience, it is legal to call `channel` with a `null` or `undefined` argument; it simply does nothing. -This allows you to do something like `channel(doc.channels)` without having to first check whether `doc.channels` exists. +TIP: As a convenience, it is legal to call `channel` with a `null` or `undefined` argument; it simply does nothing. +This allows you to do something like `channel(doc.channels)` without having to first check whether `doc.channels` exists. -[quote] -*Note:* Channels don't have to be predefined. -A channel implicitly comes into existence when a document is routed to it. +NOTE: Channels don't have to be predefined. +A channel implicitly comes into existence when a document is routed to it. If the document was previously routed to a channel, but the current call to the sync function (for an updated revision) doesn't route it to that channel, the document is removed from the channel. This may cause users to lose access to that document. If that happens, the next time Couchbase Lite pulls changes from the gateway, it will receive an empty revision of the document with nothing but a `"_removed": true` property. -(Of course the previous revisions of the document remain in your Couchbase Lite database until it's compacted.) +(Of course the previous revisions of the document remain in your Couchbase Lite database until it's compacted.) == Read Access === access (username, channelname) The `access()` function grants access to a channel to a specified user. -It can be called multiple times from a sync function. +It can be called multiple times from a sync function. The first argument can be an array of strings, in which case each user in the array is given access. The second argument can also be an array of strings, in which case the user(s) are given access to each channel in the array. -As a convenience, either argument may be `null` or ``undefined``, in which case nothing happens. +As a convenience, either argument may be `null` or `undefined`, in which case nothing happens. -If a user name begins with the prefix ``role:``, the rest of the name is interpreted as a role rather than a user. -The call then grants access to the specified channels for all users with that role. +If a user name begins with the prefix `role:`, the rest of the name is interpreted as a role rather than a user. +The call then grants access to the specified channels for all users with that role. -[quote] -*Note:* The effects of all access calls by all active documents are effectively unioned together, so if _any_ document grants a user access to a channel, that user has access to the channel. +NOTE: The effects of all access calls by all active documents are effectively unioned together, so if _any_ document grants a user access to a channel, that user has access to the channel. 
-[quote] -*Caution:* Revoking access to a channel can cause a user to lose access to documents, if s/he no longer has access to any channels those documents are in. -However, the replicator does _not_ currently delete such documents that have already been synced to a user's device (although future changes to those documents will not be replicated.) This is a design limitation of the Sync Gateway 1.0 that may be resolved in the future. +CAUTION: Revoking access to a channel can cause a user to lose access to documents, if s/he no longer has access to any channels those documents are in. +However, the replicator does _not_ currently delete such documents that have already been synced to a user's device (although future changes to those documents will not be replicated). +This is a design limitation of Sync Gateway 1.0 that may be resolved in the future. -The following code snippets shows some valid ways to call ``access()``: +The following code snippets show some valid ways to call `access()`: [source,javascript] ---- - access ("jchris", "mtv"); access ("jchris", ["mtv", "mtv2", "vh1"]); access (["snej", "jchris", "role:admin"], "vh1"); @@ -208,11 +215,10 @@ access (null, "hbo"); // no-op access ("snej", null); // no-op ---- -Here is an example of a sync function that grants access to a channel for all the users listed in a document: +Here is an example of a sync function that grants access to a channel for all the users listed in a document: [source,javascript] ---- - function (doc, oldDoc) { if (doc.type == "chat_room") { // Give members access to the chat channel this document manages: @@ -229,78 +235,74 @@ function (doc, oldDoc) { The `role()` function grants a user a role, indirectly giving them access to all channels granted to that role. It can also affect the user's ability to revise documents, if the access function requires role membership to validate certain types of changes. Its use is similar to `access` -- the value of either parameter can be a string, an array of strings, or null. -If the value is null, the call is a no-op. +If the value is null, the call is a no-op. -For consistency with the `access` call, role names must always be prefixed with ``role:``. +For consistency with the `access` call, role names must always be prefixed with `role:`. An exception is thrown if a role name doesn't match this. -Some examples: +Some examples: [source,javascript] ---- - role ("jchris", "role:admin"); role ("jchris", ["role:portlandians", "role:portlandians-owners"]); role (["snej", "jchris", "traun"], "role:mobile"); role ("ed", null); // no-op ---- -[quote] -*Note:* Roles, like users, have to be explicitly created by an administrator. +NOTE: Roles, like users, have to be explicitly created by an administrator. So unlike channels, which come into existence simply by being named, you can't create new roles with a `role()` call. Nonexistent roles don't cause an error, but have no effect on the user's access privileges. -You can create a role after the fact; as soon as a role is created, any pre-existing references to it take effect. +You can create a role after the fact; as soon as a role is created, any pre-existing references to it take effect. == Expiry === expiry (value) Calling `expiry(value)` from within the sync function will set the expiry value (TTL) on the document. -When the expiry value is reached, the document will be purged from the database. +When the expiry value is reached, the document will be purged from the database.
[source,javascript] ---- - expiry("2018-07-06T17:00:00+01:00") ---- -Under the hood, the expiration time is set and managed on the Couchbase Server document (TTL is not supported for databases in walrus mode). The value can be specified in two ways: +Under the hood, the expiration time is set and managed on the Couchbase Server document (TTL is not supported for databases in walrus mode). The value can be specified in two ways: -* *ISO-8601 format:* for example the 6th of July 2016 at 17:00 in the BST timezone would be ``2016-07-06T17:00:00+01:00``; -* *as a numeric Couchbase Server expiry value:* Couchbase Server expiries are specified as Unix time, and if the desired TTL is below 30 days then it can also represent an interval in seconds from the current time (for example, a value of 5 will remove the document 5 seconds after it is written to Couchbase Server). The document expiration time is returned in the response of GET link:rest-api.html#/document/get\__db___doc_[+/{db}/{doc}+] when `show_exp=true` is included in the querystring. +* *ISO-8601 format:* for example the 6th of July 2016 at 17:00 in the BST timezone would be `2016-07-06T17:00:00+01:00`; +* *as a numeric Couchbase Server expiry value:* Couchbase Server expiries are specified as Unix time, and if the desired TTL is below 30 days then it can also represent an interval in seconds from the current time (for example, a value of 5 will remove the document 5 seconds after it is written to Couchbase Server). +The document expiration time is returned in the response of GET xref:rest-api.adoc#/document/get\__db___doc_[+/{db}/{doc}+] when `show_exp=true` is included in the querystring. As with the existing explicit purge mechanism, this applies only to the local database; it has nothing to do with replication. This expiration time is not propagated when the document is replicated. -The purge of the document does not cause it to be deleted on any other database. +The purge of the document does not cause it to be deleted on any other database. -If link:shared-bucket-access.html[shared bucket access] is enabled (introduced in Sync Gateway 1.5), the behaviour of the expiry feature changes in the following way: when the expiry value is reached, instead of getting purged, the *active* revision of the document is tombstoned. +If xref:shared-bucket-access.adoc[shared bucket access] is enabled (introduced in Sync Gateway 1.5), the behavior of the expiry feature changes in the following way: when the expiry value is reached, instead of getting purged, the *active* revision of the document is tombstoned. If there is another non-tombstoned revision for this document (i.e a conflict) it will become the active revision. -The tombstoned revision will be purged when the server's metadata purge interval is reached. +The tombstoned revision will be purged when the server's metadata purge interval is reached. == Document Conflicts If a document is in conflict there will be multiple current revisions. -The default, "winning" one is the one whose channel assignments and access grants take effect. +The default, "winning" one is the one whose channel assignments and access grants take effect. == Handling deletions Validation checks often need to treat deletions specially, because a deletion is just a revision with a `"_deleted": true` property and usually nothing else. Many types of validations won't work on a deletion because of the missing properties -- for example, a check for a required property, or a check that a property value doesn't change. 
-You'll need to skip such checks if `doc._deleted` is true. +You'll need to skip such checks if `doc._deleted` is true. == Example Here's an example of a complete, useful sync function that properly validates and authorizes both new and updated documents. -The requirements are: +The requirements are: -* Only users with the role `editor` may create or delete documents. -* Every document has an immutable `creator` property containing the name of the user who created it. -* Only users named in the document's (required, non-empty) `writers` property may make changes to a document, including deleting it. -* Every document must also have a `title` and a `channels` property. +* Only users with the role `editor` may create or delete documents. +* Every document has an immutable `creator` property containing the name of the user who created it. +* Only users named in the document's (required, non-empty) `writers` property may make changes to a document, including deleting it. +* Every document must also have a `title` and a `channels` property. + - [source,javascript] ---- - function (doc, oldDoc) { if (doc._deleted) { // Only editors with write access can delete documents: @@ -333,38 +335,37 @@ function (doc, oldDoc) { } ---- - == Changing the sync function The Sync Function computes document routing to channels and user access to channels at document write time. -If the Sync Function is changed, Sync Gateway needs to reprocess all existing documents in the bucket to recalculate the routing and access assignments. +If the Sync Function is changed, Sync Gateway needs to reprocess all existing documents in the bucket to recalculate the routing and access assignments. The Admin REST API has a re-sync endpoint to process every document in the database again. -To update the Sync Function, it is recommended to follow the steps outlined below: +To update the Sync Function, it is recommended to follow the steps outlined below: -. Update the configuration file of the Sync Gateway instance. -. Restart Sync Gateway. -. Take the database offline using the link:admin-rest-api.html#!/database/post_db_offline[+/{db}/_offline+] endpoint. -. Call the re-sync endpoint on the Admin REST API. The message body of the response contains the number of changes that were made as a result of calling re-sync. -. Bring the database back online using the link:admin-rest-api.html#!/database/post_db_online[+/{db}/_online+] endpoint. +. Update the configuration file of the Sync Gateway instance. +. Restart Sync Gateway. +. Take the database offline using the xref:admin-rest-api.adoc#!/database/post_db_offline[+/{db}/_offline+] endpoint. +. Call the re-sync endpoint on the Admin REST API. The message body of the response contains the number of changes that were made as a result of calling re-sync. +. Bring the database back online using the xref:admin-rest-api.adoc#!/database/post_db_online[+/{db}/_online+] endpoint. This is an expensive operation because it requires every document in the database to be processed by the new function. -The database can't accept any requests until this process is complete (because no user's full access privileges are known until all documents have been scanned). Therefore the Sync Function update will result in application downtime between the call to the `+/{db}/_offline+` and `+/{db}/_online+` endpoints as mentioned above. +The database can't accept any requests until this process is complete (because no user's full access privileges are known until all documents have been scanned). 
Therefore the Sync Function update will result in application downtime between the call to the `+/{db}/_offline+` and `+/{db}/_online+` endpoints as mentioned above. === When should you run a re-sync? When running a re-sync operation, the context in the Sync Function is the admin user. -For that reason, calling the ``requireUser``, `requireAccess` and `requireRole` methods will always succeed. +For that reason, calling the `requireUser`, `requireAccess` and `requireRole` methods will always succeed. It is very likely that you are using those functions in production to govern write operations. But in a re-sync operation, all the documents are already written to the database. For that reason, it is recommended to use re-sync for changing the assignment to channels only (i.e. -reads). If the modifications to the Sync Function only impact write security (and not routing/access), you won't need to run the re-sync operation. +reads). If the modifications to the Sync Function only impact write security (and not routing/access), you won't need to run the re-sync operation. -Similarly, if you wish to change the channel/access rules, but only want those rules to apply to documents written after the change was made, then you don't need to run the re-sync operation. +Similarly, if you wish to change the channel/access rules, but only want those rules to apply to documents written after the change was made, then you don't need to run the re-sync operation. If you need to ensure access to the database during the update, you can create a read-only backup of the Sync Gateway's bucket beforehand, then run a secondary Sync Gateway on the backup bucket, in read-only mode. -After the update is complete, switch to the main Gateway and bucket. +After the update is complete, switch to the main Gateway and bucket. -In a clustered environment with multiple Sync Gateway instances sharing the load, all the instances need to share the same configuration, so they all need to be taken offline either by stopping the process or taking them offline using the link:admin-rest-api.html#!/database/post_db_offline[+/{db}/_offline+] endpoint. +In a clustered environment with multiple Sync Gateway instances sharing the load, all the instances need to share the same configuration, so they all need to be taken offline either by stopping the process or taking them offline using the xref:admin-rest-api.adoc#!/database/post_db_offline[+/{db}/_offline+] endpoint. After the configuration is updated, *one* instance should be brought up so it can update the database -- if more than one is running at this time, they'll conflict with each other. -After the first instance finishes opening the database, the others can be started. +After the first instance finishes opening the database, the others can be started. diff --git a/modules/ROOT/pages/upgrade.adoc b/modules/ROOT/pages/upgrade.adoc index 696a2bba0..0ebc9bea2 100644 --- a/modules/ROOT/pages/upgrade.adoc +++ b/modules/ROOT/pages/upgrade.adoc @@ -1,39 +1,42 @@ -:page-permalink: guides/sync-gateway/upgrade/index.html += Upgrade +:idprefix: +:idseparator: - This section is an overview of the different options to upgrade a running cluster to the latest version of Sync Gateway and Couchbase Server. For a complete list of instructions, we recommend to follow the http://docs.couchbase.com/tutorials/travel-sample/deploy/centos#/0/4/0[upgrade section] in the travel sample tutorial. 
-You will learn how to perform a rolling upgrade and enable the shared bucket access introduced in Sync Gateway 1.5 in order to use N1QL, Mobile and Server SDKs on the same bucket. +You will learn how to perform a rolling upgrade and enable the shared bucket access introduced in Sync Gateway 1.5 in order to use N1QL, Mobile and Server SDKs on the same bucket. == Sync Gateway In each of the scenarios described below, the upgrade process will trigger views in Couchbase Server to be re-indexed. During the re-indexing, operations that are dependent on those views will not be available. -The main operations relying on views to be indexed are: +The main operations relying on views to be indexed are: -* A user requests data that doesn't reside in the link:config-properties.html#1.5/databases-foo_db-cache-channel_cache_max_length[channel cache]. -* A new channel or role is granted to a user in the link:sync-function-api.html[Sync Function]. +* A user requests data that doesn't reside in the xref:config-properties.adoc#databases-foo_db-cache-channel_cache_max_length[channel cache]. +* A new channel or role is granted to a user in the xref:sync-function-api.adoc[Sync Function]. The unavailability of those operations may result in some requests not being processed. The duration of the downtime will depend on the data set and frequency of replications with mobile clients. -To avoid this downtime, it is possible to pre-build the view index before directing traffic to the upgraded node (see the link:index.html#view-indexing[view indexing] section). +To avoid this downtime, it is possible to pre-build the view index before directing traffic to the upgraded node (see the <> section). -[cols="1,1,6a", options="header"] +[cols="1,1,6a"] |=== -|From -|To -|Steps +|From |To |Steps |1.3 |1.4 -|* A rolling upgrade is supported: modify your load balancer's config to stop any HTTP traffic going to the node that will be upgraded, perform the upgrade on the given node and re-balance the traffic across all nodes. Repeat this operation for each node that needs to be upgraded. +|A rolling upgrade is supported: modify your load balancer's config to stop any HTTP traffic going to the node that will be upgraded, perform the upgrade on the given node and re-balance the traffic across all nodes. +Repeat this operation for each node that needs to be upgraded. |1.4 |1.5 xattrs disabled -|* A rolling upgrade is supported: modify your load balancer's config to stop any HTTP traffic going to the node that will be upgraded, perform the upgrade on the given node and re-balance the traffic across all nodes. Repeat this operation for each node that needs to be upgraded. +|A rolling upgrade is supported: modify your load balancer's config to stop any HTTP traffic going to the node that will be upgraded, perform the upgrade on the given node and re-balance the traffic across all nodes. +Repeat this operation for each node that needs to be upgraded. |1.5 xattrs disabled |1.5 xattrs enabled -|* A rolling upgrade is supported: modify your load balancer's config to stop any HTTP traffic going to the node that will be upgraded, perform the upgrade on the given node and re-balance the traffic across all nodes. Repeat this operation for each node that needs to be upgraded. +|* A rolling upgrade is supported: modify your load balancer's config to stop any HTTP traffic going to the node that will be upgraded, perform the upgrade on the given node and re-balance the traffic across all nodes. 
+Repeat this operation for each node that needs to be upgraded. * The mobile metadata for existing documents is automatically migrated. * The first node to be upgraded should have the `import_docs=continuous` property enabled. @@ -45,35 +48,32 @@ To avoid this downtime, it is possible to pre-build the view index before direct That being said, it is possible to avoid this downtime by running the 2 upgrade paths mentioned above (first, an upgrade from 1.4 to 1.5, and second, an upgrade from 1.5 to 1.5 with xattrs enabled). |=== -[quote] -*Note:* Enabling convergence on your existing deployment (i.e XATTRs) is *not* reversible. -It is recommended to test the upgrade on a staging environment before upgrading the production environment. +NOTE: Enabling convergence on your existing deployment (i.e XATTRs) is *not* reversible. +It is recommended to test the upgrade on a staging environment before upgrading the production environment. == Couchbase Server -All of the different upgrade paths mentioned above assume that Couchbase Server is running a link:upgrade.html[compatible version] for Sync Gateway. +All of the different upgrade paths mentioned above assume that Couchbase Server is running a xref:compatibility-matrix.adoc[compatible version] for Sync Gateway. There are 3 commonly used upgrade paths for Couchbase Server. -Depending on the one you choose, there may be additional consideration to keep in mind when using Sync Gateway: +Depending on the one you choose, there may be additional consideration to keep in mind when using Sync Gateway: -[cols="1,1,1,6a", options="header"] +[cols="1,1,1,6a"] |=== -|Upgrade Strategy -|Downtime -|Additional Machine Requirements -|Impact when using Sync Gateway +|Upgrade Strategy |Downtime |Additional Machine Requirements |Impact when using Sync Gateway |Rolling Online Upgrade |None |Low -|* **Potential transient connection errors:** The Couchbase Server re-balance operations can result in transient connection errors between Couchbase Server and Sync Gateway, which could result in Sync Gateway performance degradation. -* **Potential for unexpected server errors during re-balance:** There is an increased potential to lose in-flight ops during a fail-over. +|*Potential transient connection errors:* The Couchbase Server re-balance operations can result in transient connection errors between Couchbase Server and Sync Gateway, which could result in Sync Gateway performance degradation. + +*Potential for unexpected server errors during re-balance:* There is an increased potential to lose in-flight ops during a fail-over. |Upgrade Using Inter-cluster Replication |Small amount during switchover |High - duplicate entire cluster |Using an XDCR (Cross Data Center Replication) approach will have incur some Sync Gateway downtime, but less downtime than other approaches where Sync Gateway is shutdown during the entire Couchbase Server upgrade. -It's important to note that the XDCR replication must be a **one way** replication from the existing (source) Couchbase Server cluster to the new (target) Couchbase Server cluster, and that no other writes can happen on the new (target) Couchbase Server cluster other than the writes from the XDCR replication, and no Sync Gateway instances should be configured to use the new (target) Couchbase Server cluster until the last step in the process. 
+It's important to note that the XDCR replication must be a *one way* replication from the existing (source) Couchbase Server cluster to the new (target) Couchbase Server cluster, and that no other writes can happen on the new (target) Couchbase Server cluster other than the writes from the XDCR replication, and no Sync Gateway instances should be configured to use the new (target) Couchbase Server cluster until the last step in the process. . Start XDCR to do a one way replication from the existing (source) Couchbase Server cluster to the new (target) Couchbase Server cluster running the newer version. . Wait until the target Couchbase Server has caught up to all the writes in the source Couchbase Server cluster. @@ -84,25 +84,26 @@ It's important to note that the XDCR replication must be a **one way** replicati Caveats: -* **Small amount of downtime during switchover:** Since there may be writes still in transit after Sync Gateway has been shutdown, there will need to be some downtime until the target Couchbase Server cluster is completely caught up. -* **XDCR should be monitored:** Make sure to monitor the XDCR relationship as per https://developer.couchbase.com/documentation/server/current/xdcr/xdcr-intro.html[XDCR docs]. +* *Small amount of downtime during switchover:* Since there may be writes still in transit after Sync Gateway has been shutdown, there will need to be some downtime until the target Couchbase Server cluster is completely caught up. +* *XDCR should be monitored:* Make sure to monitor the XDCR relationship as per xref:5.5@server:xdcr:xdcr-intro.adoc[XDCR docs]. |Offline Upgrade |During entire upgrade |None -|* Take Sync Gateway offline -* Upgrade Couchbase Server using any of the options mentioned in the https://developer.couchbase.com/documentation/server/current/install/upgrading.html[Upgrading Couchbase Server] documentation. +| +* Take Sync Gateway offline +* Upgrade Couchbase Server using any of the options mentioned in the xref:5.5@server:install:upgrade.adoc[Upgrading Couchbase Server] documentation. * Bring Sync Gateway online |=== === View Indexing Sync Gateway uses Couchbase Server views to index and query documents. -When Sync Gateway starts, it will publish a Design Document which contains the View definitions (map/reduce functions). For example, the Design Document for Sync Gateway is the following: +When Sync Gateway starts, it will publish a Design Document which contains the View definitions (map/reduce functions). +For example, the Design Document for Sync Gateway is the following: [source,json] ---- - { "views":{ "access":{ @@ -120,42 +121,44 @@ When Sync Gateway starts, it will publish a Design Document which contains the V Following the Design Document creation, it must run against all the documents in the Couchbase Server bucket to build the index which may result in downtime. During a Sync Gateway upgrade, the index may also have to be re-built if the Design Document definition has changed. To avoid this downtime, you can publish the Design Document and build the index before starting Sync Gateway by using the Couchbase Server REST API. -The following curl commands refer to a Sync Gateway 1.3 -> Sync Gateway 1.4 upgrade but they apply to any upgrade of Sync Gateway or Accelerator. +The following curl commands refer to a Sync Gateway 1.3 -> Sync Gateway 1.4 upgrade but they apply to any upgrade of Sync Gateway or Accelerator. -. Start Sync Gateway 1.4 with Couchbase Server instance that *isn't* your production environment. 
Then, copy the Design Document to a file with the following. +. Start Sync Gateway 1.4 with Couchbase Server instance that *isn't* your production environment. +Then, copy the Design Document to a file with the following. + - [source,bash] ---- - $ curl localhost:8092//_design/sync_gateway/ > ddoc.json ---- -. Create a Development Design Document on the cluster where Sync Gateway is going to be upgraded from 1.3: -+ +. Create a Development Design Document on the cluster where Sync Gateway is going to be upgraded from 1.3: ++ +-- [source,bash] ---- - $ curl -X PUT http://localhost:8092//_design/dev_sync_gateway/ -d @ddoc.json -H "Content-Type: application/json" ---- -+ -This should return: -+ + +This should return: [source,bash] ---- - {"ok":true,"id":"_design/dev_sync_gateway"} ---- -. Run a View Query against the Development Design Document. By default, a Development Design Document will index one vBucket per node, however we can force it to index the whole bucket using the `full_set` parameter: -+ +-- +. Run a View Query against the Development Design Document. +By default, a Development Design Document will index one vBucket per node, however we can force it to index the whole bucket using the `full_set` parameter: ++ +-- [source,bash] ---- - $ curl "http://localhost:8092/sync_gateway/_design/dev_sync_gateway/_view/role_access_vbseq?full_set=true&stale=false&limit=1" ---- -+ + This may take some time to return, and you can track the index's progress in the Couchbase Server UI. -Note that this will consume disk space to build an almost duplicate index until the switch is made. -. Upgrade Sync Gateway. When Sync Gateway 1.4 starts, it will publish the new Design Document to Couchbase Server. This will match the Development Design Document we just indexed, so will be available immediately. +Note that this will consume disk space to build an almost duplicate index until the switch is made. +-- + +. Upgrade Sync Gateway. When Sync Gateway 1.4 starts, it will publish the new Design Document to Couchbase Server. +This will match the Development Design Document we just indexed, so will be available immediately.
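+
+As an optional sanity check, you can fetch the Design Document that the upgraded Sync Gateway published and compare its map functions with the `ddoc.json` file saved in the first step (the host, port and bucket name below are placeholders; adjust them to your deployment):
+
+[source,bash]
+----
+# Placeholder host, port and bucket name; adjust to your deployment.
+$ curl http://localhost:8092/sync_gateway/_design/sync_gateway/
+----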