diff --git a/.DS_Store b/.DS_Store new file mode 100644 index 0000000000..c80bcc3995 Binary files /dev/null and b/.DS_Store differ diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc index 09b6de5e34..530fb8f321 100644 --- a/modules/ROOT/nav.adoc +++ b/modules/ROOT/nav.adoc @@ -121,8 +121,13 @@ * xref:eventing:eventing-overview.adoc[Eventing Service: Fundamentals] ** xref:eventing:eventing-Terminologies.adoc[Terminology] ** xref:eventing:eventing-language-constructs.adoc[Language Constructs] + *** xref:eventing:eventing-timers.adoc[Timers] ** xref:eventing:eventing-adding-function.adoc[Adding a Couchbase Function] ** xref:eventing:eventing-examples.adoc[Examples: Using the Eventing Service] + *** xref:eventing:eventing-example-data-enrichment.adoc[Data Enrichment] + *** xref:eventing:eventing-examples-high-risk.adoc[Risk Assessment] + *** xref:eventing:eventing-examples-cascade-delete.adoc[Cascade Delete] + *** xref:eventing:eventing-examples-docexpiry.adoc[Document Expiry and Archival] ** xref:eventing:eventing-debugging-and-diagnosability.adoc[Debugging and Diagnosability] ** xref:eventing:eventing-statistics.adoc[Statistics] ** xref:eventing:troubleshooting-best-practices.adoc[Troubleshooting and Best Practices] diff --git a/modules/eventing/.DS_Store b/modules/eventing/.DS_Store new file mode 100644 index 0000000000..3ce9dd10aa Binary files /dev/null and b/modules/eventing/.DS_Store differ diff --git a/modules/eventing/pages/eventing-Terminologies.adoc b/modules/eventing/pages/eventing-Terminologies.adoc index 534611ad6d..a6288c892d 100644 --- a/modules/eventing/pages/eventing-Terminologies.adoc +++ b/modules/eventing/pages/eventing-Terminologies.adoc @@ -23,31 +23,40 @@ NOTE: In the Couchbase Server 6.0 BETA release, the handler code size is limited The persistent state of a Function is captured by entities listed below, while any state that appears on the execution stack is ephemeral. -* The metadata bucket (which will eventually be a system collection). +* The metadata bucket. * The document being observed and its Extended Attributes. == Buckets -Every Function definition needs two distinct buckets: source bucket and metadata bucket. +There are two distinct buckets: source bucket and metadata bucket. -The Functions use a source bucket to listen to data changes. -The Function handler code polls the source bucket for data mutations. -Multiple Functions can use the same source bucket. +*Source Bucket* -Metadata bucket stores internal artifacts and checkpoint information. -The metadata bucket provides better restartability semantics when the eventing nodes is offline. -Ensure that your Function handler code does not write any data to this bucket. +Couchbase Functions use a bucket to track data mutations. This bucket is termed as the source bucket. The source bucket can be either Couchbase or Ephemeral bucket type. However, memcached bucket types are not supported. -The Functions can in turn trigger data mutations, to avoid a cyclic generation of events, ensure that you carefully consider options when you select the source and destination buckets. -When you are using a series of handlers, ensure that: +When you are creating a function, you need to specify a source bucket. The handler code polls this bucket to track data mutations. -* The source bucket can be either Couchbase or Ephemeral bucket. -However, memcached buckets are not supported. -* Metadata bucket is used by the Eventing Service to store some critical checkpoint information. 
-Avoid writing to the metadata bucket or flushing it.
-It is recommended that a separate bucket be kept destined as a metadata bucket.
-* A read operation from the source bucket is allowed but operations such as delete, set and update are not supported.
-* The destination buckets to which event handlers perform a write operation, do not have other event handlers configured to listen and track data mutations.
+
+NOTE: Multiple Functions, each running different code, can listen to the same source bucket.
+
+When a source bucket is deleted, all deployed Functions associated with that source bucket are undeployed.
+
+The handler code can store processed documents in a different bucket. For the purposes of this discussion, such a bucket is termed a destination bucket.
+
+At times, the handler code can itself trigger further data mutations. To avoid a cyclic generation of data changes, refer to xref:troubleshooting-best-practices.adoc#cyclicredun[Bucket Allocation Considerations].
+
+*Metadata Bucket*
+
+The metadata bucket stores artifacts such as timers, DCP streams, and worker allocations, along with internal checkpoint information.
+
+When you are creating a Function, ensure that a separate bucket is designated as the metadata bucket. You can use a common metadata bucket across multiple Functions.
+
+NOTE: Never delete the metadata bucket. Also, ensure that the Function handler code does not perform write operations on the metadata bucket.
+
+If the metadata bucket is accidentally deleted, all deployed Functions are undeployed and the associated indexes are dropped.
 
 == Workers
 
@@ -57,11 +66,14 @@ The worker units are used mostly during Function execution.
 [#section_mzd_l1p_m2b]
 == Bindings
 
-Bindings are constructs that allow the separating of environment-specific variables from the handler source code such as bucket-names, external endpoint URLs, and credentials.
-Bindings allow Functions to be developed without source-code changes while working on different workflows.
-Using bindings, you can port source code from different environments such as Test-to-Test, Test-to-Production, or Production-to-Production environments without making code changes.
+A binding separates environment-specific variables from the handler code.
+
+You add a binding as a name-value pair: the name is the actual name of a bucket in the cluster, and the value is the alias that your handler code uses to refer to that bucket.
 
-During Function definition when you are creating a bindings name-value pair, for a source bucket, ensure that only read operation is performed in the Function handler code.
+NOTE: Bindings are mandatory when your handler code performs any bucket-related operations.
+
+Bindings make it easy to port source code across environments. You can export and import Functions between test-to-test, test-to-production, or production-to-production environments without making code changes.
 
 == Function Settings
 
@@ -91,7 +103,7 @@ To edit a deployed Function, you must first un-deploy the Function.
 == Feed Boundary
 
 Feed Boundary is a time milestone used during a Function configuration.
-Using the Feed Boundary option, you can either invoke a Function on all the data present in the cluster or choose to invoke a Function during future instances of data mutation, post Function deployment.
+Using the Feed Boundary option, you can either invoke a Function on all data mutations available in the cluster, or invoke it only on mutations that occur after the Function is deployed.
 
 == Undeploy
 
@@ -107,9 +119,3 @@ Newly created handlers start in an undeployed state.
 When a Function gets deleted, the source code implementing the Function, all processing checkpoints, and other artifacts in the metadata provider are purged.
 Before deleting, make sure you have undeployed the Function.
-
-== Timers
-
-Timers provide execution of code at a preconfigured clock time or after a specified number of seconds.
-Using timers, you can write a simple JavaScript Function handler code to delay or trigger the execution of a Function at specific wall-clock time events.
-Timers allow archiving of expired documents at a preconfigured clock time.
diff --git a/modules/eventing/pages/eventing-adding-function.adoc b/modules/eventing/pages/eventing-adding-function.adoc
index cb691d9ca5..ff65fb715a 100644
--- a/modules/eventing/pages/eventing-adding-function.adoc
+++ b/modules/eventing/pages/eventing-adding-function.adoc
@@ -16,7 +16,7 @@ To add a new Function, proceed as follows:
 | Source Bucket
 | The name of a bucket currently defined on the cluster.
-For more information on Source Bucket, refer to xref:clustersetup:create-bucket.adoc[Create a Bucket].
+For more information on creating buckets, refer to xref:clustersetup:create-bucket.adoc[Create a Bucket].
 
 | Metadata Bucket
 | The name of a bucket currently defined on the cluster.
@@ -61,7 +61,7 @@ Additional controls are now displayed:
 The controls are:
 
 . Click *Deploy*.
 This displays the *Confirm Deploy Function* dialog.
 The Feed Boundary determines whether documents previously in existence needs to be included in the Function's activities: the options are *Everything* and *From now*.
-The *Everything* option invokes a Function on all the data present in the cluster.
+The *Everything* option invokes a Function on all mutations available in the cluster.
 The *From now* option invokes a Function during future instances of data mutation, post Function deployment.
 
 . Click *Deploy* Function.
 This deploys the Function and returns you to the main Eventing screen.
diff --git a/modules/eventing/pages/eventing-api.adoc b/modules/eventing/pages/eventing-api.adoc
index c0bfa9b31d..a254c47022 100644
--- a/modules/eventing/pages/eventing-api.adoc
+++ b/modules/eventing/pages/eventing-api.adoc
@@ -3,6 +3,8 @@
 [abstract]
 The Functions REST API, available by default at port 8096, provides the methods available to work with Couchbase Functions.
 
+NOTE: The Functions REST API is a Beta feature intended for development purposes only; do not use it in production. No Enterprise Support is provided for Beta features.
+
 .Functions API
 [cols="2,3,6"]
 |===
diff --git a/modules/eventing/pages/eventing-debugging-and-diagnosability.adoc b/modules/eventing/pages/eventing-debugging-and-diagnosability.adoc
index 17c9b2eb2d..1fe56a640a 100644
--- a/modules/eventing/pages/eventing-debugging-and-diagnosability.adoc
+++ b/modules/eventing/pages/eventing-debugging-and-diagnosability.adoc
@@ -1,16 +1,19 @@
 = Debugging and Diagnosability
 
 [abstract]
-Debugging and diagnostics in the Eventing Service comprises of debugging functions, logging functions, and log redaction.
+Debugging and diagnostics in the Eventing Service comprise debugging Functions, Function logs, and log redaction.
+ [#debugging-functions] -== *Debugging Functions* +== Debugging Functions Couchbase Server, for its Eventing Service framework, includes an online real-time Javascript Debugger. Debug is a special flag on a Function. The Debug option integrates seamlessly with the Google Chrome Debugger engine for running the Javascript code. -*Debugging Workflow* +Port *9140* is the default Eventing debug port. To change the default port settings, see xref:eventing-debugging-and-diagnosability.adoc#modifydebugport[Modifying the Debug Port]. + +=== Debugging Workflow * During a debug session, a single mutation received by the Function is considered and sent to the Debugger. This technique ensures that processing of the other data mutations in the cluster does not get blocked. @@ -19,14 +22,16 @@ This technique ensures that processing of the other data mutations in the cluste * Using the Debug option, you can place breakpoints in the code and run the Function execution, one line at a time. The step-step execution helps while troubleshooting the deployed Function. * If the debugged event-instance completes execution, no further event-instance gets trapped for debugging. +* If a debug session gets terminated during execution, then the mutation may be abruptly processed. * Debugging is a convenience-feature intended to help during Function development: it is not designed for production environments. -Debug mode should not be used in Product environments, as it affects the in-order processing of the document mutations. -In a production environment, debug sessions introduce timing issues. -If a debug session gets terminated during execution, then the mutation may be abruptly processed. +Debug mode should not be used in production environments, as it affects the in-order processing of the document mutations. +Additionally, the debug sessions introduce timing related issues. -*Debugging a Function Using the Debug Option* -. From the *Couchbase Web Console* > *Eventing* page, click on the name of a deployed Function. +=== Debugging a Function Using the Debug Option + +. To enable debugging, navigate to *Couchbase Web Console* > *Eventing* page, from the top banner, click *Settings* and check the *Enable Debugger* option. +. From the *Eventing* page, click on the name of a deployed Function. The deployed Function expands to provide additional options. Click *Edit JavaScript*. + @@ -44,23 +49,51 @@ image::debug_2.png[,300] + image::debug_3.png[,300] -. Copy the URL from the Debugging pop-up and paste it into your Google Chrome browser. +. Copy and paste the URL from the Debugging pop-up to your Google Chrome browser. + image::debug_4.png[,600] . From your Google Chrome browser, you can add breakpoints and run step-step diagnosis to debug and troubleshoot the deployed Function. -From the Debugging pop-up, click *Stop Debugging* to terminate a debug session. +. From the Debugging pop-up, click *Stop Debugging* to terminate a debug session. + +=== Transpiler and Source Map + +A transpiler accepts source code provided as input from one high-level programming language and produces an equivalent code in another high-level programming language. + +Couchbase Server uses a native transpiler. This transpiler converts the handler code to a code that the JavaScript engine can understand. If this transpiler was unavailable, then the JavaScript engine would have failed to compile any native N1QL queries. 
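+
+For context, the kind of handler statement that relies on the transpiler is the inline N1QL shown in the xref:eventing-examples-cascade-delete.adoc[Cascade Delete] example. A trimmed sketch of that handler is shown below (the *transactions* bucket and *user_id* field come from that example):
+
+----
+function OnDelete(meta) {
+    var this_user_id = meta.id;
+    // The next line is an inline N1QL statement, which is not valid JavaScript on its own;
+    // the native transpiler rewrites it into code that the JavaScript engine can compile.
+    var del = delete from `transactions` where user_id = TONUMBER($this_user_id);
+    del.execQuery();
+}
+----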
+
+The source map, generated by the native transpiler, provides a mapping between the transpiled code and the original Function handler code. Debugging is easier because the debugger detects the source map and presents the code in its original form.
+
+Upon source map detection, a text confirmation flag is displayed in the debug window (highlighted below).
+
+image::debug_sourcemap.png[,600]
+
+[#modifydebugport]
+=== Modifying the Debug Port
+
+By default, *ns_server* configures the Eventing debug port as *9140*. You can modify this default port by editing the *static_config* file.
+
+To modify the debug port:
+
+. Stop Couchbase Server.
+. Navigate to the */opt/couchbase/etc/couchbase/static_config* file. This is the location from which Couchbase Server picks up its configuration parameters.
+. Edit the *static_config* file to add the new *eventing_debug_port* entry and the new port number.
+. (Optional step) To remove any old configuration, delete the */opt/couchbase/var/lib/couchbase/config/config.dat* file.
+. Start Couchbase Server.
+
+*Note*: If no port number is specified, the default port is used. To override some or all of the default ports, append the user-defined ports to the *static_config* file.
 
 [#logging-functions]
 ---
-*Logging Functions*
+== Logging Functions
 
 The Eventing Service creates two different types of logs:
 
 * System Logs
 * Application Logs
 
-*System Logs*
+=== System Logs
 
 For the Eventing Service, Couchbase Server creates a separate system log file, termed as eventing.log.
 The system log file captures all the Eventing Service related system errors depending on the level and severity of the reported problem.
@@ -69,14 +102,14 @@ For every node, a single system log file gets created.
 The *eventing.log* contains redactable user data and the log is collected using the *cbcollect_info* tool.
 For log rotation, refer to xref:clustersetup:ui-logs.adoc[Using Logs].
 
-*Application Logs*
+=== Application Logs
 
 Application logs allow you to identify and capture various Function related activities and errors.
 All Function-related activities such as editing the handler code, debugging, or modifying feed boundaries conditions, get recorded in the Application logs.
 Couchbase Server creates an individual log file for every Function in the cluster.
 By default, the maximum size of an Application log file is 40MB, and the number of log files before rotation is 10.
-Unlike system logs, the Application logs are user configurable.
+Unlike system logs, the Application logs are user-configurable.
 
 You can access an Application log file using the command line interface.
 Couchbase Server creates different application log files depending on the level and severity of the reported problem, as configured during Function definition.
@@ -123,7 +156,7 @@ A corresponding Application log file, *enrich_ip_nums.log*, gets created in the
 Whenever the *enrich_ip_nums.log* reaches 10MB in size, assuming the maximum size of an Application log file is 10MB and the number of log files before rotation is 10, the system automatically generates the *enrich_ip_nums.log.1* file, during its first iteration.
 The file *enrich_ip_nums.log* transfers all the log information to this new log file.
 For this illustration, since the number of log files is 10, the system stores 10 such files, the currently active log file along with 9 truncated files, at any given instance.
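+
+Entries in an Application log include the output of the log() statements in the Function's handler code. A minimal sketch is shown below; the *ip_start* and *ip_end* fields are taken from the Data Enrichment example, and the exact format of the resulting log lines may differ:
+
+----
+function OnUpdate(doc, meta) {
+    // Each log() call is appended to this Function's Application log
+    // (enrich_ip_nums.log in the illustration above).
+    log('processing document', meta.id);
+    if (!doc["ip_start"] || !doc["ip_end"]) {
+        log('skipping document without ip_start/ip_end', meta.id);
+        return;
+    }
+}
+----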
--- + [#log-redaction] == Log Redaction diff --git a/modules/eventing/pages/eventing-example-data-enrichment.adoc b/modules/eventing/pages/eventing-example-data-enrichment.adoc new file mode 100644 index 0000000000..af1f0a4017 --- /dev/null +++ b/modules/eventing/pages/eventing-example-data-enrichment.adoc @@ -0,0 +1,111 @@ += Data Enrichment + +== Goal: A document contains attributes whose form makes them difficult to search on. +Therefore, on the document's creation or modification, a new attribute should be created to accompany each original attribute; this new attribute being instantiated with a value that directly corresponds to that of its associated, original attribute; but takes a different form, thereby becoming more supportive of search. +Original attributes are subsequently retrievable based on successful searches performed on new attributes. + +== Implementation: Create a JavaScript function that contains an *OnUpdate* handler. +The handler listens for data-changes within a specified, source bucket. +When any document within the bucket is created or mutated, the handler executes a user-defined routine. +In this example, if the created or mutated document contains two specifically named fields containing IP addresses (these respectively corresponding to the beginning and end of an address-range), the handler-routine converts each IP address to an integer. +A new document is created in a specified, target bucket: this new document is identical to the old, except that it has two additional fields, which contain integers that correspond to the IP addresses. +The original document, in the source bucket, is not changed. + +== Preparations* + +This example requires the creation of three buckets: metadata, target and source buckets. + +Proceed as follows: + +. Create target and metadata buckets. +To create a bucket, refer to xref:clustersetup:create-bucket.adoc[Create a Bucket]. +The target bucket contains documents that will be created by the Function. +Don’t add any documents explicitly to this bucket. +. Follow Step 1. +and create a source bucket. +In the Source bucket screen: + .. Click *Add Document*. + .. In the *Add Document* window, specify the name *SampleDocument* as the *New**Document ID* + .. Click *Save*. + .. In the *Edit Document* dialog, paste the following within the edit panel. ++ +[cols=1*] +|=== +| { + +"country": "AD", + +"ip_end": "5.62.60.9", + +"ip_start": "5.62.60.1" + +} +|=== + + .. Click *Save*. +This step concludes all required preparations. + +*Procedure* + +Proceed as follows: + +. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*, to add a new Function. +. In the *ADD FUNCTION* dialog, for individual Function elements provide the below information: + ** For the Source Bucket drop-down, select the *source* _option_ that was created for this purpose. + ** For the Metadata Bucket drop-down, select the *metadata* _option_ that was created for this purpose. + ** Enter *enrich_ip_nums* as the name of the Function you are creating in the *Function**Name* text-box. + ** Enter *Enrich a document, converts IP Strings to Integers that are stored in new attributes,* in the *Description* text-box. + ** For the *Settings* option, use the default values. + ** For the Bindings option, specify *target* as the name of the bucket, and specify *tgt* as the associated value. +. After providing all the required information in the *ADD FUNCTION* dialog, click *Next*: *Add Code*. +The *enrich_ip_nums* dialog appears. 
+The *enrich_ip_nums* dialog initially contains a placeholder code block. +You will substitute your actual *enrich_ip_nums* code in this block. ++ +image::addfunctions_ex1.png[,400] + +. Copy the following Function, and paste it in the placeholder code block of the *enrich_ip_nums* dialog: ++ +---- + function OnUpdate(doc, meta) { + log('document', doc); + doc["ip_num_start"] = get_numip_first_3_octets(doc["ip_start"]); + doc["ip_num_end"] = get_numip_first_3_octets(doc["ip_end"]); + tgt[meta.id]=doc; +} + +function get_numip_first_3_octets(ip) +{ + var return_val = 0; + if (ip) + { + var parts = ip.split('.'); + + //IP Number = A x (256*256*256) + B x (256*256) + C x 256 + D + return_val = (parts[0]*(256*256*256)) + (parts[1]*(256*256)) + (parts[2]*256) + parseInt(parts[3]); + return return_val; + } +} +---- ++ +After pasting, the screen appears as displayed below: ++ +image::enrich_ip_nums.png[,500] ++ +The *OnUpdate* routine specifies that when a change occurs to data within the bucket, the routine *get_numip_first_3_octets* is run on each document that contains *ip_start* and *ip_end*. +A new document is created whose data and metadata are based on those of the document on which *get_numip_first_3_octets* is run; but with the addition of *ip_num_start* and *ip_num_end data-fields*, which contain the numeric values returned by *get_numip_first_3_octets*. +The *get_numip_first_3_octets* routine splits the IP address, converts each fragment to a numeral, and adds the numerals together, to form a single value; which it returns. + +. Click *Save*. +. To return to the Eventing screen, click *Eventing* and click on the newly created Function name. +The Function *enrich_ip_nums* is listed as a defined Function. ++ +image::deploy_enrich_ip_nums.png[,400] + +. Click *Deploy*. +. From the *Confirm Deploy Function* dialog, click *Deploy Function*. +From this point, the defined Function is executed on all existing documents and on subsequent mutations. +. To check results of the deployed Function, click the *Documents* tab. +. Select *target* bucket from the *Bucket* drop-down.As this shows, a version of *SampleDocument* has been added to the *target* bucket. +It contains all the attributes of the original document, with the addition of *ip_num_start* and *ip_num_end*; which contain the numeric values that correspond to *ip_start* and *ip_end*, respectively. +Additional documents added to the *source* bucket, which share the *ip_start* and *ip_end* attributes, will be similarly handled by the defined Function: creating such a document, and changing any attribute in such a document both cause the Function's execution. diff --git a/modules/eventing/pages/eventing-examples-cascade-delete.adoc b/modules/eventing/pages/eventing-examples-cascade-delete.adoc new file mode 100644 index 0000000000..7f1480557f --- /dev/null +++ b/modules/eventing/pages/eventing-examples-cascade-delete.adoc @@ -0,0 +1,96 @@ += Cascade Delete + +*Goal*: This example illustrates how to leverage the Eventing Service to perform a cascade delete operation. +When a user is deleted, Couchbase Functions provide a reliable method to delete all the associated documents with the deleted user. + +This example requires you to create three buckets: users, metadata and transactions buckets. + +For steps to create buckets, see https://developer.couchbase.com/documentation/server/5.1/clustersetup/create-bucket.html[[.underline]#Create Bucket#^]. + +*Implementation*: Create a JavaScript Function that contains an *OnDelete* handler. 
+The handler listens to data-changes within a specified, *users* source bucket. +When a user within the source bucket gets deleted, the handler executes a routine to remove the deleted user. +When the delete operation is complete, all associated documents of the delete users get removed. + +Proceed as follows: + +. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*,to add a new Function. ++ +image::functions_add_4exp3.png[,400] + +. In the *ADD FUNCTION* dialog, for individual Function elements, provide the below information: + ** For the *Source Bucket* drop-down, select the *Users* that was created for this purpose. + ** For the *Metadata Bucket* drop-down, select the *metadata* that was created for this purpose. + ** Enter *delete_orphaned_txns* as the name of the Function you are creating in the *Function**Name* text-box. + ** Enter *Delete Orphaned Transactions from the `transactions’ bucket when user_id is less than 10* in the *Description* text-box. + ** For the *Settings* option, use the default values. + ** For the *Bindings* option, specify *users* as the *name* of the bucket and specify **src**_**user** as the associated *value*. +. After providing all the required information in the *ADD FUNCTION* dialog, click *Next*: *Add Code*. +The *delete_orphaned_txns* dialog appears. ++ +The *delete_orphaned_txns* dialog initially contains a placeholder code block. +You will substitute your actual *delete_orphaned_txns* code in this block. ++ +image::addfunctions-code_exp3.png[,400] + +. Copy the following Function, and paste it in the placeholder code block of the *delete_orphaned_txns* screen: ++ +---- +function OnUpdate(doc, meta) { + log('OnUpdate document:', meta.id); +} + +function OnDelete(meta) { + log('Document Deleted:', meta.id); + if(meta.id < 10) + { + try + { + var this_user_id = meta.id; + var del = delete from `transactions` where user_id = TONUMBER($this_user_id); + del.execQuery(); + log('Deleted Orphaned Transactions for User:', this_user_id); + } + catch(e) + { + log('Exception:', e) + } + } +} +---- ++ +After pasting, the screen appears as displayed below: ++ +image::ondelete-functions.png[,600] ++ +The *OnDelete* handler is triggered for user delete transaction. +The handler checks if the *user_id* is less than 10. +When this condition is fulfilled, then an N1QL query is triggered to delete all user related information. +The handler is then configured to record this delete operation in a Function specific application log file. +. + +. To return to the Eventing screen, click *Eventing*. +The Function *delete_orphaned_txns* is listed as a defined Function. +Currently, it is listed as *Undeployed* and *Paused*. +. Click *Deploy*. +. From the *Confirm Deploy Function* dialog, click *Deploy Function*. +From this point, the defined Function is executed on all existing documents and on subsequent mutations. +. Navigate to the *Couchbase Web Console* > *Query* page. +Before deleting a user, a snapshot of *Query Result* from the *users* bucket is displayed: ++ +image::queryresults_ondelerte.png[,400] + +. The *Query Results* display users with **user_id**s from 1 to 10. +. Navigate to the *Couchbase Web Console* > *Buckets* page. +Delete two users from the *Users* bucket: + ** Select *User4* from the list and click the *delete* icon. + ** Select *User10* from the list and click the *delete* icon. +. From the *Query Editor*, execute an N1QL query to check that all related records for the deleted users are removed from the cluster. 
++ +---- +SELECT user_id, COUNT(1) FROM `Users` GROUP BY user_id ORDER BY user_id ASC; +---- ++ +image::query-results-ondelete.png[,400] + +. In the *Query Results* pane notice that user_ids, *user_id4* and *user_id 10* are removed as part of the cascade user delete operation. diff --git a/modules/eventing/pages/eventing-examples-docexpiry.adoc b/modules/eventing/pages/eventing-examples-docexpiry.adoc new file mode 100644 index 0000000000..eb5c8c5492 --- /dev/null +++ b/modules/eventing/pages/eventing-examples-docexpiry.adoc @@ -0,0 +1,120 @@ += Document Expiry and Archival + +*Goal*: When a document in an existing bucket is about to expire, a new document is created in a newly created bucket. + +*Implementation*: Write an OnUpdate handler, which runs whenever a document is created or mutated. +The handler calls a timer routine, which executes a callback function, two minutes prior to any document’s established expiration. +This function retrieves a specified value from the document, and stores it in a document of the same name, in a specified target bucket. +The original document in the source bucket is not changed.. + +For this example, the buckets created such as source, target, and metadata buckets, are used. +A new document is created within the source bucket, and this document has its expiration — or Time To Live (TTL) — set to occur ten minutes after the document's creation. + +Python script for this Example is provided for reference. +Using the Couchbase SDK, you can create or modify the document expiration. +In this example, the Couchbase SDK Python client creates a document and sets the document's expiration. + +---- +from couchbase.cluster import Cluster +from couchbase.cluster import PasswordAuthenticator +import time +cluster = Cluster('couchbase://localhost:8091') +authenticator = PasswordAuthenticator('Administrator', 'password') +cluster.authenticate(authenticator) + +cb = cluster.open_bucket('source') +cb.upsert('SampleDocument2', {'a_key': 'a_value'}) +cb.touch('SampleDocument2', ttl=10*60) +---- + +The script imports a Couchbase cluster object, and authenticates against it, using (for demonstration purposes) the Full Administrator username and password (the cluster is assumed to be accessible on localhost). +The script then opens the existing source bucket, and inserts a new document, named *SampleDocument2*, whose body is *{'a_key': 'a_value'}*. + +For information on installing the Couchbase Python SDK, refer to xref:java-sdk::start-using-sdk.adoc[Start Using the SDK]. +For information on using the Couchbase Python SDK to establish bucket-expiration, refer to xref:dotnet-sdk::document-operations.adoc[Document Operations]. + +*Procedure* + +Proceed as follows: + +. Install the Couchbase SDK Python client and from the appropriate folder, start Python. ++ +---- +./python +---- + +. On the Python prompt, enter the provided code. ++ +---- +>>> from couchbase.cluster import Cluster +>>> from couchbase.cluster import PasswordAuthenticator +>>> import time +>>> cluster = Cluster('couchbase://localhost:8091') +>>> authenticator = PasswordAuthenticator('Administrator', 'password') +>>> cluster.authenticate(authenticator) +>>> cb = cluster.open_bucket('source') +>>> cb.upsert('SampleDocument2', {'a_key': 'a_value'}) +OperationResult +>>> cb.touch('SampleDocument2', ttl=10*60) +OperationResult +>>> +---- + +. To verify bucket creation, access the *Buckets* screen from the *Couchbase Web Console* and click the *Document* tab of the *Source* bucket. +The new document gets displayed. 
+. [Optional Step] Click on a document's id to view the metadata information. +. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*, to add a new Function. +The *ADD FUNCTION* dialog appears. +. In the *ADD FUNCTION* dialog, for individual Function elements provide the below information: + ** For the *Source Bucket* drop-down, select *source*. + ** For the *Metadata Bucket* drop-down, select *metadata*. + ** Enter *add_timer_before_expiry* as the name of the Function you are creating in the *FunctionName* text-box. + ** Enter text *Function that adds timer before document expiry*, in the *Description* text-box. + ** For the *Settings* option, use the default values. + ** For the *Bindings* option, add two bindings. +For the first binding specify *source* as the name of the bucket, and specify *src* as the associated value. +For the second binding, specify *target* as the name of the bucket, and specify *tgt* as the associated value. +. After providing all the required information in the *ADD FUNCTION* dialog, click *Next: Add Code*. +The *add_timer_before_expiry* dialog appears. +. The *add_timer_before_expiry* dialog initially contains a placeholder code block. +You will substitute your actual *add_timer_before_expiry code* in this block. ++ +image::casacade_del_withcode.png[,600] + +. Copy the following Function, and paste it in the placeholder code block of *add_timer_before_expiry* dialog. ++ +---- +function OnUpdate(doc, meta) { + if (meta.expiration > 0 ) //do only for those documents that have a non-zero TTL + { + var expiry = new Date(meta.expiration); + // Compute 2 minutes from the TTL timestamp + var twoMinsPrior = new Date(expiry.setMinutes(expiry.getMinutes()-2)); + var context = {docID : meta.id}; + createTimer(DocTimerCallback, twoMinsPrior , meta.id, context); + log('Added Doc Timer to DocId:', meta.id); + } +} +function DocTimerCallback(context) + { + log('DocTimerCallback Executed for DocId:', String(context.docID)); + tgt[context.docID] = "To Be Expired Key's Value is:" + JSON.stringify(src[context.docID]); + log('Doc Timer Executed for DocId', String(context.docID)); + } +---- ++ +After pasting, the screen appears as displayed below: ++ +image::casacade_del_withcode.png[,600] + +. Click *Save*. +. To return to the Eventing screen, click *Eventing* tab. +. From the *Eventing* screen, click *Deploy*. +. In the *Confirm Deploy Function* dialog, select *Everything from the Feed boundary* option. +. Click *Deploy*. +The function is deployed and starts running within a few seconds. ++ +image::cascade_delete_buckets.png[,600] ++ +As a result, a new document — like the original, named *SourceDocument2* — is created, with a value based on that of the original. +After two minutes has elapsed, check the documents within the source bucket: the original *SourceDocument2* is no longer visible, having been removed at its defined expiration-time. diff --git a/modules/eventing/pages/eventing-examples-high-risk.adoc b/modules/eventing/pages/eventing-examples-high-risk.adoc new file mode 100644 index 0000000000..aa0675515b --- /dev/null +++ b/modules/eventing/pages/eventing-examples-high-risk.adoc @@ -0,0 +1,137 @@ += Risk Assessment + +*Goal*: This example illustrates how to leverage Eventing Service in the Banking and Financial domain. +When a credit card transaction exceeds the user’s available credit limit, to indicate a high-risk transaction, an alert can be generated. 
+ +This example requires you to create four buckets: *flagged_transactions, users, metadata*_and_ *transactions*_buckets_. +For steps on how to create buckets, see https://developer.couchbase.com/documentation/server/5.1/clustersetup/create-bucket.html[[.underline]#Create Bucket#^]. + +*Implementation*: Create a JavaScript Function that contains an *OnUpdate* handler. +The handler listens to data-changes within a specified, *transactions* source bucket. +When a document within the source bucket is created or mutated, the handler executes a user-defined routine. +In this example, if the created or mutated document contains a high-risk transaction, a new document gets created in a specified, *flagged_transactions* bucket. + +Proceed as follows: + +. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*,to add a new Function. +The *ADD FUNCTION* dialog appears. +. In the *ADD FUNCTION* dialog, for individual Function elements provide the below information: + ** For the Source Bucket drop-down, select *transactions* that was created for this purpose. + ** For the Metadata Bucket drop-down, select *metadata* that was created for this purpose. + ** Enter *high_risks_transactions* as the name of the Function you are creating in the *Function**Name* text-box. + ** Enter *Functions that computes risky transaction and flags them,* in the *Description* text-box. + ** For the *Settings* option, use the default values. + ** For the *Bindings* option, add two bindings. +For the first binding specify *users* as the *name* of the bucket, and specify *user* as the associated *value*. +For the second binding, specify *flagged_transactions* as the *name* of the bucket, and specify *high_risk* as the associated *value*. +. After providing all the required information in the *ADD FUNCTION* dialog, click *Next*: *Add Code*. +The *high_risks_transactions* dialog appears. ++ +The *high_risks_transactions* dialog initially contains a placeholder code block. +You will substitute your actual *high_risks_transactions* code in this block. ++ +image::add_functions_code_exp2.png[,400] + +. 
Copy the following Function, and paste it in the placeholder code block of the *high_risks_transactions* dialog: ++ +---- +function OnUpdate(doc, meta) { + try + { + //log('txn id:', meta.id, '; user_id:', doc.user_id , ', doc.amount:', doc.amount); + var this_user = getUser(doc.user_id); + if (this_user) + { + if(this_user['creditlimit'] < doc.amount) + { + log('Txn['+String(meta.id)+']*****High Risk Transaction as Txn Amount:'+ String(doc.amount)+' exceeds Credit Limit:',this_user['creditlimit']); + doc["comments"] = "High Risk Transaction as Txn Amount exceeds Credit Limit " +String(this_user['creditlimit']); + doc["reason_code"] = "X-CREDIT"; + high_risk[meta.id] = doc; + return; + } + else + { + if(doc.txn_currency != this_user['currency']) + { + log('Txn['+ String(meta.id) +']*****High Risk Transaction - Currency Mismatch:'+ this_user['currency']); + doc["comments"] = "High Risk Transaction - Currency Mismatch:" + this_user['currency']; + doc["reason_code"] = "XE-MISMATCH"; + high_risk[meta.id] = doc; + return; + } + } + //log('Acceptable Transaction:',doc.amount, ' for Credit Limit:', this_user['creditlimit']); + } + else + { + log('Txn['+ String(meta.id) + "] User Does not Exist:" + String(doc.user_id) ); + } + } + catch (e) + { + log('Error OnUpdate :', String(meta.id), e); + } +} + +function OnDelete(meta) { + log('Document OnDelete:', meta.id); +} + +function getUser(userId) +{ + try + { + if(userId != null) + { + return user[userId]; + } + } + catch (e) + { + log('Error getUser :', userId,'; Exception:', e); + } + return null; +} +---- ++ +After pasting, the screen appears as displayed below: ++ +image::high_risks_transactions_handler_code.png[,600] ++ +The OnUpdate handler is triggered for every transaction. +The handler checks if the transaction amount is less than the user’s available credit limit. +When this condition is breached, then this transaction is flagged as a high-risk transaction. +The Function _high_risks_transactions_ then moves this transaction to a different bucket, _flagged_transactions_ bucket. +When the transaction is moved to a new bucket, the handler enriches the document with predefined _comments_ and also provides a _reason code_*.* In the last part, the handler performs a currency validation step. +If the transaction currency is other than the preconfigured home currency of the user, then the handler flags the transactions and moves it to a different bucket. + +. Click *Save*. +. To return to the Eventing screen, click *Eventing*. ++ +image::high_risks_transactions_handler_deploy.png[,400] ++ +The Function __high_risks_transactions__is listed as a defined Function. +Currently, it is listed as *Undeployed* and *Paused*. + +. Click *Deploy*. +. From the *Confirm Deploy Function* dialog, click *Deploy Function*. +This deploys the Function and displays the main *Eventing* screen. +From this point, the defined Function is executed on all existing documents and on subsequent mutations. +. To check results of the deployed Function, after a sufficient time elapse, from the *Couchbase Web Console* > *Eventing* page, click *Buckets*. +. Click _flagged_transactions_ bucket. +All documents available in this bucket are transactions that are flagged as high-risk transactions. ++ +image::buckets.png[,600] ++ +This indicates that transactions which were flagged as high risk gets moved to the _flagged_transactions_ bucket. + +. 
From the *Couchbase Web Console* > *Query* page, execute the below N1QL query: ++ +---- +SELECT reason_code, COUNT(1), num_txns, SUM(amount) amount +FROM `flagged_transactions` +GROUP BY reason_code; +---- ++ +image::N1QL-Query.png[,400] diff --git a/modules/eventing/pages/eventing-examples.adoc b/modules/eventing/pages/eventing-examples.adoc index d5dae6498d..c7cc326706 100644 --- a/modules/eventing/pages/eventing-examples.adoc +++ b/modules/eventing/pages/eventing-examples.adoc @@ -3,479 +3,7 @@ [abstract] This page contains examples of how to use the Eventing Service, using the Couchbase Web Console. -[#example-1] -== Example 1 - -*Goal*: A document contains attributes whose form makes them difficult to search on. -Therefore, on the document's creation or modification, a new attribute should be created to accompany each original attribute; this new attribute being instantiated with a value that directly corresponds to that of its associated, original attribute; but takes a different form, thereby becoming more supportive of search. -Original attributes are subsequently retrievable based on successful searches performed on new attributes. - -*Implementation*: Create a JavaScript function that contains an *OnUpdate* handler. -The handler listens for data-changes within a specified, source bucket. -When any document within the bucket is created or mutated, the handler executes a user-defined routine. -In this example, if the created or mutated document contains two specifically named fields containing IP addresses (these respectively corresponding to the beginning and end of an address-range), the handler-routine converts each IP address to an integer. -A new document is created in a specified, target bucket: this new document is identical to the old, except that it has two additional fields, which contain integers that correspond to the IP addresses. -The original document, in the source bucket, is not changed. - -*Preparations* - -This example requires the creation of three buckets: metadata, target and source buckets. - -Proceed as follows: - -. Create target and metadata buckets. -To create a bucket, refer to xref:clustersetup:create-bucket.adoc[Create a Bucket]. -The target bucket contains documents that will be created by the Function. -Don’t add any documents explicitly to this bucket. -. Follow Step 1. -and create a source bucket. -In the Source bucket screen: - .. Click *Add Document*. - .. In the *Add Document* window, specify the name *SampleDocument* as the *New**Document ID* - .. Click *Save*. - .. In the *Edit Document* dialog, paste the following within the edit panel. -+ -[cols=1*] -|=== -| { - -"country": "AD", - -"ip_end": "5.62.60.9", - -"ip_start": "5.62.60.1" - -} -|=== - - .. Click *Save*. -This step concludes all required preparations. - -*Procedure* - -Proceed as follows: - -. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*, to add a new Function. -. In the *ADD FUNCTION* dialog, for individual Function elements provide the below information: - ** For the Source Bucket drop-down, select the *source* _option_ that was created for this purpose. - ** For the Metadata Bucket drop-down, select the *metadata* _option_ that was created for this purpose. - ** Enter *enrich_ip_nums* as the name of the Function you are creating in the *Function**Name* text-box. - ** Enter *Enrich a document, converts IP Strings to Integers that are stored in new attributes,* in the *Description* text-box. - ** For the *Settings* option, use the default values. 
- ** For the Bindings option, specify *target* as the name of the bucket, and specify *tgt* as the associated value. -. After providing all the required information in the *ADD FUNCTION* dialog, click *Next*: *Add Code*. -The *enrich_ip_nums* dialog appears. -The *enrich_ip_nums* dialog initially contains a placeholder code block. -You will substitute your actual *enrich_ip_nums* code in this block. -+ -image::addfunctions_ex1.png[,400] - -. Copy the following Function, and paste it in the placeholder code block of the *enrich_ip_nums* dialog: -+ ----- - function OnUpdate(doc, meta) { - log('document', doc); - doc["ip_num_start"] = get_numip_first_3_octets(doc["ip_start"]); - doc["ip_num_end"] = get_numip_first_3_octets(doc["ip_end"]); - tgt[meta.id]=doc; -} - -function get_numip_first_3_octets(ip) -{ - var return_val = 0; - if (ip) - { - var parts = ip.split('.'); - - //IP Number = A x (256*256*256) + B x (256*256) + C x 256 + D - return_val = (parts[0]*(256*256*256)) + (parts[1]*(256*256)) + (parts[2]*256) + parseInt(parts[3]); - return return_val; - } -} ----- -+ -After pasting, the screen appears as displayed below: -+ -image::enrich_ip_nums.png[,500] -+ -The *OnUpdate* routine specifies that when a change occurs to data within the bucket, the routine *get_numip_first_3_octets* is run on each document that contains *ip_start* and *ip_end*. -A new document is created whose data and metadata are based on those of the document on which *get_numip_first_3_octets* is run; but with the addition of *ip_num_start* and *ip_num_end data-fields*, which contain the numeric values returned by *get_numip_first_3_octets*. -The *get_numip_first_3_octets* routine splits the IP address, converts each fragment to a numeral, and adds the numerals together, to form a single value; which it returns. - -. Click *Save*. -. To return to the Eventing screen, click *Eventing* and click on the newly created Function name. -The Function *enrich_ip_nums* is listed as a defined Function. -+ -image::deploy_enrich_ip_nums.png[,400] - -. Click *Deploy*. -. From the *Confirm Deploy Function* dialog, click *Deploy Function*. -From this point, the defined Function is executed on all existing documents and on subsequent mutations. -. To check results of the deployed Function, click the *Documents* tab. -. Select *target* bucket from the *Bucket* drop-down.As this shows, a version of *SampleDocument* has been added to the *target* bucket. -It contains all the attributes of the original document, with the addition of *ip_num_start* and *ip_num_end*; which contain the numeric values that correspond to *ip_start* and *ip_end*, respectively. -Additional documents added to the *source* bucket, which share the *ip_start* and *ip_end* attributes, will be similarly handled by the defined Function: creating such a document, and changing any attribute in such a document both cause the Function's execution. - -[#example-2] -== Example 2 - -*Goal*: This example illustrates how to leverage Eventing Service in the Banking and Financial domain. -When a credit card transaction exceeds the user’s available credit limit, to indicate a high-risk transaction, an alert can be generated. - -This example requires you to create four buckets: *flagged_transactions, users, metadata*_and_ *transactions*_buckets_. -For steps on how to create buckets, see https://developer.couchbase.com/documentation/server/5.1/clustersetup/create-bucket.html[[.underline]#Create Bucket#^]. 
- -*Implementation*: Create a JavaScript Function that contains an *OnUpdate* handler. -The handler listens to data-changes within a specified, *transactions* source bucket. -When a document within the source bucket is created or mutated, the handler executes a user-defined routine. -In this example, if the created or mutated document contains a high-risk transaction, a new document gets created in a specified, *flagged_transactions* bucket. - -Proceed as follows: - -. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*,to add a new Function. -The *ADD FUNCTION* dialog appears. -. In the *ADD FUNCTION* dialog, for individual Function elements provide the below information: - ** For the Source Bucket drop-down, select *transactions* that was created for this purpose. - ** For the Metadata Bucket drop-down, select *metadata* that was created for this purpose. - ** Enter *high_risks_transactions* as the name of the Function you are creating in the *Function**Name* text-box. - ** Enter *Functions that computes risky transaction and flags them,* in the *Description* text-box. - ** For the *Settings* option, use the default values. - ** For the *Bindings* option, add two bindings. -For the first binding specify *users* as the *name* of the bucket, and specify *user* as the associated *value*. -For the second binding, specify *flagged_transactions* as the *name* of the bucket, and specify *high_risk* as the associated *value*. -. After providing all the required information in the *ADD FUNCTION* dialog, click *Next*: *Add Code*. -The *high_risks_transactions* dialog appears. -+ -The *high_risks_transactions* dialog initially contains a placeholder code block. -You will substitute your actual *high_risks_transactions* code in this block. -+ -image::add_functions_code_exp2.png[,400] - -. 
Copy the following Function, and paste it in the placeholder code block of the *high_risks_transactions* dialog: -+ ----- -function OnUpdate(doc, meta) { - try - { - //log('txn id:', meta.id, '; user_id:', doc.user_id , ', doc.amount:', doc.amount); - var this_user = getUser(doc.user_id); - if (this_user) - { - if(this_user['creditlimit'] < doc.amount) - { - log('Txn['+String(meta.id)+']*****High Risk Transaction as Txn Amount:'+ String(doc.amount)+' exceeds Credit Limit:',this_user['creditlimit']); - doc["comments"] = "High Risk Transaction as Txn Amount exceeds Credit Limit " +String(this_user['creditlimit']); - doc["reason_code"] = "X-CREDIT"; - high_risk[meta.id] = doc; - return; - } - else - { - if(doc.txn_currency != this_user['currency']) - { - log('Txn['+ String(meta.id) +']*****High Risk Transaction - Currency Mismatch:'+ this_user['currency']); - doc["comments"] = "High Risk Transaction - Currency Mismatch:" + this_user['currency']; - doc["reason_code"] = "XE-MISMATCH"; - high_risk[meta.id] = doc; - return; - } - } - //log('Acceptable Transaction:',doc.amount, ' for Credit Limit:', this_user['creditlimit']); - } - else - { - log('Txn['+ String(meta.id) + "] User Does not Exist:" + String(doc.user_id) ); - } - } - catch (e) - { - log('Error OnUpdate :', String(meta.id), e); - } -} - -function OnDelete(meta) { - log('Document OnDelete:', meta.id); -} - -function getUser(userId) -{ - try - { - if(userId != null) - { - return user[userId]; - } - } - catch (e) - { - log('Error getUser :', userId,'; Exception:', e); - } - return null; -} ----- -+ -After pasting, the screen appears as displayed below: -+ -image::high_risks_transactions_handler_code.png[,600] -+ -The OnUpdate handler is triggered for every transaction. -The handler checks if the transaction amount is less than the user’s available credit limit. -When this condition is breached, then this transaction is flagged as a high-risk transaction. -The Function _high_risks_transactions_ then moves this transaction to a different bucket, _flagged_transactions_ bucket. -When the transaction is moved to a new bucket, the handler enriches the document with predefined _comments_ and also provides a _reason code_*.* In the last part, the handler performs a currency validation step. -If the transaction currency is other than the preconfigured home currency of the user, then the handler flags the transactions and moves it to a different bucket. - -. Click *Save*. -. To return to the Eventing screen, click *Eventing*. -+ -image::high_risks_transactions_handler_deploy.png[,400] -+ -The Function __high_risks_transactions__is listed as a defined Function. -Currently, it is listed as *Undeployed* and *Paused*. - -. Click *Deploy*. -. From the *Confirm Deploy Function* dialog, click *Deploy Function*. -This deploys the Function and displays the main *Eventing* screen. -From this point, the defined Function is executed on all existing documents and on subsequent mutations. -. To check results of the deployed Function, after a sufficient time elapse, from the *Couchbase Web Console* > *Eventing* page, click *Buckets*. -. Click _flagged_transactions_ bucket. -All documents available in this bucket are transactions that are flagged as high-risk transactions. -+ -image::buckets.png[,600] -+ -This indicates that transactions which were flagged as high risk gets moved to the _flagged_transactions_ bucket. - -. 
From the *Couchbase Web Console* > *Query* page, execute the below N1QL query: -+ ----- -SELECT reason_code, COUNT(1), num_txns, SUM(amount) amount -FROM `flagged_transactions` -GROUP BY reason_code; ----- -+ -image::N1QL-Query.png[,400] - -[#example-3] -== Example 3 - -*Goal*: This example illustrates how to leverage the Eventing Service to perform a cascade delete operation. -When a user is deleted, Couchbase Functions provide a reliable method to delete all the associated documents with the deleted user. - -This example requires you to create three buckets: users, metadata and transactions buckets. - -For steps to create buckets, see https://developer.couchbase.com/documentation/server/5.1/clustersetup/create-bucket.html[[.underline]#Create Bucket#^]. - -*Implementation*: Create a JavaScript Function that contains an *OnDelete* handler. -The handler listens to data-changes within a specified, *users* source bucket. -When a user within the source bucket gets deleted, the handler executes a routine to remove the deleted user. -When the delete operation is complete, all associated documents of the delete users get removed. - -Proceed as follows: - -. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*,to add a new Function. -+ -image::functions_add_4exp3.png[,400] - -. In the *ADD FUNCTION* dialog, for individual Function elements, provide the below information: - ** For the *Source Bucket* drop-down, select the *Users* that was created for this purpose. - ** For the *Metadata Bucket* drop-down, select the *metadata* that was created for this purpose. - ** Enter *delete_orphaned_txns* as the name of the Function you are creating in the *Function**Name* text-box. - ** Enter *Delete Orphaned Transactions from the `transactions’ bucket when user_id is less than 10* in the *Description* text-box. - ** For the *Settings* option, use the default values. - ** For the *Bindings* option, specify *users* as the *name* of the bucket and specify **src**_**user** as the associated *value*. -. After providing all the required information in the *ADD FUNCTION* dialog, click *Next*: *Add Code*. -The *delete_orphaned_txns* dialog appears. -+ -The *delete_orphaned_txns* dialog initially contains a placeholder code block. -You will substitute your actual *delete_orphaned_txns* code in this block. -+ -image::addfunctions-code_exp3.png[,400] - -. Copy the following Function, and paste it in the placeholder code block of the *delete_orphaned_txns* screen: -+ ----- -function OnUpdate(doc, meta) { - log('OnUpdate document:', meta.id); -} - -function OnDelete(meta) { - log('Document Deleted:', meta.id); - if(meta.id < 10) - { - try - { - var this_user_id = meta.id; - var del = delete from `transactions` where user_id = TONUMBER($this_user_id); - del.execQuery(); - log('Deleted Orphaned Transactions for User:', this_user_id); - } - catch(e) - { - log('Exception:', e) - } - } -} ----- -+ -After pasting, the screen appears as displayed below: -+ -image::ondelete-functions.png[,600] -+ -The *OnDelete* handler is triggered for user delete transaction. -The handler checks if the *user_id* is less than 10. -When this condition is fulfilled, then an N1QL query is triggered to delete all user related information. -The handler is then configured to record this delete operation in a Function specific application log file. -. - -. To return to the Eventing screen, click *Eventing*. -The Function *delete_orphaned_txns* is listed as a defined Function. -Currently, it is listed as *Undeployed* and *Paused*. 
-. Click *Deploy*. -. From the *Confirm Deploy Function* dialog, click *Deploy Function*. -From this point, the defined Function is executed on all existing documents and on subsequent mutations. -. Navigate to the *Couchbase Web Console* > *Query* page. -Before deleting a user, a snapshot of *Query Result* from the *users* bucket is displayed: -+ -image::queryresults_ondelerte.png[,400] - -. The *Query Results* display users with **user_id**s from 1 to 10. -. Navigate to the *Couchbase Web Console* > *Buckets* page. -Delete two users from the *Users* bucket: - ** Select *User4* from the list and click the *delete* icon. - ** Select *User10* from the list and click the *delete* icon. -. From the *Query Editor*, execute an N1QL query to check that all related records for the deleted users are removed from the cluster. -+ ----- -SELECT user_id, COUNT(1) FROM `Users` GROUP BY user_id ORDER BY user_id ASC; ----- -+ -image::query-results-ondelete.png[,400] - -. In the *Query Results* pane notice that user_ids, *user_id4* and *user_id 10* are removed as part of the cascade user delete operation. - -== Example 4 - -*Goal*: When a document in an existing bucket is about to expire, a new document is created in a newly created bucket. - -*Implementation*: Write an OnUpdate handler, which runs whenever a document is created or mutated. -The handler calls a timer routine, which executes a callback function, two minutes prior to any document’s established expiration. -This function retrieves a specified value from the document, and stores it in a document of the same name, in a specified target bucket. -The original document in the source bucket is not changed.. - -For this example, the buckets created such as source, target, and metadata buckets, are used. -A new document is created within the source bucket, and this document has its expiration — or Time To Live (TTL) — set to occur ten minutes after the document's creation. - -Python script for this Example is provided for reference. -Using the Couchbase SDK, you can create or modify the document expiration. -In this example, the Couchbase SDK Python client creates a document and sets the document's expiration. - ----- -from couchbase.cluster import Cluster -from couchbase.cluster import PasswordAuthenticator -import time -cluster = Cluster('couchbase://localhost:8091') -authenticator = PasswordAuthenticator('Administrator', 'password') -cluster.authenticate(authenticator) - -cb = cluster.open_bucket('source') -cb.upsert('SampleDocument2', {'a_key': 'a_value'}) -cb.touch('SampleDocument2', ttl=10*60) ----- - -The script imports a Couchbase cluster object, and authenticates against it, using (for demonstration purposes) the Full Administrator username and password (the cluster is assumed to be accessible on localhost). -The script then opens the existing source bucket, and inserts a new document, named *SampleDocument2*, whose body is *{'a_key': 'a_value'}*. - -For information on installing the Couchbase Python SDK, refer to xref:java-sdk::start-using-sdk.adoc[Start Using the SDK]. -For information on using the Couchbase Python SDK to establish bucket-expiration, refer to xref:dotnet-sdk::document-operations.adoc[Document Operations]. - -*Procedure* - -Proceed as follows: - -. Install the Couchbase SDK Python client and from the appropriate folder, start Python. -+ ----- -./python ----- - -. On the Python prompt, enter the provided code. 
-+ ----- ->>> from couchbase.cluster import Cluster ->>> from couchbase.cluster import PasswordAuthenticator ->>> import time ->>> cluster = Cluster('couchbase://localhost:8091') ->>> authenticator = PasswordAuthenticator('Administrator', 'password') ->>> cluster.authenticate(authenticator) ->>> cb = cluster.open_bucket('source') ->>> cb.upsert('SampleDocument2', {'a_key': 'a_value'}) -OperationResult ->>> cb.touch('SampleDocument2', ttl=10*60) -OperationResult ->>> ----- - -. To verify bucket creation, access the *Buckets* screen from the *Couchbase Web Console* and click the *Document* tab of the *Source* bucket. -The new document gets displayed. -. [Optional Step] Click on a document's id to view the metadata information. -. From the *Couchbase Web Console* > *Eventing* page, click *ADD FUNCTION*, to add a new Function. -The *ADD FUNCTION* dialog appears. -. In the *ADD FUNCTION* dialog, for individual Function elements provide the below information: - ** For the *Source Bucket* drop-down, select *source*. - ** For the *Metadata Bucket* drop-down, select *metadata*. - ** Enter *add_timer_before_expiry* as the name of the Function you are creating in the *FunctionName* text-box. - ** Enter text *Function that adds timer before document expiry*, in the *Description* text-box. - ** For the *Settings* option, use the default values. - ** For the *Bindings* option, add two bindings. -For the first binding specify *source* as the name of the bucket, and specify *src* as the associated value. -For the second binding, specify *target* as the name of the bucket, and specify *tgt* as the associated value. -. After providing all the required information in the *ADD FUNCTION* dialog, click *Next: Add Code*. -The *add_timer_before_expiry* dialog appears. -. The *add_timer_before_expiry* dialog initially contains a placeholder code block. -You will substitute your actual *add_timer_before_expiry code* in this block. -+ -image::cascasde_delete.png[,600] -+ --\-> - -. Copy the following Function, and paste it in the placeholder code block of *add_timer_before_expiry* dialog. -+ ----- -function OnUpdate(doc, meta) { - if (meta.expiration > 0 ) //do only for those documents that have a non-zero TTL - { - var expiry = new Date(meta.expiration); - // Compute 2 minutes from the TTL timestamp - var twoMinsPrior = new Date(expiry.setMinutes(expiry.getMinutes()-2)); - var context = {docID : meta.id}; - createTimer(DocTimerCallback, twoMinsPrior , meta.id, context); - log('Added Doc Timer to DocId:', meta.id); - } -} -function DocTimerCallback(context) - { - log('DocTimerCallback Executed for DocId:', String(context.docID)); - tgt[context.docID] = "To Be Expired Key's Value is:" + JSON.stringify(src[context.docID]); - log('Doc Timer Executed for DocId', String(context.docID)); - } ----- -+ -After pasting, the screen appears as displayed below: -+ -image::casacade_del_withcode.png[,600] -+ --\-> - -. Click *Save*. -. To return to the Eventing screen, click *Eventing* tab. -. From the *Eventing* screen, click *Deploy*. -. In the *Confirm Deploy Function* dialog, select *Everything from the Feed boundary* option. -. Click *Deploy*. -The function is deployed and starts running within a few seconds. -+ -image::cascade_delete_buckets.png[,600] -+ --\-> -+ -As a result, a new document — like the original, named *SourceDocument2* — is created, with a value based on that of the original. 
-After two minutes has elapsed, check the documents within the source bucket: the original *SourceDocument2* is no longer visible, having been removed at its defined expiration-time. +. xref:eventing:eventing-example-data-enrichment.adoc[Data Enrichment] +. xref:eventing:eventing-examples-high-risk.adoc[Risk Assessment] +. xref:eventing:eventing-examples-cascade-delete.adoc[Cascade Delete] +. xref:eventing:eventing-examples-docexpiry.adoc[Document Expiry and Archival] diff --git a/modules/eventing/pages/eventing-faq.adoc b/modules/eventing/pages/eventing-faq.adoc index fcd5f5e29a..515ec3771c 100644 --- a/modules/eventing/pages/eventing-faq.adoc +++ b/modules/eventing/pages/eventing-faq.adoc @@ -1,44 +1,65 @@ = Frequently Asked Questions [abstract] -Some questions, frequently asked about the Eventing service and Functions. +This section provides answers to commonly asked questions pertaining to the Eventing Service and Functions. -== Frequently Asked Questions -* What languages are supported? -+ -Javascript is the language to be used while creating Functions. -Node modules cannot be imported: only simple Javascript can be used. +== Generic * Is Eventing an MDS-enabled service? + + -Yes. -MDS enables workload isolation in a Couchbase cluster. +Yes. MDS enables workload isolation in a Couchbase cluster. Eventing nodes can be added as more Functions are added, or if the mutation rates are high. + +* What kind of nodes do I run my Eventing Service on? ++ +Eventing leverages the latest trends in multi-core CPUs; therefore nodes that you select for the Eventing Service should optimally have a higher number of cores than those for indexing. + + +* Can the Eventing Service be co-located with other services? ++ +Yes, the Eventing Service node can be co-located with other MDS-enabled services. + + +* Is it supported for both text and binary formats of the documents? ++ +The value or the document must always be text in the form of a JSON document. + + +== Change Capture + * Will all the updates to a document be seen in DCP? + When a document is updated multiple times in a small time-window, not all updates are seen in DCP. This means that the event handlers see only those deduplicated events that are published in DCP. -* Is it supported for both text and binary formats of the documents? + +* Can a Function listen to all changes across Couchbase Server? + -The value or the document must always be text in the form of a JSON document. +A defined Function listens only to changes that are exposed using the DCP for the buckets (Couchbase and Ephemeral) in the data-nodes. +Memcached buckets are not supported. +The Function cannot listen to changes happening in other Couchbase components, such as FTS, Views, or GSI; nor can it listen to system events. + * Can we determine what has changed on the document? + -No. -But you can implement versioning as part of the application logic. +No. But you can implement versioning as part of the application logic. If versioning is important, include it as part of the data architecture. -* What kind of node do I run my Eventing Service on? + +* Can I get old and new values of the document inside the Function? + + -Eventing leverages the latest trends in multi-core CPUs; therefore nodes selected for Eventing should optimally have a higher number of cores than those for indexing. +No. We do not support versioning of documents; therefore, this feature is not available out of the box. However, you can have a parent bucket and its replica version. 
The replica bucket would retain the version of documents before the current set of changes. -* How does the Functions offering compare with the Couchbase’s Kafka Connector? +* Is the ordering of document mutations enforced? + -The Functions offering is about server-side processing or compute; it does not require any middleware to be deployed or managed. -Couchbase’s Kafka connector is an SDK component that needs an application container or middleware to run. +All changes from a document are always processed in order. + + +== Functions * Are Functions similar to a Database Trigger? + @@ -46,6 +67,7 @@ In a rough sense, Functions are similar to the Post-Triggers of the database wor But with Functions, the action is already completed at the data-layer, and the event handler gives an interface by which developers can key in the logic of what needs to happen ‘after’ the action is done. What a Function sees is the actual event of the change, and hence it does not directly correlate with Database Triggers. + * Are Functions similar to a Rules Engine? + Not exactly. @@ -53,6 +75,7 @@ A Rules Engine enforces ordering and other semantics that is not possible out of However, Functions can be used to implement one - even Rules Engine. The Function in its purest form offers a rule to implemented closer to the data, but cannot trigger another Function directly. + * Are Functions similar to a Stored Procedure? + Stored Procedures enforce a request-response model. @@ -61,72 +84,53 @@ Functions are based on the idea of events. Changes to the data are events. These events trigger Functions, and hence this is not a request-response model in its purest sense. -* Do I need a separate middleware to consume the Functions? -How do I consume changes in my middleware/application? + +* Do I need a separate middleware to consume the Functions? How do I consume changes in my middleware/application? + -Database changes are consumed using the Function defined: there is no other programmatic way of accessing the changes (such as by using an SDK, or some other form of middleware). -REST endpoints are exposed, to perform administrative operations. +Database changes are consumed using the Function defined: there is no other programmatic way of accessing the changes (such as by using an SDK, or some other form of middleware). REST endpoints are exposed, to perform administrative operations + * Can I import my own JS libraries or Node Modules? + No. We do not allow import of node modules or external JS libraries. -* Can a Function listen to all changes across Couchbase Server? -+ -A defined Function listens only to changes that are exposed using the DCP for the buckets (Couchbase and Ephemeral) in the data-nodes. -Memcached buckets are not supported. -The Function cannot listen to changes happening in other Couchbase components, such as FTS, Views, or GSI; nor can it listen to system events. -* Can there be more than one Function listening to changes to a bucket? +* Are Functions supported for both Data Node and Document storage? + -Yes. -More than one Function can be defined for the same bucket. -This lets you process the change according to the business logic that you enforce. -But there is no ordering enforced; for example, if bucket 'wine' has three different Functions, which are FunctionA, FunctionB, and FunctionC, you cannot enforce the order in which these Functions are executed. -Also, database triggers suffered from scalability and diagnosability issues. 
-Functions offer multiple diagnosability solutions and is highly scalable and performant. +The Eventing Service listens to changes that appear in the DCP. +DCP is valid for the Data Service, and Functions operate on documents that are either in key-value or in document (JSON) format. -* What is the metadata bucket? -Do I need to create a separate bucket? -+ -To provide better restartability semantics when an Eventing node is offline, some metadata needs to be stored: a Couchbase bucket solves this problem. -Setting up the metadata bucket is a one-time activity that is done cluster-wide. -It is recommended that the metadata bucket not be used for any other data-storage (which is to say, it should not be accessed by any other application). -* What is in the "meta" Function parameter (OnUpdate, OnDelete)? -Is this the metadata we currently write in order to figure out what has changed in the document? +* What happens when a Function is debugged? + -These are the meta fields associated with the document. -For more information, refer to the https://developer.couchbase.com/documentation/server/3.x/developer/dev-guide-3.0/keys-values.html[Link^] section. +We block one of the mutations alone and hand it over to the debugger session. +The rest of the mutations continue to be serviced by the event handler. -* Are Functions supported for both KV and Document storage? -+ -The Eventing Service listens to changes that appear in the DCP. -DCP is valid for the Data Service, and Functions operate on documents that are either in key-value or in document (JSON) format. -* Is it possible to get additional state during a Function execution? -For example, can you read from the data service in a Function to fetch related data? -For example, can we enrich the updated document with data from another document (using a document id)? +* How to perform Functions lifecycle operations from CI/CD? + -Yes. -You can read from any other bucket, and enrich the document. +To perform Functions lifecycle operations from CI/CD, refer to https://developer.couchbase.com/documentation/server/6.0/cli/cbcli/couchbase-cli-eventing-function-setup.html[CLI Eventing] section. -* Can I get old and new values of the document inside the Function? + +* How to invoke a REST endpoint from inside the Function? + -No. -We do not support versioning of documents; therefore, this feature is not available out of the box. -Though customers can have another ‘Mother’ bucket that stores documents that could be looked up, in order to determine the difference between the current document and the last modified. +To invoke a REST Endpoint from inside the Function, refer to https://developer.couchbase.com/documentation/server/6.0/eventing/eventing-api.html[Functions REST API] section. -* Does a rebalance have any effect on the firing of events? + +* How does the Functions offering compare with the Couchbase’s Kafka Connector? + -No. -Functions do not lose any mutations during a rebalance operations. +The Functions offering is about server-side processing or compute; it does not require any middleware to be deployed or managed. +Couchbase’s Kafka connector is an SDK component that needs an application container or middleware to run. -* What happens to the Eventing Service during a failover condition? + +== Function Handler Code + +* What languages are supported? + -When the Data Service experiences a failover condition, mutations may be lost and these lost mutations are not processed by the Eventing service. 
-When the Eventing node experiences a failover condition, few mutations may be processed more than once. +Javascript is the language to be used while creating Functions. +Node modules cannot be imported: only simple Javascript can be used. + * Why can’t I create global variables? + @@ -134,31 +138,43 @@ We restrict the language model in such a way that chances of going wrong are min As Functions are stateless compute entities, global variables do not have a good use-case, and therefore, they are not supported. Though you can define Javascript functions inside a Function (outside the scope of OnUpdate and OnDelete) that can be invoked any number of times from either of the event handlers. -* Is the ordering of document mutations enforced? + +* What is in the "meta" Function parameter (OnUpdate, OnDelete)? Is this the metadata we currently write in order to figure out what has changed in the document? + -All changes from a document are always processed in order. +These are the meta fields associated with the document. For more information, refer to the Link section. -* What happens when a Function is debugged? + +* What is the metadata bucket? Do I need to create a separate bucket? + -We block one of the mutations alone and hand it over to the debugger session. -The rest of the mutations continue to be serviced by the event handler. +To provide better restartability semantics when an Eventing node is offline, some metadata needs to be stored: a Couchbase bucket solves this problem. +Setting up the metadata bucket is a one-time activity that is done cluster-wide. +It is recommended that the metadata bucket not be used for any other data-storage (which is to say, it should not be accessed by any other application). -* Are timers scalable? + +* Can there be more than one Function listening to changes to a bucket? + -Timers get automatically sharded across Eventing nodes and therefore are elastically scalable. -Due to sharding, triggering of timers at or after a specified time interval is guaranteed. -However, triggering of timers may either be on the same node where the time was created, or on a different node. -Relative ordering between two specific timers cannot be maintained. +Yes. +More than one Function can be defined for the same bucket. +This lets you process the change according to the business logic that you enforce. +But there is no ordering enforced; for example, if bucket 'wine' has three different Functions, which are FunctionA, FunctionB, and FunctionC, you cannot enforce the order in which these Functions are executed. +Also, database triggers suffered from scalability and diagnosability issues. +Functions offer multiple diagnosability solutions and is highly scalable and performant. + -* Can I use Debugger to debug timers? +* Is it possible to get additional state during a Function execution? For example, can you read from the data service in a Function to fetch related data? For example, can we enrich the updated document with data from another document (using a document id)? + -Timers cannot be debugged using the Visual Debugger. +Yes. +You can read from any other bucket, and enrich the document. + -* What happens when the Function handler code contains a timestamp in the past? +== Cluster Behavior + +* What happens to the Eventing Service during a failover condition? + -When a Function handler code contains a timestamp in the past, upon a successful Function deployment, the system executes the code in the next available time slot. 
+When the Data service experiences a failover condition, mutations may be lost and these lost mutations are not processed by the Eventing service. +When the Eventing node experiences a failover condition, few mutations may be processed more than once. + -* What is the Timer behavior post reboot? +* Does a rebalance have any effect on the firing of events? + -During a boot operation, all clocks in the cluster nodes get synchronized. -Post-startup, cluster nodes get periodically synchronized using clock synchronization tools such as Network Time Protocol (NTP). +No. Functions do not lose any mutations during a rebalance operation. diff --git a/modules/eventing/pages/eventing-language-constructs.adoc b/modules/eventing/pages/eventing-language-constructs.adoc index 54d9d75978..1f4b9af4d3 100644 --- a/modules/eventing/pages/eventing-language-constructs.adoc +++ b/modules/eventing/pages/eventing-language-constructs.adoc @@ -113,6 +113,48 @@ function OnUpdate(doc, meta) { break; // Cancel streaming query by breaking out. } } + +---- +The Function handler code supports N1QL queries. +Top level N1QL keywords, such as SELECT, UPDATE, and INSERT, are available as keywords in Functions. + +During deployment, if a handler code includes an N1QL query, then the system generates a warning message. +[.out]``Warning Message: "Handler uses Beta features. +Do not use in production environments."``However, the warning message does not prevent the Function deployment. + +You must use [.var]`$`, as per N1QL specification, to use a JavaScript variable in the query statement. +The object expressions for substitution are not supported and therefore you cannot use the [.param]`meta.id` expression in the query statement. + +Instead of [.param]`meta.id` expression, you can use `var id = meta.id` in an N1QL query. + +* Invalid N1QL query ++ +---- +DELETE FROM `transactions` WHERE username = $meta.id; +---- + +* Valid N1QL query ++ +---- +var id = meta.id; +DELETE FROM `transactions` WHERE username = $id; +---- + +When you use a N1QL query inside a Function handler, remember to use an escaped identifier for bucket names with special characters (`[.var]`bucket-name``). +Escaped identifiers are surrounded by backticks and support all identifiers in JSON + +For example: + +* If the bucket name is [.param]`beer-sample`, then use the N1QL query such as: ++ +---- +SELECT * FROM `beer-sample` WHERE type... +---- + +* If bucket name is [.param]`beersample`, then use the N1QL query such as: ++ +---- +SELECT * FROM beersample WHERE type ... ---- [#handler-signatures] @@ -238,56 +280,12 @@ Reserved words as a property bindings value image::reserved-words.png[,300] -== *Support for N1QL in Function Handlers* - -IMPORTANT: The N1QL queries in events are a BETA feature and may have some rough edges and bugs, and may change significantly before the final GA release. -This Beta version of Couchbase Server is intended for development purposes only; no Enterprise Support is provided for Beta features. - -The Function handler code supports N1QL queries. -Top level N1QL keywords, such as SELECT, UPDATE, and INSERT, are available as keywords in Functions. - -During deployment, if a handler code includes an N1QL query, then the system generates a warning message. -[.out]``Warning Message: "Handler uses Beta features. -Do not use in production environments."``However, the warning message does not prevent the Function deployment. - -You must use [.var]`$`, as per N1QL specification, to use a JavaScript variable in the query statement. 
-The object expressions for substitution are not supported and therefore you cannot use the [.param]`meta.id` expression in the query statement. - -Instead of [.param]`meta.id` expression, you can use `var id = meta.id` in an N1QL query. - -* Invalid N1QL query -+ ---- -DELETE FROM `transactions` WHERE username = $meta.id; ---- - -* Valid N1QL query -+ ---- -var id = meta.id; -DELETE FROM `transactions` WHERE username = $id; ---- - -When you use a N1QL query inside a Function handler, remember to use an escaped identifier for bucket names with special characters (`[.var]`bucket-name``). -Escaped identifiers are surrounded by backticks and support all identifiers in JSON - -For example: - -* If the bucket name is [.param]`beer-sample`, then use the N1QL query such as: -+ ---- -SELECT * FROM `beer-sample` WHERE type... ---- - -* If bucket name is [.param]`beersample`, then use the N1QL query such as: -+ ---- -SELECT * FROM beersample WHERE type ... ---- [#timers] == Timers *Creating a Timer* To create a timer use the below syntax: ---- createTimer(callback, timestamp, reference, context) ----
diff --git a/modules/eventing/pages/eventing-timers.adoc b/modules/eventing/pages/eventing-timers.adoc
new file mode 100644
index 0000000000..ac09478ab8
--- /dev/null
+++ b/modules/eventing/pages/eventing-timers.adoc
@@ -0,0 +1,77 @@
+= Timers
+
+Timers are an asynchronous compute construct that lets a Function execute code in relation to wall-clock events. Timers can also measure and track elapsed time, and can be used, for example, to archive expiring documents at a preconfigured time.
+
+A few important aspects of timers are listed below:
+
+* Timers follow the same timeout semantics as their parent Functions. So, if a Function has an execution timeout of 60 seconds, each timer created from that Function inherits the same execution timeout value of 60 seconds.
+* A timer may run on a different node than the one on which it was created.
+* A single execution of each timer is guaranteed, despite node failures and cluster rebalances.
+* During Function backlogs, timers are eventually executed.
+* The metadata bucket stores information about timers and their association with a Function.
+* Ensure that the metadata bucket is not deleted or flushed, and that its keys are not updated.
+* As the use of timers grows, the memory assigned to the metadata bucket must also be increased.
+* If triggering of a timer fails because of runtime or programmatic errors in the Function handler code, execution of that timer may be permanently blocked.
+* Bindings can be reused in timers: bindings created during the Function definition can be accessed by the timer callbacks in the Function handler code.
+* Timers are deleted when the associated Function is deleted or undeployed.
+
+== Language Constructs
+
+The timer language construct is added to support the requirements of Couchbase Functions.
+
+To create a timer, use the following syntax:
+
+----
+createTimer(callback, timestamp, reference, context)
+----
+In the createTimer syntax (a short handler sketch follows this list):
+
+* callback - the function that is called when the timer is triggered. Ensure that the callback is a top-level function that takes a single argument, the context.
+* timestamp - the JavaScript Date object that specifies when the Function handler code must be executed.
+* reference - a unique string that identifies the timer being created. All callbacks and references are scoped to the Function definition, and each reference must be unique within that scope. When multiple timers are created with the same reference, older timers with that reference are cancelled.
+* context - any JavaScript object that can be serialized. When the timer is triggered, the context specified during timer creation is passed to the callback function. For optimal performance, the context object payload should be smaller than 100 KB.
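+
+The following is a minimal sketch of how these arguments fit together inside a handler. The two-minute offset, the `fireAt` variable, and the callback body are illustrative only; the construct itself is broken down argument by argument below.
+
+----
+function OnUpdate(doc, meta) {
+    // Fire the callback two minutes from now (an illustrative offset).
+    var fireAt = new Date();
+    fireAt.setMinutes(fireAt.getMinutes() + 2);
+
+    // The context must be a serializable JavaScript object.
+    var context = {docID: meta.id};
+
+    // meta.id doubles as the unique reference for this timer.
+    createTimer(DocTimerCallback, fireAt, meta.id, context);
+    log('Created timer for DocId:', meta.id);
+}
+
+function DocTimerCallback(context) {
+    // The context supplied at creation time is passed back to the callback.
+    log('DocTimerCallback executed for DocId:', String(context.docID));
+}
+----
+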
+A sample createTimer language construct is provided for reference.
+----
+createTimer(DocTimerCallback, twoMinsPrior, meta.id, context)
+----
+In the sample construct:
+
+* DocTimerCallback is the name of the callback function defined in the Function handler code.
+* twoMinsPrior is a JavaScript Date object that specifies when the timer fires.
+* meta.id is used as the unique reference string that identifies the timer.
+* context is the JavaScript object that is passed to the callback when the timer fires.
+
+
+=== Sharding of Timers
+
+Timers are automatically sharded across Eventing nodes and are therefore elastically scalable. Triggering of a timer at or after the specified time is guaranteed. However, a timer may be triggered either on the node where it was created or on a different node. Relative ordering between two specific timers cannot be maintained.
+
+=== Debugging and Logs
+
+Timers cannot be debugged using the Visual Debugger. For debugging, Couchbase recommends enclosing timer code in a try-catch block. When logging is enabled, timer-related logs are captured as part of the Application logs.
+
+=== Elapsed Timestamps
+
+During runtime, when the Function handler code contains a timestamp in the past (an elapsed timestamp), the system executes the code in the next available time window, as soon as the required resources are available.
+
+=== Handling Delays
+
+During Function backlogs, execution of timers may be delayed. To handle these delays, program an additional time window into your code. If your business logic is time-sensitive and this additional window is also breached, the code should refrain from executing the rest of the timer callback.
+
+The following sample callback performs a timestamp check against a deadline (my_deadline) carried in the timer context before executing the remaining code.
+
+----
+function callback(context)
+{
+    // context.my_deadline is the parameter in the timer payload
+    if (new Date().getTime() > context.my_deadline)
+    {
+        // timestamp is back-dated, do not execute the rest of the timer
+        return;
+    }
+}
+----
+
+== Examples
+
+For a complete example that uses timers, see xref:eventing-examples-docexpiry.adoc[Document Expiry and Archival] in the Eventing examples section.
diff --git a/modules/eventing/pages/troubleshooting-best-practices.adoc b/modules/eventing/pages/troubleshooting-best-practices.adoc index 59787faf15..ef6f49df96 100644 --- a/modules/eventing/pages/troubleshooting-best-practices.adoc +++ b/modules/eventing/pages/troubleshooting-best-practices.adoc @@ -1,6 +1,6 @@ = Troubleshooting and Best Practices -== *What happens when more Workers are allocated for a Function?* +== What happens when more Workers are allocated for a Function? Couchbase Server for a specific Function limits the maximum number of workers to 10. This upper limit is configured for system optimization purposes. @@ -10,7 +10,7 @@ However, the warning message does not prevent the Function deployment. *Warning Message*: "There are eventing workers configured to run. System performance may be impacted." -== *Can this release of Couchbase Server process cURL commands?* +== Can this release of Couchbase Server process cURL commands?
Support for cURL commands in Functions is for demo purposes. Ensure not to use cURL commands in your production environment. @@ -23,7 +23,7 @@ Do not use in production environments." IMPORTANT: cURL commands are a Developer Preview feature intended for development purposes only, do not use them in production; no Enterprise Support is provided for Developer Preview features. -== *When should developers use the try-catch block in Function handlers?* +== When should developers use the try-catch block in Function handlers? As a best practice, while programming the Function handler code, for basic error handling and debugging operations, it is recommended that application developers use the try-catch block. @@ -34,6 +34,9 @@ These error logs get stored in the application log file. By default, JavaScript runtime errors get stored in the system logs. Unlike system logs, troubleshooting and debugging operations are easy when you use the try-catch block and application log options. +During runtime, Application logs, by default, do not capture any handler code exceptions. To log exceptions, it is recommended to encapsulate your code in a try catch block. + + A sample try catch block is provided for reference: ---- @@ -51,19 +54,17 @@ function OnUpdate(doc, meta) } ---- -== *What are bucket allocation considerations during a Function definition?* +[#cyclicredun] +== What are bucket allocation considerations during a Function definition? -Function handlers can trigger data mutations. -To avoid a cyclic generation of data changes, ensure that you carefully consider the below aspects when you select the source and destination buckets: +Function handlers can trigger data mutations. To avoid a cyclic generation of data changes, ensure that you carefully consider the below aspects while specifying source and destination buckets: * Avoid infinite recursions. -If you are using a series of handlers, then ensure that destination buckets to which event handlers perform a write operation, do not have other Function handlers configured to listen and track data mutations. -* Couchbase Server can flag simple infinite recursions. -However, in a long chain of the source and destination buckets with a series of handlers, a complex infinite recursion condition may occur. -As a developer, carefully consider these cases while allocating source and destination buckets. -* As a best practice, ensure that buckets to which Function handlers perform a write operation do not have other handlers configured for tracking data mutations. +If you are using a series of handlers, then ensure that destination buckets to which event handlers perform a write operation, do not have other Function handlers configured to listen and track data mutations. + +Couchbase Server can flag simple infinite recursions. However, in a long chain of source and destination buckets with a series of handlers, a complex infinite recursion condition may occur. As a developer, carefully consider these cases. +* As a best practice, ensure that buckets to which the Function handler performs a write operation do not have other handlers configured for tracking data mutations. -== *In the cluster, I notice a sharp increase in the Timeouts Statistics. What are my next steps?* +== In the cluster, I notice a sharp increase in the Timeouts Statistics. What are my next steps? 
When the Timeout Statistics shows a sharp increase, it may be due to two possible scenarios: @@ -75,25 +76,3 @@ Ensure that you configure the script timeout value after carefully evaluating th As a best practice use a combination of try-catch block and the application log options. This way you can monitor, debug and troubleshoot errors during the Function execution. - -== What are a few best practices while passing timer related timestamps? - -Perform a timestamp check to avoid triggering of timers during the stale-time period. - -To handle delays during Function backlogs, in the Function handler code, you can program some additional time to allow triggering of timers. -If this additional time period is also breached, then the time status is considered as stale-time. -If your business logic is time-sensitive, then the Function handler code should refrain from triggering of timers during this stale-time period. - -The following code snippet ensures a valid timestamp check is performed before the Function handler code gets executed. - ----- -func callback(context) -{ - //context.my_deadline is the parameter in the timer payload - if new Date().getTime() > context.my_deadline - { - // timestamp is back-dated, do not execute the rest of the timer - return; - } -} ----- diff --git a/modules/install/pages/best-practices-vm.adoc b/modules/install/pages/best-practices-vm.adoc index 40160292ec..a6647367f9 100644 --- a/modules/install/pages/best-practices-vm.adoc +++ b/modules/install/pages/best-practices-vm.adoc @@ -88,7 +88,7 @@ To set the ulimits in your container, you need to run Couchbase Docker container ---- docker run -d --ulimit nofile=40960:40960 --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 ---name db -p 8091-8094:8091-8094 -p 11210:11210 couchbase +--name db -p 8091-8096:8091-8096 -p 11210-11211:11210-11211 couchbase ---- Since `unlimited` is not supported as a value, it sets the `core` and `memlock` values to 100 GB. diff --git a/modules/install/pages/getting-started-docker.adoc b/modules/install/pages/getting-started-docker.adoc index d0e077fd44..5556200b99 100644 --- a/modules/install/pages/getting-started-docker.adoc +++ b/modules/install/pages/getting-started-docker.adoc @@ -5,7 +5,8 @@ Using the official Couchbase Server images on Docker Hub, it's easy to get start If you're trying Couchbase Server for the first time and just want to explore a Couchbase configuration, the quickest way to install a pre-configured single-node using Docker is to follow the xref:getting-started:start-here.adoc[Start Here!] tutorial. -For more traditional Docker deployments, review the <> and <> deployment instructions in this topic, which use the official Couchbase Docker images available on https://hub.docker.com/_/couchbase/[Docker Hub^]. +For more traditional Docker deployments, review the <> and <> deployment instructions in this topic, which use the official Couchbase Docker images available on https://hub.docker.com/_/couchbase/[Docker Hub^]. The Official Couchbase Server containers on Docker Hub are based on Ubuntu 16.04. + [#section_jvt_zvj_42b] == Deploying a Single-Node Cluster
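+
+As a quick illustration of the single-node deployment described in this section, a container can be started from the official `couchbase` image with the standard port mappings (8091-8096 and 11210-11211); this is a minimal sketch only, the container name `db` is arbitrary, and options such as the ulimit settings shown earlier can be added as needed.
+
+----
+docker run -d --name db \
+  -p 8091-8096:8091-8096 -p 11210-11211:11210-11211 \
+  couchbase
+----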