diff --git a/Getting Started/Getting Started with FastScore/index.md b/Getting Started/Getting Started with FastScore/index.md
index ff5a897..6af9e7b 100644
--- a/Getting Started/Getting Started with FastScore/index.md
+++ b/Getting Started/Getting Started with FastScore/index.md
@@ -228,7 +228,7 @@ This will install the required dependencies. The FastScore CLI is a Python tool,
 > ```
 
-Once you've installed the FastScore CLI, check that it works by executing the following command in your terminal. Also see [FastScore Command Line Interface](https://opendatagroup.github.io/Product%20Documentation/FastScore%20Command%20Line%20Interface.html) for more information on subcommands.
+Once you've installed the FastScore CLI, check that it works by executing the following command in your terminal. Also see [FastScore Command Line Interface](https://opendatagroup.github.io/Reference/FastScore%20CLI/) for more information on subcommands.
 
 ``` bash
 $ fastscore help
@@ -427,7 +427,7 @@ def end():
     pass
 ```
 
-This model returns the sum of two numbers. Note that we are able to import Python's standard modules, such as the `pickle` module. Non-default packages can also be added using [Import Policies, as described here](https://opendatagroup.github.io/Product%20Documentation/Import%20Policies.html). Custom classes and packages can be loaded using attachments, as described in the [Gradient Boosting Regressor tutorial](https://opendatagroup.github.io/Examples%20and%20Tutorials/Gradient%20Boosting%20Regressor.html).
+This model returns the sum of two numbers. Note that we are able to import Python's standard modules, such as the `pickle` module. Non-default packages can also be added using [Import Policies, as described here](https://opendatagroup.github.io/Product%20Manuals/Import%20Policies/). Custom classes and packages can be loaded using attachments, as described in the [Gradient Boosting Regressor tutorial](https://opendatagroup.github.io/Knowledge%20Center/Tutorials/Gradient%20Boosting%20Regressor/).
 
 #### R Models
 R models feature much of the same functionality as Python models, as well as the same constraint: the user must define an action function to perform the actual scoring. For example, the analogous model to the Python model above is
@@ -505,7 +505,7 @@ Stream Descriptors are small JSON files containing information about the stream.
 }
 ```
 
-Stream descriptors are documented in more detail [on the stream descriptor page](stream-descriptors). The easiest type of stream to use is a file stream, which reads or writes records directly from/to a file inside of the FastScore engine container. Here is an example of such a stream:
+Stream descriptors are documented in more detail [on the stream descriptor page](https://opendatagroup.github.io/Product%20Manuals/Stream%20Descriptors/). The easiest type of stream to use is a file stream, which reads or writes records directly from/to a file inside of the FastScore engine container. Here is an example of such a stream:
 
 ``` json
 {
@@ -603,4 +603,4 @@ To run a model using the FastScore CLI, use the `fastscore job` sequence of comm
 * `fastscore job status` and `fastscore job statistics` display various information about the currently running job. Some of the statistics displayed by the `fastscore job statistics` command, such as memory usage, are also shown on the Dashboard.
 
-This concludes the FastScore Getting Started guide. Additional FastScore API documentation is available at [https://opendatagroup.github.io/API/](https://opendatagroup.github.io/API/). 
-Happy scoring!
+This concludes the FastScore Getting Started guide. Additional FastScore API documentation is available at [https://opendatagroup.github.io/Reference/FastScore%20API/](https://opendatagroup.github.io/Reference/FastScore%20API/). Happy scoring!
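For readers skimming these hunks: the "sum of two numbers" model referenced above is an ordinary Python file whose input and output schemas are declared in smart comments and whose `action` function yields one score per input record. A minimal sketch of that shape (the schema names, and the assumption that each record arrives as a two-element list, are placeholders rather than the guide's verbatim listing):

``` python
# fastscore.input: two-doubles
# fastscore.output: double
# (the schema names above are placeholders; the guide's listing defines its own)

def begin():
    # optional: runs once at job start, e.g. to unpickle a saved object
    pass

def action(datum):
    # action is a generator: each score is emitted with yield, not return
    yield datum[0] + datum[1]

def end():
    pass
```

The `def end(): pass` context visible in the second hunk above is the tail of the guide's own listing, which follows this same pattern.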
diff --git a/Knowledge Center/Tutorials/Deploy a Workflow with Composer/index.md b/Knowledge Center/Tutorials/Deploy a Workflow with Composer/index.md
index dbf8a80..0edeec9 100644
--- a/Knowledge Center/Tutorials/Deploy a Workflow with Composer/index.md
+++ b/Knowledge Center/Tutorials/Deploy a Workflow with Composer/index.md
@@ -361,7 +361,7 @@ to skip ahead to the next section---you can
 It's easy to deploy FastScore Composer using Docker Compose or Docker Swarm.
 For this example, we'll use Swarm. Docker Swarm uses the same YAML definition
 files as Docker Compose, so you can re-use the example Docker Compose file from the
-[Getting Started Guide](../../Getting Started/Getting Started with FastScore/) or
+[Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/) or
 download [all the files needed for this step here](https://s3-us-west-1.amazonaws.com/fastscore-examples/tf_composer_files.tar.gz).
 
 Composer consists of three microservices: Designer (a web GUI), Composer (the
diff --git a/Knowledge Center/Tutorials/Gradient Boosting Regressor/index.md b/Knowledge Center/Tutorials/Gradient Boosting Regressor/index.md
index 84ae577..731e1ed 100644
--- a/Knowledge Center/Tutorials/Gradient Boosting Regressor/index.md
+++ b/Knowledge Center/Tutorials/Gradient Boosting Regressor/index.md
@@ -202,7 +202,7 @@ Loading our GBR model to FastScore can be broken into two steps: preparing the m
 In the previous section, we created a small Python script to score our incoming auto records using the trained gradient boosting regressor and our custom feature transformer. In this example, the training of the model has already been done, so we'll only need to adapt the trained model to produce scores.
 
-As discussed in the [Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html), Python models in FastScore must deliver scores using an `action` method. Note that the `action` method operates as a generator, so scores are obtained from `yield` statements, rather than `return` statements. Additionally, because we don't want to re-load our trained model with every score, we'll define a `begin` method to do all of the model initialization. If a model defines a `begin` method, this method will be called at the start of the job.
+As discussed in the [Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/), Python models in FastScore must deliver scores using an `action` method. Note that the `action` method operates as a generator, so scores are obtained from `yield` statements, rather than `return` statements. Additionally, because we don't want to re-load our trained model with every score, we'll define a `begin` method to do all of the model initialization. If a model defines a `begin` method, this method will be called at the start of the job.
 
 After these alterations, our model looks like this:
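The listing that the sentence above introduces lies outside this hunk's context window. Purely as an illustration of the `begin`/`action` pattern the changed paragraph describes (the schema names, pickle file name, and feature handling here are placeholders, not the tutorial's actual `score_auto_gbm.py`), such a model has this general shape:

``` python
import pickle

# fastscore.input: auto-record
# fastscore.output: predicted-value
# (schema names and the file name below are placeholders)

model = None

def begin():
    # runs once at job start: load the trained regressor a single time
    # instead of re-loading it for every record
    global model
    with open("gbm.pkl", "rb") as f:
        model = pickle.load(f)

def action(datum):
    # action is a generator, so each score is emitted with yield
    yield model.predict([datum])[0]
```

Keeping the unpickling in `begin` means the trained regressor is loaded once per job rather than once per record, which is the point the changed paragraph makes.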
@@ -315,7 +315,7 @@ The input stream descriptor includes the more complicated schema, encapsulating
 
 ### Starting and Configuring FastScore
 
-This step may differ if you're using a custom FastScore deployment. 
-If you're just using the [standard deployment from the Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html#section-start-fastscore-microservices-suite-with-docker-compose-recommended-), starting up FastScore is as easy as executing the following command:
+This step may differ if you're using a custom FastScore deployment. If you're just using the [standard deployment from the Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/), starting up FastScore is as easy as executing the following command:
 
 ``` bash
 docker-compose up -d
@@ -415,7 +415,7 @@ fastscore model add GBM score_auto_gbm.py
 fastscore attachment upload GBM gbm.tar.gz
 ```
 
-Steps for setting configuration through the Dashboard are covered in the [Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html#section-using-the-fastscore-dashboard).
+Steps for setting configuration through the Dashboard are covered in the [Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/#section-using-the-fastscore-dashboard).
 
 After adding the model, attachment, and streams to FastScore, you can inspect them from the FastScore Dashboard:
diff --git a/Product Manuals/Import Policies/index.md b/Product Manuals/Import Policies/index.md
index aa0e20c..a465b8e 100644
--- a/Product Manuals/Import Policies/index.md
+++ b/Product Manuals/Import Policies/index.md
@@ -44,4 +44,4 @@ A model runner's import policy manifest is loaded from the `import.policy` file
 
 In FastScore 1.4, the import policy for a model runner is fixed as soon as a model is loaded into the engine, so any changes to import policies must be made _before_ running a model. To copy a new manifest into the container, use the [`docker cp`](https://docs.docker.com/engine/reference/commandline/cp/) command or an equivalent.
 
-Adding import policies to an engine through the command `fastscore policy set my-policy.yml` is now available with v1.6. See [FastScore Command Line Interface](https://opendatagroup.github.io/Product%20Documentation/FastScore%20Command%20Line%20Interface.html) for more information on subcommands.
\ No newline at end of file
+Adding import policies to an engine through the command `fastscore policy set my-policy.yml` is now available with v1.6. See [FastScore Command Line Interface](https://opendatagroup.github.io/Reference/FastScore%20CLI/) for more information on subcommands.
\ No newline at end of file
diff --git a/Product Manuals/LDAP Authentication/index.md b/Product Manuals/LDAP Authentication/index.md
index 0b0f893..28be2ed 100644
--- a/Product Manuals/LDAP Authentication/index.md
+++ b/Product Manuals/LDAP Authentication/index.md
@@ -11,7 +11,7 @@ Starting with FastScore v1.4, the FastScore Dashboard and Proxy support Microsof
 
 This section assumes you already possess an existing Vault service. If you haven't configured Vault yet, [read the Vault configuration section below](#configuring-vault-in-docker).
 
-Authentication in FastScore is achieved through the Dashboard service. Recall from the [Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html) that Dashboard is designed to serve as a proxy for the FastScore fleet's REST API, as well as a visual configuration and diagnostic aid. By default, authentication is not enabled in Dashboard. 
-To enable it, set the following environment variables:
+Authentication in FastScore is achieved through the Dashboard service. Recall from the [Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/) that Dashboard is designed to serve as a proxy for the FastScore fleet's REST API, as well as a visual configuration and diagnostic aid. By default, authentication is not enabled in Dashboard. To enable it, set the following environment variables:
 
 | Name | Default Value | Description |
 | --- | --- | --- |
diff --git a/Product Manuals/Multiple Input and Output Streams/index.md b/Product Manuals/Multiple Input and Output Streams/index.md
index a18cad1..e1932d2 100644
--- a/Product Manuals/Multiple Input and Output Streams/index.md
+++ b/Product Manuals/Multiple Input and Output Streams/index.md
@@ -10,7 +10,7 @@ The Engine has multiple slots to attached streams where the even slot numbers
 starting at 0 are for inputs and the odd slot numbers starting with 1 are for
 outputs.
 
-![Stream slots diagram](multi1.png)
+![Stream slots diagram](multi2.png)
 
 This is particularly useful when the output of the models provides data for
 multiple purposes or the model requires data from multiple data sources to run.
@@ -70,6 +70,6 @@ expects. The following model uses three input (0, 2, and 4) and two output slots
 # fastscore.schema.1: schema-2
 # fastscore.slot.3: in-use
 ```
-See [Model Annotations](../Model Annotations.md) for more information about the
+See [Model Annotations](https://opendatagroup.github.io/Product%20Manuals/Model%20Annotations/) for more information about the
 new-style model annotations.
diff --git a/Product Manuals/Multiple Input and Output Streams/multi2.png b/Product Manuals/Multiple Input and Output Streams/multi2.png
new file mode 100644
index 0000000..4bd7d2b
Binary files /dev/null and b/Product Manuals/Multiple Input and Output Streams/multi2.png differ
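The smart comments visible in the index.md hunk above follow the slot-numbering rule this manual describes: even slots (0, 2, 4, ...) are inputs and odd slots (1, 3, ...) are outputs. As an illustration of that annotation convention only (the schema names are placeholders, and how records are routed between slots at runtime is covered by the manual itself rather than assumed here), a multi-slot model header might look like:

``` python
# Input slots (even numbers):
# fastscore.schema.0: schema-0
# fastscore.schema.2: schema-1
# fastscore.schema.4: schema-1
# Output slots (odd numbers):
# fastscore.schema.1: schema-2
# fastscore.slot.3: in-use

def action(datum):
    # per-record scoring logic for the attached streams goes here
    yield datum
```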
diff --git a/Product Manuals/Schema Reference/index.md b/Product Manuals/Schema Reference/index.md
index a21b86d..52d5756 100644
--- a/Product Manuals/Schema Reference/index.md
+++ b/Product Manuals/Schema Reference/index.md
@@ -8,7 +8,7 @@ excerpt: ""
 
 FastScore enforces strict typing of engine inputs and outputs at two levels: stream input/output, and model input/output. Types are declared using [AVRO schema](https://avro.apache.org/docs/1.8.1/).
 
-To support this functionality, FastScore's Model Manage maintains a database of named AVRO schemas. Python and R models must then reference their input and output schemas using smart comments. (PrettyPFA and PFA models instead explicitly include their AVRO types as part of the model format.) [Stream descriptors](https://opendatagroup.github.io/Product%20Documentation/Stream%20Descriptors.html) may either reference a named schema from Model Manage, or they may explicitly declare schemas.
+To support this functionality, FastScore's Model Manage maintains a database of named AVRO schemas. Python and R models must then reference their input and output schemas using smart comments. (PrettyPFA and PFA models instead explicitly include their AVRO types as part of the model format.) [Stream descriptors](https://opendatagroup.github.io/Product%20Manuals/Stream%20Descriptors/) may either reference a named schema from Model Manage, or they may explicitly declare schemas. 
 In either case, FastScore performs the following type checks:
 
@@ -66,13 +66,13 @@ and score this record to produce
 {"name":"Bob", "product":"6.0"}
 ```
 
-[Once FastScore is running](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html), we can add the model and associated schemas to model manage with the following commands:
+[Once FastScore is running](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/), we can add the model and associated schemas to model manage with the following commands:
 ```
 fastscore schema add named-array named-array.avsc
 fastscore schema add named-double named-double.avsc
 fastscore model add my_model model.py
 ```
-Assuming that additionally, we have [configured the input and output stream descriptors](https://opendatagroup.github.io/Product%20Documentation/Stream%20Descriptors.html) to use our schemas, we can then run the job with
+Assuming that additionally, we have [configured the input and output stream descriptors](https://opendatagroup.github.io/Product%20Manuals/Stream%20Descriptors/) to use our schemas, we can then run the job with
 ```
 fastscore job run my_model
 ```
diff --git a/Product Manuals/State Sharing and Snapshotting/index.md b/Product Manuals/State Sharing and Snapshotting/index.md
index 78af598..942870a 100644
--- a/Product Manuals/State Sharing and Snapshotting/index.md
+++ b/Product Manuals/State Sharing and Snapshotting/index.md
@@ -93,7 +93,7 @@ fastscore snapshot list
 fastscore snapshot restore
 ```
 
-The `snapshot list` command shows the saved snapshots for a given model. The `snapshot restore` command restores the specified snapshot for a particular model. Snapshots are automatically created upon receipt of an end-of-stream message, but these end-of-stream messages can be introduced as control records into the data stream for streaming transports (e.g. Kafka). For more information on control records, see [Record Sets and Control Records](https://opendatagroup.github.io/Product%20Documentation/Record%20Sets%20and%20Control%20Records.html).
+The `snapshot list` command shows the saved snapshots for a given model. The `snapshot restore` command restores the specified snapshot for a particular model. Snapshots are automatically created upon receipt of an end-of-stream message, but these end-of-stream messages can be introduced as control records into the data stream for streaming transports (e.g. Kafka). For more information on control records, see [Record Sets and Control Records](https://opendatagroup.github.io/Product%20Manuals/Record%20Sets%20and%20Control%20Records/).
 
 To enable snapshots, use the `fastscore.snapshots` smart comment:
 ```