Merged
8 changes: 4 additions & 4 deletions Getting Started/Getting Started with FastScore/index.md
Original file line number Diff line number Diff line change
Expand Up @@ -228,7 +228,7 @@ This will install the required dependencies. The FastScore CLI is a Python tool,
> ```


-Once you've installed the FastScore CLI, check that it works by executing the following command in your terminal. Also see [FastScore Command Line Interface](https://opendatagroup.github.io/Product%20Documentation/FastScore%20Command%20Line%20Interface.html) for more information on subcommands.
+Once you've installed the FastScore CLI, check that it works by executing the following command in your terminal. Also see [FastScore Command Line Interface](https://opendatagroup.github.io/Reference/FastScore%20CLI/) for more information on subcommands.

``` bash
$ fastscore help
@@ -427,7 +427,7 @@ def end():
pass
```

-This model returns the sum of two numbers. Note that we are able to import Python's standard modules, such as the `pickle` module. Non-default packages can also be added using [Import Policies, as described here](https://opendatagroup.github.io/Product%20Documentation/Import%20Policies.html). Custom classes and packages can be loaded using attachments, as described in the [Gradient Boosting Regressor tutorial](https://opendatagroup.github.io/Examples%20and%20Tutorials/Gradient%20Boosting%20Regressor.html).
+This model returns the sum of two numbers. Note that we are able to import Python's standard modules, such as the `pickle` module. Non-default packages can also be added using [Import Policies, as described here](https://opendatagroup.github.io/Product%20Manuals/Import%20Policies/). Custom classes and packages can be loaded using attachments, as described in the [Gradient Boosting Regressor tutorial](https://opendatagroup.github.io/Knowledge%20Center/Tutorials/Gradient%20Boosting%20Regressor/).
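A complete model of the shape just described might look like the following sketch. The schema names in the smart comments and the `x`/`y` field names are assumptions for illustration, not taken from this changeset:

``` python
# fastscore.input: two-numbers   # schema name assumed for illustration
# fastscore.output: sum          # schema name assumed for illustration

def action(datum):
    # action is a generator: each score is yielded, not returned
    yield datum["x"] + datum["y"]

def end():
    # optional cleanup hook, called when the job finishes
    pass
```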

#### R Models
R models feature much of the same functionality as Python models, as well as the same constraint: the user must define an action function to perform the actual scoring. For example, the analogous model to the Python model above is
@@ -505,7 +505,7 @@ Stream Descriptors are small JSON files containing information about the stream.
}
```

-Stream descriptors are documented in more detail [on the stream descriptor page](stream-descriptors). The easiest type of stream to use is a file stream, which reads or writes records directly from/to a file inside of the FastScore engine container. Here is an example of such a stream:
+Stream descriptors are documented in more detail [on the stream descriptor page](https://opendatagroup.github.io/Product%20Manuals/Stream%20Descriptors/). The easiest type of stream to use is a file stream, which reads or writes records directly from/to a file inside of the FastScore engine container. Here is an example of such a stream:

``` json
{
@@ -603,4 +603,4 @@ To run a model using the FastScore CLI, use the `fastscore job` sequence of commands:
* `fastscore job status` and `fastscore job statistics` display various information about the currently running job.
Some of the statistics displayed by the `fastscore job statistics` command, such as memory usage, are also shown on the Dashboard.

-This concludes the FastScore Getting Started guide. Additional FastScore API documentation is available at [https://opendatagroup.github.io/API/](https://opendatagroup.github.io/API/). Happy scoring!
+This concludes the FastScore Getting Started guide. Additional FastScore API documentation is available at [https://opendatagroup.github.io/Reference/FastScore%20API/](https://opendatagroup.github.io/Reference/FastScore%20API/). Happy scoring!
Original file line number Diff line number Diff line change
@@ -361,7 +361,7 @@ to skip ahead to the next section---you can
It's easy to deploy FastScore Composer using Docker Compose or Docker Swarm. For
this example, we'll use Swarm. Docker Swarm uses the same YAML definition files
as Docker Compose, so you can re-use the example Docker Compose file from the
-[Getting Started Guide](../../Getting Started/Getting Started with FastScore/) or
+[Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/) or
download [all the files needed for this step here](https://s3-us-west-1.amazonaws.com/fastscore-examples/tf_composer_files.tar.gz).

Composer consists of three microservices: Designer (a web GUI), Composer (the
Original file line number Diff line number Diff line change
@@ -202,7 +202,7 @@ Loading our GBR model to FastScore can be broken into two steps: preparing the model

In the previous section, we created a small Python script to score our incoming auto records using the trained gradient boosting regressor and our custom feature transformer. In this example, the training of the model has already been done, so we'll only need to adapt the trained model to produce scores.

-As discussed in the [Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html), Python models in FastScore must deliver scores using an `action` method. Note that the `action` method operates as a generator, so scores are obtained from `yield` statements, rather than `return` statements. Additionally, because we don't want to re-load our trained model with every score, we'll define a `begin` method to do all of the model initialization. If a model defines a `begin` method, this method will be called at the start of the job.
+As discussed in the [Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/), Python models in FastScore must deliver scores using an `action` method. Note that the `action` method operates as a generator, so scores are obtained from `yield` statements, rather than `return` statements. Additionally, because we don't want to re-load our trained model with every score, we'll define a `begin` method to do all of the model initialization. If a model defines a `begin` method, this method will be called at the start of the job.

After these alterations, our model looks like this:

@@ -315,7 +315,7 @@ The input stream descriptor includes the more complicated schema, encapsulating

### Starting and Configuring FastScore

-This step may differ if you're using a custom FastScore deployment. If you're just using the [standard deployment from the Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html#section-start-fastscore-microservices-suite-with-docker-compose-recommended-), starting up FastScore is as easy as executing the following command:
+This step may differ if you're using a custom FastScore deployment. If you're just using the [standard deployment from the Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/), starting up FastScore is as easy as executing the following command:

``` bash
docker-compose up -d
@@ -415,7 +415,7 @@ fastscore model add GBM score_auto_gbm.py
fastscore attachment upload GBM gbm.tar.gz
```

-Steps for setting configuration through the Dashboard are covered in the [Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html#section-using-the-fastscore-dashboard).
+Steps for setting configuration through the Dashboard are covered in the [Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/#section-using-the-fastscore-dashboard).

After adding the model, attachment, and streams to FastScore, you can inspect them from the FastScore Dashboard:

2 changes: 1 addition & 1 deletion Product Manuals/Import Policies/index.md
@@ -44,4 +44,4 @@ A model runner's import policy manifest is loaded from the `import.policy` file

In FastScore 1.4, the import policy for a model runner is fixed as soon as a model is loaded into the engine, so any changes to import policies must be made _before_ running a model. To copy a new manifest into the container, use the [`docker cp`](https://docs.docker.com/engine/reference/commandline/cp/) command or an equivalent.

-Adding import policies to an engine through the command `fastscore policy set my-policy.yml` is now available with v1.6. See [FastScore Command Line Interface](https://opendatagroup.github.io/Product%20Documentation/FastScore%20Command%20Line%20Interface.html) for more information on subcommands.
+Adding import policies to an engine through the command `fastscore policy set my-policy.yml` is now available with v1.6. See [FastScore Command Line Interface](https://opendatagroup.github.io/Reference/FastScore%20CLI/) for more information on subcommands.
2 changes: 1 addition & 1 deletion Product Manuals/LDAP Authentication/index.md
@@ -11,7 +11,7 @@ Starting with FastScore v1.4, the FastScore Dashboard and Proxy support Microsoft

This section assumes you already possess an existing Vault service. If you haven't configured Vault yet, [read the Vault configuration section below](#configuring-vault-in-docker).

-Authentication in FastScore is achieved through the Dashboard service. Recall from the [Getting Started Guide](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html) that Dashboard is designed to serve as a proxy for the FastScore fleet's REST API, as well as a visual configuration and diagnostic aid. By default, authentication is not enabled in Dashboard. To enable it, set the following environment variables:
+Authentication in FastScore is achieved through the Dashboard service. Recall from the [Getting Started Guide](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/) that Dashboard is designed to serve as a proxy for the FastScore fleet's REST API, as well as a visual configuration and diagnostic aid. By default, authentication is not enabled in Dashboard. To enable it, set the following environment variables:

| Name | Default Value | Description |
| --- | --- | --- |
4 changes: 2 additions & 2 deletions Product Manuals/Multiple Input and Output Streams/index.md
@@ -10,7 +10,7 @@ The Engine has multiple slots for attaching streams: even slot
numbers, starting at 0, are for inputs, and odd slot numbers, starting at 1, are
for outputs.

-![Stream slots diagram](multi1.png)
+![Stream slots diagram](multi2.png)

This is particularly useful when a model's output provides data for
multiple purposes, or when the model requires data from multiple sources to run.
@@ -70,6 +70,6 @@ expects. The following model uses three input (0, 2, and 4) and two output slots
# fastscore.schema.1: schema-2
# fastscore.slot.3: in-use
```
-See [Model Annotations](../Model Annotations.md) for more information about the
+See [Model Annotations](https://opendatagroup.github.io/Product%20Manuals/Model%20Annotations/) for more information about the
new-style model annotations.
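As a minimal sketch of the new-style annotations shown in the snippet above (the schema names here are assumptions, and the scoring logic is a placeholder), a model might declare its slots like this:

``` python
# fastscore.schema.0: schema-a    # input slot 0 (assumed schema name)
# fastscore.schema.2: schema-b    # input slot 2 (assumed schema name)
# fastscore.schema.1: schema-out  # output slot 1 (assumed schema name)
# fastscore.slot.3: in-use        # declare output slot 3 as active

def action(datum):
    # placeholder: a real multi-slot model would route records
    # between the slots declared above
    yield datum
```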

6 changes: 3 additions & 3 deletions Product Manuals/Schema Reference/index.md
@@ -8,7 +8,7 @@ excerpt: ""

FastScore enforces strict typing of engine inputs and outputs at two levels: stream input/output, and model input/output. Types are declared using [AVRO schema](https://avro.apache.org/docs/1.8.1/).

-To support this functionality, FastScore's Model Manage maintains a database of named AVRO schemas. Python and R models must then reference their input and output schemas using smart comments. (PrettyPFA and PFA models instead explicitly include their AVRO types as part of the model format.) [Stream descriptors](https://opendatagroup.github.io/Product%20Documentation/Stream%20Descriptors.html) may either reference a named schema from Model Manage, or they may explicitly declare schemas.
+To support this functionality, FastScore's Model Manage maintains a database of named AVRO schemas. Python and R models must then reference their input and output schemas using smart comments. (PrettyPFA and PFA models instead explicitly include their AVRO types as part of the model format.) [Stream descriptors](https://opendatagroup.github.io/Product%20Manuals/Stream%20Descriptors/) may either reference a named schema from Model Manage, or they may explicitly declare schemas.

In either case, FastScore performs the following type checks:

@@ -66,13 +66,13 @@ and score this record to produce
{"name":"Bob", "product":"6.0"}
```
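A hedged sketch of a model consistent with this example — the `array` field name and the multiplication are assumptions, and note that the sample output above renders the product as a string:

``` python
# fastscore.input: named-array    # schema names match the commands below
# fastscore.output: named-double

def action(record):
    # multiply the numbers in the record's array together
    # (the "array" field name is an assumption for illustration)
    product = 1.0
    for x in record["array"]:
        product *= x
    yield {"name": record["name"], "product": product}
```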

-[Once FastScore is running](https://opendatagroup.github.io/Guides/Getting%20Started%20with%20FastScore%20v1-6-1.html), we can add the model and associated schemas to model manage with the following commands:
+[Once FastScore is running](https://opendatagroup.github.io/Getting%20Started/Getting%20Started%20with%20FastScore/), we can add the model and associated schemas to model manage with the following commands:
```
fastscore schema add named-array named-array.avsc
fastscore schema add named-double named-double.avsc
fastscore model add my_model model.py
```
-Assuming additionally that we have [configured the input and output stream descriptors](https://opendatagroup.github.io/Product%20Documentation/Stream%20Descriptors.html) to use our schemas, we can then run the job with
+Assuming additionally that we have [configured the input and output stream descriptors](https://opendatagroup.github.io/Product%20Manuals/Stream%20Descriptors/) to use our schemas, we can then run the job with
```
fastscore job run my_model <input stream name> <output stream name>
```
2 changes: 1 addition & 1 deletion Product Manuals/State Sharing and Snapshotting/index.md
@@ -93,7 +93,7 @@ fastscore snapshot list <model name>
fastscore snapshot restore <model name> <snapshot id>
```

-The `snapshot list` command shows the saved snapshots for a given model. The `snapshot restore` command restores the specified snapshot for a particular model. Snapshots are automatically created upon receipt of an end-of-stream message, but these end-of-stream messages can be introduced as control records into the data stream for streaming transports (e.g. Kafka). For more information on control records, see [Record Sets and Control Records](https://opendatagroup.github.io/Product%20Documentation/Record%20Sets%20and%20Control%20Records.html).
+The `snapshot list` command shows the saved snapshots for a given model. The `snapshot restore` command restores the specified snapshot for a particular model. Snapshots are automatically created upon receipt of an end-of-stream message, but these end-of-stream messages can be introduced as control records into the data stream for streaming transports (e.g. Kafka). For more information on control records, see [Record Sets and Control Records](https://opendatagroup.github.io/Product%20Manuals/Record%20Sets%20and%20Control%20Records/).

To enable snapshots, use the `fastscore.snapshots` smart comment:
```