
# Kern AI API for Python

This is the official Python SDK for [*refinery*](https://github.com/code-kern-ai/refinery), your **open-source** data-centric IDE for NLP.

## Installation

You can set up this SDK either by running `$ pip install kern-sdk`, or by cloning this repository and running `$ pip install -r requirements.txt`.

## Usage

### Creating a `Client` object
Once you have installed the package, you can create a `Client` object from any Python terminal as follows:

```python
from kern import Client

user_name = "your-username"
password = "your-password"
project_id = "your-project-id" # can be found in the URL of the web application

client = Client(user_name, password, project_id)
# if you run the application locally, please use the following instead:
# client = Client(username, password, project_id, uri="http://localhost:4455")
```

The `project_id` can be found in your browser's address bar, e.g. if you run the app on localhost: `http://localhost:4455/app/projects/{project_id}/overview`

Alternatively, you can provide a `secrets.json` file in your directory where you want to run the SDK, looking as follows:
```json
{
    "user_name": "your-username",
    "password": "your-password",
    "project_id": "your-project-id"
}
```

Again, if you run the app locally, you should also provide `"uri": "http://localhost:4455"` in the file. Afterwards, you can create the client like this:

```python
client = Client.from_secrets_file("secrets.json")
```


### Fetching labeled data

Now, you can easily fetch the data from your project:
```python
df = client.get_record_export(tokenize=False)
# if you set tokenize=True (default), the project-specific
# spaCy tokenizer will process your textual data
```

Alternatively, you can simply run `kern pull` in your CLI, provided that you have placed the `secrets.json` file in the same directory.
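
If you prefer to stay in Python, a rough equivalent might look like the sketch below. Note that the output file name `export.csv` is an assumption for illustration, not necessarily what `kern pull` actually writes:

```python
from kern import Client

# a sketch of roughly what `kern pull` does; the file name
# `export.csv` is an assumed choice, not the CLI's documented default
client = Client.from_secrets_file("secrets.json")
df = client.get_record_export()
df.to_csv("export.csv", index=False)
```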

The `df` contains both your originally uploaded data (e.g. `headline` and `running_id` if you uploaded records like `{"headline": "some text", "running_id": 1234}`) and a triplet for each labeling task you create. This triplet consists of the manual labels, the weakly supervised labels, and their confidence. For extraction tasks, these labels are stored on token level.

An example export file looks like this:
```json
[
    {
        "running_id": "0",
        "Headline": "T. Rowe Price (TROW) Dips More Than Broader Markets",
        "Date": "Jun-30-22 06:00PM\u00a0\u00a0",
        "Headline__Sentiment Label__MANUAL": null,
        "Headline__Sentiment Label__WEAK_SUPERVISION": "Negative",
        "Headline__Sentiment Label__WEAK_SUPERVISION__confidence": "0.6220"
    }
]
```

In this example, there is no manual label, but a weakly supervised label `"Negative"` has been set with 62.2% confidence.
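
Since the export is a plain pandas DataFrame, you can post-process it with regular pandas operations. Below is a minimal sketch that keeps only records with a confident weakly supervised label; the column names are taken from the example export above, and the `0.8` threshold is an arbitrary choice:

```python
label_col = "Headline__Sentiment Label__WEAK_SUPERVISION"
conf_col = "Headline__Sentiment Label__WEAK_SUPERVISION__confidence"

# the example export stores confidences as strings, so cast them first
df[conf_col] = df[conf_col].astype(float)

# keep only records whose weak supervision confidence exceeds the threshold
confident_df = df[df[conf_col] >= 0.8]
print(confident_df[["Headline", label_col]])
```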

### Fetch lookup lists
- [ ] Todo

### Upload files
- [ ] Todo

### Adapters

#### Rasa
*refinery* is a great fit for building chatbots with [Rasa](https://github.com/RasaHQ/rasa). We've built an adapter that lets you create the required Rasa training data directly from *refinery*.

To do so, run the following:

```python
from kern.adapter import rasa

rasa.build_intent_yaml(
    client,
    "text",
    "__intent__WEAK_SUPERVISION"
)
```

This will create a `.yml` file looking as follows:

```yml
nlu:
- intent: check_balance
  examples: |
    - how much do I have on my savings account
    - how much money is in my checking account
    - What's the balance on my credit card account
```

If you want to provide a metadata-level label (such as sentiment), you can provide the optional argument `metadata_label_task`:

```python
from kern.adapter import rasa

rasa.build_intent_yaml(
    client,
    "text",
    "__intent__WEAK_SUPERVISION",
    metadata_label_task="__sentiment__WEAK_SUPERVISION"
)
```

This will create a file like this:
```yml
nlu:
- intent: check_balance
  metadata:
    sentiment: neutral
  examples: |
    - how much do I have on my savings account
    - how much money is in my checking account
    - What's the balance on my credit card account
```

And if you have entities in your texts which you'd like to recognize, simply add the `tokenized_label_task` argument:

```python
from kern.adapter import rasa

rasa.build_intent_yaml(
    client,
    "text",
    "__intent__WEAK_SUPERVISION",
    metadata_label_task="__sentiment__WEAK_SUPERVISION",
    tokenized_label_task="text__entities__WEAK_SUPERVISION"
)
```

This will not only inject the label names on token level, but also create lookup lists for your chatbot:

```yml
nlu:
- intent: check_balance
  metadata:
    sentiment: neutral
  examples: |
    - how much do I have on my [savings](account) account
    - how much money is in my [checking](account) account
    - What's the balance on my [credit card account](account)
- lookup: account
  examples: |
    - savings
    - checking
    - credit card account
```

Please make sure to also create the other required files (`domain.yml`, `data/stories.yml` and `data/rules.yml`) if you want to train your Rasa chatbot. For further reference, see their [documentation](https://rasa.com/docs/rasa).
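
If you want to sanity-check the generated training data before training, a small sketch using PyYAML can list the intents and lookup tables it contains. The file name `nlu.yml` is an assumption here; use whatever path your adapter call produced:

```python
import yaml  # PyYAML

# load the generated NLU file; `nlu.yml` is an assumed file name
with open("nlu.yml", "r", encoding="utf-8") as f:
    nlu_data = yaml.safe_load(f)

# print each intent and lookup table found in the file
for block in nlu_data["nlu"]:
    if "intent" in block:
        print("intent:", block["intent"])
    elif "lookup" in block:
        print("lookup:", block["lookup"])
```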

### What's missing?
Let us know which open-source or closed-source NLP framework you are using and would like to see supported with an adapter in this SDK. To do so, simply create an issue in this repository with the tag "enhancement".

With the `client`, you can easily integrate your data into any kind of system, be it a custom implementation, an AutoML system or a plain data analytics framework 🚀

## Roadmap
- [ ] Register heuristics via wrappers
If you want to have something added, feel free to open an issue.
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!

1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request