1 change: 1 addition & 0 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -0,0 +1 @@
node_modules
Empty file added What is Future AGI?.mdx
26 changes: 26 additions & 0 deletions api-reference/examples-notebook.mdx
@@ -0,0 +1,26 @@
---
title: "Interactive Examples Notebook"
description: "Explore a diverse collection of examples showcasing the versatility of the Future AGI platform for app creation and process automation."
---

<CardGroup cols={2}>
<Card title="Engaging Chat Interactions" icon="comments" href="https://github.com/future-agi/client/blob/main/fi/examples/chat.ipynb">
Master the art of creating dynamic chat-based applications
</Card>

<Card title="Multimodal Data Handling" icon="image" href="https://github.com/future-agi/client/blob/main/fi/examples/image.ipynb">
Learn to seamlessly integrate image and text data in your projects
</Card>

<Card title="Advanced RAG Techniques" icon="brain-circuit" href="https://github.com/future-agi/client/blob/main/fi/examples/rag_file.py">
Dive into Retrieval Augmented Generation for enhanced content creation
</Card>

<Card title="Efficient Text Summarization" icon="compress" href="https://github.com/future-agi/client/blob/main/fi/examples/summerizaton.ipynb">
Discover techniques for concise and accurate text summarization
</Card>

<Card title="Customizable Prompt Engineering" icon="wand-magic-sparkles" href="https://github.com/future-agi/client/blob/main/fi/examples/prompt_template.py">
Unlock the power of flexible prompt templates for diverse applications
</Card>
</CardGroup>
33 changes: 33 additions & 0 deletions api-reference/introduction.mdx
@@ -0,0 +1,33 @@
---
title: 'Introduction'
description: 'Example section for showcasing API endpoints'
---

<Note>
If you're not looking to build API reference documentation, you can delete
this section by removing the api-reference folder.
</Note>

## Welcome

There are two ways to build API documentation: [OpenAPI](https://mintlify.com/docs/api-playground/openapi/setup) and [MDX components](https://mintlify.com/docs/api-playground/mdx/configuration). For the starter kit, we are using the following OpenAPI specification.

<Card
title="Plant Store Endpoints"
icon="leaf"
href="https://github.com/mintlify/starter/blob/main/api-reference/openapi.json"
>
View the OpenAPI specification file
</Card>

## Authentication

All API endpoints are authenticated using Bearer tokens, as defined in the OpenAPI specification file:

```json
"security": [
{
"bearerAuth": []
}
]
```
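For example, an authenticated request can be constructed as follows. This is a minimal sketch using only the standard library; the token value is a placeholder, not a real credential.

```python
# Minimal sketch of an authenticated request to the sample Plant Store API.
# TOKEN is a placeholder, not a real credential.
from urllib.request import Request

API_BASE = "http://sandbox.mintlify.com"
TOKEN = "YOUR_BEARER_TOKEN"

# GET /plants with the optional `limit` query parameter from the spec,
# passing the Bearer token in the Authorization header
req = Request(
    f"{API_BASE}/plants?limit=10",
    headers={"Authorization": f"Bearer {TOKEN}"},
)

print(req.full_url)
print(req.get_header("Authorization"))
```

Every request against the spec follows this shape: the token goes in the `Authorization: Bearer <token>` header, and query parameters such as `limit` are appended to the path.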
195 changes: 195 additions & 0 deletions api-reference/openapi.json
@@ -0,0 +1,195 @@
{
"openapi": "3.0.1",
"info": {
"title": "OpenAPI Plant Store",
"description": "A sample API that uses a plant store as an example to demonstrate features in the OpenAPI specification",
"license": {
"name": "MIT"
},
"version": "1.0.0"
},
"servers": [
{
"url": "http://sandbox.mintlify.com"
}
],
"security": [
{
"bearerAuth": []
}
],
"paths": {
"/plants": {
"get": {
"description": "Returns all plants from the system that the user has access to",
"parameters": [
{
"name": "limit",
"in": "query",
"description": "The maximum number of results to return",
"schema": {
"type": "integer",
"format": "int32"
}
}
],
"responses": {
"200": {
"description": "Plant response",
"content": {
"application/json": {
"schema": {
"type": "array",
"items": {
"$ref": "#/components/schemas/Plant"
}
}
}
}
},
"400": {
"description": "Unexpected error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Error"
}
}
}
}
}
},
"post": {
"description": "Creates a new plant in the store",
"requestBody": {
"description": "Plant to add to the store",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/NewPlant"
}
}
},
"required": true
},
"responses": {
"200": {
"description": "Plant response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Plant"
}
}
}
},
"400": {
"description": "Unexpected error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Error"
}
}
}
}
}
}
},
"/plants/{id}": {
"delete": {
"description": "Deletes a single plant based on the ID supplied",
"parameters": [
{
"name": "id",
"in": "path",
"description": "ID of plant to delete",
"required": true,
"schema": {
"type": "integer",
"format": "int64"
}
}
],
"responses": {
"204": {
"description": "Plant deleted",
"content": {}
},
"400": {
"description": "Unexpected error",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Error"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"Plant": {
"required": [
"name"
],
"type": "object",
"properties": {
"name": {
"description": "The name of the plant",
"type": "string"
},
"tag": {
"description": "Tag to specify the type",
"type": "string"
}
}
},
"NewPlant": {
"allOf": [
{
"$ref": "#/components/schemas/Plant"
},
{
"required": [
"id"
],
"type": "object",
"properties": {
"id": {
"description": "Identification number of the plant",
"type": "integer",
"format": "int64"
}
}
}
]
},
"Error": {
"required": [
"error",
"message"
],
"type": "object",
"properties": {
"error": {
"type": "integer",
"format": "int32"
},
"message": {
"type": "string"
}
}
}
},
"securitySchemes": {
"bearerAuth": {
"type": "http",
"scheme": "bearer"
}
}
}
}
86 changes: 86 additions & 0 deletions api-reference/python-sdk-client.mdx
@@ -0,0 +1,86 @@
---
title: "Python SDK Client"
description: "Comprehensive Manager Class for System Components and API Interaction"
---

## Installation

```bash
pip install futureagi
```

### 1. Required Parameters:

• **model\_id**: A unique identifier for your model. This helps in tracking and distinguishing between different models in your system.

• **model\_type**: The type of model you are using. For instance, ModelTypes.GENERATIVE\_LLM specifies that the model is a generative large language model.

• **environment**: The environment in which the model is running, such as Environments.PRODUCTION for production environments. This helps in differentiating between development, testing, and production stages.

• **conversation**: A dictionary containing the conversation data. For example, chat\_history can include user interactions. This is essential for models that deal with conversational AI to log the context of interactions.

### 2. Optional Parameters:

• **model\_version**: The version of the model being used. This is useful for tracking changes and improvements in different versions of your model.

• **prediction\_timestamp**: A timestamp indicating when the prediction was made. This helps in logging the exact time of the event, which is crucial for time-series analysis and debugging.

• **tags**: A dictionary of tags for additional metadata. This can include project names, specific identifiers, or any other relevant information to categorize and filter events.

### 3. Supported Model Types:

• **Generative LLM**: For text data.

• **Generative Image**: For text and image data.

### 4. Supported Environments:

• **Training**: For models in the training phase.

• **Validation**: For models in the validation phase.

• **Production**: For models deployed in a production environment.

• **Corpus**: For models dealing with a large collection of data or corpus.

### Example Code:

Below is an example of how to log an event with the Fi AI Client, including the usage of each parameter:

```python
from fi.utils.types import ModelTypes, Environments

# `client` here is an initialized Fi AI Client instance
# (created after installing the futureagi package above)

client.log(
model_id="your_model_id", # Unique identifier for the model
model_type=ModelTypes.GENERATIVE_LLM, # Type of model (e.g., Generative LLM)
environment=Environments.PRODUCTION, # Environment (e.g., Production)
model_version="1.0.0", # Version of the model (optional)
prediction_timestamp=1625216400, # Timestamp of the prediction (optional)
conversation={
"chat_history": [
{"role": "user", "content": "How do I implement a neural network in Python?"},
{"role": "assistant", "content": "To implement a neural network in Python, you can use libraries like TensorFlow or PyTorch. Here’s a simple example using PyTorch..."}
]
    }, # Conversation data (required)
tags={"project": "AI project"} # Additional metadata tags (optional)
)
```

## Explanation of Each Key:

• **model\_id**: This key is crucial for identifying the specific model making the predictions. It allows for detailed tracking and management of different models within your system.

• **model\_type**: Specifies the type of model being used, which helps in understanding the nature and purpose of the logged events. ModelTypes.GENERATIVE\_LLM indicates that the model generates language-based outputs.

• **environment**: Indicates the deployment stage of the model. By specifying Environments.PRODUCTION, you ensure that the logs are appropriately categorized, helping in monitoring and managing production models.

• **model\_version**: Helps in version control by logging which version of the model was used for a particular event. This is useful for tracking performance and debugging issues related to specific versions.

• **prediction\_timestamp**: Provides the exact time when the prediction was made. This is essential for time-based analysis and understanding the sequence of events.

• **conversation**: Contains the details of the conversation, including user inputs. This is particularly important for conversational models to retain context and improve the accuracy of responses.

• **tags**: Allows for additional customization and categorization of events. By using tags, you can add project names or other identifiers to make filtering and searching through logs more efficient.

By understanding and utilizing these parameters effectively, you can ensure comprehensive and organized logging of events, facilitating better monitoring, debugging, and analysis of your AI models.
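The optional parameters above can be built with plain Python before the `client.log` call. A minimal sketch follows; the `"experiment"` tag key is purely illustrative and not required by the SDK.

```python
import time

# Unix epoch timestamp in seconds, matching the integer format used in the
# prediction_timestamp example above
prediction_timestamp = int(time.time())

# Free-form metadata tags for filtering logged events; the "experiment" key
# is a hypothetical example, not an SDK requirement
tags = {"project": "AI project", "experiment": "baseline-v1"}
```

These values can then be passed directly as the `prediction_timestamp` and `tags` arguments shown in the example above.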

Binary file added back2.jpg
15 changes: 15 additions & 0 deletions evaluations/overview.mdx
@@ -0,0 +1,15 @@
---
title: Overview
description: Evaluations measure the quality of your models' outputs.
---

<CardGroup>
<Card title="Running Evaluations using the Python SDK" icon="code" href="/evaluations/quickstart">
Run preset evaluations using the Python SDK
</Card>
<Card title="Running Evaluations on Future AGI Platform" icon="code" href="/evaluations/datasets">
Run preset evaluations on the Future AGI Platform
</Card>
</CardGroup>

