update documentation
snowmead committed Nov 3, 2023
1 parent 99fc1bc commit af7f446
Showing 5 changed files with 154 additions and 13 deletions.
73 changes: 73 additions & 0 deletions Cargo.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions Cargo.toml
@@ -35,3 +35,4 @@ thiserror = "1.0.49"
async-trait = "0.1.73"
num-traits = "0.2.17"
bounded-integer = { version = "0.5.7", features = ["types", "num-traits02"] }
aquamarine = "0.3.2"
48 changes: 48 additions & 0 deletions src/architecture.rs
@@ -0,0 +1,48 @@
#[cfg_attr(doc, aquamarine::aquamarine)]
/// The following diagram shows a high-level, simplified overview of the
/// library's architecture and how an application might use it.
///
/// Only the traits [`Loom`](crate::Loom) and [`Config`](crate::Config) are expanded to show some of
/// their main associated types.
///
/// ```mermaid
/// graph TB
/// subgraph Chat Application
/// App
/// chat_gpt[Chat GPT]
/// bard[Bard]
/// end
/// App-. impl .- Loom
/// App-. impl .- Config
/// chat_gpt[Chat GPT]-. impl .- llm
/// bard[Bard]-. impl .- llm
/// subgraph LLM Weaver
/// llm>Llm]
/// subgraph Config
/// prompt_model[PromptModel]-- prompt --> chat_gpt
/// summary_model[SummaryModel]-- prompt --> bard
/// tapestry_chest_type[Chest]
/// end
/// subgraph Loom
/// weave-- save prompt and response --> tapestry_chest_type
/// weave-- generate summary --> summary_model
/// weave-- generate response --> prompt_model
/// end
/// tapestry_chest_handler>TapestryChestHandler]
/// tapestry_chest[TapestryChest]-. default impl .- tapestry_chest_handler
/// tapestry_chest_type --> tapestry_chest
/// tapestry_chest --> redis
/// redis[Redis]
/// end
/// ```
///
/// The application must implement the [`Loom`](crate::Loom) and [`Config`](crate::Config) traits in
/// order to use the library. Among other things, this means providing the types that
/// implement the [`Llm`](crate::Llm) trait, which defines the LLMs used for prompting and
/// for generating summaries.
///
/// The [`Config`](crate::Config) trait also lets the application supply an implementation of
/// [`Chest`](crate::Config::Chest), which is responsible for storing and retrieving the
/// [`TapestryFragment`](crate::TapestryFragment)s. Supplying one is optional, since llm_weaver
/// provides a default implementation that uses Redis as the storage backend.
pub struct Diagram;
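The trait wiring the diagram describes can be sketched with self-contained stand-ins. All names below are hypothetical, not llm_weaver's actual API: the application implements a `Config`-like trait, picks a chest type through an associated type, and a generic `weave` function drives storage entirely through that choice.

```rust
// Hypothetical stand-ins for the wiring shown in the diagram; these are
// NOT llm_weaver's real signatures, only the shape of the pattern.

/// Storage abstraction, analogous to `TapestryChestHandler`.
trait ChestHandler {
    fn save(&self, fragment: &str) -> Result<(), String>;
}

/// Stand-in for the default Redis-backed `TapestryChest`.
#[derive(Default)]
struct DefaultChest;

impl ChestHandler for DefaultChest {
    fn save(&self, fragment: &str) -> Result<(), String> {
        println!("persisting: {fragment}");
        Ok(())
    }
}

/// Analogous to `Config`: the application picks its chest type here.
trait AppConfig {
    type Chest: ChestHandler + Default;
}

struct MyApp;
impl AppConfig for MyApp {
    type Chest = DefaultChest; // a custom handler could be swapped in
}

/// Analogous to `Loom::weave`, generic over any application config.
fn weave<C: AppConfig>(msg: &str) -> Result<(), String> {
    C::Chest::default().save(msg)
}

fn main() {
    weave::<MyApp>("prompt and response").unwrap();
}
```

Because `weave` only knows `C::Chest` through the trait bound, swapping the storage backend is a one-line change in the application's config impl.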
43 changes: 31 additions & 12 deletions src/lib.rs
@@ -9,11 +9,28 @@
//! history as [`TapestryFragment`] instances. This trait is highly configurable through the
//! [`Config`] trait to support a wide range of use cases.
//!
//! # Nomenclature
//!
//! - **Tapestry**: A collection of [`TapestryFragment`] instances.
//! - **TapestryFragment**: A single part of a conversation containing a list of messages along with
//! other metadata.
//! - **ContextMessage**: Represents a single message in a [`TapestryFragment`] instance.
//! - **Loom**: The engine that drives the core methods used by any service that needs to
//!   prompt an LLM and receive a response.
//! - **LLM**: Large Language Model.
//!
//! # Architecture
//!
//! Please refer to the [`architecture::Diagram`] for a visual representation of the core
//! components of this library.
//!
//! # Usage
//!
//! You must implement the [`Config`] trait, which defines the necessary types and methods needed by
//! [`Loom`].
//!
- //! If you are using the default implementation of [`Config::TapestryChest`], it is expected that a
- //! Redis instance is running and that the following environment variables are set:
+ //! This library uses Redis as the default storage backend for storing [`TapestryFragment`]. It is
+ //! expected that a Redis instance is running and that the following environment variables are set:
//!
//! - `REDIS_PROTOCOL`
//! - `REDIS_HOST`
@@ -22,11 +39,12 @@
//!
//! Should there be a need to integrate a distinct storage backend, you have the flexibility to
//! create a custom handler by implementing the [`TapestryChestHandler`] trait and injecting it
- //! into the [`Config::TapestryChest`] associated type.
+ //! into the [`Config::Chest`] associated type.
#![feature(async_closure)]
#![feature(associated_type_defaults)]
#![feature(more_qualified_paths)]
#![feature(const_option)]

use std::{
collections::VecDeque,
fmt::{Debug, Display},
@@ -45,6 +63,7 @@ use serde::{Deserialize, Serialize};
use storage::TapestryChest;
use tracing::{debug, error, instrument};

pub mod architecture;
pub mod storage;
pub mod types;

@@ -159,7 +178,7 @@ pub trait Llm<T: Config>:
let tokens = max_tokens.saturating_mul(&token_threshold);
tokens.checked_div(&Self::Tokens::from_u8(100).unwrap()).unwrap()
}
- /// [`ContextMessage`]s to [`Llm::PromptRequest`] conversion.
+ /// [`ContextMessage`]s to [`Llm::Request`] conversion.
fn ctx_msgs_to_prompt_requests(&self, msgs: &[ContextMessage<T>]) -> Vec<Self::Request> {
msgs.iter().map(|m| m.clone().into()).collect()
}
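The threshold arithmetic earlier in this hunk computes a percentage of the model's context window: `max_tokens * threshold / 100`, saturating on the multiply so a large threshold cannot overflow before the division. A self-contained sketch using plain `u32` in place of the generic `Tokens` type:

```rust
// Sketch of the budget computation: usable tokens are
// `max_tokens * threshold_percent / 100`, with saturating multiplication
// so an oversized threshold cannot overflow before the division.
fn token_budget(max_tokens: u32, threshold_percent: u32) -> u32 {
    max_tokens.saturating_mul(threshold_percent) / 100
}

fn main() {
    // e.g. 85% of a 4096-token context window
    println!("{}", token_budget(4096, 85)); // 3481
}
```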
@@ -192,7 +211,7 @@ pub trait Config: Debug + Sized + Clone + Default + Send + Sync + 'static {
///
/// Defaults to [`TapestryChest`]. Using this default requires you to supply the `hostname`,
/// `port` and `credentials` to connect to your instance.
- type TapestryChest: TapestryChestHandler<Self> = TapestryChest;
+ type Chest: TapestryChestHandler<Self> = TapestryChest;

/// Convert [`Config::PromptModel`] to [`Config::SummaryModel`] tokens.
fn convert_prompt_tokens_to_summary_model_tokens(
@@ -268,11 +287,11 @@ pub trait Loom<T: Config> {
///
/// # Parameters
///
- /// - `tapestry_id`: The [`TapestryId`] to prompt and save context messages to.
- /// - `system`: The system message to prompt LLM with.
- ///   the current [`Config::PromptModel`].
- /// - `msgs`: The list of [`ContextMessage`]s to prompt LLM with.
- /// - `prompt_params`: The [`Config::PromptParameters`] to use when prompting LLM.
+ /// - `prompt_config`: The [`Config::PromptModel`] to use for prompting LLM.
+ /// - `summary_model_config`: The [`Config::SummaryModel`] to use for generating summaries.
+ /// - `tapestry_id`: The [`TapestryId`] to use for storing the [`TapestryFragment`] instance.
+ /// - `system`: The system message to be used for the current [`TapestryFragment`] instance.
+ /// - `msgs`: The messages to prompt the LLM with.
#[instrument]
async fn weave<TID: TapestryId>(
prompt_config: LlmConfig<T, T::PromptModel>,
@@ -286,7 +305,7 @@
let sys_req_msg: PromptModelRequest<T> = system_ctx_msg.clone().into();

// get latest tapestry fragment instance from storage
- let tapestry_fragment = T::TapestryChest::get_tapestry_fragment(tapestry_id.clone(), None)
+ let tapestry_fragment = T::Chest::get_tapestry_fragment(tapestry_id.clone(), None)
.await?
.unwrap_or_default();

@@ -386,7 +405,7 @@

// save tapestry fragment to storage
// when summarized, the tapestry_fragment will be saved under a new instance
- T::TapestryChest::save_tapestry_fragment(
+ T::Chest::save_tapestry_fragment(
tapestry_id,
tapestry_fragment_to_persist,
is_summary_generated,
2 changes: 1 addition & 1 deletion src/storage.rs
@@ -84,7 +84,7 @@ pub trait TapestryChestHandler<T: Config> {
async fn delete_tapestry<TID: TapestryId>(tapestry_id: TID) -> crate::Result<()>;
}

- /// Default implementation of [`Config::TapestryChest`]
+ /// Default implementation of [`Config::Chest`]
///
/// Storing and retrieving data using a Redis instance.
pub struct TapestryChest;
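A custom backend follows the same shape as this default: implement the handler trait over your store of choice. Below is a self-contained sketch with an in-memory map standing in for Redis; the trait here is a simplified, hypothetical stand-in, since the real `TapestryChestHandler` is async and keyed by `TapestryId` rather than `&str`.

```rust
use std::collections::HashMap;

// Simplified stand-in for the storage handler trait; NOT the real
// `TapestryChestHandler` signature (which is async and generic over T: Config).
trait Chest {
    fn save(&mut self, id: &str, fragment: String);
    fn get(&self, id: &str) -> Option<&String>;
}

// In-memory backend in place of the default Redis-backed chest.
#[derive(Default)]
struct InMemoryChest {
    fragments: HashMap<String, String>,
}

impl Chest for InMemoryChest {
    fn save(&mut self, id: &str, fragment: String) {
        self.fragments.insert(id.to_string(), fragment);
    }
    fn get(&self, id: &str) -> Option<&String> {
        self.fragments.get(id)
    }
}

fn main() {
    let mut chest = InMemoryChest::default();
    chest.save("conversation-1", "hello".to_string());
    assert_eq!(
        chest.get("conversation-1").map(String::as_str),
        Some("hello")
    );
}
```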
