diff --git a/edge-cloud/README.md b/edge-cloud/README.md
new file mode 100644
index 000000000..927316ca2
--- /dev/null
+++ b/edge-cloud/README.md
@@ -0,0 +1,38 @@
+# Enable Gen AI from Cloud to the Edge with OCI Edge Cloud
+
+## Introduction
+
+
+Get hands-on learning with training labs about Oracle Edge Cloud solutions. The featured workshops cover various solutions, skill levels, and categories based on Oracle Cloud Infrastructure (OCI) and Oracle Edge Cloud solutions such as Compute Cloud@Customer and the Roving Edge Device.
+
+## What is Oracle Distributed Cloud?
+
+Oracle Cloud Infrastructure (OCI) is the fastest-growing cloud in the world, with more than 48 cloud regions across the globe enabling customers to rapidly onboard, deploy, and scale their workloads. While adoption of the public cloud is steadily increasing, organizations around the globe are also actively looking for distributed hybrid cloud solutions that offer flexibility and scalability while meeting strict data residency and latency requirements.
+
+Oracle offers the best core-to-edge cloud portfolio, with distributed cloud offerings spanning the public cloud, Cloud@Customer, and the edge, all with identical development, deployment, and consumption methodologies.
+
+
+## Oracle Edge Cloud – Consume OCI services anywhere
+
+Oracle's Edge Cloud delivers the same OCI IaaS – compute, storage, and networking capabilities – at both air-gapped and connected locations, in a customer's datacenter or at the edge. Oracle Edge Cloud offers customers the flexibility to scale from single-node deployments using the Roving Edge Device up to more than 6,600 cores with Oracle Compute Cloud@Customer. Oracle Edge Cloud also enables customers to develop applications once and deploy them in the public cloud, on-premises in their datacenter, or at the edge – all while benefiting from cloud economics, ease of management, and control over data residency and security.
+
+
+![Edge Portfolio](/edge-cloud/ai-edge-rover/images/edgeportfolio.jpg)
+
+### Edge Cloud Deployment Scenarios
+
+With Oracle Edge Cloud, customers can consume OCI IaaS services right up to the edge, with the option to be connected, disconnected, or intermittently connected to the Oracle Cloud region. Here is a sample mesh deployment scenario with the Oracle Roving Edge Device and Compute Cloud@Customer tethered to their home Oracle Cloud regions.
+
+
+![Edge Cloud Deployment](/edge-cloud/ai-edge-rover/images/edge_cloud_deploy.png)
+
+## Oracle Compute Cloud@Customer
+
+As part of Oracle's distributed cloud strategy, Oracle Compute Cloud@Customer delivers a fully managed OCI-in-a-rack in your datacenter.
+
+With Compute Cloud@Customer, you can run middleware and applications with high performance and high availability at a fraction of the cost. For deploying Oracle packaged applications such as PeopleSoft, E-Business Suite, or Siebel, Oracle Compute Cloud@Customer uniquely offers direct-connect capabilities to Oracle Exadata Cloud@Customer, delivering up to 2x better application performance and enhanced security between the application and database tiers.
+
+![Compute Cloud at Customer](/edge-cloud/ai-edge-rover/images/computecloudatcustomer.png)
+
+## Oracle Roving Edge Device
+
+Oracle Roving Edge Device 2nd Generation (RED.2), the latest addition to Oracle's core-to-edge cloud portfolio, delivers unparalleled processing power, seamless connectivity, and ironclad security at the edge of networks and in disconnected locations. With Oracle RED.2, run any workload, from enterprise apps to AI, plus more than 100 OCI services at the edge, with simplified deployment, cloud economics, and superior control over data residency and security.
+
+![Roving Edge Device](/edge-cloud/ai-edge-rover/images/red2.png)
\ No newline at end of file
diff --git a/edge-cloud/ai-edge-rover/accessing_ragbot/accessing_ragbot_querying_application.md b/edge-cloud/ai-edge-rover/accessing_ragbot/accessing_ragbot_querying_application.md
new file mode 100644
index 000000000..810c9db53
--- /dev/null
+++ b/edge-cloud/ai-edge-rover/accessing_ragbot/accessing_ragbot_querying_application.md
@@ -0,0 +1,16 @@
+# Accessing the RagBot and querying the application
+
+### Step 1: Access the RagBot from your local browser through the exposed port
+![Access RagBot](/edge-cloud/ai-edge-rover/accessing_ragbot/images/18_access_ragbot.png)
+
+### Step 2: Browse and upload a PDF file into the bot
+![Using RagBot](/edge-cloud/ai-edge-rover/accessing_ragbot/images/19_using_rag-bot.png)
+
+### Step 3: Click Process and Add to Database; this step embeds the PDF document and stores it in the vector database
+![Using RagBot](/edge-cloud/ai-edge-rover/accessing_ragbot/images/21_uploading_db.png)
+
+### Step 4: Query the application to get answers
+![Using RagBot](/edge-cloud/ai-edge-rover/accessing_ragbot/images/22_querying_ragbot.png)
+
+
+You have now successfully installed a RagBot on the Oracle Roving Edge Device and can run AI workloads at the edge in disconnected mode.
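If the application port is not directly reachable from your laptop, an SSH tunnel is a simple way to expose it locally. A sketch, assuming Streamlit's default port 8501; the key path and VM address are placeholders to replace with your own values:

```shell
# Forward the RagBot's Streamlit port (8501 by default) to your laptop.
# ~/.ssh/red_vm_key and <vm-ip-address> are placeholders for your own key and VM.
ssh -i ~/.ssh/red_vm_key -L 8501:localhost:8501 ubuntu@<vm-ip-address>
# Then open http://localhost:8501 in your local browser.
```

The same pattern works for any other port the application is configured to serve on.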
\ No newline at end of file diff --git a/edge-cloud/ai-edge-rover/accessing_ragbot/images/18_access_ragbot.png b/edge-cloud/ai-edge-rover/accessing_ragbot/images/18_access_ragbot.png new file mode 100644 index 000000000..b05ff60a3 Binary files /dev/null and b/edge-cloud/ai-edge-rover/accessing_ragbot/images/18_access_ragbot.png differ diff --git a/edge-cloud/ai-edge-rover/accessing_ragbot/images/19_using_rag-bot.png b/edge-cloud/ai-edge-rover/accessing_ragbot/images/19_using_rag-bot.png new file mode 100644 index 000000000..6bbd14a50 Binary files /dev/null and b/edge-cloud/ai-edge-rover/accessing_ragbot/images/19_using_rag-bot.png differ diff --git a/edge-cloud/ai-edge-rover/accessing_ragbot/images/21_uploading_db.png b/edge-cloud/ai-edge-rover/accessing_ragbot/images/21_uploading_db.png new file mode 100644 index 000000000..944e8ce22 Binary files /dev/null and b/edge-cloud/ai-edge-rover/accessing_ragbot/images/21_uploading_db.png differ diff --git a/edge-cloud/ai-edge-rover/accessing_ragbot/images/22_querying_ragbot.png b/edge-cloud/ai-edge-rover/accessing_ragbot/images/22_querying_ragbot.png new file mode 100644 index 000000000..22f0d6968 Binary files /dev/null and b/edge-cloud/ai-edge-rover/accessing_ragbot/images/22_querying_ragbot.png differ diff --git a/edge-cloud/ai-edge-rover/deployGPUinstance/images/1_redlogin.png b/edge-cloud/ai-edge-rover/deployGPUinstance/images/1_redlogin.png new file mode 100644 index 000000000..516b4ae42 Binary files /dev/null and b/edge-cloud/ai-edge-rover/deployGPUinstance/images/1_redlogin.png differ diff --git a/edge-cloud/ai-edge-rover/deployGPUinstance/images/2_create_compute_instance.png b/edge-cloud/ai-edge-rover/deployGPUinstance/images/2_create_compute_instance.png new file mode 100644 index 000000000..b1b84852b Binary files /dev/null and b/edge-cloud/ai-edge-rover/deployGPUinstance/images/2_create_compute_instance.png differ diff --git a/edge-cloud/ai-edge-rover/deployGPUinstance/images/3_vm_provisioning.png 
b/edge-cloud/ai-edge-rover/deployGPUinstance/images/3_vm_provisioning.png
new file mode 100644
index 000000000..039be6de4
Binary files /dev/null and b/edge-cloud/ai-edge-rover/deployGPUinstance/images/3_vm_provisioning.png differ
diff --git a/edge-cloud/ai-edge-rover/deployGPUinstance/images/4_vm_running.png b/edge-cloud/ai-edge-rover/deployGPUinstance/images/4_vm_running.png
new file mode 100644
index 000000000..abf2333c0
Binary files /dev/null and b/edge-cloud/ai-edge-rover/deployGPUinstance/images/4_vm_running.png differ
diff --git a/edge-cloud/ai-edge-rover/deployGPUinstance/images/5_vm_login.png b/edge-cloud/ai-edge-rover/deployGPUinstance/images/5_vm_login.png
new file mode 100644
index 000000000..6612a5c45
Binary files /dev/null and b/edge-cloud/ai-edge-rover/deployGPUinstance/images/5_vm_login.png differ
diff --git a/edge-cloud/ai-edge-rover/deployGPUinstance/setupGPUinstance.md b/edge-cloud/ai-edge-rover/deployGPUinstance/setupGPUinstance.md
new file mode 100644
index 000000000..3949cdb99
--- /dev/null
+++ b/edge-cloud/ai-edge-rover/deployGPUinstance/setupGPUinstance.md
@@ -0,0 +1,29 @@
+# Creating your GPU VM instance using the Oracle Roving Edge Device
+
+## Introduction
+
+The Oracle Roving Edge Device supports both fixed VM and GPU shapes, which users can provision through the RED.2 console, the OCI CLI, or Terraform scripts. For the purpose of this demonstration, the underlying assumptions are:
+
+- You have an OCI account and access to a tenancy
+- You have installed the OCI CLI and stored the public and private keys used to connect to the tenancy
+- You have subscribed to the Oracle Roving Edge Device and completed its setup with your OCI tenancy
+
+In the following workshop, we will use the Customer Enclave of the Roving Edge Device to provision a VM based on the VM.GPU.1.RED1.4 GPU shape, using the Ubuntu guest operating system to spin up the VM.
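The console flow in the steps that follow can also be scripted through the OCI CLI. A hedged sketch of the equivalent launch call — every OCID and the availability domain are placeholders to replace with values from your own RED tenancy, and the full parameter list is in the CLI reference:

```shell
# Launch a GPU VM on the RED Customer Enclave via the OCI CLI.
# All OCIDs and <...> values below are placeholders, not real identifiers.
oci compute instance launch \
  --compartment-id "ocid1.compartment.oc1..<placeholder>" \
  --availability-domain "<red-availability-domain>" \
  --shape "VM.GPU.1.RED1.4" \
  --display-name "ragbot-gpu-vm" \
  --image-id "ocid1.image.oc1..<placeholder>" \
  --subnet-id "ocid1.subnet.oc1..<placeholder>" \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub
```

The command returns the new instance's details as JSON; the instance then moves through PROVISIONING to RUNNING, just as in the console screenshots below.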
+
+### Step 1: Log in to RED using the UI console
+
+![Login to Console](/edge-cloud/ai-edge-rover/deployGPUinstance/images/1_redlogin.png)
+
+### Step 2: Create a GPU instance using the UI console
+
+![Create GPU Instance](/edge-cloud/ai-edge-rover/deployGPUinstance/images/2_create_compute_instance.png)
+
+### Step 3: Provision the compute instance
+![Provisioning Instance](/edge-cloud/ai-edge-rover/deployGPUinstance/images/3_vm_provisioning.png)
+
+![Instance Details](/edge-cloud/ai-edge-rover/deployGPUinstance/images/4_vm_running.png)
+
+### Step 4: Log in to the VM instance that you have created
+![Logging into Instance](/edge-cloud/ai-edge-rover/deployGPUinstance/images/5_vm_login.png)
+
+Once the VM is provisioned and you can access it from your local console, you can proceed to the next steps.
diff --git a/edge-cloud/ai-edge-rover/edge_ai_overview.md b/edge-cloud/ai-edge-rover/edge_ai_overview.md
new file mode 100644
index 000000000..4e8494e76
--- /dev/null
+++ b/edge-cloud/ai-edge-rover/edge_ai_overview.md
@@ -0,0 +1,37 @@
+# Introduction
+
+## Enable Gen AI from Cloud to the Edge with Oracle Edge Cloud
+
+Get hands-on learning with training labs about Oracle Edge Cloud solutions. The featured workshops cover various solutions, skill levels, and categories based on Oracle Cloud Infrastructure (OCI) and Oracle Edge Cloud solutions such as Compute Cloud@Customer and the Roving Edge Device.
+
+
+Estimated Time: 15 minutes
+
+
+
+### Objectives
+
+In this workshop, we will use the Oracle Roving Edge Device to:
+* Provision a VM based on the VM.GPU.1.RED1.4 GPU shape
+* Install and deploy a basic Retrieval-Augmented Generation (RAG) bot
+* Query the RagBot to get real-time results
+
+
+### Prerequisites
+
+The Oracle Roving Edge Device supports both fixed VM and GPU shapes, which users can provision through the RED.2 console, the OCI CLI, or Terraform scripts.
For the purpose of this demonstration, the underlying assumptions are:
+
+- You have an OCI account and access to a tenancy
+- You have installed the OCI CLI and stored the public and private keys used to connect to the tenancy
+- You have subscribed to the Oracle Roving Edge Device and completed its setup with your OCI tenancy
+
+## Learn More
+
+
+* [Oracle Roving Edge Infrastructure Documentation](https://docs.oracle.com/en-us/iaas/Content/Rover/overview.htm)
+* [Oracle Roving Edge Infrastructure](https://www.oracle.com/cloud/roving-edge-infrastructure/#rc30p2)
+
+## Acknowledgements
+* **Author** - Jeevan Srinivas, Principal Technical Product Manager, Roving Edge Infrastructure
+* **Contributor** - Tanmay Dhuri, Senior Product Manager, Oracle Edge Cloud
+
diff --git a/edge-cloud/ai-edge-rover/files/app.py b/edge-cloud/ai-edge-rover/files/app.py
new file mode 100644
index 000000000..0c40bd0dd
--- /dev/null
+++ b/edge-cloud/ai-edge-rover/files/app.py
@@ -0,0 +1,42 @@
+import streamlit as st
+import os
+from populate_database import load_documents, split_documents, add_to_chroma, clear_database
+from query_data import query_rag
+
+DATA_PATH = "data"
+if not os.path.exists(DATA_PATH):
+    os.makedirs(DATA_PATH)
+
+def save_uploaded_file(uploaded_file):
+    with open(os.path.join(DATA_PATH, uploaded_file.name), "wb") as f:
+        f.write(uploaded_file.getbuffer())
+    return st.success(f"File {uploaded_file.name} saved successfully!")
+
+def main():
+    st.title("RagBot")
+
+    if st.button("Reset Database"):
+        clear_database()
+        st.success("Database cleared successfully!")
+
+    uploaded_file = st.file_uploader("Choose a document", type=["pdf", "txt", "docx"])
+
+    if uploaded_file is not None:
+        save_uploaded_file(uploaded_file)
+
+        if st.button("Process and Add to Database"):
+            documents = load_documents()
+            chunks = split_documents(documents)
+            add_to_chroma(chunks)
+            st.success("Document processed and added to database successfully!")
+
+    query_text = 
st.text_input("Enter your query:") + if st.button("Submit Query") and query_text: + try: + response = query_rag(query_text) + st.write(response) + except Exception as e: + st.error(f"An error occurred: {e}") + +if __name__ == "__main__": + main() diff --git a/edge-cloud/ai-edge-rover/files/data b/edge-cloud/ai-edge-rover/files/data new file mode 100644 index 000000000..e69de29bb diff --git a/edge-cloud/ai-edge-rover/files/get_embedding_function.py b/edge-cloud/ai-edge-rover/files/get_embedding_function.py new file mode 100644 index 000000000..a5e333e12 --- /dev/null +++ b/edge-cloud/ai-edge-rover/files/get_embedding_function.py @@ -0,0 +1,5 @@ +from langchain_community.embeddings.ollama import OllamaEmbeddings + +def get_embedding_function(): + embeddings = OllamaEmbeddings(model="nomic-embed-text") + return embeddings \ No newline at end of file diff --git a/edge-cloud/ai-edge-rover/files/populate_database.py b/edge-cloud/ai-edge-rover/files/populate_database.py new file mode 100644 index 000000000..1dc050154 --- /dev/null +++ b/edge-cloud/ai-edge-rover/files/populate_database.py @@ -0,0 +1,68 @@ +import os +import shutil +from langchain_community.document_loaders.pdf import PyPDFDirectoryLoader +from langchain.text_splitter import RecursiveCharacterTextSplitter +from langchain.schema.document import Document +from get_embedding_function import get_embedding_function +from langchain.vectorstores.chroma import Chroma + +CHROMA_PATH = "chroma" +DATA_PATH = "data" + +def clear_database(): + if os.path.exists(CHROMA_PATH): + shutil.rmtree(CHROMA_PATH) + +def load_documents(): + document_loader = PyPDFDirectoryLoader(DATA_PATH) + documents = document_loader.load() + print(f"Loaded {len(documents)} documents") + for doc in documents: + print(doc.page_content[:500]) # Print first 500 characters for verification + return documents + +def split_documents(documents: list[Document]): + text_splitter = RecursiveCharacterTextSplitter( + chunk_size=800, + chunk_overlap=80, + 
length_function=len, + is_separator_regex=False, + ) + chunks = text_splitter.split_documents(documents) + print(f"Split into {len(chunks)} chunks") + return chunks + +def add_to_chroma(chunks: list[Document]): + db = Chroma( + persist_directory=CHROMA_PATH, embedding_function=get_embedding_function() + ) + chunks_with_ids = calculate_chunk_ids(chunks) + existing_items = db.get(include=[]) + existing_ids = set(existing_items["ids"]) + print(f"Number of existing documents in DB: {len(existing_ids)}") + + new_chunks = [chunk for chunk in chunks_with_ids if chunk.metadata["id"] not in existing_ids] + + if new_chunks: + print(f"👉 Adding new documents: {len(new_chunks)}") + new_chunk_ids = [chunk.metadata["id"] for chunk in new_chunks] + db.add_documents(new_chunks, ids=new_chunk_ids) + db.persist() + else: + print("✅ No new documents to add") + +def calculate_chunk_ids(chunks): + last_page_id = None + current_chunk_index = 0 + for chunk in chunks: + source = chunk.metadata.get("source") + page = chunk.metadata.get("page") + current_page_id = f"{source}:{page}" + if current_page_id == last_page_id: + current_chunk_index += 1 + else: + current_chunk_index = 0 + chunk_id = f"{current_page_id}:{current_chunk_index}" + last_page_id = current_page_id + chunk.metadata["id"] = chunk_id + return chunks diff --git a/edge-cloud/ai-edge-rover/files/query_data.py b/edge-cloud/ai-edge-rover/files/query_data.py new file mode 100644 index 000000000..0bde3fb27 --- /dev/null +++ b/edge-cloud/ai-edge-rover/files/query_data.py @@ -0,0 +1,38 @@ +from langchain.vectorstores.chroma import Chroma +from langchain.prompts import ChatPromptTemplate +from langchain_community.llms.ollama import Ollama +from get_embedding_function import get_embedding_function + +CHROMA_PATH = "chroma" + +PROMPT_TEMPLATE = """ +Answer the question based only on the following context: + +{context} + +--- + +Answer the question based on the above context: {question} +""" + +def query_rag(query_text: str): + 
embedding_function = get_embedding_function() + db = Chroma(persist_directory=CHROMA_PATH, embedding_function=embedding_function) + + results = db.similarity_search_with_score(query_text, k=5) + print(f"Found {len(results)} results") + for i, (doc, score) in enumerate(results): + print(f"Result {i+1}:") + print(f"Score: {score}") + print(doc.page_content[:500]) # Print first 500 characters of each result + + context_text = "\n\n---\n\n".join([doc.page_content for doc, _score in results]) + prompt_template = ChatPromptTemplate.from_template(PROMPT_TEMPLATE) + prompt = prompt_template.format(context=context_text, question=query_text) + + model = Ollama(model="mistral") + response_text = model.invoke(prompt) + + sources = [doc.metadata.get("id", None) for doc, _score in results] + formatted_response = f"Response: {response_text}\nSources: {sources}" + return formatted_response diff --git a/edge-cloud/ai-edge-rover/images/computecloudatcustomer.png b/edge-cloud/ai-edge-rover/images/computecloudatcustomer.png new file mode 100644 index 000000000..6f9805eac Binary files /dev/null and b/edge-cloud/ai-edge-rover/images/computecloudatcustomer.png differ diff --git a/edge-cloud/ai-edge-rover/images/edge_cloud_deploy.png b/edge-cloud/ai-edge-rover/images/edge_cloud_deploy.png new file mode 100644 index 000000000..cc5a1fff1 Binary files /dev/null and b/edge-cloud/ai-edge-rover/images/edge_cloud_deploy.png differ diff --git a/edge-cloud/ai-edge-rover/images/edgeportfolio.jpg b/edge-cloud/ai-edge-rover/images/edgeportfolio.jpg new file mode 100644 index 000000000..3ac8282af Binary files /dev/null and b/edge-cloud/ai-edge-rover/images/edgeportfolio.jpg differ diff --git a/edge-cloud/ai-edge-rover/images/red2.png b/edge-cloud/ai-edge-rover/images/red2.png new file mode 100644 index 000000000..544c4dbd1 Binary files /dev/null and b/edge-cloud/ai-edge-rover/images/red2.png differ diff --git a/edge-cloud/ai-edge-rover/images/requirements.jpg 
b/edge-cloud/ai-edge-rover/images/requirements.jpg
new file mode 100644
index 000000000..3cafbe1f9
Binary files /dev/null and b/edge-cloud/ai-edge-rover/images/requirements.jpg differ
diff --git a/edge-cloud/ai-edge-rover/installing_application/creating_directory_installing_application.md b/edge-cloud/ai-edge-rover/installing_application/creating_directory_installing_application.md
new file mode 100644
index 000000000..a0d90c649
--- /dev/null
+++ b/edge-cloud/ai-edge-rover/installing_application/creating_directory_installing_application.md
@@ -0,0 +1,236 @@
+# Creating the directory and installing the application
+
+### Step 1: Install the requirements
+
+```
+pip install -r requirements.txt
+```
+
+![requirements](/edge-cloud/ai-edge-rover/installing_application/images/requirements.jpg)
+
+
+
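The exact requirements file is shown in the screenshot. Based on the imports used later in this lab, a sketch of what requirements.txt likely needs at minimum — package names are educated guesses and versions are deliberately left unpinned:

```
streamlit
langchain
langchain-community
chromadb
pypdf
```

Pin versions as appropriate for your environment before installing.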
+ +![Install Directory](/edge-cloud/ai-edge-rover/installing_application/images/11_installrequirements.png) + + + +### Step 2: Create an embedding function as embed_data.py + +``` +from langchain_community.embeddings.ollama import OllamaEmbeddings + +def embed_data(): + embeddings = OllamaEmbeddings(model="all-minilm") #Update to use any embedding model + return embeddings + +``` +
+
+![Embedding Function](/edge-cloud/ai-edge-rover/installing_application/images/13_create_embedding_function.png)
+
+### Step 3: Create a Python script, load_data.py, to populate the database
+
+```
+import os
+import shutil
+from langchain_community.document_loaders.pdf import PyPDFDirectoryLoader
+from langchain.text_splitter import RecursiveCharacterTextSplitter
+from langchain.schema.document import Document
+from embed_data import embed_data
+from langchain.vectorstores.chroma import Chroma
+
+CHROMA_PATH = "chroma"
+DATA_PATH = "data"
+
+def clear_database():
+    if os.path.exists(CHROMA_PATH):
+        shutil.rmtree(CHROMA_PATH)
+
+def load_documents():
+    document_loader = PyPDFDirectoryLoader(DATA_PATH)
+    documents = document_loader.load()
+    print(f"Loaded {len(documents)} documents")
+    for doc in documents:
+        print(doc.page_content[:500])
+    return documents
+
+def split_documents(documents: list[Document]):
+    text_splitter = RecursiveCharacterTextSplitter(
+        chunk_size=1500,
+        chunk_overlap=100,
+        length_function=len,
+        is_separator_regex=False,
+    )
+    chunks = text_splitter.split_documents(documents)
+    print(f"Split into {len(chunks)} chunks")
+    return chunks
+
+def add_to_chroma(chunks: list[Document]):
+    db = Chroma(
+        persist_directory=CHROMA_PATH, embedding_function=embed_data()
+    )
+    chunks_with_ids = calculate_chunk_ids(chunks)
+    existing_items = db.get(include=[])
+    existing_ids = set(existing_items["ids"])
+    print(f"Number of existing documents in DB: {len(existing_ids)}")
+
+    new_chunks = [chunk for chunk in chunks_with_ids if chunk.metadata["id"] not in existing_ids]
+
+    if new_chunks:
+        print(f"Adding new documents: {len(new_chunks)}")
+        new_chunk_ids = [chunk.metadata["id"] for chunk in new_chunks]
+        db.add_documents(new_chunks, ids=new_chunk_ids)
+        db.persist()
+        print("Documents Added")
+    else:
+        print("No new documents to add")
+
+def calculate_chunk_ids(chunks):
+    last_page_id = None
+    current_chunk_index = 0
+    for chunk in chunks:
+        source = 
chunk.metadata.get("source") + page = chunk.metadata.get("page") + current_page_id = f"{source}:{page}" + if current_page_id == last_page_id: + current_chunk_index += 1 + else: + current_chunk_index = 0 + chunk_id = f"{current_page_id}:{current_chunk_index}" + last_page_id = current_page_id + chunk.metadata["id"] = chunk_id + return chunks + +``` +
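To make the ID scheme in `calculate_chunk_ids` concrete, here is a small dependency-free walkthrough of the same logic, with plain dicts standing in for LangChain document metadata (the function name and sample paths are illustrative only):

```python
# Each chunk gets an ID of the form "<source>:<page>:<index>", where the index
# restarts at 0 whenever the (source, page) pair changes -- the same rule as
# calculate_chunk_ids above, applied to bare metadata dicts.
def chunk_ids(metadatas):
    last_page_id = None
    index = 0
    ids = []
    for md in metadatas:
        page_id = f"{md['source']}:{md['page']}"
        index = index + 1 if page_id == last_page_id else 0
        ids.append(f"{page_id}:{index}")
        last_page_id = page_id
    return ids

print(chunk_ids([
    {"source": "data/guide.pdf", "page": 0},
    {"source": "data/guide.pdf", "page": 0},
    {"source": "data/guide.pdf", "page": 1},
]))
# -> ['data/guide.pdf:0:0', 'data/guide.pdf:0:1', 'data/guide.pdf:1:0']
```

These stable IDs are what lets `add_to_chroma` skip chunks that are already present in the vector store.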
+
+![Install DB](/edge-cloud/ai-edge-rover/installing_application/images/12_populate_database.png)
+
+
+
+
+### Step 4: Create a script to query the data as query_data.py
+
+```
+from langchain.vectorstores.chroma import Chroma
+from langchain.prompts import ChatPromptTemplate
+from langchain_community.llms.ollama import Ollama
+from embed_data import embed_data
+
+CHROMA_PATH = "chroma"
+
+PROMPT_TEMPLATE = """
+Answer the question based only on the following context:
+
+{context}
+
+---
+
+Answer the question based on the above context: {question}
+"""
+
+def query_rag(query_text: str):
+    embedding_function = embed_data()
+    db = Chroma(persist_directory=CHROMA_PATH, embedding_function=embedding_function)
+
+    results = db.similarity_search_with_score(query_text, k=5)
+    print(f"Found {len(results)} results")
+    for i, (doc, score) in enumerate(results):
+        print(f"Result {i+1}:")
+        print(f"Score: {score}")
+        print(doc.page_content[:500])
+
+    context_text = "\n\n---\n\n".join([doc.page_content for doc, _score in results])
+    prompt_template = ChatPromptTemplate.from_template(PROMPT_TEMPLATE)
+    prompt = prompt_template.format(context=context_text, question=query_text)
+
+    model = Ollama(model="mistral")  # Update to use any LLM
+    response_text = model.invoke(prompt)
+
+    # sources = [doc.metadata.get("id", None) for doc, _score in results]
+    formatted_response = f"{response_text}"
+    return formatted_response
+
+```
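Independent of the vector store, the prompt that `query_rag` sends to the model is just the retrieved chunks joined by a separator and substituted into the template. A dependency-free sketch, where the chunk strings are made-up stand-ins for retrieved page content and `str.format` stands in for `ChatPromptTemplate.from_template(...).format(...)`:

```python
PROMPT_TEMPLATE = """
Answer the question based only on the following context:

{context}

---

Answer the question based on the above context: {question}
"""

# Stand-ins for the page_content of the top-k chunks returned by the search.
retrieved = ["RED.2 supports GPU shapes.", "Ollama serves local LLMs."]

# Same assembly as query_rag: join chunks with a separator, then fill the template.
context_text = "\n\n---\n\n".join(retrieved)
prompt = PROMPT_TEMPLATE.format(
    context=context_text, question="What shapes does RED.2 support?"
)
print(prompt)
```

Printing the assembled prompt like this is a quick way to sanity-check chunking and retrieval before involving the LLM at all.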
+ +![Querying Data](/edge-cloud/ai-edge-rover/installing_application/images/14_create_script.png) + +### Step 5: Create the main program as app.py + +``` +import streamlit as st +import os +from load_data import load_documents, split_documents, add_to_chroma, clear_database +from query_data import query_rag + +DATA_PATH = "data" +if not os.path.exists(DATA_PATH): + os.makedirs(DATA_PATH) + +def save_uploaded_file(uploaded_file): + with open(os.path.join(DATA_PATH, uploaded_file.name), "wb") as f: + f.write(uploaded_file.getbuffer()) + return st.success(f"File {uploaded_file.name} saved successfully!") + +def main(): + st.title("RagBot") + + + + if "query_history" not in st.session_state: + st.session_state.query_history = [] + + if st.button("Reset Database"): + clear_database() + st.session_state.query_history.clear() + st.success("Database cleared successfully!") + + uploaded_file = st.file_uploader("Choose a document", type=["pdf", "txt", "docx"]) + + if uploaded_file is not None: + save_uploaded_file(uploaded_file) + + if st.button("Process and Add to Database"): + documents = load_documents() + chunks = split_documents(documents) + add_to_chroma(chunks) + st.success("Document processed and added to database successfully!") + + query_text = st.text_input("Enter your query:") + + if st.button("Submit Query") and query_text: + try: + response = query_rag(query_text) + st.write(f"**Response:** {response}") + st.session_state.query_history.insert(0, {"query": query_text, "response": response}) + except Exception as e: + st.error(f"An error occurred: {e}") + + if len(st.session_state.query_history) > 1: + st.subheader("Previous Queries:") + for i, entry in enumerate(st.session_state.query_history[1:], start=1): + st.write(f"**Query:** {entry['query']}") + st.write(f"**Response:** {entry['response']}") + st.write("---") + + st.markdown("
", unsafe_allow_html=True) + st.markdown("

Powered by Oracle Roving Edge Infrastructure

", unsafe_allow_html=True) + +if __name__ == "__main__": + main() + + +``` +
+ +![Main program](/edge-cloud/ai-edge-rover/installing_application/images/16_creat_program.png) + +### Step 6: Create a directory called data to load and store the pdfs + + +### Step 7: Running the RagBot program as streamlit run app.py +![Run RagBot](/edge-cloud/ai-edge-rover/installing_application/images/17_run_ragbot.png) diff --git a/edge-cloud/ai-edge-rover/installing_application/images/11_installrequirements.png b/edge-cloud/ai-edge-rover/installing_application/images/11_installrequirements.png new file mode 100644 index 000000000..14f209e63 Binary files /dev/null and b/edge-cloud/ai-edge-rover/installing_application/images/11_installrequirements.png differ diff --git a/edge-cloud/ai-edge-rover/installing_application/images/12_populate_database.png b/edge-cloud/ai-edge-rover/installing_application/images/12_populate_database.png new file mode 100644 index 000000000..57dce663b Binary files /dev/null and b/edge-cloud/ai-edge-rover/installing_application/images/12_populate_database.png differ diff --git a/edge-cloud/ai-edge-rover/installing_application/images/13_create_embedding_function.png b/edge-cloud/ai-edge-rover/installing_application/images/13_create_embedding_function.png new file mode 100644 index 000000000..b8d90ab5b Binary files /dev/null and b/edge-cloud/ai-edge-rover/installing_application/images/13_create_embedding_function.png differ diff --git a/edge-cloud/ai-edge-rover/installing_application/images/14_create_script.png b/edge-cloud/ai-edge-rover/installing_application/images/14_create_script.png new file mode 100644 index 000000000..1dd9ab409 Binary files /dev/null and b/edge-cloud/ai-edge-rover/installing_application/images/14_create_script.png differ diff --git a/edge-cloud/ai-edge-rover/installing_application/images/16_creat_program.png b/edge-cloud/ai-edge-rover/installing_application/images/16_creat_program.png new file mode 100644 index 000000000..b25271448 Binary files /dev/null and 
b/edge-cloud/ai-edge-rover/installing_application/images/16_creat_program.png differ diff --git a/edge-cloud/ai-edge-rover/installing_application/images/17_run_ragbot.png b/edge-cloud/ai-edge-rover/installing_application/images/17_run_ragbot.png new file mode 100644 index 000000000..a3371eb35 Binary files /dev/null and b/edge-cloud/ai-edge-rover/installing_application/images/17_run_ragbot.png differ diff --git a/edge-cloud/ai-edge-rover/installing_application/images/requirements.jpg b/edge-cloud/ai-edge-rover/installing_application/images/requirements.jpg new file mode 100644 index 000000000..3cafbe1f9 Binary files /dev/null and b/edge-cloud/ai-edge-rover/installing_application/images/requirements.jpg differ diff --git a/edge-cloud/ai-edge-rover/pre_requisites_VM/images/6_install_ollama.png b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/6_install_ollama.png new file mode 100644 index 000000000..0f6f84032 Binary files /dev/null and b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/6_install_ollama.png differ diff --git a/edge-cloud/ai-edge-rover/pre_requisites_VM/images/7_pull_mistral.png b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/7_pull_mistral.png new file mode 100644 index 000000000..4efdfb495 Binary files /dev/null and b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/7_pull_mistral.png differ diff --git a/edge-cloud/ai-edge-rover/pre_requisites_VM/images/8_pull_llama3.png b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/8_pull_llama3.png new file mode 100644 index 000000000..7c6b73c83 Binary files /dev/null and b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/8_pull_llama3.png differ diff --git a/edge-cloud/ai-edge-rover/pre_requisites_VM/images/9_pull_nomic_embed_text.png b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/9_pull_nomic_embed_text.png new file mode 100644 index 000000000..18a7ec7cd Binary files /dev/null and b/edge-cloud/ai-edge-rover/pre_requisites_VM/images/9_pull_nomic_embed_text.png differ diff --git 
a/edge-cloud/ai-edge-rover/pre_requisites_VM/installing_pre-requisites.md b/edge-cloud/ai-edge-rover/pre_requisites_VM/installing_pre-requisites.md
new file mode 100644
index 000000000..6138e0954
--- /dev/null
+++ b/edge-cloud/ai-edge-rover/pre_requisites_VM/installing_pre-requisites.md
@@ -0,0 +1,30 @@
+# Installing pre-requisites on your VM instance
+
+## Introduction
+
+Once you have completed the VM installation steps in Lab 1, the next steps cover the installation of the following components:
+- Ollama
+  - Ollama is an open-source project that serves as a powerful and user-friendly platform for running LLMs on your local machine. It acts as a bridge between the complexities of LLM technology and the desire for an accessible and customizable AI experience.
+- Required AI models and dependencies
+  - Mistral
+  - Llama 3
+  - nomic-embed-text
+
+
+### Step 1: Install Ollama on your instance
+
+![Install Ollama](/edge-cloud/ai-edge-rover/pre_requisites_VM/images/6_install_ollama.png)
+
+### Step 2: Using Ollama to pull required models: Pulling Mistral
+
+![Install Pulling Mistral](/edge-cloud/ai-edge-rover/pre_requisites_VM/images/7_pull_mistral.png)
+
+### Step 3: Using Ollama to pull required models: Pulling Llama3
+
+![Install Pulling Llama3](/edge-cloud/ai-edge-rover/pre_requisites_VM/images/8_pull_llama3.png)
+
+### Step 4: Using Ollama to pull required models: Pulling nomic-embed-text
+
+![Install Pulling nomic-embed-text](/edge-cloud/ai-edge-rover/pre_requisites_VM/images/9_pull_nomic_embed_text.png)
+
+
diff --git a/edge-cloud/workshops/index.html b/edge-cloud/workshops/index.html
new file mode 100644
index 000000000..7dd5d30b1
--- /dev/null
+++ b/edge-cloud/workshops/index.html
@@ -0,0 +1,62 @@
+
+
+
+
+
+
+
+
+ Oracle LiveLabs
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/edge-cloud/workshops/manifest.json b/edge-cloud/workshops/manifest.json
new file mode 100644
index 000000000..a9cac91e1
--- /dev/null
+++ b/edge-cloud/workshops/manifest.json
@@ -0,0 +1,46 @@
+{
+    "workshoptitle": "Enable Gen AI from Cloud to the Edge with Oracle Edge Cloud",
+    "help": "livelabs-help-oci_us@oracle.com",
+    "tutorials": [
+        {
+            "title": "Get Started",
+            "description": "This is the prerequisites for customers using Free Trial and Paid tenancies, and Always Free accounts (if applicable). The title of the lab and the Contents Menu title (the title above) match for Prerequisite lab. This lab is always first.",
+            "filename": "https://oracle-livelabs.github.io/common/labs/cloud-login/cloud-login.md"
+        },
+        {
+            "title": "Learn about AI at the Edge with Oracle Edge Cloud",
+            "description": "Learn how you can deploy AI workloads at the edge with Compute Cloud@Customer and Roving Edge Infrastructure",
+            "filename": "/edge-cloud/ai-edge-rover/edge_ai_overview.md"
+        },
+        {
+            "title": "Lab 1: Create a GPU instance using the Roving Edge Device",
+            "description": "Learn how to log in, define, and deploy a GPU instance on the Roving Edge Infrastructure",
+            "filename": "/edge-cloud/ai-edge-rover/deployGPUinstance/setupGPUinstance.md"
+        },
+        {
+            "title": "Lab 2: Pre-requisites for deploying a RagBot",
+            "description": "Learn how to download and install AI models and dependencies on your Roving Edge Device",
+            "filename": "/edge-cloud/ai-edge-rover/pre_requisites_VM/installing_pre-requisites.md"
+        },
+        {
+            "title": "Lab 3: Creating the directory and installing the application",
+            "description": "Learn how to run the RagBot program",
+            "filename": "/edge-cloud/ai-edge-rover/installing_application/creating_directory_installing_application.md"
+        },
+        {
+            "title": "Lab 4: Accessing the RagBot and querying the 
application", + "description": "Learn how to login to your RagBot application and access it at the edge", + "filename": "/edge-cloud/ai-edge-rover/accessing_ragbot/accessing_ragbot_querying_application.md" + }, + { + "title": "Need Help?", + "description": "Solutions to Common Problems and Directions for Receiving Live Help", + "filename": "https://oracle-livelabs.github.io/common/labs/need-help/need-help-livelabs.md" + } + ] +} \ No newline at end of file