From 3e2cc8861c26906793bfec6f13337610031afda7 Mon Sep 17 00:00:00 2001 From: Michele Baldessari Date: Fri, 30 Aug 2024 16:07:01 +0200 Subject: [PATCH] Fix a bunch of spellcheck errors --- .wordlist.txt | 69 +++++++++++++++++++ content/blog/2024-07-12-in-cluster-git.md | 2 +- .../patterns/gaudi-rag-chat-qna/_index.adoc | 4 +- .../gaudi-rag-chat-qna-getting-started.adoc | 2 +- .../rag-llm-gitops/GPU_provisioning.md | 2 +- .../rag-llm-gitops/getting-started.md | 8 +-- 6 files changed, 78 insertions(+), 9 deletions(-) diff --git a/.wordlist.txt b/.wordlist.txt index 2050ac5ee..42a10f44b 100644 --- a/.wordlist.txt +++ b/.wordlist.txt @@ -1,3 +1,6 @@ +tei +synapseai +atest aab aap abd @@ -20,6 +23,7 @@ aegitops afce afnmju agentless +agof aiml akiaiosfodnn akiaiosfodnn7example @@ -48,6 +52,7 @@ architecturedetail argocd argocd's argoproj +aro arptn arskhan arslan @@ -102,9 +107,12 @@ centos centric ceph cephfs +cephobjectstoreuser cfengine cgo changelog +chatbot +chatqna chown chroot cicd @@ -144,6 +152,7 @@ conf config configmanagement configmap +configmaps configs configur conjur @@ -152,6 +161,7 @@ controlplane coreos cp crd +creationdate credentialtype creds crohn's @@ -173,11 +183,13 @@ customizations cves cyber dach +daemonsets dasv datacenter dataflow datagrid dataset +datasets datasheet datasource datasrc @@ -216,6 +228,7 @@ diag dien differentiator dihv +displayname distro distros dnf @@ -227,6 +240,7 @@ donut downloader doxid drawio +dropdown dsa dsl dsparwyc @@ -240,6 +254,7 @@ eb ecf ecff eda +edb edd edsv edv @@ -251,6 +266,7 @@ efg efrqurrkuojrlsqhi eg ejuat +embeddings enablement endcomment endraw @@ -258,6 +274,7 @@ entityoperator entrypoint env envi +envs ep erp eso @@ -301,9 +318,15 @@ fsv fsync fvsm fx +gaudi +gaudillm gcp +gcqna +genai +genaiexamples genrsa gf +gh gib gid gitea @@ -316,17 +339,22 @@ gkxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx godebug golang googlegroups +gpuprovisioning gpus +gradio grafana grafanadatasources gramine gramineproject 
graminized +grcq grpc gsc gsis gz +habana habanaai +habanaaisynapseai hardcoded hariharan hashicorp @@ -357,6 +385,7 @@ httpd https hubcluster hubclusterdomain +huggingface hvs hxakf hybridcloudpatterns @@ -409,9 +438,11 @@ iotsensor iov ipbabble ipi +ipovw ipykernel ipynb ishubcluster +isnicup isr istio istio's @@ -454,6 +485,7 @@ kie kieserver kioskextraparams kk +kmm knative kno koppa @@ -473,6 +505,7 @@ kust kustomization kustomize kzwzqibmynhrxxxxxxxxx +langchain lastlog latencies latestcreated @@ -481,10 +514,13 @@ ld ldap ldfowf ldhs +letsencrypt leveloffset lhipbabbledummyhykhq lifecycle lk +llm +llms lnzw localclusterdomain localdomaincluster @@ -496,6 +532,7 @@ lsv lvm machineapi machineconfigpool +machineconfigs machineset macosx mailto @@ -527,6 +564,7 @@ mhjacks mhjacks's mib microcontrollers +microservice microservices middleware militaries @@ -545,6 +583,7 @@ mq mqtt multicloud multicluster +multisourceconfig musthave mustnothave mustonlyhave @@ -566,6 +605,8 @@ netlify nfd ng nginx +nics +nim nlp nodb nodejs @@ -600,6 +641,8 @@ onfg onmissingvalue onwards oopafbglkiwyapzuhc +opea +openai opendatahub openid openjdk @@ -638,9 +681,12 @@ patternsh pbivukilnpoe pci pem +performant persistentvolumeclaim +persistentvolumeclaims persistentvolumes pgsql +pgvector pii pki plaintext @@ -653,6 +699,7 @@ portworx posix postgres postgresql +poweredge powershell ppid pre @@ -660,6 +707,7 @@ prebuilt predeploy prem preplay +preprocess prereqs prerequisitesrequirements privatekey @@ -677,12 +725,14 @@ pubkey publickey purpu pv +pvcs pxe py qat qatlib qe ql +qna qtgmclkdlnkwcdpvyxarm quantized quarkus @@ -699,6 +749,7 @@ readthedocs realtime rebase recommender +reconciler redhat reencrypt rekey @@ -709,11 +760,15 @@ repo repolist repo's repos +reranked +reranking resync +reusability revisiontimestamp rfe rgw rhacm +rhcm rhdm rhecoeng rhel @@ -735,7 +790,8 @@ rollouts rpms rsa runnable
runtime +runtimeerror runtimes rxpm saas @@ -808,6 +866,8 @@ submodules subtree subtrees sudo +supermicro +superserver superset supportmatrix sur @@ -815,6 +875,7 @@ svc svg synched syncpolicy +sys syscall targetbucket targetnamespace @@ -832,11 +893,15 @@ templa templated templating tengvall +testfile testid +testidtgi testlab testsource tf tgfqgvpdh +tgi +tgis tgz thanos thanos's @@ -846,6 +911,7 @@ tldr tls tmm tmp +tnr toc todo toh @@ -880,6 +946,7 @@ unsealvault untrusted upstreaming upstream's +ure uri usecsv userdata @@ -900,6 +967,8 @@ vcpu vcpus vdjtkgams vdvkgdtuxkeqf +vectordb +vectorized vfio vhjpkievife virtualmachine diff --git a/content/blog/2024-07-12-in-cluster-git.md b/content/blog/2024-07-12-in-cluster-git.md index 0c881bf15..f3cea4adf 100644 --- a/content/blog/2024-07-12-in-cluster-git.md +++ b/content/blog/2024-07-12-in-cluster-git.md @@ -73,7 +73,7 @@ Once logged in with the `gitea_admin` user whose password is contained in the `g you will see the repository that has been configured inside gitea: ![gitea-repository-list](/images/gitea-repository-list.png) -Clickin on the repository will show the actual code and the usual git related information (branch, tags, etc): +Clicking on the repository will show the actual code and the usual git related information (branch, tags, etc): ![gitea-repository-show](/images/gitea-repository-show.png) ## Gitea usage diff --git a/content/patterns/gaudi-rag-chat-qna/_index.adoc b/content/patterns/gaudi-rag-chat-qna/_index.adoc index 4f0505cda..b52152280 100644 --- a/content/patterns/gaudi-rag-chat-qna/_index.adoc +++ b/content/patterns/gaudi-rag-chat-qna/_index.adoc @@ -28,7 +28,7 @@ include::modules/comm-attributes.adoc[] = About {gcqna-pattern} Background:: -Validated pattern is based on https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA[OPEA [Open Platform for Enterprise AI\] example - Chat QnA]. 
OPEA is an ecosystem orchestration framework to integrate performant GenAI technologies & workflows leading to quicker GenAI adoption and business value. Another purpose of this pattern is to deploy whole infrastructre stack enabling Intel Gaudi accelerator. Accelerator is used in the AI inferencing process. Pattern makes use of GitOps approach. GitOps uses Git repositories as a single source of truth to deliver infrastructure-as-code. Submitted code will be checked by the continuous integration (CI) process, while the continuous delivery (CD) process checks and applies requirements for things like security, infrastructure-as-code, or any other boundaries set for the application framework. All changes to code are tracked, making updates easy while also providing version control should a rollback be needed. +This validated pattern is based on the https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA[OPEA [Open Platform for Enterprise AI\] example - Chat QnA]. OPEA is an ecosystem orchestration framework that integrates performant GenAI technologies and workflows, leading to quicker GenAI adoption and business value. Another purpose of this pattern is to deploy the whole infrastructure stack that enables the Intel Gaudi accelerator; the accelerator is used in the AI inferencing process. The pattern makes use of the GitOps approach: GitOps uses Git repositories as a single source of truth to deliver infrastructure-as-code. Submitted code is checked by the continuous integration (CI) process, while the continuous delivery (CD) process checks and applies requirements for things like security, infrastructure-as-code, or any other boundaries set for the application framework. All changes to code are tracked, making updates easy while also providing version control should a rollback be needed. Components:: @@ -60,7 +60,7 @@ image::/images/gaudi-rag-chat-qna/gaudi-rag-chat-qna-vp-overview.png[OPEA QnA chat architecture] * Gaudi enablement stack consists of Kernel Module Management and HabanaAI operators.
-* Additionaly, in this Validated Pattern OpenShift Data Foundation is used to provide an S3-like storage through Ceph RGW, and Image Registry is used to store all built images and Hashicorp Vault to safely keep HuggingFace User token. +* Additionally, in this Validated Pattern, OpenShift Data Foundation is used to provide S3-like storage through Ceph RGW, the Image Registry is used to store all built images, and HashiCorp Vault is used to safely keep the HuggingFace user token. * Node Feature Discovery labels Gaudi 2 nodes so the workload can be placed on the right node and benefit from the acceleration. diff --git a/content/patterns/gaudi-rag-chat-qna/gaudi-rag-chat-qna-getting-started.adoc b/content/patterns/gaudi-rag-chat-qna/gaudi-rag-chat-qna-getting-started.adoc index fb2868ad4..28e8fdcd5 100644 --- a/content/patterns/gaudi-rag-chat-qna/gaudi-rag-chat-qna-getting-started.adoc +++ b/content/patterns/gaudi-rag-chat-qna/gaudi-rag-chat-qna-getting-started.adoc @@ -244,7 +244,7 @@ image::/images/gaudi-rag-chat-qna/gaudi-rag-chat-qna-create-workbench.png[Create workbench] image::/images/gaudi-rag-chat-qna/gaudi-rag-chat-qna-deploy-notebook-1.png[Deploy Jupyter notebook 1] image::/images/gaudi-rag-chat-qna/gaudi-rag-chat-qna-deploy-notebook-2.png[Deploy Jupyter notebook 2] -. After workbench is created go to this Jupyter notebook dashboard and upload Jupter notebook file `/download-model.ipynb` to the file explorer so it looks like this: +.
After the workbench is created, go to the Jupyter notebook dashboard and upload the Jupyter notebook file `/download-model.ipynb` to the file explorer so it looks like this: + image::/images/gaudi-rag-chat-qna/gaudi-rag-chat-qna-notebook-view.png[Jupyter notebook view] diff --git a/content/patterns/rag-llm-gitops/GPU_provisioning.md b/content/patterns/rag-llm-gitops/GPU_provisioning.md index 9a3bee83e..2eabf13f0 100644 --- a/content/patterns/rag-llm-gitops/GPU_provisioning.md +++ b/content/patterns/rag-llm-gitops/GPU_provisioning.md @@ -54,7 +54,7 @@ Depending on type of EC2 instance creation of the new machines make take some ti From OperatorHub install Node Feature Discovery Operator , accepting defaults . Once Operator has been installed , create `NodeFeatureDiscovery`instance . Use default entries unless you something specific is needed . Node Feature Discovery Operator will add labels to nodes based on available hardware resources -## Install NVIDI GPU Operator +## Install NVIDIA GPU Operator NVIDIA GPU Operator will provision daemonsets with drivers for the GPU to be used by workload running on these nodes . Detailed instructions are available in NVIDIA Documentation [NVIDIA on OpenShift](https://docs.nvidia.com/datacenter/cloud-native/openshift/latest/index.html) . Following simplified steps for specific setup : diff --git a/content/patterns/rag-llm-gitops/getting-started.md b/content/patterns/rag-llm-gitops/getting-started.md index fa7a6beb3..8d386771d 100644 --- a/content/patterns/rag-llm-gitops/getting-started.md +++ b/content/patterns/rag-llm-gitops/getting-started.md @@ -95,7 +95,7 @@ Wait till the nodes are provisioned and running. ![Diagram](/images/rag-llm-gitops/nodes.png) -Alternatiely, follow the [instructions](../gpu_provisioning) to manually install GPU nodes, Node Feature Discovery Operator and NVIDIA GPU operator.
+Alternatively, follow the [instructions](../gpu_provisioning) to manually install GPU nodes, the Node Feature Discovery Operator, and the NVIDIA GPU Operator. ### Deploy application @@ -150,7 +150,7 @@ Note: If the hf-text-generation-server is not running, make sure you have follow ### 3: Generate the proposal document -- It will use the default provider and model configured as part of the application deployment. The default provider is a Hugging Face model server running in the OpenShift. The model server is deployed with this valdiated pattern and requires a node with GPU. +- It will use the default provider and model configured as part of the application deployment. The default provider is a Hugging Face model server running in OpenShift. The model server is deployed with this validated pattern and requires a node with a GPU. - Enter any company name - Enter the product as `RedHat OpenShift` - Click the `Generate` button, a project proposal should be generated. The project proposal also contains the reference of the RAG content. The project proposal document can be Downloaded in the form of a PDF document. @@ -165,7 +165,7 @@ You can optionally add additional providers. The application supports the follow - OpenAI - NVIDIA -Click on the `Add Provider` tab to add a new provider. Fill in the details and click `Add Provider` button. The provider should be added in the `Providers` dropdown uder `Chatbot` tab. +Click on the `Add Provider` tab to add a new provider. Fill in the details and click the `Add Provider` button. The provider should be added to the `Providers` dropdown under the `Chatbot` tab. ![Routes](/images/rag-llm-gitops/add_provider.png) @@ -177,7 +177,7 @@ Follow the instructions in step 3 to generate the proposal document using the Op ### 6: Rating the provider -You can provide rating to the model by clicking on the `Rate the model` radio button. The rating will be captured as part of the metrics and can help the company which model to deploy in prodcution.
+You can provide a rating for the model by clicking the `Rate the model` radio button. The rating will be captured as part of the metrics and can help the company decide which model to deploy in production. ### 7: Grafana Dashboard