diff --git a/beta/serverless-fleets/images/docling-highlevel-architecture.png b/beta/serverless-fleets/images/docling-highlevel-architecture.png
new file mode 100644
index 00000000..0b92ecb7
Binary files /dev/null and b/beta/serverless-fleets/images/docling-highlevel-architecture.png differ
diff --git a/beta/serverless-fleets/images/inferencing-highlevel-architecture.png b/beta/serverless-fleets/images/inferencing-highlevel-architecture.png
new file mode 100644
index 00000000..75abd9ac
Binary files /dev/null and b/beta/serverless-fleets/images/inferencing-highlevel-architecture.png differ
diff --git a/beta/serverless-fleets/tutorials/docling/README.md b/beta/serverless-fleets/tutorials/docling/README.md
index 12de7063..856acc5a 100644
--- a/beta/serverless-fleets/tutorials/docling/README.md
+++ b/beta/serverless-fleets/tutorials/docling/README.md
@@ -2,7 +2,9 @@
 
 ![](../../images/docling-picture.png)
 
-This tutorial provides a comprehensive guide on using Docling to convert PDFs into Markdown format using serverless fleets. It leverages cloud object storage for managing both the input PDFs and the resulting Markdown files. The process is streamlined using IBM’s Code Engine to build the Docling container, which is then pushed to a container registry. Users can run a serverless fleet, which autonomously spawns workers to run the Docling container for efficient, scalable conversion tasks.
+This tutorial provides a comprehensive guide on using [Docling](https://docling-project.github.io/docling/) to convert PDFs into Markdown format using serverless fleets. It leverages cloud object storage for managing both the input PDFs and the resulting Markdown files. The process is streamlined using IBM’s Code Engine to build the Docling container, which is then pushed to a container registry. Users can run a serverless fleet, which autonomously spawns workers to run the Docling container for efficient, scalable conversion tasks.
+
+![](../../images/docling-highlevel-architecture.png)
 
 Key steps covered in the Tutorial:
 1. Upload the examples PDFs to COS
diff --git a/beta/serverless-fleets/tutorials/inferencing/README.md b/beta/serverless-fleets/tutorials/inferencing/README.md
index b5a4721b..948b6d0b 100644
--- a/beta/serverless-fleets/tutorials/inferencing/README.md
+++ b/beta/serverless-fleets/tutorials/inferencing/README.md
@@ -2,9 +2,13 @@
 
 This tutorial provides a comprehensive guide on using Serverless GPUs to perform batch inferencing which illustrates a generally applicable pattern where AI helps to extract information out of a set of unstructed data.
 
+![](../../images/inferencing-highlevel-architecture.png)
+
+
 The concrete example extracts temperature and duration of a set of cookbook recipes (from [recipebook](https://github.com/dpapathanasiou/recipebook)) by using a LLM. Such a cookbook recipe looks like:
 ```
 {
+  "title": "A-1 Chicken Soup",
   "directions": [
     "In a large pot over medium heat, cook chicken pieces in oil until browned on both sides. Stir in onion and cook 2 minutes more. Pour in water and chicken bouillon and bring to a boil. Reduce heat and simmer 45 minutes.",
     "Stir in celery, carrots, garlic, salt and pepper. Simmer until carrots are just tender. Remove chicken pieces and pull the meat from the bone. Stir the noodles into the pot and cook until tender, 10 minutes. Return chicken meat to pot just before serving."
@@ -24,7 +28,6 @@ The concrete example extracts temperature and duration of a set of cookbook reci
   "language": "en-US",
   "source": "allrecipes.com",
   "tags": [],
-  "title": "A-1 Chicken Soup",
   "url": "http://allrecipes.com/recipe/25651/a-1-chicken-soup/"
 }
 ```