From 8ff795727435cb6155ab0a780e17ff62faa4df91 Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:07:30 +0100 Subject: [PATCH 01/10] add post --- ...2024-12-20-You-can-do-AI-with-cpp.markdown | 200 ++++++++++++++++++ 1 file changed, 200 insertions(+) create mode 100644 _posts/2024-12-20-You-can-do-AI-with-cpp.markdown diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown new file mode 100644 index 00000000..487f2523 --- /dev/null +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -0,0 +1,200 @@ +--- +layout: post +comments: false +title: "Yes, You Can Do AI with C++!" +meta_title: "C++ AI Libraries Available in Conan Center Index" +description: "Learn how to implement AI and Machine Learning in C++ using libraries like TensorFlow Lite, Dlib, and ONNX Runtime, all available in Conan Center Index. Discover why C++ is a powerful choice for AI development." +keywords: "C++, AI, Machine Learning, TensorFlow Lite, Dlib, ONNX Runtime, Conan Center Index" +--- + +When thinking about Artificial Intelligence and Machine Learning, languages like Python +often come to mind. However, **C++ is a powerful choice for developing AI and ML +applications**, especially when performance and resource efficiency are critical. At +[Conan Center](https://conan.io/center), you can find a variety of libraries that enable +AI and ML development in C++. In this post, we will briefly highlight some of the most +relevant ones available, helping you get started with AI development in C++ easily. + +### Why Use C++ for AI and Machine Learning? + +C++ offers some advantages for AI and ML development: + +- **Performance**: C++ provides high execution speed and efficient resource management, + making it ideal for computationally intensive tasks. +- **Low-Level Optimizations**: It allows precise control over memory usage, inference + processes, and hardware features like SIMD and CUDA, enabling developers to implement + custom optimizations and leverage hardware capabilities. +- **Fine-Grained Control through Compilation**: C++ allows developers to utilize multiple + compilation options and optimize libraries directly from the source code to tailor + performance for specific hardware, offering a level of fine-grained control. + +In summary, C++ can be an excellent choice for working with AI. Let's explore some of the +most representative libraries on this topic available in the Conan Center Index. + +### An Overview of Some AI and ML Libraries Available in Conan Center + +Below are some notable libraries you can easily integrate with your C++ projects through +Conan Center. These libraries range from running large language models locally to +optimizing model inference on edge devices or using specialized toolkits for tasks like +computer vision and numerical optimization. + +#### LLaMA.cpp + +**LLaMA.cpp** is a C/C++ implementation of [Meta’s LLaMA models](https://www.llama.com/) +and others, enabling local inference with minimal dependencies and high performance. It +works on CPUs and GPUs, supports diverse architectures, and accommodates a variety of text +models like LLaMA 3, Mistral, or Phi, as well as multimodal models like LLaVA 1.6. + +One of the most interesting aspects of this library is that it includes CLI tools that +allow you to run your own LLMs out of the box. 
To install the library with Conan, enabling +the examples and network options, and using a [Conan +deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the files +to the user space, you can run the following command: + +```shell +# Install llama-cpp using Conan and deploy to the local folder +$ conan install --requires=llama-cpp/b4079 --build=missing \ + -o="llama-cpp/*:with_examples=True" \ + -o="llama-cpp/*:with_curl=True" \ + --deployer=full_deploy +``` + +Running your own chatbot locally is as simple as invoking the packaged `llama-cli` +application with a model from a Hugging Face repository (in this case we will be using a +Llama 3.2 model with 1 billion parameters and 6 bit quantization from the [unsloth +repo](https://huggingface.co/unsloth)) and starting to ask questions: + +```shell +# Run llama-cli downloading a Hugging Face model +$ ./direct_deploy/llama-cpp/bin/llama-cli \ + --hf-repo unsloth/Llama-3.2-1B-Instruct-GGUF \ + --hf-file Llama-3.2-1B-Instruct-Q6_K.gguf \ + -p "What is the meaning to life and the universe?\n" +``` + +Now, let’s check out our LLM’s perspective: + +```text +What is the meaning to life and the universe? + +The meaning to life and the universe is a subject of endless +debate among philosophers, theologians, scientists, and everyday +people. But what if I told you that there is a simple +yet profound truth that can help you find meaning and purpose +in life? It's not a complex theory or a scientific formula. +It's something that can be discovered by simply observing the +world around us. + +Here's the truth: **every moment is a +new opportunity to create meaning and purpose.** +... +``` + +As you can see, in just a few minutes, we can have our own LLM running locally, all using +C++. You can also use the libraries provided by the **llama-cpp** Conan package to +integrate LLMs into your own applications. For example, here is the code for the +[llama-cli](https://github.com/ggerganov/llama.cpp/blob/b4079/examples/main/main.cpp) that +we just executed. For more information on the LLaMA.cpp project, please [check their +repository on GitHub](https://github.com/ggerganov/llama.cpp). + +#### TensorFlow Lite + +**TensorFlow Lite** is a specialized version of [TensorFlow](https://www.tensorflow.org/) +designed for deploying machine learning models on mobile, embedded systems, and other +resource-constrained devices. It’s ideal for applications that require low-latency +inference, such as edge computing or IoT devices. TensorFlow Lite focuses on optimizing +performance while minimizing power consumption. + +
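+
+To make the library available to your own project, a Conan install along these lines
+should be enough (a hedged sketch: the version shown is illustrative, check the recipe
+page for the currently available ones):
+
+```shell
+# Install tensorflow-lite from Conan Center (version shown is an example)
+$ conan install --requires=tensorflow-lite/2.12.0 --build=missing
+```
+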
+    *(figure: Pose estimation with TensorFlow Lite)*
+ +To explore TensorFlow Lite in action, we previously published a [blog +post](https://blog.conan.io/2023/05/11/tensorflow-lite-cpp-mobile-ml-guide.html) +showcasing how to build a real-time human pose detection application using TensorFlow Lite +and OpenCV. If you haven't read it yet, we recommend checking it out for a detailed +walkthrough of an exciting use case. + +One of the interesting aspects of using the library is the availability of numerous models +on platforms like [Kaggle Models](https://www.kaggle.com/models) for various tasks, which +can be easily integrated into your code. For more information on Tensorflow Lite, please +[check their documentation](https://www.tensorflow.org/lite/guide). + +#### ONNX Runtime + +**ONNX Runtime** is a high-performance inference engine designed to run models in the +[ONNX](https://onnx.ai/) format, an open standard that facilitates representing and +transferring neural network models across various AI frameworks such as PyTorch, +TensorFlow, or scikit-learn. + +Thanks to this interoperability, you can run models trained in multiple frameworks using a +single unified runtime. The general idea is: + +1. **Get a model**: Train it using your preferred framework and export or convert it to + the ONNX format. There are [tutorials](https://onnxruntime.ai/docs/tutorials/) showing + how to do this for popular frameworks and libraries. + +2. **Load and run the model with ONNX Runtime**: Check out these [C++ inference + examples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx) + to quickly get started with some code samples. + +From there, ONNX Runtime offers options to tune performance using various runtime +configurations or hardware accelerators. There are many possibilities—check [the +Performance section in the documentation](https://onnxruntime.ai/docs/performance/) for a +more in-depth look. + +ONNX Runtime’s flexibility allows you to experiment with models from diverse sources, +integrate them into your C++ applications, and scale as needed. For more details, check +out the [ONNX Runtime documentation](https://onnxruntime.ai/docs/). + +#### OpenVINO + +**OpenVINO** (Open Visual Inference and Neural Network Optimization) is an +[Intel-developed toolkit](https://docs.openvino.ai/) that accelerates deep learning +inference across a range of devices. It supports models from popular frameworks like +PyTorch, TensorFlow, and ONNX, offering tools to optimize, deploy, and scale AI +applications efficiently. + +You can check some of their [C++ +examples](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) +demonstrating tasks like model loading, inference, and performance benchmarking, to help +you get started. + +For more details, visit the [OpenVINO documentation](https://docs.openvino.ai/2024/). + +#### mlpack + +**mlpack** is a fast and flexible header-only C++ library for machine learning, designed +for both lightweight deployment and interactive prototyping via tools like C++ notebooks. +It offers a broad range of algorithms for classification, regression, clustering, and +more, along with preprocessing utilities and transformations. + +To explore [mlpack](https://www.mlpack.org/), visit the [examples +repository](https://github.com/mlpack/examples/tree/master/cpp), which showcases C++ +applications like training neural networks for digit recognition, using decision trees to +predict loan defaults, and applying clustering to find patterns in healthcare datasets. 
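+
+For a quick, hedged taste of the API, here is a minimal k-means clustering sketch
+(assuming mlpack 4's umbrella `mlpack.hpp` header; the random data and the cluster count
+are purely illustrative):
+
+```cpp
+#include <mlpack.hpp>
+
+int main()
+{
+    // mlpack and Armadillo store one data point per column.
+    // Random 2-D points stand in for a real dataset here.
+    arma::mat data(2, 300, arma::fill::randu);
+
+    // Cluster the points into 3 groups with k-means.
+    mlpack::KMeans<> kmeans;
+    arma::Row<size_t> assignments;  // cluster index assigned to each point
+    arma::mat centroids;            // one column per cluster centroid
+    kmeans.Cluster(data, 3, assignments, centroids);
+
+    centroids.print("Cluster centroids:");
+    return 0;
+}
+```
+
+The same pattern, loading data into an Armadillo matrix and calling an algorithm's
+`Train()` or `Cluster()` method, applies to many of the algorithms showcased in the
+examples repository.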
+ +For more details, visit the [mlpack documentation](https://www.mlpack.org/). + +### Dlib + +**Dlib** is a modern C++ library offering advanced machine learning algorithms and +computer vision functionalities, widely adopted in research and industry. Its +well-designed API and comprehensive documentation make it easy to integrate ML +capabilities into existing projects. + +It provides algorithms for facial detection, landmark recognition, object classification, +and tracking. Examples showcasing these algorithms can be found in [their GitHub +repository](https://github.com/davisking/dlib/tree/master/examples). For more details, +visit the [Dlib official site](http://dlib.net/). + +## Conclusion + +There is a wide variety of libraries available in C++ for working with AI. An additional +advantage is the ability to customize optimizations for different platforms, enabling +faster and more energy-efficient AI workflows. With Conan, integrating these libraries +into your projects is both straightforward and flexible. + +With C++ and these libraries, getting started with AI is easier than you think. Give them +a try and see what you can build! From 7016d16c3fce0f063f2115333ecfe53860276f30 Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:13:46 +0100 Subject: [PATCH 02/10] wip --- .../2024-12-20-You-can-do-AI-with-cpp.markdown | 16 +++++++++------- 1 file changed, 9 insertions(+), 7 deletions(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index 487f2523..c335a062 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -44,11 +44,13 @@ and others, enabling local inference with minimal dependencies and high performa works on CPUs and GPUs, supports diverse architectures, and accommodates a variety of text models like LLaMA 3, Mistral, or Phi, as well as multimodal models like LLaVA 1.6. -One of the most interesting aspects of this library is that it includes CLI tools that -allow you to run your own LLMs out of the box. To install the library with Conan, enabling -the examples and network options, and using a [Conan -deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the files -to the user space, you can run the following command: +One of the most interesting aspects of this library is that it includes some CLI tools +that will make it easy to run your own LLMs straight out of the box. To install the +library with Conan, ensure you enable building the examples and activate the network +options (which will require `libcurl`). Then, use a [Conan +deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the +installed files from the Conan cache to the user space. To do all that, just run the +following command: ```shell # Install llama-cpp using Conan and deploy to the local folder @@ -58,7 +60,7 @@ $ conan install --requires=llama-cpp/b4079 --build=missing \ --deployer=full_deploy ``` -Running your own chatbot locally is as simple as invoking the packaged `llama-cli` +You can run your chatbot locally by simply by invoking the packaged `llama-cli` application with a model from a Hugging Face repository (in this case we will be using a Llama 3.2 model with 1 billion parameters and 6 bit quantization from the [unsloth repo](https://huggingface.co/unsloth)) and starting to ask questions: @@ -110,7 +112,7 @@ performance while minimizing power consumption. 
alt="Pose estimation with TensorFlow Lite"/> -To explore TensorFlow Lite in action, we previously published a [blog +If you'd like to see TensorFlow Lite in action, we previously published a [blog post](https://blog.conan.io/2023/05/11/tensorflow-lite-cpp-mobile-ml-guide.html) showcasing how to build a real-time human pose detection application using TensorFlow Lite and OpenCV. If you haven't read it yet, we recommend checking it out for a detailed From ebaf04c838185d22d9ecbbc0bdc6a4137e9bd433 Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:15:16 +0100 Subject: [PATCH 03/10] wip --- _posts/2024-12-20-You-can-do-AI-with-cpp.markdown | 10 +++++----- 1 file changed, 5 insertions(+), 5 deletions(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index c335a062..2a4433b8 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -44,12 +44,12 @@ and others, enabling local inference with minimal dependencies and high performa works on CPUs and GPUs, supports diverse architectures, and accommodates a variety of text models like LLaMA 3, Mistral, or Phi, as well as multimodal models like LLaVA 1.6. -One of the most interesting aspects of this library is that it includes some CLI tools -that will make it easy to run your own LLMs straight out of the box. To install the -library with Conan, ensure you enable building the examples and activate the network -options (which will require `libcurl`). Then, use a [Conan +One of the most interesting aspects of this library is that it includes a collection of +CLI tools as examples, making it easy to run your own LLMs straight out of the box. To +install the library with Conan, ensure that you enable building the examples and activate +the network options (which require `libcurl`). Then, use a [Conan deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the -installed files from the Conan cache to the user space. To do all that, just run the +installed files from the Conan cache to the user space. To accomplish this, simply run the following command: ```shell From abf90400589e4b1cd2f5db69ae77e826604d89de Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:17:49 +0100 Subject: [PATCH 04/10] wip --- _posts/2024-12-20-You-can-do-AI-with-cpp.markdown | 3 --- 1 file changed, 3 deletions(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index 2a4433b8..343059ae 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -20,9 +20,6 @@ C++ offers some advantages for AI and ML development: - **Performance**: C++ provides high execution speed and efficient resource management, making it ideal for computationally intensive tasks. -- **Low-Level Optimizations**: It allows precise control over memory usage, inference - processes, and hardware features like SIMD and CUDA, enabling developers to implement - custom optimizations and leverage hardware capabilities. - **Fine-Grained Control through Compilation**: C++ allows developers to utilize multiple compilation options and optimize libraries directly from the source code to tailor performance for specific hardware, offering a level of fine-grained control. 
From de620f7d97d4ac89020ce67472791fe01557c6af Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:21:01 +0100 Subject: [PATCH 05/10] wip --- _posts/2024-12-20-You-can-do-AI-with-cpp.markdown | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index 343059ae..3cd4cfa1 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -16,16 +16,17 @@ relevant ones available, helping you get started with AI development in C++ easi ### Why Use C++ for AI and Machine Learning? -C++ offers some advantages for AI and ML development: +C++ offers several advantages for AI and ML development: - **Performance**: C++ provides high execution speed and efficient resource management, making it ideal for computationally intensive tasks. -- **Fine-Grained Control through Compilation**: C++ allows developers to utilize multiple - compilation options and optimize libraries directly from the source code to tailor - performance for specific hardware, offering a level of fine-grained control. +- **Low-Level Optimizations**: C++ enables developers to utilize multiple compilation + options and optimize libraries directly from the source code. This provides precise + control over memory usage, inference processes, and hardware features like SIMD and + CUDA, allowing custom optimizations for specific hardware capabilities. In summary, C++ can be an excellent choice for working with AI. Let's explore some of the -most representative libraries on this topic available in the Conan Center Index. +most representative AI libraries available in Conan Center Index. ### An Overview of Some AI and ML Libraries Available in Conan Center From 456706d18e8d96871e62202293f612c7aa0937b5 Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:33:46 +0100 Subject: [PATCH 06/10] wip --- ...2024-12-20-You-can-do-AI-with-cpp.markdown | 25 ++++++++++++------- 1 file changed, 16 insertions(+), 9 deletions(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index 3cd4cfa1..66fb3ac3 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -40,7 +40,9 @@ computer vision and numerical optimization. **LLaMA.cpp** is a C/C++ implementation of [Meta’s LLaMA models](https://www.llama.com/) and others, enabling local inference with minimal dependencies and high performance. It works on CPUs and GPUs, supports diverse architectures, and accommodates a variety of text -models like LLaMA 3, Mistral, or Phi, as well as multimodal models like LLaVA 1.6. +models like [LLaMA 3](https://huggingface.co/models?search=llama), +[Mistral](https://mistral.ai/), or [Phi](https://azure.microsoft.com/en-us/products/phi), +as well as multimodal models like [LLaVA](https://github.com/haotian-liu/LLaVA). One of the most interesting aspects of this library is that it includes a collection of CLI tools as examples, making it easy to run your own LLMs straight out of the box. 
To @@ -58,10 +60,12 @@ $ conan install --requires=llama-cpp/b4079 --build=missing \ --deployer=full_deploy ``` -You can run your chatbot locally by simply by invoking the packaged `llama-cli` -application with a model from a Hugging Face repository (in this case we will be using a -Llama 3.2 model with 1 billion parameters and 6 bit quantization from the [unsloth -repo](https://huggingface.co/unsloth)) and starting to ask questions: +You can run your chatbot locally by invoking the packaged `llama-cli` application with a +model from a Hugging Face repository. In this example, we will use a Llama 3.2 model with +1 billion parameters and 6-bit quantization from the [unsloth +repository](https://huggingface.co/unsloth). + +Now, simply run the following command to start asking questions: ```shell # Run llama-cli downloading a Hugging Face model @@ -71,7 +75,7 @@ $ ./direct_deploy/llama-cpp/bin/llama-cli \ -p "What is the meaning to life and the universe?\n" ``` -Now, let’s check out our LLM’s perspective: +Let’s check out our LLM’s perspective: ```text What is the meaning to life and the universe? @@ -108,13 +112,16 @@ performance while minimizing power consumption. Pose estimation with TensorFlow Lite +
+    *(figure: TensorFlow Lite in action)*
-If you'd like to see TensorFlow Lite in action, we previously published a [blog +If you'd like to learn how to use TensorFlow Lite with a neural network model in C++, we +previously published a [blog post](https://blog.conan.io/2023/05/11/tensorflow-lite-cpp-mobile-ml-guide.html) showcasing how to build a real-time human pose detection application using TensorFlow Lite -and OpenCV. If you haven't read it yet, we recommend checking it out for a detailed -walkthrough of an exciting use case. +and OpenCV. Check it out if you haven't read it yet. One of the interesting aspects of using the library is the availability of numerous models on platforms like [Kaggle Models](https://www.kaggle.com/models) for various tasks, which From 3e28689704380e76ba1b2d9acca6cf7f5c2a3998 Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:44:55 +0100 Subject: [PATCH 07/10] wip --- ...2024-12-20-You-can-do-AI-with-cpp.markdown | 90 +++++++++---------- 1 file changed, 42 insertions(+), 48 deletions(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index 66fb3ac3..07f0da0c 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -131,77 +131,71 @@ can be easily integrated into your code. For more information on Tensorflow Lite #### ONNX Runtime **ONNX Runtime** is a high-performance inference engine designed to run models in the -[ONNX](https://onnx.ai/) format, an open standard that facilitates representing and -transferring neural network models across various AI frameworks such as PyTorch, -TensorFlow, or scikit-learn. +[ONNX](https://onnx.ai/) format, an open standard for representing network models across +various AI frameworks such as PyTorch, TensorFlow, and scikit-learn. -Thanks to this interoperability, you can run models trained in multiple frameworks using a -single unified runtime. The general idea is: +Thanks to this interoperability, ONNX Runtime allows you to use models trained in +different frameworks with a single unified runtime. Here’s the general workflow: -1. **Get a model**: Train it using your preferred framework and export or convert it to - the ONNX format. There are [tutorials](https://onnxruntime.ai/docs/tutorials/) showing - how to do this for popular frameworks and libraries. +1. **Get a model**: Train a model using your preferred framework and export or convert it + to the ONNX format. There are [tutorials](https://onnxruntime.ai/docs/tutorials/) + available for popular frameworks and libraries. 2. **Load and run the model with ONNX Runtime**: Check out these [C++ inference examples](https://github.com/microsoft/onnxruntime-inference-examples/tree/main/c_cxx) - to quickly get started with some code samples. - -From there, ONNX Runtime offers options to tune performance using various runtime -configurations or hardware accelerators. There are many possibilities—check [the -Performance section in the documentation](https://onnxruntime.ai/docs/performance/) for a -more in-depth look. + to get started quickly. -ONNX Runtime’s flexibility allows you to experiment with models from diverse sources, -integrate them into your C++ applications, and scale as needed. For more details, check -out the [ONNX Runtime documentation](https://onnxruntime.ai/docs/). +Additionally, ONNX Runtime offers multiple options for tuning performance using various +runtime configurations or hardware accelerators. 
Explore [the Performance section in the +documentation](https://onnxruntime.ai/docs/performance/) for more details. For more +information, visit the [ONNX Runtime documentation](https://onnxruntime.ai/docs/). #### OpenVINO **OpenVINO** (Open Visual Inference and Neural Network Optimization) is an [Intel-developed toolkit](https://docs.openvino.ai/) that accelerates deep learning -inference across a range of devices. It supports models from popular frameworks like -PyTorch, TensorFlow, and ONNX, offering tools to optimize, deploy, and scale AI -applications efficiently. +inference on a wide range of devices. It supports models from frameworks like PyTorch, +TensorFlow, and ONNX, offering tools to optimize, deploy, and scale AI applications +efficiently. -You can check some of their [C++ -examples](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) -demonstrating tasks like model loading, inference, and performance benchmarking, to help -you get started. +The [OpenVINO C++ +examples](https://docs.openvino.ai/2024/learn-openvino/openvino-samples.html) demonstrate +tasks such as model loading, inference, and performance benchmarking. Explore these +examples to see how you can integrate OpenVINO into your projects. For more details, visit the [OpenVINO documentation](https://docs.openvino.ai/2024/). #### mlpack -**mlpack** is a fast and flexible header-only C++ library for machine learning, designed -for both lightweight deployment and interactive prototyping via tools like C++ notebooks. -It offers a broad range of algorithms for classification, regression, clustering, and -more, along with preprocessing utilities and transformations. +**mlpack** is a fast, flexible, and lightweight header-only C++ library for machine +learning. It is ideal for lightweight deployments and prototyping. It offers a broad range +of machine learning algorithms for classification, regression, clustering, and more, along +with preprocessing utilities and data transformations. -To explore [mlpack](https://www.mlpack.org/), visit the [examples -repository](https://github.com/mlpack/examples/tree/master/cpp), which showcases C++ -applications like training neural networks for digit recognition, using decision trees to -predict loan defaults, and applying clustering to find patterns in healthcare datasets. +Explore [mlpack’s examples +repository](https://github.com/mlpack/examples/tree/master/cpp), where you’ll find C++ +applications such as training neural networks for digit recognition, decision tree models +for predicting loan defaults, and clustering algorithms for identifying patterns in +healthcare data. -For more details, visit the [mlpack documentation](https://www.mlpack.org/). +For further details, visit the [mlpack documentation](https://www.mlpack.org/). -### Dlib +#### Dlib -**Dlib** is a modern C++ library offering advanced machine learning algorithms and -computer vision functionalities, widely adopted in research and industry. Its -well-designed API and comprehensive documentation make it easy to integrate ML -capabilities into existing projects. +**Dlib** is a modern C++ library widely used in research and industry for advanced machine +learning algorithms and computer vision tasks. Its comprehensive documentation and +well-designed API make it straightforward to integrate into existing projects. -It provides algorithms for facial detection, landmark recognition, object classification, -and tracking. 
Examples showcasing these algorithms can be found in [their GitHub -repository](https://github.com/davisking/dlib/tree/master/examples). For more details, -visit the [Dlib official site](http://dlib.net/). +Dlib provides a variety of algorithms, including facial detection, landmark recognition, +object classification, and tracking. Examples of these functionalities can be found in +[their GitHub repository](https://github.com/davisking/dlib/tree/master/examples). + +For more information, visit the [Dlib official site](http://dlib.net/). ## Conclusion -There is a wide variety of libraries available in C++ for working with AI. An additional -advantage is the ability to customize optimizations for different platforms, enabling -faster and more energy-efficient AI workflows. With Conan, integrating these libraries -into your projects is both straightforward and flexible. +C++ offers high-performance AI libraries and the flexibility to optimize for your +hardware. With Conan, integrating these tools is straightforward, enabling efficient, +scalable AI workflows. -With C++ and these libraries, getting started with AI is easier than you think. Give them -a try and see what you can build! +Now, give these tools a go and see your AI ideas come to life in C++! From 48c73c4209788fee2757a104e83fa8e95153c733 Mon Sep 17 00:00:00 2001 From: czoido Date: Fri, 20 Dec 2024 14:50:23 +0100 Subject: [PATCH 08/10] wip --- ...2024-12-20-You-can-do-AI-with-cpp.markdown | 20 +++++++++---------- 1 file changed, 10 insertions(+), 10 deletions(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index 07f0da0c..5824193a 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -30,10 +30,9 @@ most representative AI libraries available in Conan Center Index. ### An Overview of Some AI and ML Libraries Available in Conan Center -Below are some notable libraries you can easily integrate with your C++ projects through -Conan Center. These libraries range from running large language models locally to -optimizing model inference on edge devices or using specialized toolkits for tasks like -computer vision and numerical optimization. +Below are some notable libraries available in Conan Center Index. These libraries range +from running large language models locally to optimizing model inference on edge devices +or using specialized toolkits for tasks like computer vision and numerical optimization. #### LLaMA.cpp @@ -45,12 +44,13 @@ models like [LLaMA 3](https://huggingface.co/models?search=llama), as well as multimodal models like [LLaVA](https://github.com/haotian-liu/LLaVA). One of the most interesting aspects of this library is that it includes a collection of -CLI tools as examples, making it easy to run your own LLMs straight out of the box. To -install the library with Conan, ensure that you enable building the examples and activate -the network options (which require `libcurl`). Then, use a [Conan -deployer](https://docs.conan.io/2/reference/extensions/deployers.html) to move the -installed files from the Conan cache to the user space. To accomplish this, simply run the -following command: +CLI tools as examples, making it easy to run your own LLMs straight out of the box. + +Let's try one of those tools. First, install the library with Conan and ensure that you +enable building the examples and activate the network options (which require `libcurl`). 
+Then, use a [Conan deployer](https://docs.conan.io/2/reference/extensions/deployers.html) +to move the installed files from the Conan cache to the user space. To accomplish this, +simply run the following command: ```shell # Install llama-cpp using Conan and deploy to the local folder From 975d35482ddb1e5edc24a2505c19925be0260500 Mon Sep 17 00:00:00 2001 From: Carlos Zoido Date: Fri, 20 Dec 2024 14:55:59 +0100 Subject: [PATCH 09/10] Update _posts/2024-12-20-You-can-do-AI-with-cpp.markdown --- _posts/2024-12-20-You-can-do-AI-with-cpp.markdown | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown index 5824193a..2617da36 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown @@ -57,7 +57,7 @@ simply run the following command: $ conan install --requires=llama-cpp/b4079 --build=missing \ -o="llama-cpp/*:with_examples=True" \ -o="llama-cpp/*:with_curl=True" \ - --deployer=full_deploy + --deployer=direct_deploy ``` You can run your chatbot locally by invoking the packaged `llama-cli` application with a From ed852a59fe10143279f12eb2164b7cb46dd1dcef Mon Sep 17 00:00:00 2001 From: czoido Date: Sat, 21 Dec 2024 09:24:01 +0100 Subject: [PATCH 10/10] review and change publish date --- ...024-12-23-You-can-do-AI-with-cpp.markdown} | 36 +++++++++++++++---- 1 file changed, 30 insertions(+), 6 deletions(-) rename _posts/{2024-12-20-You-can-do-AI-with-cpp.markdown => 2024-12-23-You-can-do-AI-with-cpp.markdown} (92%) diff --git a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown b/_posts/2024-12-23-You-can-do-AI-with-cpp.markdown similarity index 92% rename from _posts/2024-12-20-You-can-do-AI-with-cpp.markdown rename to _posts/2024-12-23-You-can-do-AI-with-cpp.markdown index 5824193a..68fdb114 100644 --- a/_posts/2024-12-20-You-can-do-AI-with-cpp.markdown +++ b/_posts/2024-12-23-You-can-do-AI-with-cpp.markdown @@ -34,7 +34,7 @@ Below are some notable libraries available in Conan Center Index. These librarie from running large language models locally to optimizing model inference on edge devices or using specialized toolkits for tasks like computer vision and numerical optimization. -#### LLaMA.cpp +#### [LLaMA.cpp](https://conan.io/center/recipes/llama-cpp) **LLaMA.cpp** is a C/C++ implementation of [Meta’s LLaMA models](https://www.llama.com/) and others, enabling local inference with minimal dependencies and high performance. It @@ -100,7 +100,7 @@ integrate LLMs into your own applications. For example, here is the code for the we just executed. For more information on the LLaMA.cpp project, please [check their repository on GitHub](https://github.com/ggerganov/llama.cpp). -#### TensorFlow Lite +#### [TensorFlow Lite](https://conan.io/center/recipes/tensorflow-lite) **TensorFlow Lite** is a specialized version of [TensorFlow](https://www.tensorflow.org/) designed for deploying machine learning models on mobile, embedded systems, and other @@ -128,7 +128,7 @@ on platforms like [Kaggle Models](https://www.kaggle.com/models) for various tas can be easily integrated into your code. For more information on Tensorflow Lite, please [check their documentation](https://www.tensorflow.org/lite/guide). 
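+
+If you want a quick feel for what the C++ API looks like, here is a minimal inference
+sketch (hedged: the model path is a placeholder, and a model with a single float32 input
+and output tensor is assumed):
+
+```cpp
+#include <cstdio>
+#include <memory>
+
+#include "tensorflow/lite/interpreter.h"
+#include "tensorflow/lite/kernels/register.h"
+#include "tensorflow/lite/model.h"
+
+int main()
+{
+    // Load a .tflite model from disk ("model.tflite" is a placeholder path).
+    auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
+    if (!model) { std::fprintf(stderr, "Failed to load model\n"); return 1; }
+
+    // Build an interpreter with the built-in op resolver and allocate tensors.
+    tflite::ops::builtin::BuiltinOpResolver resolver;
+    std::unique_ptr<tflite::Interpreter> interpreter;
+    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
+    interpreter->AllocateTensors();
+
+    // Fill the first input tensor (a real application would query its shape and type).
+    float* input = interpreter->typed_input_tensor<float>(0);
+    input[0] = 0.5f;  // illustrative value
+
+    // Run inference and read the first output value.
+    if (interpreter->Invoke() != kTfLiteOk) { std::fprintf(stderr, "Invoke failed\n"); return 1; }
+    std::printf("output[0] = %f\n", interpreter->typed_output_tensor<float>(0)[0]);
+    return 0;
+}
+```
+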
-#### ONNX Runtime +#### [ONNX Runtime](https://conan.io/center/recipes/onnxruntime) **ONNX Runtime** is a high-performance inference engine designed to run models in the [ONNX](https://onnx.ai/) format, an open standard for representing network models across @@ -150,7 +150,13 @@ runtime configurations or hardware accelerators. Explore [the Performance sectio documentation](https://onnxruntime.ai/docs/performance/) for more details. For more information, visit the [ONNX Runtime documentation](https://onnxruntime.ai/docs/). -#### OpenVINO +Check all available versions in the Conan Center Index by running: + +```shell +conan search onnxruntime +``` + +#### [OpenVINO](https://conan.io/center/recipes/openvino) **OpenVINO** (Open Visual Inference and Neural Network Optimization) is an [Intel-developed toolkit](https://docs.openvino.ai/) that accelerates deep learning @@ -165,7 +171,13 @@ examples to see how you can integrate OpenVINO into your projects. For more details, visit the [OpenVINO documentation](https://docs.openvino.ai/2024/). -#### mlpack +Check all available versions in the Conan Center Index by running: + +```shell +conan search openvino +``` + +#### [mlpack](https://conan.io/center/recipes/mlpack) **mlpack** is a fast, flexible, and lightweight header-only C++ library for machine learning. It is ideal for lightweight deployments and prototyping. It offers a broad range @@ -180,7 +192,13 @@ healthcare data. For further details, visit the [mlpack documentation](https://www.mlpack.org/). -#### Dlib +Check all available versions in the Conan Center Index by running: + +```shell +conan search mlpack +``` + +#### [Dlib](https://conan.io/center/recipes/dlib) **Dlib** is a modern C++ library widely used in research and industry for advanced machine learning algorithms and computer vision tasks. Its comprehensive documentation and @@ -192,6 +210,12 @@ object classification, and tracking. Examples of these functionalities can be fo For more information, visit the [Dlib official site](http://dlib.net/). +Check all available versions in the Conan Center Index by running: + +```shell +conan search dlib +``` + ## Conclusion C++ offers high-performance AI libraries and the flexibility to optimize for your