HROlive/Poland-End-To-End-LLM-Bootcamp

Table of Contents

  1. Description
  2. Information
  3. Tools and Frameworks
  4. Certificate
  5. Attribution
  6. Licensing

Description

Together with NVIDIA and the OpenACC organization, the Academic Computer Centre Cyfronet AGH, PLGrid Infrastructure, and the Polish National Competence HPC Centre will host an online bootcamp starting March 5 and concluding March 7, 2024.

This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It also includes hands-on exercises complemented by tutorials, code snippets, and presentations to help researchers kick-start their work with the NeMo LLM Service and NeMo Guardrails.

The End-to-End LLM (Large Language Model) Bootcamp is designed from a real-world perspective and follows the data processing, development, and deployment pipeline paradigm. Attendees walk through the workflow of preprocessing the SQuAD (Stanford Question Answering Dataset) dataset for a question-answering task, training a BERT (Bidirectional Encoder Representations from Transformers) model on it, and executing a prompt-learning strategy using NVIDIA® NeMo™ and a transformer-based language model, NVIDIA Megatron. Attendees will also learn to optimize an LLM using NVIDIA TensorRT™, an SDK for high-performance deep learning inference, to guardrail prompts and responses from the LLM using NeMo Guardrails, and to deploy the AI pipeline using NVIDIA Triton™ Inference Server, open-source software that standardizes AI model deployment and execution across every workload.
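
As a rough illustration of the first step in that pipeline, the sketch below tokenizes SQuAD question/context pairs for extractive question answering with a BERT tokenizer. It assumes the Hugging Face `datasets` and `transformers` packages purely for brevity; the lab notebooks themselves use the NeMo-based tooling described above.

```python
# A minimal sketch of SQuAD preprocessing for extractive question answering.
# Assumes the Hugging Face `datasets` and `transformers` packages; the lab
# notebooks rely on NeMo's own data pipelines instead.
from datasets import load_dataset
from transformers import AutoTokenizer

squad = load_dataset("squad")  # Stanford Question Answering Dataset
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def preprocess(batch):
    # Tokenize question/context pairs; truncate only the context so the
    # question is never cut off when a passage is too long.
    return tokenizer(
        batch["question"],
        batch["context"],
        truncation="only_second",
        max_length=384,
        padding="max_length",
    )

tokenized = squad["train"].map(preprocess, batched=True)
print(tokenized[0]["input_ids"][:10])
```

Each tokenized example then carries the input IDs and attention masks that a BERT question-answering model consumes during training.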

Information

This content contains three labs, plus an introductory notebook and two lab activity notebooks:

  • Overview of End-To-End LLM bootcamp
  • Lab 1: Megatron-GPT
  • Lab 2: TensorRT-LLM and Triton Deployment with the Llama-2-7B Model (a minimal Triton client sketch follows this list)
  • Lab 3: NeMo Guardrails (a Python API sketch appears at the end of this section)
  • Lab Activity 1: Question Answering task
  • Lab Activity 2: P-tuning/Prompt tuning task
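
For Lab 2, once a TensorRT-LLM engine is being served by Triton, it can be queried over HTTP. The sketch below is a minimal client; it assumes the `tritonclient[http]` package, a server on localhost:8000, and the model and tensor names ("ensemble", "text_input", "max_tokens", "text_output") commonly used by the TensorRT-LLM backend, so the names may differ from the lab's model repository.

```python
# A minimal sketch of querying a Triton Inference Server over HTTP.
# Assumes `tritonclient[http]` is installed, the server listens on
# localhost:8000, and the model repository exposes an "ensemble" model with
# "text_input"/"max_tokens" inputs and a "text_output" output (a common
# TensorRT-LLM backend layout); adjust the names to match the actual setup.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

prompt = np.array([[b"What is the Stanford Question Answering Dataset?"]], dtype=object)
text_input = httpclient.InferInput("text_input", prompt.shape, "BYTES")
text_input.set_data_from_numpy(prompt)

max_tokens = np.array([[64]], dtype=np.int32)
max_tokens_input = httpclient.InferInput("max_tokens", max_tokens.shape, "INT32")
max_tokens_input.set_data_from_numpy(max_tokens)

result = client.infer(model_name="ensemble", inputs=[text_input, max_tokens_input])
print(result.as_numpy("text_output"))
```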

More information can be found on the bootcamp website.
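
Similarly, for Lab 3, the sketch below shows roughly how a model is wrapped with NeMo Guardrails from Python. It assumes the `nemoguardrails` package and a rails configuration directory (`./config`, containing `config.yml` and Colang flow definitions) prepared as in the lab; the directory name and its contents are placeholders.

```python
# A minimal sketch of wrapping an LLM with NeMo Guardrails.
# Assumes the `nemoguardrails` package and a rails configuration directory
# "./config" (config.yml plus Colang flows); both are placeholders here.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # load the guardrail definitions
rails = LLMRails(config)

# The rails check the user prompt and the model's response against the
# configured flows before anything is returned to the caller.
response = rails.generate(messages=[
    {"role": "user", "content": "What topics does this bootcamp cover?"}
])
print(response["content"])
```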

Tools and Frameworks

The tools and frameworks used in the Bootcamp material are as follows:

  • NVIDIA NeMo
  • NeMo Guardrails
  • NVIDIA TensorRT-LLM
  • NVIDIA Triton Inference Server

Certificate

The certificate for the workshop can be found below:

"Poland End-To-End LLM Bootcamp" - Academic Computer Centre Cyfronet AGH, PLGrid Infrastructure and Polish National Competence HPC Centre (Issued On: March 2024)

Attribution

This material is an adaptation of the original repository from the OpenHackathons GitHub.

Licensing

Copyright © 2023 OpenACC-Standard.org. This material is released by OpenACC-Standard.org, in collaboration with NVIDIA Corporation, under the Creative Commons Attribution 4.0 International license (CC BY 4.0). These materials may include references to hardware and software developed by other entities; all applicable licensing and copyrights apply.
