
Hi there, I'm Chris 👋

I'm currently the Lead ML Engineer for Drug Discovery at Deloitte, where I develop, orchestrate, and deploy deep learning models to accelerate pharmaceutical research and development.

In my free time, I build tools powered by large language models (LLMs), develop open-source LLMs and datasets, and contribute to LLM research projects.

💻 GitHub Projects

I've worked on several LLM-focused projects featured on GitHub:

  • QLoRA for Masked Language Modeling - Updated QLoRA for use with the masked language modeling objective, enabling efficient finetuning of BERT-family models
  • Multi-GPU QLoRA - Updated QLoRA to allow for distributed data parallel finetuning, significantly accelerating finetuning workloads
  • Athena.ai (work in progress) - Leveraged GPT-4 and ChromaDB to create a personal knowledge management and chat tool
  • LLaMA Thought Cloning (work in progress) - A LLaMA-based reproduction of "Thought Cloning: Learning to Think while Acting by Imitating Human Thinking", demonstrating that a single open-source LLM can serve as both a world model and a reinforcement learning agent
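To illustrate the masked language modeling objective that the QLoRA-for-MLM project targets, here is a minimal sketch of BERT-style input masking in plain Python. The token ids and `MASK_ID` are a hypothetical toy setup, and the real BERT recipe additionally splits masked positions 80/10/10 between `[MASK]`, random tokens, and the original token; this sketch keeps only the core idea.

```python
import random

MASK_ID = 103        # [MASK] token id (hypothetical toy vocabulary)
IGNORE_INDEX = -100  # label value ignored by the cross-entropy loss

def mask_tokens(input_ids, mask_prob=0.15, seed=0):
    """Return (masked_inputs, labels) following a simplified MLM objective:
    each position is masked with probability mask_prob; masked positions keep
    their original id as the label, while all other labels are IGNORE_INDEX
    so the loss is only computed on masked tokens."""
    rng = random.Random(seed)
    masked = list(input_ids)
    labels = [IGNORE_INDEX] * len(input_ids)
    for i, tok in enumerate(input_ids):
        if rng.random() < mask_prob:
            labels[i] = tok      # the model must recover the original token
            masked[i] = MASK_ID  # the input sees [MASK] instead
    return masked, labels
```

Finetuning then minimizes cross-entropy only at the masked positions, which is what distinguishes the MLM objective from the next-token objective that the original QLoRA codebase assumes.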

🤗 HuggingFace Projects

I have also open-sourced some of my LLM models and data on HuggingFace:

📈 Datasets

  • ChrisHayduk/Llama-2-SQL-and-Code-Dataset - A curated SQL-focused code instruction set for LLaMA 2. The eval set includes dummy tables so that a trained model can be evaluated on SQL execution accuracy rather than token prediction accuracy. The data went through several processing steps, including curriculum-learning ordering, table-input fixes, and instruction filtering.
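The dummy-table evaluation idea above can be sketched with Python's built-in sqlite3: execute the gold and predicted queries against a throwaway in-memory database and compare result sets. The `SETUP` schema below is a hypothetical stand-in for the dataset's actual dummy tables.

```python
import sqlite3

def execution_match(setup_sql, gold_sql, pred_sql):
    """Score a predicted SQL query by execution accuracy: build the dummy
    tables, run both queries, and compare result sets. Rows are compared as
    sets so ordering differences don't matter; invalid SQL counts as a miss."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(setup_sql)  # create and populate the dummy tables
        gold = set(conn.execute(gold_sql).fetchall())
        try:
            pred = set(conn.execute(pred_sql).fetchall())
        except sqlite3.Error:
            return False
        return gold == pred
    finally:
        conn.close()

# Hypothetical dummy table in the spirit of the eval set:
SETUP = """
CREATE TABLE employees (id INTEGER, name TEXT, salary INTEGER);
INSERT INTO employees VALUES (1, 'Ana', 90), (2, 'Bo', 50);
"""
```

For example, `execution_match(SETUP, "SELECT name FROM employees WHERE salary > 60", "SELECT name FROM employees WHERE salary >= 90")` accepts a syntactically different prediction because both queries return the same rows, which is exactly what token-level comparison would miss.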

🚀 Models

📫 How to Reach Me
