🤖
Training some models
  • KAUST (King Abdullah University of Science & Technology)
  • Saudi Arabia
  • 06:02 (UTC +03:00)
  • LinkedIn in/sulrash

Highlights

  • Pro
Pinned

  1. minLLMTrain Public

    Minimal yet high-performance code for pretraining LLMs. Attempts to implement some SOTA features. Supports training through DeepSpeed, Megatron-LM, and FSDP. WIP

    Python 5

  2. Cheatsheet Public

    An attempt to improve facial recognition performance by appending a 'cheatsheet' to each training image, containing one positive sample and multiple negatives.

    Python 5

  3. envenc Public

    Repository for the environment encoder, an attempt to improve reinforcement learning agents' generalisability by learning to act on universal multimodal embeddings generated by a vision-lang…

    Python 2

  4. Graduation-Project Public

    My final-year honours project: developing a roguelike game from scratch and implementing some RL algorithms in order to compare perfect information vs imperfect information. I…

    Python 1

  5. AnshulSood11/Engagement-Level-Prediction Public

    Engagement Intensity Prediction in Real Time

    C++ 15 9

  6. microsoft/Megatron-DeepSpeed Public

    Forked from NVIDIA/Megatron-LM

    Ongoing research training transformer language models at scale, including: BERT & GPT-2

    Python 1.7k 332