
Html-code-generation-from-LLMs

Objective:

Fine-tune Falcon 7B for the task of HTML code generation. The model was selected based on its performance on complex reasoning benchmarks such as ARC and GSM8K and its compatibility with the available computational resources.

Dataset:

Used the https://huggingface.co/datasets/ttbui/html_alpaca dataset (636 rows). Each row contains the following fields (a loading sketch follows the list):

  1. Instruction - the user prompt (text)
  2. Input - additional context required by the instruction, such as HTML code or data points (text + code)
  3. Response - empty
  4. Output - the expected HTML code
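A minimal sketch of how the dataset could be loaded and turned into Alpaca-style training prompts with the datasets library. The lowercase column names, the train split, and the prompt template are assumptions and may differ from the exact preprocessing used in this repo.

```python
# Sketch: load ttbui/html_alpaca and build a single "text" field per example.
# Column names ("instruction", "input", "output") and the template are assumed.
from datasets import load_dataset

dataset = load_dataset("ttbui/html_alpaca", split="train")  # 636 rows

def build_prompt(example):
    # Fold instruction, optional input, and the expected HTML into one prompt string.
    if example.get("input"):
        return {"text": (
            f"### Instruction:\n{example['instruction']}\n\n"
            f"### Input:\n{example['input']}\n\n"
            f"### Response:\n{example['output']}"
        )}
    return {"text": (
        f"### Instruction:\n{example['instruction']}\n\n"
        f"### Response:\n{example['output']}"
    )}

dataset = dataset.map(build_prompt)
```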

Process

  1. Model selection
  2. Dataset Preparation and Preprocessing
  3. Model fine-tuning script (setting hyperparameters, choosing fine-tuning techniques, and regularization)
  4. Model Evaluation
  5. API development to serve the model (a serving sketch follows this list)
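A minimal serving sketch with FastAPI. The endpoint name, generation settings, and the way the model is loaded are assumptions, not the repo's actual API; if the Hugging Face repo only contains a LoRA adapter, it would need to be attached to the base model as shown in the later sketches.

```python
# Sketch: FastAPI endpoint that generates HTML from a prompt.
# Model loading details and generation parameters are assumed.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "PrincySinghal991/falcon-7b-sharded-bf16-finetuned-html-code-generation"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto", trust_remote_code=True)

app = FastAPI()

class GenerateRequest(BaseModel):
    prompt: str
    max_new_tokens: int = 256

@app.post("/generate")
def generate(req: GenerateRequest):
    inputs = tokenizer(req.prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=req.max_new_tokens)
    return {"html": tokenizer.decode(output_ids[0], skip_special_tokens=True)}
```

Run locally with, for example, `uvicorn app:app --port 8000` and POST a JSON body containing `prompt`.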

Challenges and Errors encountered with resolutions

  1. Understanding and implementing Parameter-Efficient Fine-Tuning (PEFT).
  2. Managing the computational complexity and memory limitations of large models.
  3. Ensuring reproducibility and consistency across training runs.
  4. Dealing with long training times and optimizing model runtime.
  5. Completing training and evaluation without buying Colab Pro: an out-of-RAM error during training was resolved by loading the saved fine-tuned adapter directly instead of loading the base model from scratch (see the sketch after this list).
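A sketch of the workaround mentioned in point 5: resume from the saved LoRA adapter on top of a quantized base instead of rebuilding the full fine-tuned model in memory. The base checkpoint name, adapter path, and 4-bit settings are assumptions.

```python
# Sketch: attach the saved LoRA adapter to a 4-bit quantized base model.
# BASE_MODEL and ADAPTER_DIR are assumed/hypothetical paths.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

BASE_MODEL = "ybelkada/falcon-7b-sharded-bf16"   # assumed sharded bf16 Falcon 7B base
ADAPTER_DIR = "./falcon-7b-html-adapter"         # hypothetical local adapter checkpoint

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = PeftModel.from_pretrained(base, ADAPTER_DIR)  # fine-tuned LoRA weights
model.eval()
```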

Solutions Implemented:

  1. Adopting PEFT techniques such as LoRA (a configuration sketch follows this list).
  2. Utilizing quantization and model sharding to manage memory usage.
  3. Setting a random seed for train-test splitting to ensure reproducibility.
  4. Implementing mixed-precision training, early stopping, and learning rate scheduling to improve convergence speed and stay within GPU memory limits.
  5. Applying regularization through LoRA dropout and the LoRA scaling factor.
  6. Training arguments were carefully set up to balance performance and resource usage.
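A sketch of how the PEFT/LoRA, quantization, and regularization pieces above typically fit together with peft and bitsandbytes. The rank, scaling factor (lora_alpha), dropout value, target modules, and base checkpoint are assumptions, not the exact configuration used here.

```python
# Sketch: 4-bit quantized Falcon 7B base with a LoRA adapter for training.
# All numeric values and the base checkpoint are assumed.
import torch
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",   # assumed base checkpoint
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

lora_config = LoraConfig(
    r=16,                                # adapter rank (assumed)
    lora_alpha=32,                       # scaling factor (assumed)
    lora_dropout=0.05,                   # dropout regularization (assumed)
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # Falcon attention projection layers
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()       # only the LoRA parameters are trainable
```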

Hyperparameters used during training (each can be tweaked); a TrainingArguments sketch follows the list:

  1. learning_rate: 0.0002
  2. train_batch_size: 2 (inferred from total_train_batch_size = 4 with 2 accumulation steps)
  3. eval_batch_size: 8
  4. seed: 42
  5. gradient_accumulation_steps: 2
  6. total_train_batch_size: 4
  7. optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
  8. lr_scheduler_type: cosine
  9. lr_scheduler_warmup_ratio: 0.03
  10. training_steps: 320
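The values above map onto transformers TrainingArguments roughly as follows (a sketch; the output directory and the mixed-precision flag are assumptions):

```python
# Sketch: TrainingArguments matching the hyperparameter list above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./falcon-7b-html",      # assumed output path
    learning_rate=2e-4,
    per_device_train_batch_size=2,      # x2 gradient accumulation -> total batch size 4
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    optim="adamw_torch",                # Adam/AdamW, betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    max_steps=320,
    fp16=True,                          # mixed-precision training (assumed)
)
```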

Training Results

Model link: https://huggingface.co/PrincySinghal991/falcon-7b-sharded-bf16-finetuned-html-code-generation

Evaluation Results

  1. BLEU score: 0.01782 (a sketch of the BLEU computation follows)
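The README does not include the exact evaluation script; below is a minimal sketch of how a BLEU score could be computed over generated HTML against the dataset's reference outputs using the evaluate library. The example strings are illustrative only.

```python
# Sketch: corpus BLEU between model generations and reference HTML.
import evaluate

bleu = evaluate.load("bleu")

predictions = ["<html><body><h1>Hello</h1></body></html>"]          # model outputs (illustrative)
references = [["<html><body><h1>Hello World</h1></body></html>"]]   # gold outputs (illustrative)

result = bleu.compute(predictions=predictions, references=references)
print(result["bleu"])  # corpus-level BLEU in [0, 1]
```

A BLEU of 0.018 indicates very little n-gram overlap with the references, which motivates the explorations listed below.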

Ongoing explorations:

  1. LLMs more suited to code generation
  2. Hyperparameter tuning to improve the low evaluation score
