gh: update links in GitHub templates (#19592)
Borda committed Mar 7, 2024
1 parent 0b88204 · commit 3740546
Showing 3 changed files with 9 additions and 12 deletions.
7 changes: 3 additions & 4 deletions .github/ISSUE_TEMPLATE/2_refactor.yaml
@@ -34,7 +34,6 @@ body:
- [**Metrics**](https://github.com/Lightning-AI/metrics):
  Machine learning metrics for distributed, scalable PyTorch applications.
enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- [**Flash**](https://github.com/Lightning-AI/lightning-flash):
  The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- [**Bolts**](https://github.com/Lightning-AI/lightning-bolts):
  Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- [**GPT**](https://github.com/Lightning-AI/lit-GPT):
  Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
  Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
7 changes: 3 additions & 4 deletions .github/ISSUE_TEMPLATE/3_feature_request.yaml
@@ -40,7 +40,6 @@ body:
- [**Metrics**](https://github.com/Lightning-AI/metrics):
  Machine learning metrics for distributed, scalable PyTorch applications.
enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- [**Flash**](https://github.com/Lightning-AI/lightning-flash):
  The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- [**Bolts**](https://github.com/Lightning-AI/lightning-bolts):
  Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- [**GPT**](https://github.com/Lightning-AI/lit-GPT):
  Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
  Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
7 changes: 3 additions & 4 deletions .github/ISSUE_TEMPLATE/4_documentation.yaml
@@ -23,7 +23,6 @@ body:
- [**Metrics**](https://github.com/Lightning-AI/metrics):
  Machine learning metrics for distributed, scalable PyTorch applications.
enables pure PyTorch users to scale their existing code on any kind of device while retaining full control over their own loops and optimization logic.
- [**Flash**](https://github.com/Lightning-AI/lightning-flash):
  The fastest way to get a Lightning baseline! A collection of tasks for fast prototyping, baselining, fine-tuning, and solving problems with deep learning.
- [**Bolts**](https://github.com/Lightning-AI/lightning-bolts):
  Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
- [**GPT**](https://github.com/Lightning-AI/lit-GPT):
  Hackable implementation of state-of-the-art open-source LLMs based on nanoGPT.
  Supports flash attention, 4-bit and 8-bit quantization, LoRA and LLaMA-Adapter fine-tuning, pre-training. Apache 2.0-licensed.
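For context, each of the files above is a GitHub issue form, and the edited lines live inside the markdown block that lists related ecosystem projects. The sketch below shows where such a block sits in an issue form; the form name, field labels, and the trimmed project list are illustrative assumptions, not the exact Lightning template.

```yaml
# Hypothetical, trimmed issue-form sketch: shows where the ecosystem-link
# markdown edited in the hunks above would live, not the real Lightning file.
name: Refactor
description: Suggest a cleanup or refactor of the existing code
body:
  - type: textarea
    attributes:
      label: Outline & Motivation
      description: A clear and concise description of the proposed refactor
    validations:
      required: true
  - type: markdown
    attributes:
      value: |
        **Related Lightning ecosystem projects:**
        - [**Metrics**](https://github.com/Lightning-AI/metrics):
          Machine learning metrics for distributed, scalable PyTorch applications.
        - [**Bolts**](https://github.com/Lightning-AI/lightning-bolts):
          Pretrained SOTA Deep Learning models, callbacks, and more for research and production with PyTorch Lightning and PyTorch.
```

Updating a link in such a template only touches the `value` string of the `type: markdown` block, which is why each file here shows a single small hunk under `body:`.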
