MLX-VLM is a package for running Vision LLMs locally on your Mac using MLX. (Python, updated Jun 11, 2024)
A simple web UI/frontend for MLX's mlx-lm, built with Streamlit.
SHARK - High Performance Machine Learning Distribution
SiLLM simplifies the process of training and running Large Language Models (LLMs) on Apple Silicon by leveraging the MLX framework.
Real-time object detection and counting with YOLOv3. Includes a user-friendly GUI for selecting image and video inputs.
Examples of using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon.
Benchmark of Apple MLX operations on all Apple Silicon chips (GPU, CPU) + MPS and CUDA.
An implementation of the DDAMFN paper (2023) in PyTorch, trained on the FER+ and CK+ datasets using the MPS device on Apple Silicon.
Using the GPU on Apple Silicon (TensorFlow, PyTorch).
The small distributed language model toolkit: fine-tune state-of-the-art LLMs anywhere, rapidly.
Finetune llama2-70b and codellama on MacBook Air without quantization
Explore machine learning techniques with Gradio interfaces for Stable Diffusion image generation and LoRA text generation with the Apple MLX framework.
Script to perform some hashcracking logic automagically
OCR tool that extracts hard-burned subtitles to SRT files.
Black-box tool that uses Deep Reinforcement Learning to test and explore Android applications
Lightweight Hashcat automation with base dictionaries.
Rasa on ARM-based Macs (Native/Docker)
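Several of the projects above gate their GPU code paths (MPS, MLX) on whether the host is actually an Apple Silicon Mac. A minimal sketch of such a check using only the Python standard library (the helper name is illustrative, not taken from any of the listed repositories):

```python
import platform

def is_apple_silicon() -> bool:
    """Return True when running natively on an Apple Silicon (arm64) Mac.

    Note: under Rosetta 2 emulation, platform.machine() reports "x86_64",
    so this check also distinguishes native from emulated processes.
    """
    return platform.system() == "Darwin" and platform.machine() == "arm64"

print(is_apple_silicon())
```

A framework would typically use a check like this to decide whether to select the `mps` device (PyTorch) or fall back to CPU.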