We introduce phi-3-mini, a 3.8 billion parameter language model trained on 3.3 trillion tokens, whose overall performance, as measured by both academic benchmarks and internal testing, rivals that of models such as Mixtral 8x7B and GPT-3.5 (e.g., phi-3-mini achieves 69% on MMLU and 8.38 on MT-bench), despite being small enough to be deployed on a phone. The innovation lies entirely in our dataset for training, a scaled-up version of the one used for phi-2, composed of heavily filtered web data and synthetic data. The model is also further aligned for robustness, safety, and chat format. We also provide some initial parameter-scaling results with 7B and 14B models trained for 4.8T tokens, called phi-3-small and phi-3-medium, both significantly more capable than phi-3-mini (e.g., respectively 75% and 78% on MMLU, and 8.7 and 8.9 on MT-bench).
AkihikoWatanabe changed the title to "Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone, Marah Abdin+, N/A, arXiv'24" on Apr 23, 2024
URL
Affiliations
Abstract
Translation (by gpt-3.5-turbo)
Summary (by gpt-3.5-turbo)