Phi‑3 is a family of open-source "small language models" (SLMs) developed by Microsoft Research, offering advanced capabilities in language understanding, reasoning, coding, and math while remaining compact (3.8B to 14B parameters). The flagship Phi‑3‑mini (3.8B) delivers benchmark performance on par with models twice its size, rivaling larger models such as GPT‑3.5 and Mixtral 8x7B. Its instruction tuning and long context windows (4K or industry‑leading 128K tokens) suit chat-style, context‑rich applications; the models are available through Azure AI Studio, Hugging Face, and Ollama, and are optimized with ONNX Runtime and DirectML for cross-platform use, including local deployment.

The Phi‑3 lineup also includes Phi‑3‑small (7B) and Phi‑3‑medium (14B), which offer increasing reasoning and math performance (MMLU scores of roughly 75% and 78%, respectively) while retaining cost efficiency. The multimodal Phi‑3‑vision (4.2B) integrates language and image processing (OCR, chart and diagram understanding), supports a 128K context, and excels at visual reasoning with a compact footprint. The models are built on a rigorously curated mix of web and synthetic datasets and refined through alignment, RLHF, red‑teaming, and safety practices consistent with Microsoft's Responsible AI standards.
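Because the instruction-tuned Phi‑3 models expect a specific chat template, a prompt built by hand must wrap each turn in the model's special tokens. The sketch below assumes the published Phi‑3‑mini template (`<|user|>`, `<|assistant|>`, `<|end|>`); in practice, `tokenizer.apply_chat_template` from Hugging Face `transformers` handles this for you, so treat this as an illustration of the format rather than a recommended code path.

```python
# Hypothetical helper illustrating the Phi-3-mini chat prompt format.
# Token names (<|user|>, <|assistant|>, <|end|>) follow the published
# Phi-3-mini-instruct template; verify against the model card you deploy.
def build_phi3_prompt(turns):
    """Render (role, text) turns into a single Phi-3 chat prompt string."""
    parts = []
    for role, text in turns:
        parts.append(f"<|{role}|>\n{text}<|end|>\n")
    parts.append("<|assistant|>\n")  # generation cue: model continues here
    return "".join(parts)

prompt = build_phi3_prompt([
    ("system", "You are a helpful assistant."),
    ("user", "Summarize the Phi-3 model family in one sentence."),
])
print(prompt)
```

The same structure applies to multi-turn conversations: append each prior assistant reply as an `("assistant", ...)` turn before the new user message.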
Phi-3
State‑of‑the‑art small language and multimodal AI models—lightweight, efficient, deployable.