The most capable model currently available that runs on a single GPU.
5.6M Pulls · 21 Tags · Updated 1 month ago
Meta's latest collection of multimodal models.
393.5K Pulls · 11 Tags · Updated 1 week ago
Qwen's flagship vision-language model, and a significant leap over the previous Qwen2-VL.
269.5K Pulls · 17 Tags · Updated 2 weeks ago
🌋 LLaVA is a novel end-to-end trained large multimodal model that combines a vision encoder and Vicuna for general-purpose visual and language understanding. Updated to version 1.6.
6.4M Pulls · 98 Tags · Updated 1 year ago
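These listings appear to come from the Ollama model registry (hence the Pulls and Tags counts). As a minimal sketch of how a vision model such as LLaVA is queried there with the official ollama Python client; the model tag, prompt, and image path are illustrative assumptions:

```python
# Minimal sketch: asking a vision model about a local image through the
# official `ollama` Python client (pip install ollama). Assumes a local
# Ollama server is running and the model has already been pulled.
import ollama

response = ollama.chat(
    model="llava",  # assumed registry tag for the LLaVA model above
    messages=[
        {
            "role": "user",
            "content": "Describe this image in one sentence.",
            "images": ["./example.png"],  # hypothetical local image path
        }
    ],
)
print(response["message"]["content"])
```

The same call shape works for any multimodal model listed here; only the model tag changes.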
Llama 3.2 Vision is a collection of instruction-tuned generative models for image reasoning, available in 11B and 90B sizes.
2.2M Pulls · 9 Tags · Updated 2 weeks ago
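Since the 11B and 90B variants live under separate tags, choosing between them is a matter of pulling the right one. A sketch, assuming Ollama's usual model:size tag convention:

```python
# Sketch: pulling a specific size variant by tag. The tag names are
# assumptions based on the size list in the description above.
import ollama

ollama.pull("llama3.2-vision:11b")    # smaller variant
# ollama.pull("llama3.2-vision:90b")  # larger variant; needs far more memory
```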
A series of multimodal LLMs (MLLMs) designed for vision-language understanding.
1.8M Pulls · 17 Tags · Updated 6 months ago
A LLaVA model fine-tuned from Llama 3 Instruct, with improved scores on several benchmarks.
980.7K Pulls · 4 Tags · Updated 1 year ago
moondream2 is a small vision-language model designed to run efficiently on edge devices.
201.5K Pulls · 18 Tags · Updated 1 year ago
Building on Mistral Small 3, Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and extends long-context capabilities to 128k tokens without compromising text performance.
128.6K Pulls · 5 Tags · Updated 2 months ago
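Note that a long window like this is typically opt-in: Ollama-style servers default to a much shorter context, so it has to be requested per call. A sketch, with the model tag assumed:

```python
# Sketch: requesting the full 128k-token context window described above.
# The default num_ctx is much smaller, so long inputs get truncated unless
# the window is raised explicitly via options.
import ollama

with open("report.txt") as f:  # hypothetical long document
    long_document = f.read()

response = ollama.chat(
    model="mistral-small3.1",  # assumed registry tag
    messages=[{"role": "user", "content": long_document + "\n\nSummarize this."}],
    options={"num_ctx": 131072},  # 128k tokens, per the description
)
print(response["message"]["content"])
```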
BakLLaVA is a multimodal model consisting of the Mistral 7B base model augmented with the LLaVA architecture.
119.6K Pulls · 17 Tags · Updated 1 year ago
A small LLaVA model fine-tuned from Phi 3 Mini.
96.2K Pulls · 4 Tags · Updated 1 year ago
A compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more.
95.2K Pulls · 5 Tags · Updated 3 months ago
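For document-understanding workloads like this, the same kind of call also works over the plain HTTP API rather than the Python client. A sketch using Ollama's /api/generate endpoint, with the model tag and input image as assumptions:

```python
# Sketch: extracting tabular content from a document image over the HTTP API.
# Images are passed as base64-encoded strings in the request body.
import base64
import requests

with open("chart.png", "rb") as f:  # hypothetical document image
    image_b64 = base64.b64encode(f.read()).decode("ascii")

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "granite3.2-vision",  # assumed tag for the model above
        "prompt": "Extract the data in this chart as CSV.",
        "images": [image_b64],
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```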