
Georgi Gerganov

0 followers · 11 posts · 1 in the last 7 days

Bio

24th at the Electrica puzzle challenge | https://t.co/baTQS2bdia

Platforms

𝕏: Georgi Gerganov

Content history

Georgi Gerganov · 𝕏 · 7 days ago

RT Jeff Geerling: Just tried out the new built-in WebUI feature of llama.cpp and it couldn't be easier. Just start llama-server with a host and port, and voila!

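The post above says the new WebUI appears as soon as llama-server is started with a host and port. A minimal sketch, assuming a local GGUF model file at a hypothetical path; `-m`, `--host`, and `--port` are standard llama-server options:

```shell
# Start llama-server and serve the built-in WebUI.
# ./models/model.gguf is a hypothetical local GGUF file.
llama-server -m ./models/model.gguf --host 127.0.0.1 --port 8080
# Then open http://127.0.0.1:8080 in a browser to use the WebUI.
```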
Georgi Gerganov · 𝕏 · 8 days ago

RT Georgi Gerganov: Initial M5 Neural Accelerators support in llama.cpp. Enjoy faster TTFT in all ggml-based software (requires macOS Tahoe 26): https://github.com/ggml-org/llama.cpp/pull/16634

Georgi Gerganov · 𝕏 · 8 days ago

Initial M5 Neural Accelerators support in llama.cpp. Enjoy faster TTFT in all ggml-based software (requires macOS Tahoe 26): https://github.com/ggml-org/llama.cpp/pull/16634

Georgi Gerganov · 𝕏 · 8 days ago

RT Emanuil Rusev: Re @fishright @ggerganov Just pushed a fix for this — this is what first launch is going to look like in the next version.

Georgi Gerganov · 𝕏 · 9 days ago

LlamaBarn v0.10.0 (beta) is out - feedback appreciated

Georgi Gerganov · 𝕏 · 10 days ago

RT clem 🤗: When you run AI on your device, it is more efficient, less big brother, and free! So it's very cool to see the new llama.cpp UI, a ChatGPT-like app that fully runs on your laptop without needing wifi or sending any data to any external API. It supports:
- 150,000+ GGUF models
- Drop in PDFs, images, or text documents
- Branch and edit conversations anytime
- Parallel chats and image processing
- Math and code rendering
- Constrained generation with JSON schema support
Well done @ggerganov and team!

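One feature listed above is constrained generation with a JSON schema. A hedged sketch of what that looks like against a locally running llama-server, using its OpenAI-compatible chat endpoint; the schema, prompt, and exact `response_format` shape here are illustrative, so check the llama.cpp server documentation for the form your build accepts:

```shell
# Ask a local llama-server (assumed already running on port 8080)
# to constrain its output to a JSON schema.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Name one llama.cpp backend."}],
    "response_format": {
      "type": "json_object",
      "schema": {
        "type": "object",
        "properties": {"backend": {"type": "string"}},
        "required": ["backend"]
      }
    }
  }'
```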
Georgi Gerganov · 𝕏 · 10 days ago

RT Georgi Gerganov: A detailed look into the new WebUI of llama.cpp

Georgi Gerganov · 𝕏 · 13 days ago

RT yags: llama.cpp developers and community came together in a really impressive way to implement Qwen3-VL models. Check out the PRs, it’s so cool to see the collaboration that went into getting this done. Standard formats like GGUF, combined with mainline llama.cpp support, ensure the models you download will work anywhere you choose to run them. This protects you from getting unwittingly locked into niche providers’ custom implementations that won’t run outside their platforms.
Quoting Qwen: 🎉 Qwen3-VL is now available on llama.cpp! Run this powerful vision-language model directly on your personal devices—fully supported on CPU, CUDA, Metal, Vulkan, and other backends. We’ve also released GGUF weights for all variants—from 2B up to 235B. Download and enjoy! 🚀 🤗 Link: https://x.com/Alibaba_Qwen/status/1984634293004747252

Georgi Gerganov · 𝕏 · 13 days ago

RT Qwen: 🎉 Qwen3-VL is now available on llama.cpp! Run this powerful vision-language model directly on your personal devices—fully supported on CPU, CUDA, Metal, Vulkan, and other backends. We’ve also released GGUF weights for all variants—from 2B up to 235B. Download and enjoy! 🚀
🤗 Hugging Face: https://huggingface.co/collections/Qwen/qwen3-vl
🤖 ModelScope: https://modelscope.cn/collections/Qwen3-VL-5c7a94c8cb144b
📌 PR: https://github.com/ggerganov/llama.cpp/pull/16780

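The announcement above points to released GGUF weights for the Qwen3-VL variants. A hedged sketch of running one with llama.cpp; the repo name below is illustrative (pick an actual GGUF repo from the linked collection), and `-hf` — fetching a model directly from a Hugging Face repo — is available in recent llama.cpp builds:

```shell
# Serve a Qwen3-VL GGUF pulled straight from Hugging Face.
# Repo name is illustrative; substitute a real GGUF repo from the collection.
llama-server -hf Qwen/Qwen3-VL-2B-Instruct-GGUF --host 127.0.0.1 --port 8080
```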
Georgi Gerganov · 𝕏 · 29 days ago

RT Vaibhav (VB) Srivastav: BOOM: We've just re-launched HuggingChat v2 💬 - 115 open source models in a single interface is stronger than ChatGPT 🔥
Introducing: HuggingChat Omni 💫
> Select the best model for every prompt automatically 🚀
> Automatic model selection for your queries
> 115 models available across 15 providers including @GroqInc, @CerebrasSystems, @togethercompute, @novita_labs, and more
Powered by HF Inference Providers - access hundreds of AI models using only world-class inference providers. Omni uses a policy-based approach to model selection (after experimenting with different methods). Credits to @katanemo_ for their small routing model: katanemo/Arch-Router-1.5B
Coming next:
• MCP support with web search
• File support
• Omni routing selection improvements
• Customizable policies
Try it out today at hf[dot] co/chat 🤗

Georgi Gerganov · 𝕏 · 30 days ago

David Finsterwalder | eu/acc: Important info. The issue in that benchmark seems to be ollama. Native llama.cpp works much better. Not sure how ollama can fail so hard to wrap llama.cpp. The lesson: Don’t use ollama. Especially not for benchmarks. Link: https://x.com/DFinsterwalder/status/1978372050239516989
