Discover real AI creators shaping the future. Track their latest blogs, X posts, YouTube videos, WeChat Official Account posts, and GitHub commits — all in one place.

Gemma 4: Byte for byte, the most capable open models

Four new vision-capable, Apache 2.0 licensed reasoning LLMs from Google DeepMind, sized at 2B, 4B, and 31B, plus a 26B-A4B Mixture-of-Experts. Google emphasize an "unprecedented level of intelligence-per-parameter", providing yet more evidence that creating small, useful models is one of the hottest areas of research right now.

They actually label the two smaller models as E2B and E4B, for "Effective" parameter size. The system card explains:

"The smaller models incorporate Per-Layer Embeddings (PLE) to maximize parameter efficiency in on-device deployments. Rather than adding more layers or parameters to the model, PLE gives each decoder layer its own small embedding for every token. These embedding tables are large but are only used for quick lookups, which is why the effective parameter count is much smaller than the total."

I don't entirely understand that, but apparently that's what the "E" in E2B means! I tried them out using th...
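To see why PLE can make the "effective" count so much smaller than the total, here is a back-of-the-envelope sketch. All the dimensions below are invented for illustration; they are not taken from the Gemma system card:

```python
# Hypothetical arithmetic for Per-Layer Embeddings (PLE).
# Every number here is an assumption chosen to mirror the E2B framing,
# not a real Gemma configuration.
vocab_size = 256_000
n_layers = 30
ple_dim = 256  # assumed small per-layer embedding width

# Each decoder layer gets its own small embedding table over the vocabulary.
ple_params_total = n_layers * vocab_size * ple_dim

# Core transformer weights that must sit in fast accelerator memory.
core_params = 2_000_000_000

# The PLE tables are lookup-only, so they can live in slower storage;
# only the core weights count toward the "effective" size.
total_params = core_params + ple_params_total
effective_params = core_params

print(f"total:     {total_params / 1e9:.2f}B")      # 3.97B
print(f"effective: {effective_params / 1e9:.2f}B")  # 2.00B
```

Under these made-up numbers, roughly half the parameters are lookup tables, which is the shape of the gap between a total count and an "E2B" effective count.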
simonw released 0.30 at simonw/llm-gemini
RT LangChain OSS Try open weight models in LangChain & Deep Agents! Original tweet: https://x.com/LangChain_OSS/status/2039769083915202605
RT Mason Daugherty http://x.com/i/article/2039763179299901440 Original tweet: https://x.com/masondrxy/status/2039768211554492420
if I start two companies and they sell an apple back and forth for a billion dollars, do i run two billion dollar companies?
One of our company goals is to automate manual data entry from documents ✍️📑 Our Extract feature in LlamaParse does exactly that, and today we are launching Extract v2 🚀

Define a schema in natural language, and our agentic extraction will fill out the schema from the document, with both exact-match citations and semantic inference. The v2 changes include:

✅ Simplified tiers that range from lower cost to higher accuracy
✅ Pre-saved extract configurations, so you can load/share existing configs and iterate on your schema
✅ Configurable parsing, so you can use our best-in-class doc OCR on the most complex tables/charts before extracting into structured output
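To make "fill out the schema with exact-match citations" concrete, here is a toy stand-in in plain Python. It is not the LlamaParse/LlamaExtract API; it just illustrates the idea of mapping schema fields to values plus character-offset citations into the source document, using regexes where the real product uses agentic extraction:

```python
import re

# Toy schema-filling extractor (NOT the LlamaExtract API).
# schema maps field name -> regex; each hit is returned with an
# exact-match citation as (start, end) character offsets in the text.
def extract(schema, text):
    result = {}
    for field, pattern in schema.items():
        m = re.search(pattern, text)
        result[field] = (
            {"value": m.group(0), "citation": (m.start(), m.end())}
            if m
            else {"value": None, "citation": None}
        )
    return result

doc = "Invoice #4821 issued 2025-01-15 for a total of $1,250.00."
schema = {
    "invoice_number": r"#\d+",
    "date": r"\d{4}-\d{2}-\d{2}",
    "total": r"\$[\d,]+\.\d{2}",
}
print(extract(schema, doc))
```

The citation offsets let a caller verify each extracted value by slicing it back out of the original document, which is the point of "exact match citations" as opposed to values the model merely inferred.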
After the release of Parse v2, Extract is also getting an upgrade — 𝗶𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗶𝗻𝗴 𝗘𝘅𝘁𝗿𝗮𝗰𝘁 𝘃𝟮! 🎉 We've been reworking the experience from the ground up to make document extraction more powerful and easier to use than ever. Here's what's new: ✦