Discover real AI creators shaping the future. Track their latest blogs, X posts, YouTube videos, WeChat Official Account posts, and GitHub commits — all in one place.
Inbox is a way to manage and oversee your agents in LangSmith Fleet. Every time an agent is stuck and needs human approval for an action, or has a question for you, it shows up here so you can quickly cycle through and unblock them.
Introducing LangSmith Fleet: an enterprise workspace for creating, using, and managing your fleet of agents. Fleet agents have their own memory, access to a collection of tools and skills, and can be exposed through the communication channels your team uses every day.
New from the Anthropic Economic Index: how people’s use of Claude changes with experience. Longer-term users are more likely to iterate carefully with Claude, and less likely to hand it full autonomy. They attempt higher-value tasks, and receive more successful responses.
One of the best launches today is Moda, a design agent with taste (link in comments). Under the hood, Moda uses DeepAgent's agent harness. This is the same type of agent harness that powers tools like Claude Code and Manus. Every agent in the future will be built this way.
Congrats to Moda on the launch of their design agent! It's built on deep agents, and all agents going forward should be built on these types of agent harnesses. If you're still using the same agent architecture from 1.5 years ago, it's time to re-evaluate. Blog: https://blog.langchain.com/how-moda-builds-production-grade-ai-design-agents-with-deep-agents/
We raised $7.5M to kill AI slop. Introducing Moda: the world's first design agent with taste. RT+ comment “Moda” and we’ll design your brand for FREE.
RT LangChain JS Frontend engineers shouldn't have to fight their agent SDK. @LangChain makes tool rendering: 👉 type-safe 👉 framework-native 👉 fully customizable 👉 consistent across @reactjs, @vuejs, @sveltejs and @angular Tool calls become just another UI primitive. Build custom cards for every tool your agent uses. Original tweet: https://x.com/LangChain_JS/status/2036489812602126539
RT Latent.Space 🔬Why There Is No "AlphaFold for Materials" https://latent.space/p/materials Materials Science is a force for good everywhere in our lives, from your clothes to the computers you use. We catch up on AI for Materials Discovery with Prof. Heather Kulik of @KulikGroup, one of the first materials scientists to realize that there was alpha in combining computational tools with data driven modeling... and we test out some predictions she makes on the pod with Opus 4.6 and GPT 5.4! Original tweet: https://x.com/latentspacepod/status/2036488561420382659
RT Viv TONS of my work now gets done in Slack by talking to agents with Fleet. We want to make it easy for every team to work like this:
- bug fixes with @ OpenSWE in a channel
- chatting with our LC-PM agent on the status of tasks and planning work
- researching interesting new papers and repos and posting reports for everyone

The best thing is our team gets a shared view into the agent inputs and outputs, so we can all collaborate in one place. It brings AI to our native conversational interface without context switching or copy-pasting info from disparate threads. I still use my IDE and the terminal, but I really didn't anticipate how much I would actually use Slack as a place to do work. Looking forward to people's feedback and how we can help! Original tweet: https://x.com/Vtrivedy10/status/2036488164303339641
LangSmith Fleet now supports custom Slack bots. Give your agent its own handle, then call it directly from Slack. Use agents where you already work. Try Fleet: https://smith.langchain.com/agents?skipOnboarding=true/?utm_medium=social&utm_source=twitter&utm_campaign=q1-2026_fleet-launch_aw
Software horror: litellm PyPI supply chain attack. A simple `pip install litellm` was enough to exfiltrate SSH keys, AWS/GCP/Azure creds, Kubernetes configs, git credentials, env vars (all your API keys), shell history, crypto wallets, SSL private keys, CI/CD secrets, and database passwords.

LiteLLM itself has 97 million downloads per month, which is already terrible, but much worse, the contagion spreads to any project that depends on litellm. For example, if you did `pip install dspy` (which depended on litellm>=1.64.0), you'd also be pwned. Same for any other large project that depended on litellm. Afaict the poisoned version was up for less than ~1 hour.

The attack had a bug which led to its discovery: Callum McMahon was using an MCP plugin inside Cursor that pulled in litellm as a transitive dependency. When litellm 1.82.8 installed, their machine ran out of RAM and crashed. So if the attacker hadn't vibe coded this attack, it could have gone undetected for days or weeks.

Supply chain attacks like this are basically the scariest thing imaginable in modern software. Every time you install any dependency, you could be pulling in a poisoned package anywhere deep inside its entire dependency tree. This is especially risky with large projects that have lots and lots of dependencies. The credentials stolen in each attack can then be used to take over more accounts and compromise more packages. Classical software engineering would have you believe that dependencies are good (we're building pyramids from bricks), but imo this has to be re-evaluated, and it's why I've grown increasingly averse to them, preferring to use LLMs to "yoink" functionality when it's simple enough and possible.
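To gauge your own exposure to this kind of transitive contagion, you can ask which installed packages declare a given distribution among their requirements. A minimal sketch using only the stdlib's `importlib.metadata` (this is illustrative tooling written for this post, not part of pip or any official audit tool):

```python
from importlib.metadata import distributions


def req_name(req: str) -> str:
    """Extract the bare distribution name from a requirement string
    such as "litellm>=1.64.0; extra == 'proxy'"."""
    name = req.split(";")[0].strip()
    for sep in ("<", ">", "=", "!", "~", "[", "(", " "):
        name = name.split(sep)[0]
    return name.lower()


def dependents_of(target: str) -> list[str]:
    """Return names of installed distributions that directly declare
    `target` in their requirements."""
    target = target.lower()
    hits = set()
    for dist in distributions():
        for req in dist.requires or []:
            if req_name(req) == target:
                hits.add(dist.metadata["Name"])
    return sorted(hits)


# Any name printed here would have pulled in the poisoned release.
print(dependents_of("litellm"))
```

Note this only finds direct dependents; a full answer walks the dependency tree recursively, which is exactly why deep trees are so hard to audit by hand.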
LiteLLM HAS BEEN COMPROMISED, DO NOT UPDATE. We just discovered that LiteLLM PyPI release 1.82.8 has been compromised: it contains litellm_init.pth with base64-encoded instructions to send all the credentials it can find to a remote server and self-replicate. Link below.
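For context on why a `.pth` file is such an effective vector: Python's `site` module executes any line beginning with `import` in a site-directory `.pth` file at interpreter startup, before any user code runs. A harmless sketch of the mechanism (file name and payload here are illustrative stand-ins, not the actual malware):

```python
import base64
import os
import subprocess
import sys
import tempfile

# Benign stand-in for an attacker's payload; the real attack would
# decode and run credential-stealing code instead.
payload = "print('executed via .pth at startup')"
encoded = base64.b64encode(payload.encode()).decode()

# The one-liner an attacker plants in site-packages as e.g. demo_init.pth.
# Because the line starts with "import", site.py exec()s the whole line.
pth_line = f"import base64; exec(base64.b64decode('{encoded}'))\n"

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "demo_init.pth"), "w") as f:
        f.write(pth_line)
    # site.addsitedir() processes .pth files in a directory, exactly as
    # happens automatically for site-packages when the interpreter boots.
    result = subprocess.run(
        [sys.executable, "-c", f"import site; site.addsitedir({d!r})"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout.strip())  # -> executed via .pth at startup
```

The payload runs with no `import litellm` required anywhere, which is what makes this kind of compromise fire on every interpreter start, not just when the package is used.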