Discover real AI creators shaping the future. Track their latest blogs, X posts, YouTube videos, WeChat Official Account posts, and GitHub commits — all in one place.
Gemini 3 Pro has been upgraded to Gemini 3.1 Pro for all Perplexity Pro and Max users (consumer and enterprise). It's the second most-picked model by our Enterprise customers after the Claude 4.5 Sonnet/Opus family. Enjoy!
RT Perplexity: Gemini 3.1 Pro is now available to all Perplexity Pro and Max subscribers. Original tweet: https://x.com/perplexity_ai/status/2024590462057922864
This is big. Agents can now monitor @vercel cloud infrastructure consumption, suggest optimizations, and run cost simulations in preview or production environments.
Access to billing and usage data is now available via the API and Vercel CLI. The new /billing/charges endpoint uses the FOCUS v1.3 standard for direct FinOps integration. https://vercel.com/changelog/access-billing-usage-cost-data-api
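For context, here is a minimal TypeScript sketch of pulling charge data from the new endpoint. The api.vercel.com base URL and bearer-token auth follow Vercel's usual REST conventions; the exact path parameters and response shape are assumptions, not taken from the docs.

```ts
// Sketch: fetch billing charges from the new /billing/charges endpoint.
// Assumes Vercel's standard REST conventions (api.vercel.com, bearer token);
// the response is logged as-is since its FOCUS v1.3 field layout isn't shown here.
const res = await fetch("https://api.vercel.com/billing/charges", {
  headers: { Authorization: `Bearer ${process.env.VERCEL_TOKEN}` },
});

if (!res.ok) throw new Error(`Vercel API error: ${res.status}`);

// FOCUS v1.3 normalizes cost rows across providers, so this payload can feed
// a FinOps pipeline directly.
const charges = await res.json();
console.log(charges);
```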
Activity on simonw/simonwillisonblog: simonw opened a pull request in simonwillisonblog.
Very interested in what the coming era of highly bespoke software might look like. Example from this morning: I've become a bit loosey-goosey with my cardio recently, so I decided to run a more serious, regimented experiment to try to lower my resting heart rate from 50 to 45 over eight weeks. The primary way to do this is to hit a target sum of weekly minutes in Zone 2 cardio plus one HIIT session per week. An hour later I had vibe-coded a super-custom dashboard for this very specific experiment that shows me how I'm tracking. Claude had to reverse-engineer the Woodway treadmill cloud API to pull the raw data, then process, filter, and debug it and create a web UI frontend to track the experiment. It wasn't a fully smooth experience, and I had to notice bugs and ask for fixes, e.g. it mixed up metric vs. imperial units and botched the calendar when matching days to dates. But I still feel like the overall direction is clear:

1) There will never be (and shouldn't be) a specific app on the app store for this kind of thing. I shouldn't have to look for, download, and use some kind of "Cardio experiment tracker" when this thing is ~300 lines of code that an LLM agent will give you in seconds. The idea of an "app store" with a long tail of discrete apps you choose from feels wrong and outdated when LLM agents can improvise the app on the spot, just for you.

2) The industry has to reconfigure into a set of sensor and actuator services with agent-native ergonomics. My Woodway treadmill is a sensor: it turns physical state into digital knowledge. It shouldn't maintain some human-readable frontend that my LLM agent has to reverse-engineer; it should be an API/CLI easily usable by my agent. I'm a little disappointed (and my timelines are correspondingly slower) with how slowly this progression is happening across the industry. 99% of products/services still don't have an AI-native CLI yet.
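To make point 2) concrete, here is a purely hypothetical sketch of what an agent-friendly treadmill endpoint could look like; the URL, session shape, and field names are all invented for illustration and are not Woodway's actual API.

```ts
// Hypothetical sketch only: an agent-native sensor API for the cardio experiment.
// The endpoint URL and every field name below are invented for illustration.
interface TreadmillSession {
  date: string;         // ISO date of the workout
  zone2Minutes: number; // minutes spent in Zone 2
  hiit: boolean;        // whether the session was a HIIT workout
}

// If the treadmill exposed structured data like this directly, the agent
// would not need to reverse-engineer a human-facing frontend.
const sessions: TreadmillSession[] = await (
  await fetch("https://treadmill.example.com/api/sessions?weeks=8")
).json();

const zone2Total = sessions.reduce((sum, s) => sum + s.zone2Minutes, 0);
const hiitCount = sessions.filter((s) => s.hiit).length;
console.log(`Zone 2 minutes: ${zone2Total}, HIIT sessions: ${hiitCount}`);
```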
RT LangChain JS: LangChain now has a first-party OpenRouter integration, available in both Python and TypeScript. Access 300+ models from OpenAI, Anthropic, Google, and more through a single interface. Tool calling, structured output, and streaming all work out of the box. One API key works with every model, with no SDK juggling. Get started in one line:
> uv add langchain-openrouter
> pnpm install @langchain/openrouter
Original tweet: https://x.com/LangChain_JS/status/2024582319613603868
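As a rough illustration of the "single interface" claim, here is a minimal TypeScript sketch. The ChatOpenRouter class name, its constructor options, and the model slug are assumptions based on LangChain's usual Chat* integration pattern, not confirmed from the @langchain/openrouter docs.

```ts
// Assumed usage sketch: the ChatOpenRouter class name and option names follow
// LangChain's typical Chat* integration pattern and may differ in the real package.
import { ChatOpenRouter } from "@langchain/openrouter";

const model = new ChatOpenRouter({
  model: "anthropic/claude-sonnet-4.5",    // illustrative OpenRouter model slug
  apiKey: process.env.OPENROUTER_API_KEY,  // the single OpenRouter key
});

// invoke() and response.content are standard on LangChain JS chat models.
const response = await model.invoke("Name three FinOps cost-optimization levers.");
console.log(response.content);
```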