Open Sovereign AI @ii_posts. Founder @StabilityAI.
We bumped up the testing component of II-Agent so it has better outputs than peers. In normal software development you spend way more time testing than coding. Going to experiment with this.
We put 4 leading agents to the test with a 3D Runner game 🎮 Same prompt. Same concept. Very different execution. Which one would you play?
RT Intelligent Internet We put 4 leading agents to the test with a 3D Runner game 🎮 Same prompt. Same concept. Very different execution. Which one would you play? Original tweet: https://x.com/ii_posts/status/2011448977879892269
own the tokens of production
RT brandon the web is fun again 🎶 Original tweet: https://x.com/burcs/status/2011071446987001947
By next year every frontier lab is going to drop trillions of tokens replicating every piece of software they can to try to get and keep your attention. Attention is what they need.
Given a trillion tokens of Opus 4.5, could we write a whole operating system and productivity suite from scratch? What about Opus 5? Opus 6?... Linux is about 30m lines of code / 500m tokens, to give you an idea..
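A back-of-envelope sketch of that scale in Python. The ~17 tokens per line of code is just the ratio implied by the 30m lines / 500m tokens figures above, not a measurement:

```python
# Rough arithmetic for the claim above; all figures are the post's estimates.
linux_loc = 30_000_000
linux_tokens = 500_000_000
tokens_per_loc = linux_tokens / linux_loc        # ≈ 16.7 tokens per line, implied

budget = 1_000_000_000_000                       # one trillion output tokens
linux_scale_codebases = budget / linux_tokens
print(f"≈ {tokens_per_loc:.1f} tokens/line implied")
print(f"A trillion tokens ≈ {linux_scale_codebases:,.0f} Linux-sized codebases of raw output")
# ≈ 2,000 before retries, tests, and discarded drafts.
```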
DeepSeek’s fund beat all of these…
The Top Hedge Funds of 2025: Apis Capital led the pack at 55.1%, with several Tiger Cubs, including Tiger Global, Lone Pine, and Maverick, making the cut.
Is Claude Code really better than Claude in other agent harnesses? Which harness is best?
Who will have the top performing hedge fund
RT Intelligent Internet Give it an image and a direction. 20+ styles, precision edits, model switching, exact ratios — all in one flow. II-Agent Advanced Image Suite Visuals, perfected. Original tweet: https://x.com/ii_posts/status/2009680394405212352
RT Dan Shipper 📧 NEW: i wrote a complete technical guide to building agent-native software (co-authored with claude) it covers: - the five pillars of agent native design (parity, granularity, composability, emergent capability, self-improvement) - files as the universal interface - agent execution patterns with code samples - mobile agent patterns - advanced patterns like dynamic capability discovery if you want to take full advantage of this moment, it's worth your time: https://every.to/guides/agent-native?source=post_button Original tweet: https://x.com/danshipper/status/2009651408144835021
❤️ epistemology? ❤️ AI? DM for something interesting ✉️
RT Cong Wei Thrilled to open-source UniVideo🎬! UniVideo brings unified multimodal understanding, generation, and editing to the video domain One framework for • video/image understanding • text/image → image/video generation • free-form image/video editing • reference-driven image/video generation/editing Code: https://github.com/KlingTeam/UniVideo Model: https://huggingface.co/KlingTeam/UniVideo Project Page: https://congwei1230.github.io/UniVideo Huggingface: https://huggingface.co/papers/2510.08377 Original tweet: https://x.com/CongWei1230/status/2009293322553315722
Are genes just prompts
RT Intelligent Internet We upgraded intelligence without upgrading the system around it. That mismatch is the story of our time. Full audiobook out now on Spotify and Apple Podcasts👇 Original tweet: https://x.com/ii_posts/status/2008963492598968564
Which AI companies have a clear, up to date document on their approach to and latest thinking on alignment? It seems this should be standard for anyone building AGI?
Simplicity is the final achievement of complexity.
RT Quanquan Gu Very true. AI can compress the PhD pipeline, because coding, writing, and literature digestion are all orders of magnitude faster now. People who went through a pre-AI PhD can immediately feel the difference. But there’s another side to this. AI also grants capabilities that didn’t previously belong to many individuals. In some sense, it’s like competing in the Olympics on performance-enhancing drugs: productivity and problem-solving ability are no longer purely a function of training, taste, or depth of understanding. I’m aware of cases where math enthusiasts have already leveraged AI to make progress on a problem that challenged professional mathematicians for a century (details cannot be disclosed). This is undeniably exciting, but it raises nontrivial questions about whether it is fair for mathematicians who have spent their whole career working on this problem. This doesn’t invalidate AI-assisted learning and discoveries, but it will force the community to confront complicated questions. The rules of the game are changing, whether we like it or not. Original tweet: https://x.com/QuanquanGu/status/2007947084608246188
Tbh, if I had Claude Code, Gemini, and ChatGPT during my PhD, I’d probably have graduated in 1 year instead of 5.5 years. My PhD was ~50% coding, 25% writing/polishing my papers, 25% reading others' papers. AI now accelerates each by at least 10×. Nothing will ever be the same.
Incoming. Amusingly the core of the paper can be summarised in two lines. But a very important two lines it is.
Why doesn’t everyone use MSG in cooking
RT Armand Domalewski wow https://www.nytimes.com/2026/01/02/world/asia/china-ai-cancer-pancreatic.html?unlocked_article_code=1.B1A.iv9-.WzkJolxDYex4&smid=nytcore-ios-share Original tweet: https://x.com/ArmandDoma/status/2007697062926921767
Putting my oil analyst hard hat back on for a comment on Venezuelan oil. It’s more like tar/molasses than what you see from Saudi etc. Expensive to extract, transport & refine. E.g. all-in cost to make a gallon of petrol: Saudi: $0.50, Venezuelan: $2.80. PDVSA gutting didn’t help.
@Cypresseed We could see $100 oil in 2022. The full storage is going to wreck capex for long enough to actually kill oil services companies. Demand will be hit long enough to coil things, but take longer than market expects to work things through, just as geopolitics will rise and rise..
I bet we have a unified foundational theory of physics in the next year or two. Will be very cool. Will be similar to thermodynamics or special relativity, a top-down theory of principle, not a constructive theory done by fitting loads of Lagrangians.
Demis Hassabis on the lack of progress in the foundations of physics
Got my first desktop in a decade ahead of the upcoming GPU price rises (MSI 5090). What do folk recommend for mouse & keyboard these days? I assume all the gaming monitors are about the same?
AI will massively outperform humans on constructive theories It is still pretty terrible at theories of principle
Great job by Jian Yang & co!
2026: A new AI lab out of China claims 81.4% on SWE-Bench-V and 54.2% on BigCodeBench with a 40B coding LLM. IQuest Labs, like DeepSeek, comes from Ubiquant, a quant fund in China with $8B AUM. Jian Yang, the leading contributor, seems to have been on the Qwen2/3 team as well!
RT Intelligent Internet Closing 2025 with the last call to enter our design competition. 🎨 Create with II-Agent + Nano Banana Pro 🍌 ⏰ Deadline: TONIGHT · 23:59 GMT 🚨 Original tweet: https://x.com/ii_posts/status/2006378196472644023
Next year will have several of the biggest discoveries/advances ever. Humanity changing stuff. Will be wild, make sure you catch your breath before it kicks off.
Best AI alignment papers of 2025? Share away!
RT Wes teleoperator kicking himself over not programming Asimov's Laws of Robotics... Worth considering if we should base today's reality on the science fiction musings from the past. Testing with humanoid robots is a very interesting new challenge. Historically robots have been confined to keep out zones with rigid walls and laser barriers, lock-out/tag-out, etc. This is the easy way to ensure people don't get hurt by very capable machines. But the more we integrate humanoid robotics into collaborative roles with humans the more we need to remove those barriers by nature of the work. Original tweet: https://x.com/wmorrill3/status/2004666088374894922
RT Intelligent Internet Merry Christmas from Intelligent Internet 🎄🎅 Original tweet: https://x.com/ii_posts/status/2004335976362164697
My tribute to Ramanujan & a challenge to math lovers… This is accurate to ~6ppm (!) Can you get closer elegantly ^_^ Have fun!
I find automated math proving with AI not hugely interesting. What is most interesting is how we can leverage AI to enable more intuition and inspiration, like Ramanujan. Save the proofs for later; dare to explore and do what we do best while the GPUs do what they do best.
RT João Batalha von Neumann gave us "zero-sum" in 1944 and it's one of my favorite examples of linguistic compression. Two words that mean “strictly competitive, fixed pie, stop looking for win-wins.” The term's real gift: it reveals the default human assumption that life is chess and value can’t be created—only transferred. Once you have the label, you start noticing how often that’s wrong Original tweet: https://x.com/joao_batalha/status/2003540220352897031
RT Alper Canberk Re We found that Memo understands object affordances: depending on whether an object is larger or smaller than its hand, it selects a one-handed or two-handed grasp. Original tweet: https://x.com/alpercanbe/status/2003254339528458749
Celebrating the combination of intuition and experimentation in the mathematical works of Srinivasa Ramanujan on the anniversary of his birth #WithWolfram https://writings.stephenwolfram.com/2016/04/who-was-ramanujan
RT Wolfram Celebrating the combination of intuition and experimentation in the mathematical works of Srinivasa Ramanujan on the anniversary of his birth #WithWolfram https://writings.stephenwolfram.com/2016/04/who-was-ramanujan Original tweet: https://x.com/WolframResearch/status/2003152362781757711
RT Brave On #TheBraveTechnologist, Intelligent Internet (@ii_posts) founder and CEO @EMostaque advocates for open-source AI that operates transparently for the betterment of individuals and communities. Listen to this full chat with Brave's @LukeMulks here: https://brave.com/podcast/e101/ Original tweet: https://x.com/brave/status/2003099250222096423
RT Fermat's Library Srinivasa Ramanujan was born 138 years ago today. He said his formulas came to him as revelations from God. Original tweet: https://x.com/fermatslibrary/status/2003025304147616163
What is the best physics podcast
I don’t think Lean is that useful (yet) for proving anything major. All the labs should get together and apply huge compute to upgrading and filling the gaps in mathlib first, would be useful for models anyway
RT Intelligent Internet If you’re building with AI, this episode is practical context. Clear 2026 perspectives from @EMostaque on Moonshots w/ @PeterDiamandis, @salimismail, @DavidBlundin & @alexwg 🔗 https://www.youtube.com/watch?v=NHAzpG95ptI Original tweet: https://x.com/ii_posts/status/2002148966906147302
diffusion is how reality generates itself
RT Brave "Robots are coming." @EMostaque, founder/CEO of Intelligent Internet (@ii_posts), talks with @LukeMulks about AI's economic disruption and how communities can thrive in a changing world. Listen to the full #TheBraveTechnologist episode here: https://brave.com/podcast/e101/ Original tweet: https://x.com/brave/status/2001777081605149159
RT Intelligent Internet Research at the speed of thought with II-Commons 🧠 Powered by Gemini 3 🔍 Search arXiv + PubMed in one agent 💬 Chat inside papers for real context 📑 Read in native PDF or clean Markdown Try hosted II-Commons Beta 👇 Original tweet: https://x.com/ii_posts/status/2001694176597946871
RT Intelligent Internet The agent structures the thinking. Nano Banana Pro makes the visuals. All built inside II-Agent 🍌 Original tweet: https://x.com/ii_posts/status/2001343872988008932
RT Manus $0 → $100M ARR in 8 months. Since we launched in March: -147 trillion tokens processed -80M+ virtual computers created -Total revenue run rate over $125M Thank you to everyone building with us. https://manus.im/blog/manus-100m-arr Original tweet: https://x.com/ManusAI/status/2001291341343641680
RT Ethan Mollick Surprisingly rapid & high AI adoption by doctors: 67% use it daily, 84% say it makes them better doctors, 42% say it makes them want to stay in medicine more (10% said less). A lot of the use cases appear to be administrative and research assistance. https://2025-physicians-ai-report.offcall.com/ Original tweet: https://x.com/emollick/status/2001061282485547116
The "thinking" bar for GPT 5.2 Pro has got a lot better, you can see it looking at the pages etc of what goes in, interrupt it etc However it remains on the web version only (along with the ability to flip to high reasoning), makes the app version not v useful :(
RT Intelligent Internet The world’s favorite image app now lives inside the world’s best agent, with GitHub built in. Nano Banana Pro + GitHub in II-Agent: • generate + edit images in chat • turn them into slide decks • connect repos, ask questions, push commits Original tweet: https://x.com/ii_posts/status/1998819489333362856
The Last Economy is number 1 on Amazon's Best Sellers for AI with pretty good ratings ✍️ No publisher or big campaign, also available for free on the website and paperback just dropped 📖 Thank you all for the support & the next part is nearly done..🚀
Economics was built for scarcity. The Intelligence Age is about abundance. Today we release our book, The Last Economy, introducing Intelligent Economics, a unified theory for this new era. 🧵
3b (active) parameters is all you need
Why do you think @AnthropicAI & @OpenAI are doing real workloads with @tempo? Agents 🤝 stablecoins
Tempo’s testnet is live! Any company can now build on a payments-first chain designed for instant settlement, predictable fees, and a stablecoin-native experience. Tempo has been shaped with a wide group of partners validating real workloads including @AnthropicAI, @Coupang,
EU countries should get together to build an advisory AI that should run the EU with all decisions being transparent and participatory. This should be a fully open source, open data stack with all inputs also available so you can run it and verify recommendations yourself.
The EU Commission should be disbanded in favor of an elected body and the EU President should be directly elected. The current system is rule by bureaucracy, not democracy.
View quoted post"We will continue to aggressively field the World's best technology to make our fighting force more lethal than ever before" - @SecWar
Today, we are unleashing http://GenAi.mil This platform puts the world's most powerful frontier AI models directly into the hands of every American warrior. We will continue to aggressively field the world’s best technology to make our fighting force more lethal than ever
68% on SWE-Bench Verified on just 24b! Laptop class. 72.2% on 124b 👏 Great job @MistralAI team, looking forward to trying it out
Introducing the Devstral 2 coding model family. Two sizes, both open source. Also, meet Mistral Vibe, a native CLI, enabling end-to-end automation. 🧵
RT Intelligent Internet The Last Economy by @EMostaque is an Amazon Best Seller 📚 #3 Business Economics (kindle) #3 Finance (kindle) #7 Computing Your guide to the AI economy & what comes next Thank you for reading, tell your friends. Now also available in paperback 🎁 Original tweet: https://x.com/ii_posts/status/1998411140502556694
Should I watch For All Mankind?
RT Umut Gunbak fal Startup Program in Europe! Looking for ambitious builders We’re launching our first startup program in Europe to help generative media teams build and scale with @fal Apply for one of three packs: • Starter Pack: $250 fal credits • Startup Pack: $1,000 fal credits • Scale Pack: $5,000 fal credits If you’re just getting started, grab the Starter Pack. As you grow, you can apply for the next tier. Let’s build the next decade of generative media together. Link in the replies Original tweet: https://x.com/umutgunbak/status/1997933230243058090
3 year anniversary of ChatGPT Where will we be 3 years from now
today we launched ChatGPT. try talking with it here: http://chat.openai.com
RT Peter H. Diamandis, MD Position Opening: I’m looking for a Rockstar event "Executive Producer" for a project I’m working on related to Moonshots. Might be the most exciting gathering I’ve ever created. Interested? Learn more at link below or SHARE this tweet with someone you think will be awesome! https://moonshotsummit.notion.site/Executive-Director-Moonshot-Summit-2987dc1c4e31803daba1e31068ff952c Original tweet: https://x.com/PeterDiamandis/status/1994836378195300533
RT Bilal bin Saqib MBE In conversation with @EMostaque AI, governance, economy, military, and why Pakistan’s next 1,000 days will define the next 50 years. Chapters: 02:48 – The 1,000-day warning 08:37 – Call centers are finished? 11:22 – The degree bubble 21:50 – Sovereign compute as national power 36:05 – AI for eliminating governance leakages 38:04 – Ending corruption with AI 48:39 – The only jobs that will survive 52:10 – Leadership’s 3 critical actions 1:07:32 – AI in warfare Original tweet: https://x.com/Bilalbinsaqib/status/1994708765434675406
So many amazing new video models coming, we are heading next year to video pixel generation being “solved”
Lots of improvements! Claude still crawls and burns battery on mobile iOS app when doing long replies, very frustrating
Zvi Mowshowitz: They're burying a lot here. There's a 66% price cut from Opus 4.1 to $5/$25, it uses fewer tokens to solve problems, upgrades to Claude Code in the app, no more length limits on conversations, no more Opus-specific plan caps... Link: https://x.com/TheZvi/status/1993039677650251999
RT Intelligent Internet Terminal Bench v2 just got officially verified by @alexgshaw II-Agent is now Top 2 overall and #1 single-model agent Open Source AI, top-tier performance 🚀
Emad: Buried the lede a bit but our fully open source II-Agent framework is now state of the art in Terminal Bench 2 using just Gemini 3! Congrats to team for amazing work & more coming in the pipeline The best agents will be open source so we can all have the sovereign AI we deserve Link: https://x.com/EMostaque/status/1991186525975970072
Humanity as the biological bootloader of AGI
James Moore: Bootloader?!! Link: https://x.com/jamesdmoore614/status/1992679980816327049
This is more than all but one LLM and probably more than used to train every generative video model ever. Still a way to go!
Cosmic Marvel: Disney has confirmed that the total budget for ‘ANDOR’ is over $705M. The highest-ever for a Star Wars project. (via https://www.forbes.com/sites/carolinereid/2025/11/22/disney-spent-more-on-andor-than-any-of-its-star-wars-movies/) Link: https://x.com/cosmic_marvel/status/1992383951135203578
Where is the best tomahawk steak in London
Adding "remake One Punch Man Season 3" to "remake Game of Thrones Season 8" on the todo list
A billion humanoid robots and a bad firmware update is one of the most reasonable human (Doom) scenarios
Mike Kalil: Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup's former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company's robots "were powerful enough to fracture a human skull." Robert Link: https://x.com/mikekalilmfg/status/1992269576155922863
This looks crazy good, will run evals now on our sota II-Agent. The cost and speed of Grok 4.1 Fast look way better than comparable agentic AI models, even top notch ones, from the numbers here - 10-20x better in some cases
xAI: Introducing Grok 4.1 Fast and the xAI Agent Tools API. Grok 4.1 Fast is our best tool-calling model to date. With a 2M context window, it shines in real-world use cases like customer support and deep research. https://x.ai/news/grok-4-1-fast Link: https://x.com/xai/status/1991284813727474073
Just like that the era of the weird purple vibe coded output ended
The most interesting thing testing Gemini 3 Pro has been how *efficient* it is from tokens to tool calls. The intelligence per token of models is increasing rapidly even as prices fall, it's quite something
Logan Kilpatrick: Introducing Gemini 3 Pro, the world's most intelligent model that can help you bring anything to life. It is state of the art across most benchmarks, but really comes to life across our products (AI Studio, the Gemini API, Gemini App, etc) 🤯 Link: https://x.com/OfficialLoganK/status/1990813077172822143
The best verifiability is $$s - would you pay someone to do equivalent work? AI now is moving from prompting/goldfish memory, like a smart person you ask the occasional question to, to being able to do arbitrarily long, economically valuable work. Take off will be fast
Andrej Karpathy: Sharing an interesting recent conversation on AI's impact on the economy. AI has been compared to various historical precedents: electricity, industrial revolution, etc., I think the strongest analogy is that of AI as a new computing paradigm (Software 2.0) because both are Link: https://x.com/karpathy/status/1990116666194456651
If this is actually verified, real-time & non-teleoperated on a standard Unitree G1, then it is a huge leap forward on robotics timelines. We underestimate the economies of scope of generative AI models even now
MindOn: Mind On Everything ! Link: https://x.com/MindOn_Tech/status/1989232017570140430
Estimate average ~1k tokens per post => 100b tokens per day @grok 4 Fast $0.5/m tokens (vs $10 for GPT5, $15 for Sonnet 4.5) => $50k a day cost to process, $18m a year. Value = huge. Shows how amazing the price/performance of Grok 4 Fast is
Elon Musk: @Moimaere @levelsio @nikitabier By next month, Grok will literally look at and understand all ~100 million 𝕏 posts per day (including images and video), no matter how small the account, and recommend content to users based on the intrinsic quality of the content itself. This is only possible with advanced AI Link: https://x.com/elonmusk/status/1988662682241618367
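Sanity-checking that arithmetic in Python; the per-post token count and price are the estimates from the posts above, not measured figures:

```python
# Rough check of the estimate above; all inputs are assumptions from the thread.
posts_per_day = 100_000_000      # ~100M X posts/day, per the quoted tweet
tokens_per_post = 1_000          # assumed average
price_per_m_tokens = 0.50        # cited Grok 4 Fast price, $/1M tokens

tokens_per_day = posts_per_day * tokens_per_post              # 100B tokens/day
daily_cost = tokens_per_day / 1_000_000 * price_per_m_tokens  # $50,000/day
print(f"{tokens_per_day/1e9:.0f}B tokens/day, ${daily_cost:,.0f}/day, ${daily_cost*365/1e6:.1f}M/year")
# -> 100B tokens/day, $50,000/day, $18.3M/year
```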
AI multiple compression in action, now < 30x sales :o jk congrats to the Cursor team, many more tab keys to go
Cursor: We've raised $2.3B in Series D funding from Accel, Andreessen Horowitz, Coatue, Thrive, Nvidia, and Google. We're also happy to share that Cursor has grown to over $1B in annualized revenue and now produces more code than any other agent in the world. This funding will allow Link: https://x.com/cursor_ai/status/1988971258449682608
RT Henri Liriani We're launching Lightfield today. It's a CRM designed for founders going zero to one—shaped in the past year by hundreds of founders who took a bet on us and now use it daily, and a waitlist of 20,000 more. When starting out, it takes countless hours of talking to customers to figure out what works. From day zero, Lightfield builds and updates itself from your unstructured conversations with customers. It becomes your customer memory: helping you execute deals, communicate without missing details, and understand patterns across your business. It's free to try. Would love to hear what you think: http://lightfield.app
Just call it the Gabecube. Hardware looks to be around AMD Ryzen AI Max+ 395 (Strix Halo) level. Would be cool if RAM was upgradeable, can run 128 GB for LLMs (see @FrameworkPuter Desktop)
The Game Awards: First footage of Valve's Steam Machine, shipping in early 2026. 6x more powerful than a Steam Deck. No pricing announced. Link: https://x.com/geoffkeighley/status/1988669659881820383
RT Wes Roth thank you so much to @DrKnowItAll16 for highlighting our talk with @EMostaque It was a fascinating conversation. Emad is able to provide insight on a lot of subjects, but specifically we talked about the future of work and employment. I recommend watching this one asap.
DrKnowItAll: This is a great interview with @EMostaque. I highly recommend watching it. Thanks, @WesRothMoney for making it happen! https://youtu.be/07fuMWzFSUw?si=Xa_faxDSJyuRl1-n Link: https://x.com/DrKnowItAll16/status/1987502478640693446
RT Shengyuan Hi Dzmitry, our INT4 QAT is weight-only with fake-quantization: we keep the original BF16 weights in memory, during the forward pass we on-the-fly quantize them to INT4 and immediately de-quantize back to BF16 for the actual computation. The original unquantized BF16 weight is retained because gradients have to be applied to this.
🇺🇦 Dzmitry Bahdanau: can someone explain to me int4 training by @Kimi_Moonshot ? does it mean weights are stored in int4 and dequantized on the fly for further fp8/bf16 computation? or does this mean actual calculations are in int4, including accumulators? Link: https://x.com/DBahdanau/status/1986821392206114972
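A minimal PyTorch sketch of that weight-only fake-quantization pattern, assuming per-output-channel symmetric INT4 rounding with a straight-through estimator; the names and grouping scheme are illustrative, not Moonshot's actual code:

```python
# Illustrative sketch of weight-only INT4 fake-quantization as described above
# (assumed per-output-channel symmetric scheme, not Kimi's real implementation).
import torch

def fake_quant_int4(w_bf16: torch.Tensor) -> torch.Tensor:
    # Per-output-channel symmetric scale mapping weights into the INT4 range [-8, 7].
    scale = w_bf16.abs().amax(dim=1, keepdim=True).clamp(min=1e-8) / 7.0
    q = torch.clamp(torch.round(w_bf16 / scale), -8, 7)   # "quantize" to INT4 grid
    w_deq = (q * scale).to(w_bf16.dtype)                  # "de-quantize" back to BF16
    # Straight-through estimator: forward uses w_deq, gradients flow to w_bf16.
    return w_bf16 + (w_deq - w_bf16).detach()

class FakeQuantLinear(torch.nn.Linear):
    # Master weights stay in BF16; only the forward pass sees INT4-rounded values,
    # and the optimizer applies gradients to the original BF16 weights.
    def forward(self, x):
        return torch.nn.functional.linear(x, fake_quant_int4(self.weight), self.bias)
```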
RT Emad Will continuous learning for AI models be solved within 2 years
Will continuous learning for AI models be solved within 2 years
In a year or so this level of performance should be available on a 32b dense model (K2 is 32b active) at a cost of < $0.2/million tokens. I don't think folk have that in their estimates
Kimi.ai: 🚀 Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here. 🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%) 🔹 Executes up to 200 – 300 sequential tool calls without human interference 🔹 Excels in reasoning, agentic search, and coding 🔹 256K context window Built Link: https://x.com/Kimi_Moonshot/status/1986449512538513505
No current AI systems have morals explicitly encoded into them at pretraining time. At the very least they should have Asimov's laws of robotics eh
Pope Leo XIV: Technological innovation can be a form of participation in the divine act of creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of #AI to cultivate moral discernment as a Link: https://x.com/Pontifex/status/1986776900811837915
Can you imagine being a "frontier" lab that's raised like a billion dollars and now you can't release your latest model because it can't beat @Kimi_Moonshot ? 🗻 Sota can be a bitch if that's your target
Emad: Can you imagine being a "frontier" lab that's raised like a billion dollars and now you can't release your latest model because it can't beat deepseek? 🐳 Sota can be a bitch if that's your target Link: https://x.com/EMostaque/status/1881380253630890471
RT Pope Leo XIV Technological innovation can be a form of participation in the divine act of creation. It carries an ethical and spiritual weight, for every design choice expresses a vision of humanity. The Church therefore calls all builders of #AI to cultivate moral discernment as a fundamental part of their work—to develop systems that reflect justice, solidarity, and a genuine reverence for life.
RT gabriel every single person i know who made a cool video demo of a project that shows agency & great technical ability has been reached out to by top labs & top companies. you never need to compete with millions of other people for your top pick role
Necessity is the mother of invention. Also - training optimally on small numbers of chips with a focus on data means the Chinese models take 10-100x less compute to run as well & have that cost advantage: $150/m GPT 4.5 vs $0.5/m DeepSeek v3 etc
Yuchen Jin: If you ever wonder how Chinese frontier models like Kimi, DeepSeek, and Qwen are trained on far fewer (and nerfed) Nvidia GPUs than US models. In 1969, NASA’s Apollo mission landed people on the moon with a computer that had just 4KB of RAM. Creativity loves constraints. Link: https://x.com/Yuchenj_UW/status/1986474507771781419
I actually think all governments should offer infrastructure underwriting and guarantees for AI build out just as they do for other essential infrastructure - China does! The productive capacity of a country - its intelligent capital stock - is basically its GPUs & robots
Sam Altman: I would like to clarify a few things. First, the obvious one: we do not have or want government guarantees for OpenAI datacenters. We believe that governments should not pick winners or losers, and that taxpayers should not bail out companies that make bad business decisions or Link: https://x.com/sama/status/1986514377470845007
A note on costs/compute: Base Kimi K2 model used 2.8m H800 hours with 14.8 trillion tokens, about $5.6m worth. Details of post training for reasoning not given, but it is likely max 20% more (excluding data prep!). Would be < $3m for sota if they had Blackwell chip access
Kimi.ai: 🚀 Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here. 🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%) 🔹 Executes up to 200 – 300 sequential tool calls without human interference 🔹 Excels in reasoning, agentic search, and coding 🔹 256K context window Built Link: https://x.com/Kimi_Moonshot/status/1986449512538513505
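The implied arithmetic, as a quick Python check; the ~$2 per H800-hour rate is simply what the $5.6m / 2.8m-hour figures above imply, not a quoted rental price:

```python
# Back-of-envelope check of the cost note above; inputs are the post's figures.
h800_hours = 2_800_000
total_cost_usd = 5_600_000
pretrain_tokens = 14.8e12

implied_rate = total_cost_usd / h800_hours            # ≈ $2.00 per H800-hour
with_post_training = total_cost_usd * 1.20            # "likely max 20% more"
cost_per_m_tokens = total_cost_usd / (pretrain_tokens / 1e6)
print(f"≈ ${implied_rate:.2f}/H800-hour, ≲ ${with_post_training/1e6:.1f}M all-in")
print(f"≈ ${cost_per_m_tokens:.2f} of compute per million pretraining tokens")
# -> ≈ $2.00/hour, ≲ $6.7M all-in, ≈ $0.38 per million pretraining tokens
```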
Congratulations to @Kimi_Moonshot for achieving state of the art on many benchmarks & open sourcing the model! The gap between closed & open continues to narrow even as the cost of increasingly economically valuable tokens collapses. K2 has its own unique vibe too, try it out!
Kimi.ai: 🚀 Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here. 🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%) 🔹 Executes up to 200 – 300 sequential tool calls without human interference 🔹 Excels in reasoning, agentic search, and coding 🔹 256K context window Built Link: https://x.com/Kimi_Moonshot/status/1986449512538513505
Folk focusing on data centers when the real $$s will be in teleoperation centers
RT Emad Soon (hopefully)
Unitree: Embodied Avatar: Full-body Teleoperation Platform🥳 Everyone has fantasized about having an embodied avatar! Full-body teleoperation and full-body data acquisition platform is waiting for you to try it out! Link: https://x.com/UnitreeRobotics/status/1986329686251872318
Soon (hopefully)
Unitree: Embodied Avatar: Full-body Teleoperation Platform🥳 Everyone has fantasized about having an embodied avatar! Full-body teleoperation and full-body data acquisition platform is waiting for you to try it out! Link: https://x.com/UnitreeRobotics/status/1986329686251872318
RT Dheemanth Reddy here is maya1, our open source voice model: We’re building the future of voice intelligence @mayaresearch_ai team is incredible; amazing work by the team. remarkable moment.
RT DiscussingFilm Coca-Cola’s annual Christmas advert is AI-generated again this year. The company says they used even fewer people to make it — “We need to keep moving forward and pushing the envelope… The genie is out of the bottle, and you’re not going to put it back in”
RT BrianEMcGrath Amazing interview with @RaoulGMI and @EMostaque! Some takeaways from this conversation: 1. AIs (LLMs) use tokens that are equivalent to 1.3 words per token. 2. The avg person speaks 20,000 tokens / day. 3. The avg person thinks about 200,000 tokens / day. 4. The latest Grok 4 model costs $0.50 per 1 million tokens. So the current LLMs can do the equivalent of a week's worth of knowledge work for $0.50 with an IQ higher than the avg person! ...and they will only get better. Things are changing rapidly.
Raoul Pal: A conversation with @EMostaque will always blow your mind. This one is no exception... Enjoy! Link: https://x.com/RaoulGMI/status/1983883585586393195
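The arithmetic behind that takeaway, sketched in Python; all inputs are the thread's own estimates, not measurements:

```python
# Quick check of the "week of knowledge work for $0.50" claim above.
tokens_thought_per_day = 200_000   # claimed average human "thinking" rate
price_per_m_tokens = 0.50          # cited Grok 4 price, $/1M tokens

days_bought = 1_000_000 / tokens_thought_per_day   # days of "thinking" per 1M tokens
print(f"${price_per_m_tokens:.2f} buys 1M tokens ≈ {days_bought:.0f} days of average human 'thinking'")
# -> ≈ 5 days, i.e. roughly a working week of knowledge work for fifty cents.
```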