VP Science @OpenAI, BoD @Cisco @nature_org, LTC @USArmyReserve Ex: Pres @Planet, Head of Product @Instagram @Twitter ❤️ @elizabeth ultramarathons kids cats math
Could not agree more with @gdb.
The world is transitioning to a compute-powered economy. The field of software engineering is currently undergoing a renaissance, with AI having dramatically sped up software engineering even over just the past six months. AI is now on track to bring this same transformation to
This is a great take. Human intelligence and computer intelligence are not different points on a line, but in a high dimensional space—and we should explore more of the space.
Terence Tao proposes what he calls a "Copernican view of intelligence". Instead of buying into the common, one-dimensional narrative that artificial intelligence will simply evolve from "subhuman" to "superhuman" and ultimately make humanity entirely redundant, Tao urges us to
Five Erdos problems at once! The proofs are getting more elegant as the models improve 👀
We’ve just released another paper solving five further Erdős problems with an internal model at OpenAI: https://arxiv.org/abs/2604.06609. Several of the proofs were especially enjoyable to digest while writing the paper. My personal favorite was the solution to Erdős Problem 1091. The
Btw: @hemal built this as a simple skill in a few hours, since Prism is powered by Codex. More to come. And if you love the idea of building AI-native products to accelerate science, we're hiring a lead designer for Prism: apply here! https://jobs.ashbyhq.com/openai/7e265aee-6842-4fc1-bf28-1df3ea4ee013
💥 New in Prism today: Paper Review, an AI workflow for reviewing technical and scientific papers. This is the opposite of AI slop: we're using AI to improve scientific rigor, correctness, and reproducibility.
RT Mehtaab Sawhney In a beautiful recent paper, Vishesh Jain and Clayton Mizgerd used GPT-5.4 Pro to prove a striking result in the theory of Markov chains: https://arxiv.org/pdf/2604.03937 They study the adjacent transposition Markov chain on the symmetric group. A conjecture of Fill, recently settled by Greaves and Zhu, determined which parameters of this chain maximize the spectral gap, a natural quantity controlling how fast the chain mixes. Jain and Mizgerd go further and characterize exactly when this extremal spectral gap is achieved, answering another question of Fill. As they explain in the paper, once the first part was in place, GPT-5.4 Pro was able to one-shot generate the second part of the main result. From talking with the authors, my understanding is that this would likely have taken substantial effort without GPT. Furthermore even given the first part, several ingredients, such as the piecewise eigenvector construction in Proposition 6.6, were new to them. Just another example of how AI is already changing the everyday practice of research mathematics. Original tweet: https://x.com/mehtaab_sawhney/status/2041354267286737243
RT Simon Smith I’ve been critical of OpenAI lately, but for the past three weeks my family has been dealing with a health issue with my dad, and a ChatGPT shared project with live document syncing has been essential to organizing and understanding everything happening. Me, my four siblings, my mom, and my dad have faced an onslaught of information from various doctors and nurses, which we’ve captured in hundreds of text messages and documents and scans and you name it. ChatGPT has helped us collect this information in a single place, make sense of it, and interrogate it to make the most informed decisions possible. Also, credit where due: Claude played an important role as well, by ingesting iMessages and synthesizing summaries from them to upload to ChatGPT, as well as by extracting text from a bunch of HEIC document scans. I think those of us, like me, excited at AI’s potential get frustrated when we can see issues so clearly, like ChatGPT’s bad design skills, and Claude’s increasing instability and confusing usage consumption. But at times like this I’m reminded of how incredible this technology already is, letting me and my family make sense of and act on hundreds of pieces of information, empowering us in the face of a disjointed and fragmented healthcare system. Original tweet: https://x.com/_simonsmith/status/2040539824034115676
Not only is AI solving more open problems—its proofs are getting more elegant as the models improve
We are excited to share a new paper solving three further problems due to Erdős; in each case the solution was found by an internal model at OpenAI. Each proof is short and elegant, and the paper is available here: https://arxiv.org/pdf/2603.29961
RT roon god I love technology Original tweet: https://x.com/tszzl/status/2038856932371681493
RT Chris Hadfield If schedule holds, these 3 giant rockets will launch in the next 3 weeks. From left to right: New Glenn - satellite launch now, planned for the Moon Starship - test flight 12 now, planned for the Moon Artemis - to the Moon and back with 4 crew aboard Pushing the very edge of our capability as we learn how to more safely & cheaply reach space, to explore all that exists beyond. @nasa @SpaceX @blueorigin Original tweet: https://x.com/Cmdr_Hadfield/status/2038629519012061303
"An open source workflow for producing a personalized mRNA cancer vaccine"
My friend @philfung was inspired by the man who built a personalized cancer vaccine for his dog, so he wrote a guide to DIY mRNA vaccine production. Phil used to run a lab startup, and the guide covers the entire process - from sequencing to synthesis, using open-source
Paul used ChatGPT + AlphaFold to create a personalized mRNA vaccine protocol for his dog's cancer. This is a glimpse of the future, with AI accelerating personalized medicine. We have to make it easier for people to do this for people, not just dogs!
RT Katherine Boyle Great to see this. Gwynne Shotwell should be a household name. One of the most impressive business leaders of our time. Original tweet: https://x.com/KTmBoyle/status/2037141887363043838
TIME’s new cover: SpaceX is racing to build its most powerful rockets yet with the goal of returning humans to the moon. Gwynne Shotwell is leading the charge alongside Elon Musk. Read it here: https://time.com/article/2026/03/26/gwynne-shotwell-profile/?utm_source=twitter&utm_medium=social&utm_campaign=editorial&utm_content=260326
RT Leeham GPT-5.4 Pro fully resolves a Machine Learning adjacent Open Math problem from http://solveall.org! I submitted the solution a few weeks ago and just checking now, it has been accepted. This marks the 2nd problem on the site to be fully resolved! https://solveall.org/problem/gaussian-correlation-inequality-extensions Original tweet: https://x.com/Liam06972452/status/2036266860417540425
I’m inspired by young people putting frontier AI to work in science in new and surprising ways. We recently met a young astronomer who used AI to analyze massive datasets. He identified 1.5 million new celestial objects in space, and the head of NASA offered him a job and a ride in a fighter jet as a signing bonus! (https://x.com/rookisaacman/status/2004772750494499104) To celebrate stand-out students and recent grads like this—and help keep their ideas moving—we’re launching a new program: ChatGPT 26. If you're selected, we'll host you at OpenAI HQ in SF, give you access to our latest tech, and give you a $10K cash grant to keep pushing your ideas forward. If this sounds like you, we’d love to meet you and hear your story. Apply or nominate someone here: http://chatgpt26.com
Technology is so rad. @neuralink is amazing.
It’s hard to believe it’s already been 100 days since I received my Neuralink N1 implant. Looking back, the whole journey feels like science fiction that somehow became my everyday reality. The surgery on Day 0 was surprisingly easy. A quick general anaesthetic, a small
RT Dwarkesh Patel When Copernicus proposed heliocentrism in 1543, it was actually less accurate than Ptolemy's geocentric model - a system refined over 1,400 years with epicycles precisely tuned to match observed planetary positions. It took another 70 years before Kepler, working from Tycho Brahe's unprecedentedly precise observations, replaced Copernicus’s circles with ellipses - finally making heliocentrism empirically superior. Terence Tao's point is that science needs a high temperature setting. If we only fund and follow what's most state of the art today, we kill the ideas that might need decades of work to surpass some overall plateau. Original tweet: https://x.com/dwarkesh_sp/status/2035114158241587221
I'll put a lot of money on the over, @jpatel41 :)
Thanks for the kind words @rohanvarma. The partnership between @Cisco and @OpenAI has been nothing short of fabulous. Especially over the past 75 days. Our team is pretty stoked with the progress being made with the use of Codex. Let’s keep pushing on both sides. Appreciate you
RT Fidji Simo This news came out a little earlier than we planned; we're excited to be building a deployment arm and will share more details soon. Companies have a ton of urgency to deploy AI in their organizations and we’re sprinting to meet that demand. More than 1 million businesses run on OpenAI products. Codex is now at 2M+ weekly active users, up nearly 4x since the start of the year. API usage jumped 20% in the week after GPT-5.4 launched. And Frontier, which launched last month to help enterprises build, deploy, and manage AI coworkers that can do real work, has way more demand than we can handle. That's why we launched Frontier Alliances so we can leverage our ecosystem of partners to scale. And that is also why we are launching a dedicated deployment arm tasked with embedding Forward Deployed Engineers deeply inside of enterprises. This project has been in the works with our investor and alliance partners since last December, and we are grateful for them and their partnership. We’re still early, but the speed of adoption is a clear signal of where this is headed. We're excited to not just be building these technologies but also building many ways for companies to deploy them and get impact. https://www.reuters.com/business/openai-courts-private-equity-join-enterprise-ai-venture-sources-say-2026-03-16/ Original tweet: https://x.com/fidjissimo/status/2033537381907710092
This is beautiful, and true
No one tells you that parenting is just relearning the world through someone who thinks worms are friends & birds are miracles. It’s the most healing thing I’ve ever done. My daughter looked out the window this morning & said, everything is green & growing. I told her, you too.
RT martin_casado 5.4 is really special. Original tweet: https://x.com/martin_casado/status/2031536708706250936
So excited to work together! I have a feeling it's going to be a productive summer :)
I guess now is as good a time as any to announce that I shall be joining the AI for Science team at @OpenAI this summer. This has been in the works since January, and I thank @SebastienBubeck and @kevinweil for their personal interest in making this happen.
If true, this would be the first of @EpochAIResearch's Frontier Math open problems to be resolved by AI. "The result emerged from a single GPT-5.4 Pro run and was subsequently refined into Lean with GPT-5.4 XHigh which ran for a few hours."
We believe we have fully resolved, in Lean and Python, one of @EpochAIResearch Frontier Math open problems: a Ramsey-style problem on hypergraphs. The result emerged from a single GPT-5.4 Pro run and was subsequently refined into Lean with GPT-5.4 XHigh which ran for a few
A look at the future/present
Three days ago I left autoresearch tuning nanochat for ~2 days on depth=12 model. It found ~20 changes that improved the validation loss. I tested these changes yesterday and all of them were additive and transferred to larger (depth=24) models. Stacking up all of these changes,
This is a pretty mind-blowing look inside GPT-4 and its reasoning abilities. Definitely worth your time to read.
Codex for Open Source is an awesome idea. OSS maintainers get API credits, 6 months of ChatGPT Pro with Codex, and access to Codex Security as needed.
We’re launching Codex for Open Source to support the contributors who keep open-source software running. Maintainers can use Codex to review code, understand large codebases, and strengthen security coverage without taking on even more invisible work. http://developers.openai.com/codex/community/codex-for-oss
RT Daniel McAuley some psychopath on the internal codex leaderboard hit 100B tokens in the last week Original tweet: https://x.com/_dmca/status/2029810231325380725
💥 GPT 5.4 is launching today! It's our best model ever, and it's also the most capable scientific model we've ever released. GPT 5.4 Pro in particular is 🤯 based on early testing with scientists and mathematicians.
We integrated the Codex harness into Prism (http://prism.openai.com) — this means you get skills, reasoning levels, and the raw tenacity of the Codex model in your LaTeX environment. Oh and we also built version mgmt into Prism, which was one of the top requests. See below thread from @vicapow for some great examples of the power of Codex inside Prism 👇
🧵1/ We've brought the most advanced AI to Prism by introducing Codex to Prism. Prism is already the best place for scientific writing to happen—and with Codex, now you can write, compute, analyze, and iterate all in one place.
💥 AI accelerating high energy physics Just a few weeks after the gluon scattering paper, this morning we posted the more complicated graviton scattering analogue. See below for more from @ALupsasca 👇
We just posted a new preprint: “Single-minus graviton tree amplitudes are nonzero.” Yes: a helicity sector long assumed to vanish in quantum gravity can actually appear under well-defined kinematics. Preprint: https://cdn.openai.com/graviton/graviton/graviton.pdf
RT Mehtaab Sawhney Last week in a beautiful preprint, Dmitrii Zakharov proved a bound for a two-family version of a word-overlap problem, but the first version left a constant factor gap of e. That gap was quickly improved by a mix of GPT-5.2 Pro and human collaborators. Even more recently, an internal OpenAI model obtained the asymptotically sharp bound. Dmitrii incorporated the argument into v2 of his paper https://arxiv.org/pdf/2602.20143; see his AI statement for more details. Original tweet: https://x.com/mehtaab_sawhney/status/2029052169727754249
RT Joshua Achiam A firm commitment to the principle that AGI companies have to devolve power to democracies and avoid unduly concentrating power in themselves, even when that leads to uncomfortable places, is something I will not regret. Original tweet: https://x.com/jachiam0/status/2027866890278998126
🇺🇸
Tonight, we reached an agreement with the Department of War to deploy our models in their classified network. In all of our interactions, the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome. AI safety and wide distribution of
RT Sam Altman We have raised a $110 billion round of funding from Amazon, NVIDIA, and SoftBank. We are grateful for the support from our partners, and have a lot of work to do to bring you the tools you deserve. Original tweet: https://x.com/sama/status/2027386252555919386
So excited for the new world where everyone can build
Beware, it's 2026 and the CEO is back in the code base thanks to AI.
RT Mehtaab Sawhney We just posted a paper solving Erdos #846, which was solved by an internal model at OpenAI (https://cdn.openai.com/infinite-sets/main_single_clean3.pdf). While the problem can also be derived from an earlier paper in the literature, the proof by the internal model was one of the first instances where I smiled reading the proof. Original tweet: https://x.com/mehtaab_sawhney/status/2026716221933343147
Daniel has a really interesting perspective on the future of Math and AI—well worth a read 👇
RT Greg Brockman seeing so much positive progress across each part of openai right now, very proud of the team Original tweet: https://x.com/gdb/status/2024985187579560366
RT Tejal Patwardhan new in Nature about our wet lab evals work! Original tweet: https://x.com/tejalpatwardhan/status/2024636639126102513
AI-driven autonomous robots are coming to biology laboratories, but researchers insist that human skills remain essential https://go.nature.com/3MCzePt
More on the gluon scattering/GPT 5.2 paper from @ALupsasca below 👇 If you're in the Boston area on Tuesday, go see his lecture at Harvard!
Our recent preprint on gluon amplitudes has sparked a lot of discussion, so I want to share the backstory — including how AI helped crack a problem that had stumped us for a year. I'll also be giving a public lecture at Harvard this week. Details at the end.
RT Peter Steinberger 🦞 I'm joining @OpenAI to bring agents to everyone. @OpenClaw is becoming a foundation: open, independent, and just getting started.🦞 https://steipete.me/posts/2026/openclaw Original tweet: https://x.com/steipete/status/2023154018714100102
RT Sam Altman We went from AI systems that struggled to do grade school math to AI systems that can solve research-level math problems in just a few years. I agree with Jakub this is perhaps the most important eval now. I am also pretty sure the main reaction will be "it's not that hard" :) Original tweet: https://x.com/sama/status/2022729068949717182
Very excited about the "First Proof" challenge. I believe novel frontier research is perhaps the most important way to evaluate capabilities of the next generation of AI models. We have run our internal model with limited human supervision on the ten proposed problems. The
AI 🤝 Mathematics. The First Proof challenge was a great idea and I'm looking forward to seeing the results! Great color on OpenAI's submission from @merettm below 👇
Not gonna lie it's kinda amazing to see gluon scattering trending on Twitter... my worlds colliding https://x.com/i/trending/2022395629113377198
👇💥
I spent last night with Andrew Strominger and Alex Lupsasca, two of the top physicists in the world They just released a paper, co-authored with OpenAI, that seems to me like ASI Andrew, who helped develop string theory, told me that a year ago, his view was that he didn’t know
Coding at 1000 tokens/sec is a mind-expanding experience. You have to try this.
GPT-5.3-Codex-Spark is now in research preview. You can just build things—faster.
RT Derya Unutmaz, MD As I mentioned, I started a really crazy project with the new OpenAI GPT-5.2 Deep Research; I created a 110-chapter textbook focused only on T cells, the immune cells I’ve studied for 35 years, & it’s over 1,000 pages long! Sharing the first 15 chapters here: https://tcell-textbook.netlify.app/ I’ll assemble and share the rest soon. Next, I plan to create similar, even more ambitious textbooks on the entire immune system, cancer, aging, and ME/CFS. Because now you can just do things! Original tweet: https://x.com/DeryaTR_/status/2021780855568417244
Could not agree more. Curious for everyone's take on how we best address this, given the relative speed of model improvement vs peer review.
i am begging academics to study AI capabilities using frontier models. the models used in this study (which is going to be cited for years as proof that "AI is bad at health advice") are GPT-4o, Llama 3, and Command R+, two obsolete models and one i've never heard of.
Can confirm
RT Michael Baym A lot of people talking about this like it's a bunch of vibecoders who hooked up an LLM to some robots In reality Joy is one of the best experimentalists I've ever met (before she went to OpenAI), and Ginkgo has spent nearly two decades developing best-in-class lab automation Original tweet: https://x.com/baym/status/2019799194400145568
Yesterday, we announced that our autonomous lab, connected with @OpenAI's GPT-5, beat the state-of-the-art in Cell-Free Protein Synthesis by 40%. This is how we did it. Watch @OpenAI's Joy Jiao and Edmund Wong alongside our own @reshmapshetty as they explain how techniques like
Coding models are amazing, but "GPT-5 lowers the cost of cell-free protein synthesis by 40%" is also pretty rad. This was all done in a robotic lab in partnership with @Ginkgo. AI + Science!
We worked with @Ginkgo to connect GPT-5 to an autonomous lab, so it could propose experiments, run them at scale, learn from the results, and decide what to try next. That closed loop brought protein production cost down by 40%.
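The propose → run → learn → decide loop described above can be sketched in a few lines. This is a minimal illustrative sketch only, not the actual OpenAI/Ginkgo system: the condition parameters (`temp_c`, `mg_mM`), the toy yield function, and the random-perturbation proposal step are all hypothetical stand-ins for the model's reasoning and the robotic lab.

```python
import random

def propose(history):
    """Propose the next experimental condition.

    Hypothetical stand-in for the model's proposal step: perturb the
    best condition seen so far by +/-10% per parameter.
    """
    if not history:
        return {"temp_c": 30.0, "mg_mM": 8.0}  # made-up starting point
    best_cond, _ = max(history, key=lambda h: h[1])
    return {k: v * random.uniform(0.9, 1.1) for k, v in best_cond.items()}

def run_experiment(cond):
    """Stand-in for the robotic lab: score a condition.

    A toy objective peaked at temp_c=37, mg_mM=12 (invented numbers);
    in reality this is a physical protocol executed at scale.
    """
    return -((cond["temp_c"] - 37) ** 2) - ((cond["mg_mM"] - 12) ** 2)

history = []
for _ in range(50):                 # propose -> run -> learn -> repeat
    cond = propose(history)
    score = run_experiment(cond)
    history.append((cond, score))   # the "learn from results" memory

best_cond, best_score = max(history, key=lambda h: h[1])
print(best_cond, round(best_score, 2))
```

The key design point, per the quoted description, is closing the loop: each proposal is conditioned on all prior results, so the system improves with every run rather than executing a fixed experimental plan.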
🤣🤣🤣
This is either going to be the best or worst idea I've ever had. Hooked up my OpenClaw to all of our internet connected cameras at the house. Got this (OUT OF NOWHERE) this morning.
AI doesn't just accelerate science in big ways, it accelerates it in a thousand incremental compounding ways too.
I've now created, using the OpenAI Codex app, a massive Cancer Mutation Catalog of driver mutations! It contains more than 90,000 mutations across 1,600 genes. You can filter by cancer type, limited now, but will soon expand it to include all cancer types https://cancer-mutations-derya.netlify.app/
RT Noam Brown Every ~6 months I hear people claim @OpenAI isn’t doing real research and is just incrementally improving ChatGPT. I even heard it right before 🍓/o1. In my opinion @OpenAI is the best frontier lab to do research at today. Building an AI research intern in 2026 is not hype. Original tweet: https://x.com/polynoamial/status/2018792698107634108
How does OpenAI balance long-term research bets with product-forward research fundamentals? I’ve been getting this question a lot lately, usually framed as a suggestion that Jakub (@merettm) and I are pushing an increasingly product-focused agenda. That characterization is
If you're a scientist and/or write in LaTeX, try Prism at http://prism.openai.com and let us know what you think! Taking feedback and feature requests here and turning them into code daily with Codex.
Much of today’s scientific tooling has remained unchanged for decades. Prism changes that. @ALupsasca joins @kevinweil and @vicapow to walk through what it looks like when GPT-5.2 works inside a LaTeX project with full paper context.
I've gotten to the point where if I'm sitting in a meeting and I don't have a Codex prompt running, I feel anxious for "wasting" an hour I could have been making parallel progress. It's happened super quickly and totally changed how I think.
If you're interested in the future of AI and science/mathematics, I highly recommend watching @SebastienBubeck's talk "Recent Advances in LLMs for Mathematics." Seb is an amazing AI researcher at OpenAI and former math professor at Princeton. https://www.youtube.com/watch?v=MH3lG7V7SuU
Thanks for having me on, @jordihays @johncoogan! Always super fun to talk with you on TBPN.
OpenAI's @kevinweil says 24/7 robotic labs could automate scientific discovery using "reinforcement learning with a loop through the real world": "There’s a lot of science that can be totally automated. There’s no reason at this point that you need to have grad students
Welcome Mehtaab—so excited to get to work with you!
I've recently gone on leave from Columbia to join OpenAI, working on OpenAI for Science. Over the past few months, AI, including GPT 5.2, has become an increasingly important part of my workflow as a mathematician. I'm excited to contribute to efforts to accelerate progress in
It seems some people are misinterpreting comments @thefriley made in Davos. To be 100% clear: she was not saying that OpenAI plans to take a share of individual users’, entrepreneurs’, or scientists’ discoveries. We’ve heard interest from some large organizations in licensing or IP-based partnerships, and we’re open to exploring creative ways to partner and align incentives. That’s not something we’re doing today, and if we do it in the future, it would be a bespoke agreement with a company, not something that would impact individual users. As it relates to Prism: you log into Prism with your ChatGPT account. In ChatGPT, you control in settings whether your (anonymized) data is used to improve our models. Prism follows the setting you've chosen in ChatGPT. So you're in control. https://x.com/kevinweil/status/2016210486778642808
💥 Today we’re introducing Prism—a free, AI-native workspace for scientists to write and collaborate on research, powered by GPT-5.2. Accelerating science requires progress on two fronts: 1. Frontier AI models that use scientific tools and can tackle the hardest problems 2. Integrating that AI into the products scientists use every day Prism is free to anyone with a ChatGPT account, with unlimited projects and collaborators. Try it today at http://prism.openai.com—would love to hear your feedback.
👀💥 5️⃣.2️⃣
New record on FrontierMath Tier 4! GPT-5.2 Pro scored 31%, a substantial jump over the previous high score of 19%. Read on for details, including comments from mathematicians.
Congrats @JohnnotJon! Granted I'm 42, but I still think you (and Airpost) are cool.
Today, we’re launching Airpost. I’m 46. That’s not a cool age to found a startup. At least according to Twitter. I still call it Twitter. I’ve loved advertising all my life. Since I was 19 and my mom told me about a movie called “Nothing in Common” with Tom Hanks where he plays
Love this prompt from @joannejang! Here's mine. What's yours? Asking it to explain it afterwards is fun too.
Another open Erdős problem solved by GPT 5.2, confirmed by Terence Tao. Great work @neelsomani.
RT Tibo Codex is having a fantastic start to 2026, we're adding compute at unprecedented pace and OpenAI itself is going through a fast evolution phase as we co-evolve how we build together with Codex's capabilities week after week. It's going to be a fun year. Original tweet: https://x.com/thsottiaux/status/2012416863356002373
GPT 5.2 ran uninterrupted for *one week* and wrote *3 million* lines of code. The future is going to be awesome
We built a browser with GPT-5.2 in Cursor. It ran uninterrupted for one week. It's 3M+ lines of code across thousands of files. The rendering engine is from-scratch in Rust with HTML parsing, CSS cascade, layout, text shaping, paint, and a custom JS VM. It *kind of* works! It
Wow, the third Erdos problem solved via GPT 5.2 this week!
Weekend win: The proof I submitted for Erdos Problem #397 was accepted by Terence Tao. The proof was generated by GPT 5.2 Pro and formalized with Harmonic. Many open problems are sitting there, waiting for someone to prompt ChatGPT to solve them:
2026 is going to be an exciting year for science! AI will be a meaningful accelerant and it couldn't come at a better time.
Terence Tao confirms: For the first time, an LLM (GPT-5.2 pro) has successfully solved an Erdos problem on its own. This makes me really excited for GPT-5.3 pro. Science is gaining momentum, and the breakthroughs are becoming more significant.
RT dave kasten Fascinating: Maryland becomes first state govt to try to ship a textfile to help LLMs navigate government services (llms.txt) Original tweet: https://x.com/David_Kasten/status/2009305237949931550
RT Fidji Simo The launch of ChatGPT Health is really personal for me. I know how hard it can be to navigate the healthcare system (even with great care). AI can help patients and doctors with some of the biggest issues. More here: https://fidjisimo.substack.com/p/chatgpt-health Original tweet: https://x.com/fidjissimo/status/2008978500557131893
Super cool! GPT-5.2 🤝 @HarmonicMath to solve and formalize Erdos #728.
GPT-5.2 has successfully and fully autonomously resolved* Erdős problem #728 prior to any human previously: https://www.erdosproblems.com/forum/thread/728 *WITH THREE IMPORTANT CAVEATS 🧵👇: 1) The original problem statement is quite ambiguous. The model solved an interpretation of the problem (1/4)
Great advice for 2026 that would have made zero sense in 2025
remember to tackle your todo list in order of urgency: - reply to friends - kick off gpt-5.2-codex-xhigh tasks - brush teeth, make breakfast, etc.
RT Greg Brockman two big themes of AI in 2026 will be enterprise agent adoption and scientific acceleration Original tweet: https://x.com/gdb/status/2006584251521839141
Congrats to the @OpenAI research team—GPT 5.2 is an incredible model!
Nice way to end the year, see you in 2026 for more! (Also good to remember that 6 months ago the models were at 4% on Frontier Math Tier 4...)
This is wild and impressive... and will seem commonplace in 12-24 months.
Codex CLI wrapped 100 billion tokens. That was my usage in just 39 days on one of the laptops where I run OpenAI’s Codex CLI (currently GPT-5.2 Codex xhigh). I have three OpenAI Pro accounts (US$200 each). If I consider two months of subscription, that’s about 6% of what I would
High school student uses AI to discover 1M+ objects humans missed in astronomical data. Head of NASA openly recruiting him through Twitter with a fighter jet ride included. All my worlds colliding. I love everything about this.
@MAstronomers Matteo please apply to work at NASA and I will personally throw in a fighter jet ride as a signing bonus
RT Aaron Levie http://x.com/i/article/2004648738762227713 Original tweet: https://x.com/levie/status/2004654686629163154
RT Poetiq We finally had a moment to run our system with GPT-5.2 X-High on ARC-AGI-2! Using the same Poetiq harness as before, we saw results as high as 75% at under $8 / problem using GPT-5.2 X-High on the full PUBLIC-EVAL dataset. This beats the previous SOTA by ~15 percentage points. Original tweet: https://x.com/poetiq_ai/status/2003546910427361402
This is great @ivanhzhao!
RT Alex Predhome, Track and Field Enjoyer Time for a Christmas classic Original tweet: https://x.com/Predamame/status/2002908392558678344
RT roon the primary criticism of AI you hear has nothing to do with water use or existential risk whatsoever: most people just think it’s fake and doesn’t work and is a tremendous bubble eating intellectual property while emitting useless slop along the way. when GPT-5 came out and perhaps didn’t live up to what people were expecting for a full version bump, the timeline reaction was not mild, it was a full-scale meltdown. there are many intelligent (and unintelligent) people who latched onto this moment to declare AI scaling over, thousands of viral tweets, still a prevailing view in many circles. The financial-cultural phenomenon of machine intelligence is one of the most powerful in decades, and there are a lot of people who would like for its position to be weakened, many outright celebrating its losses and setbacks. Michael Burry of ‘The Big Short’ fame, unfortunately the type of guy to predict 12 of the last 3 recessions, has bet himself into insolvency on the AI bubble’s collapse. one of the stranger things about this time is that there are very few secrets, and very little reason to be so misinformed. model labs have very little space in between creating new capabilities and launching them to the public. The view among the well informed public and not just “lab insiders” is that machine intelligence is absurdly joyfully smart at so many new things every month. It’s actively contributing on the cutting edge of programming and math and science. Sébastien Bubeck and co’s recent paper reports that GPT5-pro is capable of producing results on the frontier of theoretical physics research, Terry Tao wrote a blog about “vibe-proving” Erdos problems with the auto-formalization AI Aristotle. You can read that these scientists are using it to actively contribute to black hole physics, tighten mathematical bounds in optimization theory, churning morasses of biomedical data into real insight.
Google DeepMind, from the way they are signalling, seems to be slowly closing a dragn...
The Genesis Mission is a brilliant set of ideas. Very excited to deepen @OpenAI's partnership with the DoE and the National Labs in the name of AI and national security 🇺🇸
OpenAI and the U.S. Department of Energy are expanding their collaboration on AI and advanced computing in support of national scientific priorities. The agreement builds on our work with DOE’s national labs and advances the Genesis Mission to accelerate scientific discovery.
This is super exciting 👀
This is wild! Johannes Schmitt used GPT-5 to solve his own open problem on intersection numbers on moduli spaces of curves (the proof turns out to be unexpectedly simple, "low-hanging fruit"). He wrote up the paper, being careful to point out which *entire paragraphs* were written
💥 The new ChatGPT imagegen is here! Plus there's a super fun Images section in the ChatGPT app now too. Built-in prompts make it easy to gen great images. Try it and share what you make below!
Science 🤝 GPT-5. Our new FrontierScience benchmark will be a valuable way to measure the performance of AI models on hard chemistry, biology, physics, and more. Plus, GPT-5 operating in a wet lab environment suggested experiments to increase a molecular cloning protocol's efficiency by 79x. Great thread below 👇
Accelerating scientific progress is one of the most impactful ways AI can benefit society. Models can already help researchers reason through hard problems — but doing this well means testing models on tougher evaluations and in real scientific workflows grounded in experiments.
RT Daniel Litt OK, I think GPT 5.2 Pro is actually a step change in usefulness for my applications (algebraic geometry/number theory research). Original tweet: https://x.com/littmath/status/2000636724574302478
RT Sam Altman GPT-5.2 exceeded a trillion tokens in the API on its first day of availability and is growing fast! Original tweet: https://x.com/sama/status/1999624463013544024
Love this from @Opendoor 🇺🇸
Very proud to introduce the @Opendoor Hero’s Home Credit. $4k off closing costs when you buy an Opendoor home using Opendoor Checkout. Exclusively for active-duty military and veterans. https://www.opendoor.com/heroes Live now in Texas. AZ by end of year, and available everywhere
RT Sebastien Bubeck btw the paper described in this thread is on arxiv already: https://arxiv.org/abs/2512.10220 ! Original tweet: https://x.com/SebastienBubeck/status/1999540978676355478
This problem fits in a broader context of understanding THE SHAPE OF LEARNING CURVES. The most basic property of such shapes is that hopefully ... they are decreasing! Specifically from the statistical perspective, assume that you add more data, can you prove that your test loss
RT Marc Andreessen 🇺🇸 It’s time to build. Original tweet: https://x.com/pmarca/status/1997109742200934620
The new National Security Strategy of the United States: "As Alexander Hamilton argued in our republic’s earliest days, the United States must never be dependent on any outside power for core components—from raw materials to parts to finished products—necessary to the nation’s
Want a masterclass in using ChatGPT? Read this account. I work here and helped build these products, and it still blew my mind.
RT Brad Gerstner 🇺🇸🚀 Original tweet: https://x.com/altcap/status/1995860986407096568
Amazing bipartisan joint letter from @tedcruz & @CoryBooker calling on all business leaders to contribute to the 50+ million kids' accounts being set up over the next 6 months under the Invest America Act (aka Trump Accounts). 🇺🇸🚀
💯. We want more experienced industry leaders like @DavidSacks in government, especially in critical and highly technical areas like AI.
David Sacks @DavidSacks is a throwback to the era of American greatness in which the most capable private sector citizens selflessly volunteered for government service in moments of peril for a dollar a day. He is a credit to our nation, and we need more like him, not fewer. 🇺🇸