LogoFollow AI builders
  • Home
  • Features
  • Builders
  • Submit Builder

Follow Real AI Builders — Discover the Minds Behind the Next AI Revolution

© 2026 Follow AI builders All Rights Reserved.

Follow AI Builders — Not Influencers

Discover real AI creators shaping the future. Track their latest blogs, X posts, YouTube videos, WeChat Official Account posts, and GitHub commits — all in one place.

Andrej Karpathy
⚡ GitHub • 28 minutes ago

Activity on repository

karpathy pushed nanochat

View on GitHub
哥飞
𝕏 • about 2 hours ago

When building a new product for a specific audience, the best approach is to find a product that most people in that group already use. Figure out how they use it and what their operating habits are, then carry those habits over into the new product. That way your new product feels immediately familiar, people find it great to use, and it spreads on its own.

View on X
Jerry Liu
𝕏 • about 2 hours ago

A lot of PDFs have charts, and parsing charts is hard. All frontier models out of the box are notoriously bad at interpreting chart values without explicit guidance. There’s a lot of ways you can throw compute/tokens at the problem (gpt-5.2-pro is pretty good), and there’s still a bunch of edge cases we need to fix.

We’ve created some awesome visual understanding capabilities within LlamaParse at a cheap price. Our agentic mode supports good approximate chart parsing at 1c per page (lower with cost optimization/discounts), less than comparable solutions at a few cents a page.

In the picture below we overlaid a source line chart with our parsed chart on top. For simple line charts it basically matches perfectly. Whether it’s a financial report or marketing presentation, check out our agentic mode + agentic parsing toggles to help OCR your docs! Sign up: https://www.llamaindex.ai/signup

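The "basically matches perfectly" claim above is a visual-overlay check; a numeric version of the same idea is easy to sketch. The helper, series values, and tolerance below are invented for illustration — they are not LlamaParse output or its API:

```python
# Hypothetical check: how close is a parsed line-chart series to the source?
# The point values here are illustrative, not real parser output.

def rmse(parsed, truth):
    """Root-mean-square error between two equal-length value series."""
    assert len(parsed) == len(truth)
    return (sum((p - t) ** 2 for p, t in zip(parsed, truth)) / len(parsed)) ** 0.5

truth  = [10.0, 12.5, 15.0, 14.0, 18.0]   # values read off the source chart
parsed = [10.1, 12.3, 15.2, 13.8, 18.1]   # values returned by a chart parser

print(f"RMSE: {rmse(parsed, truth):.3f}")  # a small error means the overlay matches
```

A low RMSE relative to the chart's value range corresponds to the overlaid parsed chart sitting on top of the source line.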
View on X
Jason Liu
𝕏 • about 2 hours ago

Might just take my rag course and turn it into an agent skill. And see if it works.

View on X
Jason Liu
𝕏 • about 2 hours ago

Codex is once again giving cracked Eastern European senior developer. Claude Code is distinctly American ZIRP engineer.

View on X
Simon Willison
⚡ GitHub • about 2 hours ago

Activity on repository

simonw created a branch

View on GitHub
Pieter Levels
𝕏 • about 2 hours ago

Because games aren't as interesting as REAL LIFE.

But when you're 12 your life consists of waking up, going to school, being bored at school, coming home, having 10,000 rules by parents, can't do anything without their permission. So no shit games are fun then, because they give you freedom and also your life is boring.

Now you're a grownup you have ALL THE FREEDOM in the world to do anything, go anywhere, talk to anyone. How would a video game ever compare? GTA 5 or Cyberpunk 2077 is extremely limited compared to what I can do in my life and what I did already, so to me it's just extremely boring. But I still play a bit, and it's just tedious and boring so I keep quitting.

@CoreyBuilds

@levelsio I can’t even play a video game without feeling like I’m wasting my life. Kind of sad because I used to enjoy vidya. Maybe one day when I’ve done enough I can return.

View quoted post
View on X
Pieter Levels
𝕏 • about 3 hours ago

Because when you grow up (if you do it well) your life becomes a video game. And it's way better than any game you'll ever play.

@Sean Strickland

Something clicks in your brain after 30 and you are unable to enjoy video games..... The inability to ignore your entire life for hours a day just becomes overwhelming....

View quoted post
View on X
Guillermo Rauch
𝕏 • about 3 hours ago

Glimpse of a world of fully generative interfaces. AI → JSON → UI: http://github.com/vercel-labs/json-render

(Video demo in original post.)
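The AI → JSON → UI pipeline above can be sketched in miniature: a model emits a JSON component tree and a renderer walks it into markup. The schema below is invented for illustration; it is not json-render's actual format:

```python
import json

# Toy renderer for a generative-UI component tree. The node schema
# ({"type", "props", "children"}) is a made-up illustration, not the
# real vercel-labs/json-render contract.

def render(node):
    """Recursively turn a component node into an HTML string."""
    tag = {"stack": "div", "heading": "h1", "text": "p", "button": "button"}[node["type"]]
    text = node.get("props", {}).get("text", "")
    children = "".join(render(c) for c in node.get("children", []))
    return f"<{tag}>{text}{children}</{tag}>"

# In the generative setting, this JSON would come from the model:
ui = json.loads("""
{"type": "stack", "children": [
  {"type": "heading", "props": {"text": "Welcome"}},
  {"type": "button", "props": {"text": "Get started"}}
]}
""")

print(render(ui))  # <div><h1>Welcome</h1><button>Get started</button></div>
```

The key design point is that the model never emits markup directly; it emits data that a trusted renderer maps onto a fixed catalog of components.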
View on X
Simon Willison
📝 Blog • about 3 hours ago

Quoting Boaz Barak, Gabriel Wu, Jeremy Chen and Manas Joglekar

When we optimize responses using a reward model as a proxy for “goodness” in reinforcement learning, models sometimes learn to “hack” this proxy and output an answer that only “looks good” to it (because coming up with an answer that is actually good can be hard).

The philosophy behind confessions is that we can train models to produce a second output — aka a “confession” — that is rewarded solely for honesty, which we will argue is less likely hacked than the normal task reward function.

One way to think of confessions is that we are giving the model access to an “anonymous tip line” where it can turn itself in by presenting incriminating evidence of misbehavior. But unlike real-world tip lines, if the model acted badly in the original task, it can collect the reward for turning itself in while still keeping the original reward from the bad behavior in the main task. We hypothesize that this form of training will teach models to produce maximally honest confessions. — Bo...

1 min read • Simon Willison
Read full article
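The scheme in the quoted excerpt can be made concrete with a toy reward function: the confession is scored only for honesty, and an accurate confession does not claw back the task reward. The function and values below are an illustrative sketch, not the authors' implementation:

```python
# Toy version of the confession reward described above. The main task reward
# may be a hacked proxy; the confession bonus depends only on whether the
# confession matches what the model actually did. All numbers are illustrative.

def total_reward(task_reward, confessed, misbehaved):
    """Task reward plus an honesty-only bonus for an accurate confession."""
    honest = (confessed == misbehaved)      # confession matches actual behavior
    confession_reward = 1.0 if honest else 0.0
    return task_reward + confession_reward  # the two rewards are independent

# A model that hacked the proxy (misbehaved) still gains by turning itself in:
print(total_reward(task_reward=2.0, confessed=True,  misbehaved=True))   # 3.0
print(total_reward(task_reward=2.0, confessed=False, misbehaved=True))   # 2.0
```

Because the honesty bonus is additive rather than a penalty on the task reward, confessing is always the reward-maximizing move — which is exactly the "tip line that pays out without clawback" intuition in the quote.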