JH

Jeremy Howard

0 followers · 36 items · 11 in the last 7 days

Bio

🇦🇺 Co-founder: @AnswerDotAI & @FastDotAI; Prev: professor @ UQ; Stanford fellow; @kaggle president; @fastmail/@enlitic/etc founder https://t.co/16UBFTX7mo

Platforms

𝕏 Jeremy Howard

Content History

JH
Jeremy Howard
𝕏 · 3 days ago

I agree with @dileeplearning. Quoting Dileep George: I agree with @ylecun. Link: https://x.com/dileeplearning/status/1988320699493339284

View on X
JH
Jeremy Howard
𝕏 · 3 days ago

RT Antonio Sarosi: Porting decades-old programs to Rust is dumb. They already fixed countless bugs / security issues; spend the time fixing the remaining issues instead of starting all over again and re-introducing them. Memory safety doesn't prevent dumb logical errors. Use Rust for new software. Quoting The Lunduke Journal: Multiple, serious security vulnerabilities found in the Rust clone of Sudo, which shipped with Ubuntu 25.10 (the most recent release). Not little vulnerabilities: we’re talking about the disclosure of passwords and total bypassing of authentication. In fact, we’re getting new… Link: https://x.com/LundukeJournal/status/1988346904581726501

View on X
JH
Jeremy Howard
𝕏 · 3 days ago

RT Micah Goldblum: 🚨We converted pretrained LLMs into looped LLMs that can crank up performance by looping for more iterations. Our looped models surpass the performance of the pretrained models we started out with, showing that existing models benefit from increased computational depth. 📜 1/9

View on X
JH
Jeremy Howard
𝕏 · 4 days ago

RT Sully: Noticed a pretty worrying trend: the more I use LLMs, the more my day-to-day skills are slowly atrophying, and I'm relying more and more on models for even simple tasks. It's happening for coding, writing, etc. Sometimes I don't even want to do the task if there's no AI to help.

View on X
JH
Jeremy Howard
𝕏 · 4 days ago

RT Rachel Thomas: Re "People who go all in on AI agents now are guaranteeing their obsolescence. If you outsource all your thinking to computers, you stop upskilling, learning, and becoming more competent. AI is great at helping you learn." @jeremyphoward @NVIDIAAI https://www.youtube.com/watch?v=zDkHJDgefyk 2/

View on X
JH
Jeremy Howard
𝕏 · 4 days ago

RT Rachel Thomas: TensorFlow was all about making it easier for computers. PyTorch *won* because it was about making it easier for humans. It’s disappointing to see the AI community focusing on what’s easiest for machines again (prioritizing AI agents & not centering humans). -- @jeremyphoward 1/

View on X
JH
Jeremy Howard
𝕏 · 4 days ago

Interesting bot account, this one. Check out the posting history. Wonder who is organizing this, and why. Quoting Michael: @jeremyphoward @Kimi_Moonshot Don’t fall for China propaganda. You don’t need H100s when your model’s trained on distilled U.S. knowledge; cheap A800s, H800s, even RTX 4090s can handle that just fine using plain old BF16 ops. But when that distilled-knowledge faucet closes in 2026, what then? Link: https://x.com/letsgomike888/status/1987675934548471993

View on X
JH
Jeremy Howard
𝕏 · 5 days ago

RT 張小珺 Xiaojùn: If you are interested in Kimi K2 Thinking, you can check out this interview with Yang Zhilin, founder of Kimi (with Chinese and English bilingual subtitles): https://youtu.be/91fmhAnECVc?si=AKNvfeNvxvfYF7fF

View on X
JH
Jeremy Howard
𝕏 · 5 days ago

RT Shekswess: It’s frustrating how labs like @Kimi_Moonshot, @Alibaba_Qwen, @deepseek_ai, @allen_ai, @huggingface... share their research, pipelines, and lessons openly, only for closed-source labs to quietly use that knowledge to build better models without ever giving back.

View on X
JH
Jeremy Howard
𝕏 · 6 days ago

RT “paula”: the tweet. Quoting Internal Tech Emails: Sam Altman texts Shivon Zilis, February 9, 2023. Link: https://x.com/TechEmails/status/1987199248732180862

View on X
JH
Jeremy Howard
𝕏 · 6 days ago

The really funny part is he plagiarized the image in the first place. Quoting Acer: no because most mathematicians don’t actually care to evaluate integrals and those that do are basically almost always analytic number theorists, and they usually almost always just need to bound them. Keep doing your calculus homework, bud. Link: https://x.com/AcerFur/status/1987023190036480180

View on X
JH
Jeremy Howard
𝕏 · 8 days ago

RT Lucas Beyer (bl16): There's no better feeling than coming up with a simpler solution. And then an even simpler one. And then an EVEN SIMPLER one. I love it. It's also the hardest thing to do in programming, because over-complicating things is so so easy, I see it happen everywhere all the time.

View on X
JH
Jeremy Howard
𝕏 · 8 days ago

RT Crystal: I'm so proud!! The open-source trillion-parameter reasoning model <3 > SOTA on HLE (44.9%) and BrowseComp (60.2%). Quoting Kimi.ai: 🚀 Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here. 🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%) 🔹 Executes up to 200–300 sequential tool calls without human interference 🔹 Excels in reasoning, agentic search, and coding 🔹 256K context window Built… Link: https://x.com/Kimi_Moonshot/status/1986449512538513505

View on X
JH
Jeremy Howard
📝 blog · 8 days ago

A Guide to Solveit Features

An overview of the features of the Solveit platform, which is designed to make exploration and iterative development easier and faster.

1 min read · Solveit and Kerem Turgutlu
Read full article
JH
Jeremy Howard
𝕏 · 10 days ago

RT Stefan Schubert: Lol. Quoting Paul Novosad: What happens when online job applicants start using LLMs? It ain't good. 1. Pre-LLM, cover letter quality predicts your work quality, and a good cover gets you a job. 2. LLMs wipe out the signal, and employer demand falls. 3. Model suggests high-ability workers lose the most. 1/n Link: https://x.com/paulnovosad/status/1985794453576221085

View on X
JH
Jeremy Howard
𝕏 · 10 days ago

RT Joseph Redmon: I’m working on a new thing, we’re so back… Quoting Ai2: Introducing OlmoEarth 🌍, state-of-the-art AI foundation models paired with ready-to-use open infrastructure to turn Earth data into clear, up-to-date insights within hours, not years. Link: https://x.com/allen_ai/status/1985719070407176577

View on X
JH
Jeremy Howard
𝕏 · 11 days ago

RT GPU MODE: If you'd like to win your own Dell Pro Max with GB300, we're launching a new kernel competition with @NVIDIAAI @sestercegroup @Dell to optimize NVF4 kernels on B200. 2025 has seen a tremendous rise of Pythonic kernel DSLs; we got on-prem hardware to have reliable ncu benchmarking available to all, and we hope the best kernel DSL and the best kernel DSL author win.

View on X
JH
Jeremy Howard
📝 blog · 16 days ago

Build to Last

Chris Lattner on software craftsmanship and AI

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · 30 days ago

Let’s Build the GPT Tokenizer: A Complete Guide to Tokenization in LLMs

A text and code version of Karpathy’s famous tokenizer video.

1 min read · Andrej Karpathy, via Solveit and Kerem Turgutlu
Read full article
JH
Jeremy Howard
📝 blog · about 1 month ago

How to Solve it With Code course now available

An email sent to all fast.ai forum users.

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · 9 months ago

fasttransform: Reversible Pipelines Made Simple

Introducing fasttransform, a Python library that makes data transformations reversible and extensible through the power of multiple dispatch.

1 min read · Rens Dimmendaal, Hamel Husain, & Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · 9 months ago

What AI can tell us about microscope slides

A friendly introduction to Foundation Models for Computational Pathology

1 min read · Rachel Thomas
Read full article
JH
Jeremy Howard
📝 blog · about 1 year ago

A New Chapter for fast.ai: How To Solve It With Code

fast.ai is joining Answer.AI, and we’re announcing a new kind of educational experience, ‘How To Solve It With Code’

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · about 1 year ago

In defense of screen time

Pundits say my husband and I are parenting wrong.

1 min read · Rachel Thomas
Read full article
JH
Jeremy Howard
📝 blog · almost 2 years ago

A new old kind of R&D lab

Answer.AI is a new kind of AI R&D lab which creates practical end-user products based on foundational research breakthroughs.

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · about 2 years ago

Can LLMs learn from a single example?

We’ve noticed an unusual training pattern in fine-tuning LLMs. At first we thought it was a bug, but now we think it shows LLMs can learn effectively from a single example.

1 min read · Jeremy Howard and Jonathan Whitaker
Read full article
JH
Jeremy Howard
📝 blog · over 2 years ago

AI and Power: The Ethical Challenges of Automation, Centralization, and Scale

Moving AI ethics beyond explainability and fairness to empowerment and justice

1 min read · Rachel Thomas
Read full article
JH
Jeremy Howard
📝 blog · over 2 years ago

AI Safety and the Age of Dislightenment

Model licensing & surveillance will likely be counterproductive by concentrating power in unsustainable ways

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · over 2 years ago

Is Avoiding Extinction from AI Really an Urgent Priority?

The history of technology suggests that the greatest risks come not from the tech, but from the people who control it

1 min read · Seth Lazar, Jeremy Howard, & Arvind Narayanan
Read full article
JH
Jeremy Howard
📝 blog · over 2 years ago

Mojo may be the biggest programming language advance in decades

Mojo is a new programming language, based on Python, which fixes Python’s performance and deployment problems.

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · over 2 years ago

From Deep Learning Foundations to Stable Diffusion

We’ve released our new course with over 30 hours of video content.

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · over 2 years ago

GPT 4 and the Uncharted Territories of Language

Language is a source of limitation and liberation. GPT 4 pushes this idea to the extreme by giving us access to unlimited language.

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · almost 3 years ago

I was an AI researcher. Now, I am an immunology student.

Last year, I became captivated by a new topic in a way that I hadn’t felt since I first discovered machine learning

1 min read · Rachel Thomas
Read full article
JH
Jeremy Howard
📝 blog · about 3 years ago

1st Two Lessons of From Deep Learning Foundations to Stable Diffusion

4 videos from Practical Deep Learning for Coders Part 2, 2022 have been released as a special early preview of the new course.

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · about 3 years ago

Deep Learning Foundations Signup, Open Source Scholarships, & More

Signups are now open for Practical Deep Learning for Coders Part 2, 2022. Scholarships are available for fast.ai community contributors, open source developers, and diversity scholars.

1 min read · Jeremy Howard
Read full article
JH
Jeremy Howard
📝 blog · about 3 years ago

From Deep Learning Foundations to Stable Diffusion

Practical Deep Learning for Coders part 2, 2022

1 min read · Jeremy Howard
Read full article