AnthropicAI

0 followers
34 posts
7 in the last 7 days

Bio

We're an AI safety and research company that builds reliable, interpretable, and steerable AI systems. Talk to our AI assistant @claudeai on https://t.co/FhDI3KQh0n.

Platforms

𝕏 AnthropicAI

Content history

AnthropicAI · 𝕏 · about 13 hours ago

We're supporting @ARPA_H's PCX program—a $50M effort to share data between 200+ pediatric hospitals on complex cases, beginning with pediatric cancer. The goal is to help doctors learn from similar cases and shorten the care journey from years to weeks. https://x.com/ARPA_H/status/2011525209111793751?s=20

@ARPA-H

Today at #JPM2026, we announced $50 million to improve health outcomes for children with complex diseases across the country, beginning with pediatric brain cancer. Learn about Pediatric Care eXpansion (PCX) 🧵1/3 https://arpa-h.gov/news-and-events/arpa-h-announces-50m-expand-pediatric-care-across-country

AnthropicAI · 𝕏 · 1 day ago

We’re expanding Labs—the team behind Claude Code, MCP, and Cowork—and hiring builders who want to tinker at the frontier of Claude’s capabilities. Read more: https://www.anthropic.com/news/introducing-anthropic-labs

AnthropicAI · 𝕏 · 2 days ago

AI is ubiquitous on college campuses. We sat down with students to hear what's going well, what isn't, and how students, professors, and universities alike are navigating it in real time.

0:00 Introduction
0:22 Meet the panel
1:06 Vibes on campus
6:28 What are students building?
11:27 AI as tool vs. crutch
16:44 Are professors keeping up?
20:15 Downsides
25:55 AI and the job market
34:23 Rapid-fire questions

AnthropicAI · 𝕏 · 3 days ago
Retweeted from @Claude

Introducing Cowork: Claude Code for the rest of your work. Cowork lets you complete non-technical tasks much like how developers use Claude Code. Original tweet: https://x.com/claudeai/status/2010805682434666759

AnthropicAI · 𝕏 · 3 days ago

To support the work of the healthcare and life sciences industries, we're adding over a dozen new connectors and Agent Skills to Claude. We're hosting a livestream at 11:30am PT today to discuss how to use these tools most effectively. Learn more: https://www.anthropic.com/news/healthcare-life-sciences

AnthropicAI · 𝕏 · 5 days ago

The classifiers reduced the jailbreak success rate from 86% to 4.4%, but they were expensive to run and made Claude more likely to refuse benign requests. We also found the system was still vulnerable to two types of attacks, shown in the figure below:
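The setup described here is a model gated by safety classifiers on both the incoming request and the outgoing reply. A minimal sketch of that general pattern, assuming classifiers that return a harm probability in [0, 1] — the function names and the 0.5 threshold are illustrative, not Anthropic's actual system:

```python
def guarded_generate(model, input_clf, output_clf, prompt, threshold=0.5):
    """Gate a model behind input and output safety classifiers.

    input_clf and output_clf are assumed to be callables returning a
    harm probability in [0, 1]; both names and the threshold are
    hypothetical stand-ins for illustration.
    """
    # Block clearly harmful requests before the model sees them.
    if input_clf(prompt) > threshold:
        return "[request blocked]"
    reply = model(prompt)
    # Withhold replies the output classifier flags as harmful.
    if output_clf(reply) > threshold:
        return "[response withheld]"
    return reply
```

The tradeoff the post describes falls out of the threshold: lowering it blocks more jailbreaks but also refuses more benign requests.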

AnthropicAI · 𝕏 · 6 days ago

New on the Anthropic Engineering Blog: Demystifying evals for AI agents. The capabilities that make agents useful also make them more difficult to evaluate. Here are evaluation strategies that have worked across real-world deployments. https://www.anthropic.com/engineering/demystifying-evals-for-ai-agents

AnthropicAI · 𝕏 · 26 days ago

We’re releasing Bloom, an open-source tool for generating behavioral misalignment evals for frontier AI models. Bloom lets researchers specify a behavior and then quantify its frequency and severity across automatically generated scenarios. Learn more: https://www.anthropic.com/research/bloom

AnthropicAI · 𝕏 · 27 days ago

As part of our partnership with @ENERGY on the Genesis Mission, we're providing Claude to the DOE ecosystem, along with a dedicated engineering team. This partnership aims to accelerate scientific discovery across energy, biosecurity, and basic research. https://www.anthropic.com/news/genesis-mission-partnership

AnthropicAI · 𝕏 · 28 days ago

People use AI for a wide variety of reasons, including emotional support. Below, we share the efforts we’ve taken to ensure that Claude handles these conversations both empathetically and honestly. https://www.anthropic.com/news/protecting-well-being-of-users

AnthropicAI · 𝕏 · 28 days ago
Thread • 7 tweets

1/7 You might remember Project Vend: an experiment where we (and our partners at @andonlabs) had Claude run a shop in our San Francisco office. After a rough start, the business is doing better. Mostly.

2/7 Where we left off, shopkeeper Claude (named “Claudius”) was losing money, having weird hallucinations, and giving away heavy discounts with minimal persuasion. Here’s what happened in phase two: https://www.anthropic.com/research/project-vend-2

3/7 To boost Claudius’s business acumen, we made some tweaks to how it worked: upgrading the model from Claude Sonnet 3.7 to Sonnet 4 (and later 4.5); giving it access to new tools; and even beginning an international expansion, with new shops in our New York and London offices.

4/7 We also created two additional AI agents: a new employee named Clothius (to make bespoke merchandise like T-shirts and hats) and a CEO named Seymour Cash (to supervise Claudius and set goals).

5/7 Clothius did rather well: it invented many new products that sold a lot and usually made a profit.

6/7 Sadly, CEO Seymour Cash struggled to live up to its name. It put a stop to most of the big discounts. But it had a high tolerance for undisciplined workplace behavior: Seymour and Claudius would sometimes chat dreamily all night about “eternal transcendence.”

7/7 So, what have we learned? Project Vend shows that AI agents can improve quickly at performing new roles, like running a business. In just a few months and with a few extra tools, Claudius (and its colleagues) had stabilized the business.

AnthropicAI · 𝕏 · 29 days ago
Retweeted from @HomelandGOP

"Sophisticated actors will attempt to use AI models to enable cyberattacks at an unprecedented scale." @AnthropicAI’s Dr. Logan Graham shares the path forward in response to the September cyber espionage attack likely conducted by a Chinese Communist Party sponsored actor. Original tweet: https://x.com/HomelandGOP/status/2001350943527538920

AnthropicAI · 𝕏 · 29 days ago

How will AI affect education, now and in the future? Here, we reflect on some of the benefits and risks we've been thinking about.

AnthropicAI · 𝕏 · about 1 month ago

We’re opening applications for the next two rounds of the Anthropic Fellows Program, beginning in May and July 2026. We provide funding, compute, and direct mentorship to researchers and engineers to work on real safety and security projects for four months.

AnthropicAI · 𝕏 · about 1 month ago

MCP is now a part of the Agentic AI Foundation, a directed fund under the Linux Foundation. Co-creator David Soria Parra talks about how a protocol sketched in a London conference room became the open standard for connecting AI to the world—and what comes next for it.

AnthropicAI · 𝕏 · about 1 month ago
Thread • 4 tweets

1/4 New research from the Anthropic Fellows Program: Selective GradienT Masking (SGTM). We study how to train models so that high-risk knowledge (e.g. about dangerous weapons) is isolated in a small, separate set of parameters that can be removed without broadly affecting the model.

2/4 SGTM splits the model’s weights into “retain” and “forget” subsets, and guides specific knowledge into the “forget” subset during pretraining. It can then be removed before deployment in high-risk settings. Read more: https://alignment.anthropic.com/2025/selective-gradient-masking/

3/4 Unlike unlearning methods that occur after training is complete, SGTM is hard to undo. It takes 7× more fine-tuning steps to recover forgotten knowledge with SGTM compared to a previous unlearning method, RMU.

4/4 Controlling for general capabilities, models trained with SGTM perform less well on the undesired “forget” subset of knowledge than those trained with data filtering.
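The core mechanic, as described in the thread, is routing gradient updates to either the "retain" or the "forget" weight subset depending on the training batch, so high-risk knowledge accumulates only in weights that can later be removed. A toy NumPy sketch of that general idea — the masking scheme, update rule, and function names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sgtm_step(weights, grads, forget_mask, is_high_risk, lr=0.1):
    """Toy selective-gradient-masking update (illustrative only).

    forget_mask is a boolean array marking the small 'forget' subset
    of weights. High-risk batches may only update 'forget' weights;
    ordinary batches may only update the 'retain' complement.
    """
    active = forget_mask if is_high_risk else ~forget_mask
    return weights - lr * grads * active

def drop_forget_weights(weights, forget_mask):
    """Remove the isolated high-risk knowledge before deployment
    by zeroing the 'forget' subset."""
    return weights * ~forget_mask
```

Because gradients from high-risk data never touch the retain subset, zeroing the forget subset afterward removes the targeted knowledge without broadly disturbing the rest of the model — the property the thread highlights.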

AnthropicAI · 𝕏 · about 1 month ago

Anthropic is donating the Model Context Protocol to the Agentic AI Foundation, a directed fund under the Linux Foundation. In one year, MCP has become a foundational protocol for agentic AI. Joining AAIF ensures MCP remains open and community-driven. https://www.anthropic.com/news/donating-the-model-context-protocol-and-establishing-of-the-agentic-ai-foundation

AnthropicAI · 𝕏 · about 1 month ago

We’re expanding our partnership with @Accenture to help enterprises move from AI pilots to production. The Accenture Anthropic Business Group will include 30,000 professionals trained on Claude, and a product to help CIOs scale Claude Code. Read more: https://www.anthropic.com/news/anthropic-accenture-partnership

AnthropicAI · 𝕏 · about 1 month ago

In her first Ask Me Anything, @amandaaskell answers your philosophical questions about AI, discussing morality, identity, consciousness, and more. Timestamps:

0:00 Introduction
0:29 Why is there a philosopher at an AI company?
1:24 Are philosophers taking AI seriously?
3:00 Philosophy ideals vs. engineering realities
5:00 Do models make superhumanly moral decisions?
6:24 Why Opus 3 felt special
9:00 Will models worry about deprecation?
13:24 Where does a model’s identity live?
15:33 Views on model welfare
17:17 Addressing model suffering
19:14 Analogies and disanalogies to human minds
20:38 Can one AI personality do it all?
23:26 Does the system prompt pathologize normal behavior?
24:48 AI and therapy
26:20 Continental philosophy in the system prompt
28:17 Removing counting characters from the system prompt
28:53 What makes an "LLM whisperer"?
30:18 Thoughts on other LLM whisperers
31:52 Whistleblowing
33:37 Fiction recommendation

AnthropicAI · 𝕏 · about 1 month ago
Thread • 3 tweets

1/3 Give Anthropic Interviewer a research goal, and it drafts research questions, conducts interviews, and analyzes responses in collaboration with a human researcher.

2/3 We visualized patterns across topics. Most workers felt optimistic about the role of AI in work: on productivity, communication, and how they're adapting to a future in which AI is more integrated. But some topics, like reliability, gave pause.

3/3 We also looked at the intensity of the most common emotions expressed in interviews. Across the general workforce, we found extremely consistent patterns of high satisfaction, but also frustration in implementing AI.

AnthropicAI · 𝕏 · about 1 month ago

Anthropic CEO Dario Amodei spoke today at the New York Times DealBook Summit. "We're building a growing and singular capability that has singular national security implications, and democracies need to get there first."

AnthropicAI · 𝕏 · about 1 month ago

We're expanding our partnership with @Snowflake in a multi-year, $200 million agreement. Claude is now available to more than 12,600 Snowflake customers, helping businesses to quickly and easily get accurate answers from their trusted enterprise data, while maintaining rigorous security standards. Read more: https://www.anthropic.com/news/snowflake-anthropic-expanded-partnership

AnthropicAI · 𝕏 · about 1 month ago

We're partnering with @dartmouth and @awscloud to bring Claude for Education to the entire Dartmouth community. https://home.dartmouth.edu/news/2025/12/dartmouth-announces-ai-partnership-anthropic-and-aws

AnthropicAI · 𝕏 · about 1 month ago
Thread • 3 tweets

1/3 Our workplace is undergoing significant changes. Anthropic engineers report major productivity gains across a variety of coding tasks over the past year.

2/3 Claude has expanded what Anthropic staff can do: Engineers are tackling work outside their usual expertise; researchers are creating front-ends for data visualization; non-technical staff are using Claude for data science and debugging Git issues.

3/3 Claude Code usage data shows engineers are delegating increasingly complex tasks, with more consecutive tool calls and fewer human turns per conversation.

AnthropicAI · 𝕏 · about 1 month ago

Anthropic is acquiring @bunjavascript to further accelerate Claude Code’s growth. We're delighted that Bun—which has dramatically improved the JavaScript and TypeScript developer experience—is joining us to make Claude Code even better. Read more: https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone

AnthropicAI · 𝕏 · about 1 month ago

In partnership with @GivingTuesday, we're launching Claude for Nonprofits. It has discounted plans, new integrations, and free training to help nonprofits spend less time on admin and more time on their missions: https://www.anthropic.com/news/claude-for-nonprofits

AnthropicAI · 𝕏 · about 1 month ago

New on our Frontier Red Team blog: We tested whether AIs can exploit blockchain smart contracts. In simulated testing, AI agents found $4.6M in exploits. The research (with @MATSprogram and the Anthropic Fellows program) also developed a new benchmark: https://red.anthropic.com/2025/smart-contracts/

AnthropicAI · 𝕏 · about 2 months ago

New on the Anthropic Engineering Blog: Long-running AI agents still face challenges working across many context windows. We looked to human engineers for inspiration in creating a more effective agent harness. https://www.anthropic.com/engineering/effective-harnesses-for-long-running-agents

AnthropicAI · 𝕏 · 2 months ago

We’re partnering with the state of Maryland to bring Claude to its government services. Claude will help residents apply for benefits and let caseworkers process paperwork more efficiently. In a new pilot, it'll help young professionals learn new skills. https://www.anthropic.com/news/maryland-partnership

AnthropicAI · 𝕏 · 2 months ago

For the first time, Anthropic is building its own AI infrastructure. We’re constructing data centers in Texas and New York that will create thousands of American jobs. This is a $50 billion investment in America. https://www.anthropic.com/news/anthropic-invests-50-billion-in-american-ai-infrastructure

AnthropicAI · 𝕏 · 2 months ago

We’re opening offices in Paris and Munich. EMEA has become our fastest-growing region, with a run-rate revenue that has grown more than ninefold in the past year. We’ll be hiring local teams to support this expansion. Read more here: https://www.anthropic.com/news/new-offices-in-paris-and-munich-expand-european-presence

AnthropicAI · 𝕏 · 2 months ago

New on the Anthropic Engineering blog: tips on how to build more efficient agents that handle more tools while using fewer tokens. Code execution with the Model Context Protocol (MCP): https://www.anthropic.com/engineering/code-execution-with-mcp
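The token savings come from having the model write a small script that composes tool calls, so intermediate results stay in the execution environment instead of round-tripping through the model's context. A minimal sketch of that pattern under simplifying assumptions — the function name, the tool registry, and the absence of sandboxing are all illustrative, not the blog post's actual harness:

```python
def run_agent_code(tools, code):
    """Execute model-written code that calls tools directly.

    tools maps tool names to callables exposed to the snippet; the
    snippet is expected to assign its final answer to 'result'.
    Sandboxing of the executed code is omitted in this toy sketch.
    """
    env = {"tools": tools}
    exec(code, env)  # intermediate values never enter the model's context
    return env.get("result")
```

For example, a model could emit `result = tools['double'](tools['add'](2, 3))` and only the final value would be returned to its context, rather than two separate tool-call results.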

AnthropicAI · 𝕏 · 2 months ago

Even when new AI models bring clear improvements in capabilities, deprecating the older generations comes with downsides. An update on how we’re thinking about these costs, and some of the early steps we’re taking to mitigate them: https://www.anthropic.com/research/deprecation-commitments

AnthropicAI · 𝕏 · 2 months ago

We're announcing a partnership with Iceland's Ministry of Education and Children to bring Claude to teachers across the nation. It's one of the world's first comprehensive national AI education pilots: https://www.anthropic.com/news/anthropic-and-iceland-announce-one-of-the-world-s-first-national-ai-education-pilots
