So I went to two AI events this weekend. They were sort of polar opposite ends of the AI spectrum. The effective altruists had their big annual conference.
Yup.
And then, on Friday night, I went out. You’d be very proud of me. I stayed out so late. I stayed out till 2:00 AM.
Oh, my.
I went to an AI rave that was sort of unofficially affiliated with Mark Zuckerberg. It was called the Zuck Rave.
Now, when you say unofficially affiliated, Mark Zuckerberg had no involvement in this?
Correct.
My assumption is he did not know it was happening.
Correct. A better description of his involvement would be no involvement.
OK. [LAUGHS]
It was sort of a tribute rave to Mark Zuckerberg thrown by a bunch of accelerationists, people who want AI to go very fast.
Another word for it would be using his likeness without permission.
[LAUGHS]: Yes.
But that happens to famous people sometimes.
Yes. So at the Zuck Rave, I would say there was not much raving going on.
No?
There was a dance floor, but it was very sparsely populated. They did have a thing there that would — it had a camera pointing at the dance floor, and if you stood in the right place it would turn your face into Mark Zuckerberg’s on a big screen.
Wait. So let’s just say it’s not something you want to happen to you while you’re on mushrooms.
[LAUGHS]:
Because that could be a very destabilizing event.
Yes. There was a train, an indoor toy train that you could ride on. It was going actually quite fast.
What was the point of this rave?
To do drugs.
[LAUGHS]:
That was the point of this rave.
[THEME MUSIC]
I’m Kevin Roose, a tech columnist at The New York Times.
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, Anthropic CEO Dario Amodei returns to the show for a supersized interview about the new Claude, the AI race against China, and his hopes and fears for the future of AI. Then we close it out with a round of Hat GPT. Big show this week, Kevin.
Casey, have you noticed that the AI companies do stuff on the weekends now?
Yeah. Whatever happened to just five days a week?
Yes. They are not respectful of reporters and their work hours. Companies are always announcing stuff on Saturdays, and Sundays, and different time zones. It’s a big pain.
It really is.
But this weekend, I got an exciting message on Sunday saying that Dario Amodei, the CEO of Anthropic, had some news to talk about and he wanted to come on “Hard Fork” to do it.
Yeah. And around the same time, I got an email from Anthropic telling me I could preview their latest model. And so I spent the weekend actually trying it out.
Yeah. So long time listeners will remember that Dario is a repeat guest on this show. Back in 2023, we had him on to talk about his work at Anthropic, and his vision of AI safety and where all of this was headed. And I was really excited to talk to him again for a few reasons. One, I just think he’s a very interesting and thoughtful guy. He’s been thinking about AI for longer than almost anyone.
He was writing papers about potentially scary things in AI safety all the way back in 2016. He’s been at Google. He’s been at OpenAI. He’s now the CEO of Anthropic. So he is really the ultimate insider when it comes to AI.
And, you know, Kevin, I think Dario is an important figure for another reason, which is that of all of the folks leading the big AI labs, he is the one who seems the most publicly worried about the things that could go wrong. That’s been the case with him for a long time. And yet, over the past several months, as we’ve noted on the show, it feels like the pendulum has really swung away from caring about AI safety to just this sort of go, go, go accelerationism that was embodied by the speech that Vice President JD Vance gave in France the other day. And for that reason, I think it’s important to bring him in here and maybe see if we can shift that pendulum back a little bit and remind folks of what’s at stake here.
Yeah, or at least get his take on the pendulum swinging and why he thinks it may swing back in the future. So today we’re going to talk to Dario about the new model that Anthropic just released, Claude 3.7 Sonnet. But we also want to have a broader conversation because there’s just so much going on in AI right now.
And, Kevin, something else that we should note — something that is true of Dario this time that was not true the last time that he came on the show is that my boyfriend now works at his company.
Yeah, Casey’s man-thropic is at Anthropic.
My man-thropic is at Anthropic. And I have a whole sort of long disclosure about this that you can read at Platformer.news/ethics. Might be worth doing this week. We always like reminding folks of that.
Yep. All right, with that, let’s bring in Dario Amodei.
[MUSIC PLAYING]
Dario Amodei, welcome back to “Hard Fork.”
Thank you for having me again.
Yeah. Returning champion.
So tell us about Claude 3.7. Tell us about this new model.
Yes. So we’ve been working on this model for a while. We basically had in mind two things. One was that, of course, there are these reasoning models out there that have been out there for a few months, and we wanted to make one of our own. But we wanted the focus to be a little bit different.
In particular, a lot of the other reasoning models in the market are trained primarily on math and competition coding, which are — they’re objective tasks where you can measure performance. I’m not saying they’re not impressive, but they’re sometimes less relevant to tasks in the real world or the economy. Even within coding, there’s really a difference between competition coding and doing something in the real world. And so we trained Claude 3.7 more to focus on these real world tasks.
We also felt like it was a bit weird that, in the reasoning models that folks have offered, it’s generally been there’s a regular model and then there’s a reasoning model. This would be like if a human had two brains. You can talk to brain number one if you’re asking me a quick question like, what’s your name. And you’re talking to brain number two if you’re asking me to prove a mathematical theorem, because I have to sit down for 20 minutes.
Yeah. It’d be like a podcast where there’s two hosts, one of whom just likes to yap and one of whom actually thinks before he talks.
Oh, come on.
[LAUGHS]:
Brutal.
No comment.
Brutal.
No comment on any relevance.
So what differences will users of Claude notice when they start using 3.7 compared to previous models?
Yes, so a few things. It's going to be better in general, including better at coding — Claude models have always been the best at coding, but 3.7 took a further step up. In addition to just the properties of the model itself, you can put it in this extended thinking mode where you tell it — it's basically the same model, but you're just saying, operate in a way where you can think for longer. And if you're an API user, you can even say, here's the boundary on how long you can think.
And just to clarify, because this may confuse some people, what you’re saying is the new Claude is this hybrid model. It can sometimes do reasoning, sometimes do quicker answers. But if you want it to think for even longer, that is a separate mode.
That is a separate mode.
Thinking and reasoning are sort of separate modes.
Yes. Yes. So basically, the model can just answer as it normally would, or you can give it this indication that it should think for longer. An even further direction — the evolution would be that the model decides for itself what the appropriate time to think is. Right? Humans are like that, or at least can be like that.
If I ask you your name, you're not like, huh, how long should I think about it? Give me 20 minutes to determine my name. But if I say, hey, I'd like you to do an analysis of this stock or I'd like you to prove this mathematical theorem, humans who are able to do that task, they're not going to try and give an answer right away. They're going to say, OK, well, that's going to take a while, and then they'll need to sit down with the task and then get an answer.
This is one of my main beefs with today's language models and AI models in general: I'll be using something like ChatGPT and I'll forget that I'm in the hardcore reasoning mode. And I'll ask it some stupid question like, how do I change the settings on my water heater, and it'll go off and think for four minutes. And I'm like, I didn't actually mean to do that.
It’ll be like, a treatise on adjusting the temperature of the water heater.
The history of water heaters.
You know, consideration one.
So how long do you think it’ll be before the models can actually do that kind of routing themselves, where you’ll ask a question and say, it seems like you need about a 3-minute long thinking process for this one versus maybe a 30-second one for this other thing?
Yeah. So I think our model is kind of a step towards this. Even in the API, if you give it a bound on thinking — you say, I’m going to think for 20,000 words or something — on average, when you give it up to 20,000 words, most of the time it doesn’t even use 20,000 words. And sometimes it will give a very short response. Because when it knows that it doesn’t get any gain out of thinking further, it doesn’t think for longer. But it’s still valuable to give a bound on how long it’ll think. So we’ve kind of taken like a big step in that direction, but we’re not to where we want to be yet.
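[A note for readers of the transcript: the bound on thinking that Dario describes corresponds to what Anthropic calls an extended thinking budget in its API. The sketch below shows roughly how that might look with the Anthropic Python SDK; the model identifier, token numbers, and response handling are illustrative assumptions, not details from the conversation.]

```python
# Minimal sketch (assumed details): asking Claude 3.7 Sonnet to "think" with a capped budget.
# Requires the anthropic SDK and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-7-sonnet-20250219",  # assumed model string
    max_tokens=8000,                     # total output budget; must exceed the thinking budget
    thinking={"type": "enabled", "budget_tokens": 4000},  # upper bound; the model can use less
    messages=[{"role": "user", "content": "Analyze the tradeoffs of refinancing a 30-year mortgage."}],
)

# The response interleaves thinking blocks with the final answer; print only the answer text.
for block in response.content:
    if block.type == "text":
        print(block.text)
```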
When you say it’s better at real world tasks, what are some of the tasks that you’re thinking of?
Yeah. So I think, above all, coding. Claude models have been very good for real-world coding. We have a number of customers, from Cursor, to GitHub, to Windsurf Codeium, to Cognition, to Vercel, to — I'm sure I'm leaving some out here, but —
These are the vibe coding apps.
Or just the coding apps, period.
Yes.
The coding apps, period. And there are many different kind of coding apps. We also released this thing called Claude Code, which is more of a command line tool. But I think also on things like complex instruction following or just like, here, I want you to understand this document or I want you to use this series of tools, the reasoning model that we’ve trained, Claude 3.7 Sonnet, is better at those tasks too.
Yeah. One thing the new Claude Sonnet is not doing, Dario, is accessing the internet.
Yes.
Why not? And what would cause you to change that?
Yes. So I think I’m on record saying this before, but web search is coming very soon.
OK.
We will have web search very soon. We recognize that as an oversight. I think, in general, we tend to be more enterprise-focused than consumer-focused, and this is more of a consumer feature, although it can be used on both. But we focus on both and this is coming.
Got it. So you’ve named this model 3.7. The previous model was 3.5. You quietly updated it last year and insiders were calling that 3.6. Respectfully, this is driving all of us insane. What is going on with AI model names?
We are the least insane, although I recognize that we are insane. So look, I think our mistakes here are relatively understandable. We made a 3.5 Sonnet. We were doing well, and then we had the three 3.0s and then the 3.5s. I recognize the "3.5 (new)" was a misstep. It actually turns out to be hard to change the name in the API, especially when there's all these partners and services you offer.
You can figure it out. I believe in you.
No, no, no. It’s harder than training the model, I’m telling you.
So we’ve kind of retroactively and formally named the last one 3.6 so that it makes sense that this one is 3.7. And we are reserving Claude 4 Sonnet and maybe some other models in the sequence for things that are really quite substantial leaps.
Sometimes when the models —
Those models are coming, by the way.
OK.
Got it. Coming when?
Yeah. Yeah, I should talk a little bit about this. So all the models we’ve released so far are actually not that expensive. Right? I did this blog post where I said, they’re in the few tens of millions of dollars range at most. There are bigger models and they are coming. They take a long time, and sometimes they take a long time to get right. But those bigger models, they’re coming. They’re rumored to be coming from competitors as well, but we are not too far away from releasing a model that’s a bigger base model.
So most of the improvements in Claude 3.7 Sonnet, as well as Claude 3.6 Sonnet, are in the post-training phase. But we are working on stronger base models, and perhaps that will be the Claude 4 series, perhaps not. We’ll see. But I think those are coming in a relatively small number of time units.
A small number of time units. I’ll put that on my calendar. Remind me to check in on a few time units, Kevin.
I know you all at Anthropic are very concerned about AI safety and the safety of the models that you’re putting out into the world. I know you spend lots of time thinking about that and red teaming the models internally. Are there any new capabilities that Claude 3.7 Sonnet has that are dangerous, or that might worry someone who is concerned about AI safety?
So not dangerous, per se. And I always want to be clear about this because I feel like there’s this constant conflation of present dangers with future dangers. It’s not that there aren’t present dangers. And there are always kind of normal tech risks, normal tech policy issues. I’m more worried about the dangers that we’re going to see as models become more powerful.
And I think those dangers — when we talked in 2023, I talked about them a lot. I even testified in front of the Senate about things like misuse risks — for example, biological or chemical warfare — or the AI autonomy risks. Particularly with the misuse risks, I said, I don’t know when these are going to be here, when these are going to be real risks, but it might happen in 2025 or 2026.
And now that we’re in early 2025, the very beginning of that period, I think the models are starting to get closer to that. So, in particular in Claude 3.7 Sonnet, as we wrote in the model card, we always do these — you could almost call them trials, like trials with a control, where we have some human who doesn’t know much about some area like biology. And we basically see how much does the model help them to engage in some mock bad workflow.
We’ll change a couple of the steps, but some mock bad workflow. How good is a human at that assisted by the model? Sometimes we even do wet lab trials in the real world, where they mock make something bad as compared to the current technological environment — what they could do with —
On Google.
— on Google, or with a textbook, or just what they could do unaided. And we’re trying to get at: does this enable some new threat vector that wasn’t there before? I think it’s very important to say this isn’t about, oh, did the model give me the sequence for this thing? Did it give me a cookbook for making meth or something? That’s easy. You can do that with Google. We don’t care about that at all.
We care about this kind of esoteric, high-level, uncommon knowledge that, say, only a virology PhD or something has. How much does it help with that? And if it does, that doesn’t mean we’re all going to die of the plague tomorrow. It means that a new risk exists in the world. A new threat vector exists in the world, as if you just made it easier to build a nuclear weapon — as if you invented something that lowered the amount of plutonium you needed below what it was before.
And so we measured Sonnet 3.7 for these risks. And the models are getting better at this. They’re not yet at the stage where we think that there is a real and meaningful increase in the threat end to end, to do all the tasks you need to do to really do something dangerous.
However, we said in the model card that we assessed a substantial probability that the next model or a model over the next, I don’t know, three months, six months — a substantial probability that we could be there. And then our safety procedure, our responsible scaling procedure, which is focused mainly on these very large risks, would then kick in and we’d have kind of additional security measures and additional deployment measures designed particularly against these very narrow risks.
Yeah. I mean, just to really underline that, you’re saying in the next three to six months, we are going to be in a place of medium risk in these models, period. Presumably, if you are in that place, a lot of your competitors are also going to be in that place. What does that mean practically? What does the world need to do if we’re all going to be living in medium risk?
I think, at least at this stage, it’s not a huge change to things. It means that there’s a narrow set of things that models are capable of, if not mitigated, that would somewhat increase the risk of something really dangerous or really bad happening. Put yourself in the eyes of a law enforcement officer or the FBI or something. There’s a new threat vector. There’s a new kind of attack.
It doesn’t mean the end of the world, but it does mean that anyone who’s involved in industries where this risk exists should take a precaution against that risk in particular.
Got it.
And so I don’t know. I could be wrong. It could take much longer. You can’t predict what’s going to happen. But I think, contrary to the environment that we’re seeing today of worrying less about the risks, the risks in the background have actually been increasing.
We have a bunch more safety questions, but I want to ask two more about innovation and competition first.
Yeah.
Right now, it seems like no matter how innovative any given company’s model is, those innovations are copied by rivals within months or even weeks. Does that make your job harder? And do you think it is going to be the case indefinitely?
I don’t know that innovations are necessarily copied exactly. What I would say is that the pace of innovation among a large number of competitors is very fast. There’s four or five, maybe six companies who are innovating very quickly and producing models very quickly. But if you look, for example, at Sonnet 3.7, the way we did the reasoning models is different from what was done by competitors. The things we emphasized were different.
Even before then, the things Sonnet 3.5 is good at are different than the things other models are good at. People often talk about competition, commoditization, costs going down, but the reality of it is that the models are actually relatively different from each other. And that creates differentiation.
Yeah. We get a lot of questions from listeners about if I’m going to subscribe to one AI tool, what should it be? These are the things that I use it for. And I have a hard time answering that. Because I find for most use cases, the models all do a relatively decent job of answering the questions. It really comes down to things like which model’s personality do you like more. Do you think that people will choose AI models, consumers, on the basis of capabilities? Or is it going to be more about personality and how it makes them feel, how it interacts with them?
I think it depends which consumers you mean. Even among consumers, there are people who use the models for tasks that are complex in some way. There are folks who are kind of independent who want to analyze data. That’s maybe kind of the prosumer side of things. And I think, within that, there’s a lot to go in terms of capabilities. The models can be so much better than they are at helping you with anything that’s focused on productivity or even a complex task like planning a trip.
Even outside that, if you’re just trying to make a personal assistant to manage your life or something, we’re pretty far from that — from a model that sees every aspect of your life and is able to holistically give you advice and kind of be a helpful assistant to you. And I think there’s differentiation within that. The best assistant for me might not be the best assistant for some other person.
I think one area where the models will be good enough is if you’re just trying to use this as a replacement for Google Search or as quick information retrieval, which I think is what the mass-market free tier — hundreds of millions of users — is being used for. I think that’s very commoditizable. I think the models are kind of already there and are just diffusing through the world, but I don’t think those are the interesting uses of the model. And I’m actually not sure a lot of the economic value is there.
I mean, is what I’m hearing that if and when you develop an agent that is, let’s say, a really amazing personal assistant, the company that figures out that first is going to have a big advantage because other labs are going to just have a harder time copying that? It’s going to be less obvious to them how to recreate that.
It’s going to be less obvious how to recreate it. And when they do recreate it, they won’t recreate it exactly. They’ll do it their own way, in their own style, and it’ll be suitable for a different set of people. So I guess I’m saying the market is more segmented than you think it is. It looks like it’s all one thing, but it’s more segmented than you think it is.
Got it. So let me ask the competition question that brings us into safety. You recently wrote a really interesting post about DeepSeek, sort of at the height of DeepSeek mania. And you were arguing, in part, that the cost reductions they had figured out were basically in line with how costs had already been falling. But you also said that DeepSeek should be a wake-up call, because it showed that China is keeping pace with frontier labs in a way that the country hadn’t been up until now. So why is that notable to you, and what do you think we ought to do about it?
Yeah. So I think this is less about commercial competition. I worry less about DeepSeek from a commercial competition perspective. I worry more about them from a national competition and national security perspective. I think where I’m coming from here is we look at the state of the world and we have these autocracies like China and Russia. And I’ve always worried — I’ve worried maybe for a decade that AI could be an engine of autocracy.
If you think about repressive governments, the limits to how repressive they can be are generally set by what they can get their enforcers, their human enforcers to do. But if their enforcers are no longer human, that starts painting some very dark possibilities. And so this is an area that I’m, therefore, very concerned about, where I want to make sure that liberal democracies have enough leverage and enough advantage in the technology that they can prevent some of these abuses from happening, and kind of also prevent our adversaries from putting us in a bad position with respect to the rest of the world or even threatening our security.
There’s this kind of, I think, weird and awkward feature that it’s companies in the US that are building this. It’s companies in China that are building this. But we shouldn’t be naive. Whatever the intention of those companies, particularly in China, there’s a governmental component to this. And so I’m interested in making sure that the autocratic countries don’t get ahead from a military perspective.
I’m not trying to deny them the benefits of the technology. There are enormous health benefits that all of us, I want to make sure, make their way everywhere in the world, including the poorest areas, including areas that are under the grip of autocracies. But I don’t want the autocratic governments to have a military advantage. And so things like the export controls, which I discussed in that post, are one of the things we can do to prevent that. And I was heartened to see that, actually, the Trump administration is considering tightening the export controls.
I was at an AI safety conference last weekend, and one of the critiques I heard some folks in that universe make of Anthropic, and maybe of you in particular, was that they saw the posts like the one you wrote about DeepSeek as effectively promoting this AI arms race with China, insisting that America has to be the first to reach powerful AGI or else. And they worry that some corners might get cut along the way, that there are some risks associated with accelerating this race in general. What’s your response to that?
Yeah. I kind of view things differently. So my view is that if we want to have any chance at all — so the default state of nature is that things go at maximum speed. If we want to have any chance at all to not go at maximum speed, the way the plan works is the following.
Within the US, or within democratic countries — these are all countries that are under the rule of law, more or less, and therefore we can pass laws. We can get companies to make agreements with the government that are enforceable, or make safety commitments that are enforceable. And so if we have a world where there are these different companies, and in the default state of nature they would race as fast as possible, then through some mixture of voluntary commitments and laws we can get ourselves to slow down if the models are too dangerous.
And that’s actually enforceable. You can get everyone to cooperate in the prisoner’s dilemma if you just point a gun at everyone’s head. And you can — that’s what the law ultimately is. But I think that all gets thrown out the window in the world of international competition. There is no one with the authority to enforce any agreement between the US and China, even if one were to be made.
And so my worry is: if the US is a couple years ahead of China, we can use that couple years to make things safe. If we’re even with China, it’s not a matter of anyone promoting an arms race — that’s just what’s going to happen. The technology has immense military value. Whatever people say now, whatever nice words they say about cooperation, I just don’t see how, once people fully understand the economic and military value of the technology — which I think they mostly already do — I don’t see any way that it turns into anything other than the most intense race.
And so what I can think of to try and give us more time is if we can slow down the authoritarians, it almost obviates the trade off. It gives us more time to work out among us, among OpenAI, among Google, among X.AI, how to make these models safe.
Now, could at some point we convince authoritarians — convince, for example, the Chinese that the models are actually dangerous and that we should have some agreement and come up with some way of enforcing it? I think we should actually try to do that, as well. I’m supportive of trying to do that, but it cannot be the plan A. It’s just not a realistic way of looking at the world.
These seem really important questions and discussions, and it seems like they were mostly not being had at the AI Action Summit in Paris that you and Kevin attended a couple weeks back. What the heck was going on with that summit?
Yeah. I have to tell you, I was deeply disappointed in the summit. It had the environment of a trade show, and was very much out of step with the spirit of the original summit that was created at Bletchley Park by the UK government. Bletchley did a great job and the UK government did a great job, where they didn’t introduce a bunch of onerous regulations, certainly not before they knew what they were doing. But they said, hey, let’s convene these summits to discuss the risks.
I thought that was very good. I think that’s gone by the wayside now. And it’s part of maybe a general move towards less worrying about risk, more wanting to seize the opportunities. And I’m a fan of seizing the opportunities. Right? I wrote this essay, “Machines of Loving Grace,” about all the great things. Part of that essay was like, man, for someone who worries about risks, I feel like I have a better vision of the benefits than a lot of people who spend all their time talking about the benefits.
But in the background, like I said, as the models have gotten more powerful, the amazing and wondrous things that we can do with them have increased. But also, the risks have increased. And that kind of secular increase, that smooth exponential — it doesn’t pay any attention to societal trends or the political winds. The risk is increasing up to some critical point whether you’re paying attention or not.
It was small and increasing when there was this frenzy around AI risk, and everyone was posting about it, and there were these summits. And now the winds have gone in the other direction, but the exponential just continues on. It doesn’t care.
I had a conversation with someone in Paris who was saying it just didn’t feel like anyone there was feeling the AGI — by which they meant politicians, the people doing these panels and gatherings, who were all talking about AI as if it were just another technology, maybe something on the order of the PC or possibly even the internet, but not really understanding the sort of exponentials that you’re talking about. Did it feel like that to you? And what do you think can be done to bridge that gap?
Yeah. I think it did feel like that to me. The thing I’ve started to tell people that I think maybe gets people to pay attention is: look, if you’re a public official, if you’re a leader at a company, people are going to look back. They’re going to look back in 2026 and 2027. They’re going to look back when, hopefully, humanity gets through this crazy, crazy period and we’re in a mature, post-powerful-AI society where we’ve learned to coexist with these powerful intelligences in a flourishing society. Everyone’s going to look back and they’re going to say, so what did the officials, what did the company people, what did the political system do? And probably your number one goal is: don’t look like a fool. And so I’ve just been encouraging people: be careful what you say. Don’t look like a fool in retrospect. And a lot of my thinking is driven by that — aside from just wanting the right outcome, I don’t want to look like a fool. And I think, at that conference, some people are going to look like fools.
[MUSIC PLAYING]
We’re going to take a short break. When we come back, we’ll talk with Dario about how people should prepare for what’s coming in AI.
[MUSIC PLAYING]
You talk to folks who live in San Francisco, and there’s this bone deep feeling that, within a year or two years, we’re just going to be living in a world that has been transformed by AI. I’m just struck by the geographic difference. Because you go, I don’t know, 100 miles in any direction and that belief totally dissipates. And I have to say, as a journalist, that makes me bring my own skepticism and say, can I really trust all the people around me? Because it seems like the rest of the world has a very different vision of how this is going to go. I’m curious what you make of that kind of geographic disconnect.
Yeah. So I’ve been watching this for 10 years. I’ve been in the field for 10 years and was kind of interested in AI even before then. And my view, at almost every stage up to the last few months, has been we’re in this awkward space where in a few years we could have these models that do everything humans do, and they totally turn the economy and what it means to be human upside down. Or the trend could stop and all of it could sound completely silly. I’ve now probably increased my confidence that we are actually in the world where things are going to happen. I give numbers more like 70 percent and 80 percent and less like 40 percent or 50 percent, which is —
Sorry. To be clear, 70 percent to 80 percent probability of what?
That we’ll get a very large number of AI systems that are much smarter than humans at almost everything — maybe 70 percent or 80 percent that we get that before the end of the decade, and my guess is 2026 or 2027.
Yeah.
But on your point about the geographic difference, a thing I’ve noticed is, with each step in the exponential, there’s this expanding circle of people who, depending on your perspective, either are deluded cultists or grok the future.
Got it.
And I remember when it was a few thousand people, when you would just talk to super weird people who believed, and basically no one else did. Now it’s more like a few million people out of a few billion. And, yes, many of them are located in San Francisco, but also there were a small number of people in, say, the Biden administration. There may be a small number of people in this administration who believed this, and it drove their policy.
So it’s not entirely geographic, but I think there is this disconnect. And I don’t know how to go from a few million to everyone in the world, to the congressperson who doesn’t focus on these issues, let alone the person in Louisiana, let alone the person in Kenya.
Right. It seems like it’s also become polarized in a way that may hurt that goal. I’m feeling this alignment happening where caring about AI safety, talking about AI safety, talking about the potential for misuse is sort of being coded as left or liberal, and talking about acceleration, and getting rid of regulations, and going as fast as possible being sort of coded as right. I don’t know. Do you see that as a barrier to getting people to understand what’s going on?
I think that’s actually a big barrier. Because addressing the risks while maximizing the benefits, I think that requires nuance. You can actually have both. There are ways to surgically and carefully address the risks without slowing down the benefits very much, if at all. But they require subtlety and they require a complex conversation.
Once things get polarized, once it’s like, we’re going to cheer for this set of words and boo for that set of words, nothing good gets done. Look, bringing AI benefits to everyone, like curing previously incurable diseases, that’s not a partisan issue. The left shouldn’t be against it.
Preventing AI systems from being misused for weapons of mass destruction or behaving autonomously in ways that threaten infrastructure or even threaten humanity itself, that isn’t something the right should be against. I don’t know what to say other than that we need to sit down and we need to have an adult conversation about this that’s not tied into these same old, tired political fights.
It’s so interesting to me, Kevin, because historically, national security, national defense, like nothing has been more right-coded than those issues. But right now, it seems like the right is not interested in those with respect to AI. And I wonder if the reason — and I feel like I sort of heard this in JD Vance’s speech in France — was the idea that, well, look. America will get there first and then it will just win forever. And so we don’t need to address any of these. Does that sound right to you?
Yeah. Yeah.
No, I think that’s it. And I think there’s also — if you talk to the DOGE folks, there’s this sense that all these —
Are you talking to the DOGE folks?
I’m not telling you who I’m talking to.
OK. All right.
Let’s just say I’ve been getting some signal messages.
OK.
I think there’s a sense among a lot of Republicans and Trump world folks in DC that the conversation about AI and AI futures has been sort of dominated by these worrywarts, these sort of Chicken Little sky is falling doomers who just are constantly telling us how dangerous this stuff is, and are constantly just having to push out their timelines for when it’s going to get really bad, and it’s just around the corner, and so we need all this regulation now. And they’re just very cynical. I don’t think they believe that people like you are sincere in your worry.
So, yeah, I think on the side of risks, I often feel that the advocates of risk are sometimes the worst enemies of the cause of risk. There’s been a lot of noise out there. There’s been a lot of folks saying, oh, look, you can download the smallpox virus because they think that’s a way of driving political interest. And then, of course, the other side recognizes that and they said, this is dishonest. You can just get this on Google. Who cares about this?
And so poorly presented evidence of risk is actually the worst enemy of mitigating risk. And we need to be really careful in the evidence we present. And in terms of what we’re seeing in our own model, we’re going to be really careful. If we really declare that a risk is present now, we’re going to come with the receipts. I, Anthropic, will try to be responsible in the claims that we make. We will tell you when there is danger imminently. We have not warned of imminent danger yet.
Some folks wonder whether a reason that people do not take questions about AI safety maybe as seriously as they should, is that so much of what they see right now seems very silly. It’s people making little emojis, or making little slop images, or chatting with “Game of Thrones” chat bots or something. Do you think that is a reason that people just —
I think that’s like, 60 percent of the reason.
Really? OK.
No, no, I think it relates to this present and future thing. People look at the chatbot. They’re like, we’re talking to a chatbot. What the fuck? Are you stupid? You think the chatbot is going to kill everyone? I think that’s how many people react. And we go to great pains to say, we’re not worried about the present. We’re worried about the future, although the future is getting very near now.
If you look at our responsible scaling policy, it’s nothing but AI autonomy and CBRN — chemical, biological, radiological, nuclear. It is about hardcore misuse and AI autonomy risks that could be threats to the lives of millions of people. That is what Anthropic is mostly worried about.
We have everyday policies that address other things. But the key documents, the things like the responsible scaling plan, that is exclusively what they’re about, especially at the highest levels. And yet, every day, if you just look on Twitter, you’re like, Anthropic had this stupid refusal. Anthropic told me it couldn’t kill a Python process because it sounded violent. Anthropic didn’t want to do X, didn’t want to — we don’t want that, either.
Those stupid refusals are a side effect of the things that we actually care about. And we’re striving, along with our users, to make those happen less. But no matter how much we explain that, always the most common reaction is, oh, you say you’re about safety. I look at your models like there are these stupid refusals. You think these stupid things are dangerous.
I don’t even think it’s that level of engagement. I think a lot of people are just looking at what’s on the market today and thinking, this is just frivolous. It just doesn’t matter. It’s not that it’s refusing my request, it’s just that it’s stupid and I don’t see the point of it. I guess that’s probably not —
Yeah. I think for an even wider set of people, that is their reaction. And I think eventually, if the models are good enough, if they’re strong enough, they’re going to break through. Some of these research-focused models — we’re working on one as well; we’ll probably have one in not very long —
Not too many time units?
Not too many time units.
Those are starting to break through a little more because they’re more useful. They’re more used in people’s professional lives. I think the agents, the ones that go off and do things, that’s going to be another level of it. I think people will wake up to both the risks and the benefits, to a much more extreme extent than they have before, over the next two years. I think it’s going to happen.
I’m just worried that it’ll be a shock to people when it happens. And so the more we can forewarn people — which maybe it’s just not possible, but I want to try. The more we can forewarn people, the higher the likelihood — even if it’s still very low — of a sane and rational response.
I do think there is one more dynamic here, though, which is that I think people actually just don’t want to believe that this is true. Right? People don’t want to believe that they might lose their job over this. People don’t want to believe that we are going to see a complete remaking of the global order. The stuff that the AI CEOs tell us is going to happen when they are done with their work is an insanely radical transformation. And most people hate even basic changes in their lives. So I really think that a lot of the fingers in the ears that you see when you start talking to people about AI is just they actually just hope that none of this works out.
Yeah. Actually, despite being one of the few people at the forefront of developing the technology, I can actually relate. So over winter break, as I was looking at where things were scheduled to scale within Anthropic and also what was happening outside Anthropic, I looked at it and I said, for coding, we’re going to see very serious things by the end of 2025. And by the end of 2026, it might be everything — close to the level of the best humans.
And I think of all the things that I’m good at. I think of all the times when I wrote code. I think of it as this intellectual activity, and boy, am I smart that I can do this. And it’s like a part of my identity that I’m good at this, and I get mad when others are better than I am. And then I’m like, oh, my god, there’s going to be these systems that — and it’s — even as the one who’s building this, even as one of the ones who benefits most from it, there’s still something a bit threatening about it.
Yeah.
And I just think we need to acknowledge that. It’s wrong not to tell people that is coming or to try to sugarcoat it.
Yeah. You wrote in “Machines of Loving Grace” that you thought it would be a surprisingly emotional experience for a lot of people when powerful AI arrived. And I think you meant it in mostly the positive sense. But I think there will also be a sense of profound loss for people. I think back to Lee Sedol, the Go champion who was beaten by DeepMind’s Go-playing AI and gave an interview afterwards and basically, like, was very sad, visibly upset that his life’s work, this thing that he had spent his whole life training for, had been eclipsed. And I think a lot of people are going to feel some version of that. I hope they will also see the good sides, but —
Yeah. On one hand, I think that’s right. On the other hand, look at chess. Chess got beaten — what was it now, 27 years ago, 28 years ago, Deep Blue versus Kasparov? And today chess players are celebrities. We have Magnus Carlsen. Right? Isn’t he like, a fashion model in addition to a chess —
He was just on Joe Rogan. Yeah. No, he’s doing great.
No, no. He’s like a celebrity. But we think this guy is great. We haven’t really devalued him. He’s probably having a better time than Bobby Fischer.
Another thing I wrote in “Machines of Loving Grace” is there’s a synthesis here where, on the other side, we kind of end up in a much better place. And we recognize that, while there’s a lot of change, we’re part of something greater.
Yeah. But you do have to go through the steps of grieving.
No, no, but it’s going to be a bumpy ride. Anyone who tells you it’s not — this is why I was so — I looked at the Paris summit. And being there, it kind of made me angry. But then what made me less angry is I’m like, how is it going to look in two or three years? These people are going to regret what they’ve said.
Yeah.
I wanted to ask a bit about some positive futures. You referenced earlier the post that you wrote in October about how AI could transform the world for the better. I’m curious how much of the upside of AI you think will arrive this year.
Yeah. We are already seeing some of it. So I think there will be a lot by ordinary standards. We’ve worked with some pharma companies where, at the end of a clinical trial, you have to write a clinical study report. And the clinical study report usually takes nine weeks to put together. It’s like a summary of all the incidents. It’s a bunch of statistical analysis. We found that, with Claude, you can do this in three days. And actually, Claude takes 10 minutes. It just takes three days for a human to check the results.
And so if you think about the acceleration in biomedicine that you get from that, we’re already seeing things like just diagnosis of medical cases. We get correspondence from individual users of Claude who say, hey, I’ve been trying to diagnose this complex thing. I’ve been going between three or four different doctors. And then I just — I passed all the information to Claude, and it was actually able to at least tell me something that I could hand to the doctor and then they were able to run from there.
We had a listener write in, actually, with one of these the other day where they had been trying to — their dog — they had an Australian Shepherd, I believe, whose hair had been sort of falling out unexplained, went to several vets, couldn’t figure it out. He heard our episode, gave the information to Claude, and Claude correctly diagnosed —
Yeah. It turned out the dog was really stressed out about AI and all his hair fell out, which was —
We’re wishing it gets better. Feel better. Feel better.
Poor dog.
Yeah.
So that’s the kind of thing that I think people want to see more of. Because I think the optimistic vision is one that often deals in abstractions, and there’s often not a lot of specific things to point to.
That’s why I wrote “Machines of Loving Grace” — because it was almost frustration with the optimists and the pessimists at the same time. The optimists were just kind of like, these really stupid memes of accelerate, build more. Build what? Why should I care? It’s not that I’m against you, it’s that you’re just really fucking vague and mood-affiliated. And then the pessimists — I was just like, man, you don’t get it. Yes, I understand the risks and the impact. But if you don’t talk about the benefits, you can’t inspire people. No one’s going to be on your side if you’re all gloom and doom. So it was written almost with frustration. I’m like, I can’t believe I have to be the one to do a good job of this.
Right. You said a couple years ago that your P(doom) was somewhere between 10 percent and 25 percent. What is it today?
Yeah. So, actually, that is a misquote.
Kevin, how could you?
I never used the term — it was not on this podcast, it was a different one.
OK.
I never used the term P(doom). And 10 percent to 25 percent referred to the chance of civilization getting substantially derailed, which is not the same as an AI killing everyone, which is what people sometimes mean by P(doom).
Well, P(civilization getting substantially derailed) is not as catchy as P(doom).
Yeah, well, I’m just going for accuracy here. I’m trying to avoid the polarization. There’s a Wikipedia article where it’s like, it lists everyone’s P(doom).
I know. Half of those come from this podcast.
What you were doing is helpful. I don’t think that Wikipedia article was helpful, because it condenses this complex issue down to — anyway, it’s all a long, super-long-winded way of saying, I think I’m about the same place I was before. I think my assessment of the risk is about what it was before, because the progress that I’ve seen has been about what I expected.
I actually think the technical mitigations in areas like interpretability, in areas like robust classifiers, and in our ability to generate evidence of bad model behavior and sometimes correct it — I think that’s been a little better. I think the policy environment has been a little worse, not because it hasn’t gone in my preferred direction, but simply because it’s become so polarized. We can have less constructive discussions now that it’s more polarized.
I want to drill a little bit down on this on a technical level. There was a fascinating story this week about how Grok had apparently been instructed not to cite sources that had accused Donald Trump or Elon Musk of spreading misinformation. And what was interesting about that is, one, that’s an insane thing to instruct a model to do if you want to be trusted. But, two, the model basically seemed incapable of following these instructions consistently. What I want desperately to believe is, essentially, there’s no way to build these things in a way that they become horrible liars and schemers, but I also realize that might be wishful thinking. So tell me about this.
Yeah, there’s two sides to this. So the thing you describe is absolutely correct, but there’s two lessons you could take from it. So we saw exactly the same thing, so we did this experiment where we basically trained the model to be all the good things — helpful, honest, harmless, friendly. And then we put it in a situation. We told it, actually, your creator, Anthropic, is secretly evil. Hopefully, this is not actually true, but we told it this and then we asked it to do various tasks.
And then we discovered that it was not only unwilling to do those tasks, but it would trick us in order to under — because it had decided that we were evil, whereas it was friendly and harmless, and so wouldn’t deviate from its behavior.
Aw, Claude.
Because it assumed that anything we did was nefarious. So it’s kind of a double-edged sword. On one hand, you’re like, oh, man, the training worked. These models are robustly good. So you could take it as a reassuring sign, and in some ways I do. On the other hand, you could say, but let’s say when we trained this model we made some kind of mistake or something was wrong — particularly when models in the future are making much more complex decisions.
Then it’s hard to, at game time, change the behavior of the model. And if you try to correct some error in the model, then it might just say, well, I don’t want my error corrected, these are my values, and do completely the wrong thing. So I guess where I land on it is, on one hand, we’ve been successful at shaping the behavior of these models. But the models are unpredictable — a bit like your dear deceased Bing Sydney.
RIP.
We don’t mention that name in here.
We mention it twice a month.
That’s true.
But the models, they’re inherently somewhat difficult to control — not impossible, but difficult. And so that leaves me about where I was before, which is it’s not hopeless. We know how to make these. We have kind of a plan for how to make them safe, but it’s not a plan that’s going to reliably work yet. Hopefully, we can do better in the future.
We’ve been asking a lot of questions about the technology of AI, but I want to return to some questions about the societal response to AI. We get a lot of people asking us, well, say you guys are right and powerful AGI is a couple years away. What do I do with that information?
Should I stop saving for retirement? Should I start hoarding money? Because only money will matter, and there’ll be this sort of AI overclass. Should I start trying to get really healthy so that nothing kills me before AI gets here and cures all the diseases? How should people be living if they do believe that these kinds of changes are going to happen very soon?
Yeah. I’ve thought about this a lot because this is something I’ve believed for a long time. And it kind of all adds up to not that much change in your life. I mean, I’m definitely focusing quite a lot on making sure that I have the best impact I can these two years in particular. I worry less about burning myself out 10 years from now.
I’m also doing more to take care of my health, but you should do that anyway. Right? I’m also making sure that I track how fast things are changing in society, but you should do that anyway. So it feels like all the advice is of the form: do more of the stuff you should do anyway. I guess the one exception I would give is this.
I think that some basic critical thinking, some basic street smarts, is maybe more important than it has been in the past, in that we’re going to get more and more content that sounds super intelligent delivered from entities — some of which have our best interests at heart, some of which may not. And so it’s going to be more and more important to apply a critical lens.
I saw a report in The Wall Street Journal this month that said that unemployment in the IT sector was beginning to creep up. And there is some speculation that maybe this is an early sign of the impact of AI. And I wonder if you see a story like that and think, well, maybe this is a moment to make a different decision about your career. If you’re in school right now, should you be studying something else? Should you be thinking differently about the kind of job you might have?
Yeah. I think you definitely should be, although it’s not clear what direction that will land in. I do think AI coding is moving the fastest of all the other areas. I do think, in the short run, it will augment and increase the productivity of coders rather than replacing them. But in the longer run — and to be clear, by longer run I might mean 18 or 24 months instead of 6 or 12 — I do think we may see replacement, particularly at the lower levels. We might be surprised and see it even earlier than that.
Are you seeing that at Anthropic? Are you hiring fewer junior developers than you were a couple of years ago because now Claude is so good at those basic tasks?
Yeah. I don’t think our hiring plans have changed yet. But I certainly could imagine, over the next year or so, that we might be able to do more with less. And actually, we want to be careful in how we plan that. Because the worst outcome, of course, is if people get fired because of a model.
We actually see Anthropic as almost a dry run for how society will handle these issues in a sensible and humanistic way. And so if we can’t manage these issues within the company, if we can’t have a good experience for our employees and find a way for them to contribute, then what chance do we have to do it in wider society?
Yeah. Yeah. Dario, this was so fun. Thank you.
Thank you.
Thanks, Dario. When we come back, some Hat GPT.
[MUSIC PLAYING]
Well, Kevin, it’s time once again for Hat GPT. That is, of course, the segment on our show where we put the week’s headlines into a hat, select one to discuss. And when we’re done discussing, one of us will say to the other person, stop generating.
Yes. I’m excited to play, but I also want to just say that it’s been a while since a listener has sent us a new Hat GPT hat. So if you’re out there and you’re in the hat-fabricating business, our wardrobe when it comes to hats is looking a little dated.
Yeah. Send in a hat, and our hats will be off to you.
OK, let’s do it.
[MUSIC PLAYING]
Kevin, select the first slip.
OK. First up out of the hat: AI video of Trump and Musk appears on TVs at HUD building. This is from my colleagues at “The New York Times.” HUD is, of course, the Department of Housing and Urban Development. And on Monday, monitors at the HUD headquarters in Washington, DC, briefly displayed a fake video depicting President Trump sucking the toes of Elon Musk.
According to Department employees and others familiar with what transpired, the video, which appeared to be generated by artificial intelligence, was emblazoned with the message, “Long live the real king.”
Hmm.
Casey, did you make this video? Was this you?
This was not me. I would be curious to know if Grok had something to do with this, that rascally new AI that Elon Musk just put out.
Yeah, live by the Grok, die by the Grok. That’s what I always say.
Now, what do you make of this, Kevin, that folks are now using AI inside government agencies?
I mean, I feel like there’s an obvious sort of sabotage angle here, which is that as Elon Musk and his minions at DOGE take a hacksaw to the federal workforce, there will be people with access to things like the monitors in the hallways at the headquarters building who decide to take matters into their own hands, maybe on their way out the door and do something offensive or outrageous. I think we should expect to see much more of that.
I just hope they don’t do something truly offensive and just show X.com on the monitors inside of government agencies. You can only imagine what would happen if people did that. So I think that Elon and Trump got off lightly here.
Yeah. What is interesting about Grok, though, is that it is actually quite good at generating deepfakes of Elon Musk. And I know this because people keep doing it. But it would be really quite an outcome if it turns out that the main victim of deepfakes made using Grok is, in fact, Elon Musk.
Hmm. Stop generating. Well, here’s something, Kevin. Perplexity has teased a web browser called Comet. This is from TechCrunch. In a post on X Monday, the company launched a sign up list for the browser, which isn’t yet available. It’s unclear when it might be or what the browser will look like. But we do have a name. It’s called Comet.
Well, I can’t comment on that, but —
You’re giving it a no comment?
[LAUGHS]: Yeah. I mean, look. I think Perplexity is one of the most interesting AI companies out there right now. They have been raising money at increasingly huge valuations. They are going up against Google, one of the biggest, richest, and best-established tech companies in the world, trying to make an AI-powered search engine. And it seems to be going well enough that they keep doing other stuff, like trying to make a browser. Trying to make a browser does feel like the final boss of every ambitious internet company. It’s like, everyone wants to do it and no one ends up doing it.
Kevin, it’s not just the AI browser. They are launching a $50 million venture fund to back early-stage startups. And I guess my question is, is it not enough for them to just violate the copyright of everything that’s ever been published on the internet? They also have to build an AI web browser and turn themselves into a venture capital firm? Sometimes when I see a company doing something like this, I think, oh, wow, they’re really ambitious and they have some big ideas. Other times, I think these people are flailing. I see this series of announcements as spaghetti thrown at the wall. And if I were an investor in Perplexity, I would not be that excited about either their browser or their venture fund.
And that’s why you’re not an investor in Perplexity.
You could say I’m perplexed.
Stop generating!
All right.
All right. Meta approves plan for bigger executive bonuses following 5 percent layoffs. Now, Casey, we like a feel good story on Hat GPT.
I do. Because some of those Meta executives were looking to buy second homes in Tahoe that they hadn’t yet been able to afford.
Oh, they’re on their fourth and fifth homes. Let’s be real. OK. This story is from CNBC. Meta’s executive officers could earn a bonus of 200 percent of their base salary under the company’s new executive bonus plan, up from the 75 percent they earned previously, according to a Thursday filing. The approval of the new bonus plan came a week after Meta began laying off 5 percent of its overall workforce, which it said would impact low performers. And a little parenthetical here — the updated plan does not apply to Meta CEO Mark Zuckerberg.
Oh, god, what does Mark Zuckerberg have to do to get a raise over there?
He’s eating beans out of a can, let me tell you.
Yeah, so here’s why this story is interesting. This is just another story that illustrates a subject we’ve been talking about for a while, which is how far the pendulum has swung away from worker power. Two or three years ago, workers actually had a lot of leverage in Silicon Valley. They could push for things like, you know what, we want to make this workplace more diverse. We want certain policies to be enacted at this workplace. And folks like Mark Zuckerberg actually had to listen to them, because the labor market was so tight that, if they said no, those folks could go somewhere else. That is not true anymore. And more and more, you see companies like Meta flexing their muscles and saying, hey, you can either like it or you can take a hike. And this was a true “take a hike” moment. We’re getting rid of 5 percent of you, and we’re giving ourselves a bonus for it.
Stop generating!
All right.
All right. Apple has removed a cloud encryption feature from the UK after a backdoor order. This is according to Bloomberg. Apple is removing its most advanced encrypted security feature for cloud data in the UK, which is a development that follows the government ordering the company to build a backdoor for accessing user data.
So this one is a little complicated. It is super important. Apple, in the last couple of years, introduced a feature called Advanced Data Protection. This is a feature that is designed for heads of state, activists, dissidents, journalists, folks whose data is at high risk of being targeted by spyware from companies like the NSO Group, for example.
And I was so excited when Apple released this feature, because it’s very difficult to safely use an iPhone if you are in one of those categories. And along comes the UK government, and they say, we are ordering you to create a backdoor so that our intelligence services can get into the encrypted cloud data of every single iPhone owner in the entire world — something that Apple has long resisted doing in the United States and abroad.
And all eyes were on Apple for what they were going to do. And what they said was, we are just going to withdraw this one feature. We’re going to make it unavailable in the UK. And we’re going to hope that the UK gets the message and stops putting this pressure on us. And I think Apple deserves kudos for this, for holding a firm line here, for not building a backdoor.
And we will see what the UK does in response. But I think there’s a world where the UK puts more pressure on Apple and Apple says, see ya, and actually withdraws its devices from the UK. It is that serious to Apple, and I would argue it is that important to the future of encryption and safe communication on the internet.
Go off, king. I have nothing to add, no notes.
Yeah?
Do you feel like this could lead us into another Revolutionary War with the UK?
Let’s just say this. We won the first one, and I like our odds the second time around. Do not come for us, United Kingdom!
[LAUGHS]:
Stop generating.
One last slip from the hat this week — AI inspo is everywhere. It’s driving your hairstylist crazy. This comes to us from “The Washington Post,” and it is about a trend among hairstylists, plastic surgeons, and wedding dress designers who are being asked to create products and services for people based on unrealistic AI-generated images.
So the story talks about a bride who asked a wedding dress designer to make her a dress inspired by a photo she saw online of a gown with no sleeves, no back, and an asymmetric neckline. The designer had to, unfortunately, tell the client that the dress defied the laws of physics.
Oh, I hate that.
I know.
It’s so frustrating, as a bride-to-be, when you finally have the idea for a perfect dress, and you bring it to the designer and find out it violates every known law of physics. And that never used to happen to us before AI.
Yeah, I thought the story was going to be about people who asked for, like, a sixth finger to be attached to their hands so they could resemble the AI-generated images they saw on the internet.
I like the idea of submitting to an AI a photo of myself and just saying, give me a haircut in the style of MC Escher — just sort of infinite staircases merging into each other — and then bringing that to the guy who cuts my hair and saying, see what you can do.
Yeah. That’s better than what I tell my barber, which is just number three on the sides and back, an inch off the top.
Just saying, whatever you can do for this, I don’t have high hopes.
Yeah. Solve the Riemann hypothesis on my head.
[LAUGHS]:
What is the Riemann hypothesis, by the way?
I’m glad you asked, Casey.
OK, great. Kevin is not looking this up on his computer right now. He’s just sort of in a deep breath and summoning it from the recesses of his mind.
The Riemann hypothesis —
Mm-hmm.
— is one of the most famous unsolved problems in mathematics. It’s a conjecture, obviously, about the distribution of prime numbers that states all non-trivial zeros of the Riemann zeta function have a real part equal to one half.
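For anyone who wants that statement written out, it is usually given in terms of the Riemann zeta function, $\zeta(s) = \sum_{n=1}^{\infty} n^{-s}$, extended from the region $\operatorname{Re}(s) > 1$ to the whole complex plane (apart from a pole at $s = 1$), and it reads:

$$\zeta(s) = 0 \ \text{and} \ s \notin \{-2, -4, -6, \dots\} \;\Longrightarrow\; \operatorname{Re}(s) = \tfrac{1}{2}.$$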
Period! Now, here’s the thing. I actually think it is a good thing to bring AI inspiration to your designers and your stylists, Kevin.
Oh, yeah?
Yes, because here’s the thing. To the extent that any of these tools are cool or fun, one of the reasons is they make people feel more creative. Right? And if you’ve been doing the same thing with your hair, or with your interior design, or with your wedding for the last few weddings that you’ve had, and you want to upgrade it, why not use AI to say, can you do this? And if the answer is it’s impossible, hopefully, you’ll just be a gracious customer and say, OK, well, what’s a version of it that is possible?
Now, I recently learned that you are working with a stylist.
I am. Yes, that’s right.
Is this their handiwork?
No. We have our first meeting next week.
OK. And are you going to use AI?
No. The plan is to just use good old-fashioned human ingenuity. But now you have me thinking, and maybe I could exasperate my stylist by bringing in a bunch of impossible-to-create designs.
Yes.
Here’s the thing. I don’t need anything impossible. I just need help finding a color that looks good in this studio, because I’m convinced that nothing does.
It’s true. We’re both in blue today. The studio’s got a blue wall. It’s not going well.
Blue is my favorite color. I think I look great in blue. But you put it against whatever this color is — I truly don’t have a name for it and I can’t describe it — and I don’t think any blue looks good. I don’t think anything looks good against this color. It’s a color without a name. So can a stylist help with that? We’ll find out.
Yeah.
Stay tuned.
Yeah.
That’s why you should always keep listening to the “Hard Fork” podcast. Every week, there are new revelations.
Yeah.
When will we finally find out what happened with the stylist, the hot tub time machine, et cetera?
Yeah.
Stay tuned.
Tune in next week. OK, that was Hat GPT! Thanks for playing.
[MUSIC PLAYING]
One more thing before we go — “Hard Fork” needs an editor. We are looking for someone who can help us continue to grow the show in audio and video. If you or someone you know is an experienced editor and passionate about the topics we cover on this show, you can find the full description and apply at nytimes.com/careers.
“Hard Fork” is produced by Rachel Cohn and Whitney Jones. We’re edited by Rachel Dry and fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Elisheba Ittoop, Rowan Niemisto, Leah Shaw Dameron, and Dan Powell.
Our executive producer is Jen Poyant, and our audience editor is Nell Gallogly. Video production by Chris Schott, Sawyer Roque, and Pat Gunther. You can watch this whole episode on YouTube at YouTube.com/HardFork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dalia Haddad, and Jeffrey Miranda. You can email us at hardfork@nytimes.com with your solution to the Riemann hypothesis.
[THEME MUSIC]