A.I. Is a Hyperobject
Speaker A: Hello and welcome to Slate Money, your guide to the business and finance news of the week. I’m Felix Salmon of Bloomberg. I’m here with Elizabeth Spiers of the New York Times.
Speaker B: Hello.
Speaker A: With Emily Peck of Axios.
Speaker C: Hello. Hello.
Speaker A: And we have an AI extravaganza this week. We are finally doing it, people. And I can tell you the story, which is basically that Elizabeth went into the Slack one day and said, listen, we can’t talk about AI unless we have someone who knows what they’re talking about. And specifically gave the example of this one guy, Paul Ford. And I’m like, we love Paul on this show. And so, ladies and gentlemen, without further ado, the one and only Paul Ford.
Speaker C: We have to stop beaming like this.
Speaker A: You probably need no introduction, you’ve been on this show many times. But who are you and how do you know about this AI thing?
Speaker C: So I am the president and co-founder of an AI services and platform shop. We build things for clients using AI. But I also still sort of function as a journalist and a technologist, so I write and talk about it. And around the beginning of last year, I brought myself sort of out of retirement as a programmer in order to really see if I, as a pretty mid-level programmer, could use these new tools to be productive and to really get my head around them. And so I’ve spent the last more than a year, but especially the last four or five months, just going super, super deep and trying to just get my bearings. And so that’s what I can share with you.
Speaker A: So this is amazing. We are going to talk to you about where we’re at right now, what it means for us normal humans, what it means for the big great capitalists, what it means for the Pentagon, for Microsoft. It’s a spectacular conversation. And I do believe the term Human Centipede appears at some point. You’re gonna wanna stick around for that. It’s all coming up on Slate Money. Paul, you wrote a fabulous column in, I believe it was, the New York Times. And that was your first mistake, is you write in the New York Times and then you get all the crazies, because you’re setting the agenda. But you were writing a column that, correct me if I’m wrong, basically said we live in a world where pretty soon it’s going to be easier and cheaper to roll our own software than to buy it from people. And that’s going to change a lot of things.
Speaker C: I mean, that’s one of the things, right? You know What I’m saying really, is that I was shocked like everybody else, as a person who comes out of both the liberal arts and software development, things that used to take months and months don’t take months. Sometimes they take minutes, sometimes they’re not as good. But it’s so different. It’s so different from how it used to be.
Speaker A: That’s textbook disruption. Right? Which is when something which is maybe not as good, but is a lot more convenient and a lot quicker and cheaper just completely takes over.
Speaker C: Yeah, but typically disruption shows up and, like, everybody’s been working on it for a while and they’re like, okay. And, you know, this product will be better. And everybody’s got a couple years to deny that it’s happening. And this was like one day in November. It was like Anthropic was like, hey, we’re going to give everybody a whole bunch of extra credits to play with Claude Code, which had been like an assistant before. And then everyone realized over the course of a week that it was building whole apps and you could build things you could never build before. And there are all sorts of caveats and asterisks, but it’s code, it was real, it was working. So I think everyone just sort of went, what the h***? And now we’re all metabolizing that change in various painful ways. So it is disruption. It’s just at a scale and speed, in supposedly the king of industries, that no one’s ever seen before.
Speaker A: And this is in the context, just to be clear of the. I mean. Well, let me ask you where you stand on this, the famous Marc Andreessen thesis that software is eating the world, and that basically we are like, every day, in every way, we are becoming more and more software. Is that still true?
Speaker C: I’m a huge fan of our brilliant philosopher, Marc Andreessen. My joke for that is that software has eaten the world, then it digested the world, then it shat out the world, and now software is eating the s***. And that is what AI is, if we’re truly honest about it, and that we’ve created this sort of Human Centipede cycle as a result of that genius idea.
Speaker A: And no, I can’t remember why we ever wanted to have you on this show, Paul.
Speaker C: Yeah, I know. It’s a terrible mistake. Look, I believe very, very firmly that the technology industry exists to be the servant of the other industries. Like, we’re supposed to be helping everyone do their work faster, and instead it’s a bunch of nerds going, you work for me now, as they tip their oily fedoras. And it’s really exhausting, because I talk to people all day who just kind of want to get their thing to work. And so to me, the whole advantage of this is that your thing could work a lot faster. I used to run an agency, and it was like every beautiful do-gooder organization would come in and then they would open their wallets and moths would fly out. And I was like, I just can’t get you the software that you really need. I just kind of can’t do it. We’ll just be out of money. And now they can have it.
Speaker A: What is it that your clients were wanting? I am not someone who wakes up in the morning and goes, I need a piece of software. You deal with these people all the time, and I don’t. And in my experience, the people who have their heads spinning fastest when it comes to AI are precisely the people who have built software or built websites or something like that in the past. And for those of us who haven’t, we’re like, what’s all the fuss about? So who were these impecunious nonprofits, and why were they in the market for something that was software-shaped?
Speaker C: Sure. So you’re totally right. Most of the world does not see problems in a software shape. They see problems in business shapes. I’ll give you some examples without naming names. But like, the leader of a large children’s healthcare organization got in touch with us, and they were like, we want to take advantage of all this stuff in order to do better assessments. A lot of it is really simple. Let me take a step back. If you go to a job, you have a thing you log into, and it has dashboards, and it might send you emails, and you might solve tickets. Right. That’s pretty normal stuff that in the world of software is like CRM and help desk. And some of it, if you’re doing logistics, is ERP. It’s all those ugly acronyms. And because that’s so expensive and difficult to make, very large companies have bundled up all kinds of technology, and then you end up with an ad during the Super Bowl where, like, Matthew McConaughey’s in a hot air balloon for Salesforce. Right. Why? Because they can move a box from left to right on a screen. And that is, like, so hard to do sometimes that we just give them a trillion dollars culturally. So now all of those things got easier. Like, I can make you a little Salesforce clone just for your organization. Most people don’t need that, but places that run food banks, medical clinics, et cetera, they have a huge logistics problem, all of them, and they really need help with the scheduling and the calendaring. And they also tend to have, like, some guy was there five years ago and he set up a system, and then he went and worked somewhere else. And so they’re just kind of limping. And so they all have this pile of what we call technical debt, which is all the software that sucks in the organization. And so now they’re very curious. Out of nowhere, they’re like, wait a minute, could this make it suck less? And could I have nice things even though I don’t have a lot of money? 
And increasingly it looks like the answer is yes.
Speaker D: I just spoke to a guy who owns an ice cream shop, a scoop shop, and it’s really small. I think he said he had five employees or something. And he’s been playing around with AI and has discovered all these different ways to save money. Like, he’s figured out how to use AI to time ordering supplies that he might need. Instead of just ordering when they need to order, or on a monthly cadence, he’s been able to figure out when prices are most optimal for ordering, and do it, and has thus saved money. And I was saying, that actually doesn’t sound that innovative. It sounds like something that, like, big companies do all the time. And he was like, yeah, but they pay all this money to a company like SAP or whatever to build them the software, or to get the software to figure out all that stuff. And, like, we would never be able to afford that. But now we pay a little bit of money to ChatGPT or whatever he was using, and it’s super easy. And we’re saving money in all kinds of little ways like that.
Speaker A: I thought that was really interesting, putting my economist hat on here. And this is.
Speaker C: Do you ever take that off?
Speaker A: You take it off most of the time, Paul.
Speaker C: I think you wear it in the shower.
Speaker A: I only put it on for, like, I don’t know, five minutes a day, because otherwise it gets uncomfortable. But Josh Barro made this point, which is that this is really good for the economy. If every little mom-and-pop ice cream store is becoming more efficient and making more money, that makes money for the owners, that gives them more money to pay the workers. That’s great. And the sort of AI apocalypse story is that AI is going to eat a whole bunch of profits. Well, it might eat some profits at Salesforce, but it’s going to create so much wealth and so much cash flow, like new ways of making money, across normal companies in the normal economy, in the real economy. And I will take that bargain any day of the week.
Speaker B: I feel like that’s the techno optimist view of things. Paul, I’d be curious to hear what you think about that.
Speaker C: The answer is yes.
Speaker B: Everything Felix said or not, like, that’s that right?
Speaker C: Like, so this is real. Now, first of all, I know this exact ice cream vendor. I know this person. And they are an absolute freakazoid when it comes to this technology. Your average ice cream store person is really not thinking in terms of this kind of logistics. But there is a reality here, and this is a very real thing. So there’s a couple different ways this could go. It could mean that in the future Anthropic owns all ice cream, because that’s just how the society works: the companies that build these things gather all the power and they build all the products, because they can move faster, and they just kind of become the new Amazon-Google-Anthropic-OpenAI mega-thing, because they know everything and they can see everything. That’s the sort of high-dystopian version, and it seems that many of the worst people in the world are running these companies. So that’s tricky. On the other side, there is a tremendous narrative of empowerment, where your nephew, instead of helping you get set up on Seamless for your restaurant, is like, no, we’ll make our own website, I’ll make my own ordering system, et cetera, et cetera. That’s tricky because DoorDash has a lot locked in already. Right. That kind of disruption, I think, will take a really long time, because when you really play out how that lock-in works and how much time it would take to change, it would be years before there’d be this natural sense of, like, I’ll build my own ordering tool. Sure.
Speaker A: But Paul, like right now we’re in a world where software companies lock you in and then they can raise their prices as much as they like and you’re locked in.
Speaker C: And at the margin, even if people don’t code their own restaurant website, the fact that they could in theory should stop Seamless and DoorDash from raising their prices as much as they might otherwise have done, and therefore increase the profits of the restaurants. That just sounds like capitalism to me.
Speaker A: Yeah, but I’m pro-capitalist. I’m a neoliberal shill.
Speaker C: You forget, I get it. I’m the president of a firm. Like, I understand.
Speaker B: Yeah. But I think, you know, that sort of precludes the idea that these big companies that do already have customers locked in are somehow not taking advantage of AI themselves and not pricing that into their business strategy. So the idea that somebody, let’s say, spins up a small system for a small business that replicates Salesforce, that’s not going to put Salesforce out of business. Because Salesforce’s big enterprise customers need something far more complicated.
Speaker A: Absolutely. There’s not going to be a mass extinction event among the mega cap tech companies. I completely agree on that.
Speaker C: Guys, I’ve been talking to lots of people about this recently, and I’ve got to tell you, the only thing that lands is: no answers. Everyone knows that we’re in the middle of something, culturally and in the media. Like, we have no ability to really process this level of ambiguity, because we’re on the hook to kind of give some sort of forward motion. I don’t know. I mean, the way the tech industry has been going, it’s all at once. You’ll have incredibly empowered small businesses, giant mega-businesses that find new lines of business that sort of eat and gobble the small businesses. Like, it will all happen at once, because it’s the whole economy. I mean, you think about sort of vectors like health care, right? AI is all over health care, and health care is real excited to meet it. And some applications are, like, I can serve more patients, and some are the insurers saying, hey, we can reject more claims. And so it’s not going to play out in a linear way. And I don’t know if you know the concept of the hyperobject, Timothy Morton’s idea from 2013. Look it up. Climate change is a hyperobject. It touches everything, it’s viscous, it gets into everywhere, but you can’t really see it. It doesn’t work on a schedule. It’s, you know, like, oh, it’s kind of warm this winter. Or not. It’s this sort of nice postmodern idea for explaining the enormous things that we need to talk about but don’t really know how to talk about. And AI is like this incredibly rapidly developing hyperobject that touches absolutely everything in our universe, in lots of different ways that you can only see in how it affects other things. It’s more like weather than it is technology.
Speaker A: I wrote a book, Paul, about COVID and in a weird way, Covid was a hyper object as well. One of the things that I wrote about was this sort of radical epistemic uncertainty that we were facing, especially in 2020, and the way that the whole world had to get used to a sudden switch from. I used to live in a world of I have Google on my phone and anything I want to know that’s a fact in the world I can call up on my phone in five seconds to a world of there’s a whole bunch of really big questions I have about the world and no one knows the answers. And just not knowing the answers was a strange and unsettling place to be. And I feel like what you’re saying is that we are now back in that strange and unsettling place, which totally, by the way, aligns with what I said in the book, which is that we have now entered this place and we’re not leaving it.
Speaker C: I mean, I really came in here just to promote your book. That’s the only reason I’m here.
Speaker B: That’s why we’re all here.
Speaker C: Yeah.
Speaker B: This kind of raises a question for you, though, Paul. Your company is literally building enterprise products now using AI, and you’re saying, I don’t know where this is going. None of us do. So how do you think about running your business when something cataclysmic happens? Maybe this isn’t that cataclysmic, but I’m thinking about how there’s an ex-Google engineer named Steve Yegge who came up with this framework for running a bunch of autonomous agents at once, and then the developer community lost its mind for a few days. How do you react to stuff like that?
Speaker C: That’s called Gastown, if somebody wants to look it up. It’s kind of like this weird mashup of Mad Max mythos and sort of big-nerd thinking. It’s really, really hard. It’s really hard and sometimes very difficult and painful, and you don’t know if you’re building the right thing. And the more I talk to other leaders and the more I talk to orgs, big and small orgs, frankly, everyone is covering up and nobody really knows what’s coming. And so I just. I’m too old. I’m just like, let’s rip the Band-Aid off. What are we going to do to build a nice society that we like, as opposed to the Palantir s*** world that is kind of getting thrown to us? Going back to Gastown and stuff, there’s a real fantasy that you’ll be able to type everything into a prompt and you will get everything as a result. And they’re trying to build that. They’re going to have lots of agents take over and build big platforms. What I’m finding, because I decided to go in and really, really learn this tech as much as I could, is that fundamental questions of software as a cultural object are really hard to solve still. Like, I have things I’ve been working on in vibe coding, and I sort of do them for a half hour a day, and I’m like three, four months in now. And it’s not because of the technical parts. I’ve solved a lot of the technical parts. It’s really good at that. It can build you your APIs and your front end, and it can really scaffold your database, and you can say, go make that faster, and it’ll do it for most cases, because a lot of stuff is just a lot of abstract knowledge, but it’s known. So if something is pretty well known, it can solve it. But the experience of using a tool, and the strategies of getting the content into it, and sort of the aspects of making stuff, they didn’t get easier. Aspects of sort of quickly building and assembling are really easy. But, like, making something good remains pretty brutal. 
Like, you could use this as a really good research assistant under a lot of constraints. But I doubt that if you talk to editors, they’re finding that writers are turning things in on time. Like, I don’t think we change.
Speaker A: Paul, I want to switch gears here for a minute and do a little throwback to my favorite ever Slate Money episode, which was one that featured you coming on the show to talk about Microsoft buying LinkedIn.
Speaker C: Right?
Speaker A: It was like another era. And we had a lot of fun laughing about, like, what the f***? And since then, a couple of things have happened. One is that LinkedIn hasn’t changed all that much, but it has become more ubiquitous and much more valuable. And people are like, this is a major asset for Microsoft, and that was a great acquisition. A second thing is that it has not, contrary to what a lot of people expected, really been incorporated into sort of the Microsoft suite, the core Microsoft product offering. And the third is that Microsoft has become this multi-trillion-dollar behemoth. It’s so much bigger and so much more valuable than it used to be, partly because of AI, something something. And Microsoft is now, you know, the largest shareholder in OpenAI. And I just wanted to sort of come back and ask you, because LinkedIn in particular feels like such old-school software, an old-school technology company getting bought for a lot of money, and then it’s still a good acquisition somehow. How? Like, now, with hindsight, how do you think about that whole thing?
Speaker C: Oh, well, obviously I had a complete answer ready for this.
Speaker D: He’s just completely. Listeners, he’s thrown us this curveball. No one was prepared to talk about LinkedIn and here he blew the whole thing up.
Speaker C: I am ready to talk about LinkedIn.
Speaker D: I would love to talk about LinkedIn. I’m LinkedIn’s biggest fan.
Speaker C: Yeah, let’s talk about this. Finally. Finally, people are talking about LinkedIn. So, okay, LinkedIn is an enormous network of what everybody’s working on. I’m in there. It’s very bot vulnerable, frankly. And there’s a lot of AI generated nonsense on LinkedIn. I’m in there. I post links about AI three times a day. That is my job as a thought leader and I’ve decided to take that job seriously.
Speaker D: Wait, for real? I’m going to follow you right now.
Speaker C: Oh, yeah, yeah. No, it’s. I’m user ftrain. I was, like, user 5000 on LinkedIn. Just a little something. It’s just. Just, you know.
Speaker A: Yeah, little flex there from.
Speaker B: I feel like you’re user 5000 of everything, though.
Speaker C: No, not at all. I’m usually kind of a surprisingly late adopter for a big nerd, because everything. I hate everything. So I’m like, ah, man, Twitter, that’s going to ruin the cool world of blogging. And it did.
Speaker D: I mean, it did. Yeah, you were right.
Speaker C: Yeah, no, I’m right. I think of myself. I like to think of myself as like the fun Cassandra. Like, I’m just sort of like, it’s all really going to be bad. Everybody’s like, that’s funny. And then.
Speaker A: But you are. I mean, you were right in a way that, like, LinkedIn, and we’ve said this many times on the show, is the only good social network and possibly the only surviving social network.
Speaker C: Well, it depends, right? So Ethan Mollick, who, I don’t know if you follow, but he’s a professor at, I think, Wharton, who talks a lot about this, sort of trying to make it make sense for business. And he was like, look, man, it’s coming. Like, the bots. How can you stop an army of bots posting on LinkedIn? They’re just going to kind of beat you. And I don’t know if that’s true or not, or where we’re going to end up. It’s fascinating, right? Because I know a lot of people who were doing kind of work on the ground around immigration rights and stuff like that. And when Minneapolis happened, so many people were like, how is this happening? Where are these people coming from? But if you were following, they were all on Signal. Like, everybody was just kind of talking. People go and create their own networks to get away from this stuff. And the cultural implications are real, but they’re not visible. They’re especially not visible to people on social media, and they’re not always visible to the media proper. And so I think there’s all these sort of new networks showing up, and I think that they’re being built almost with an immune system, so that it’s not going to be easy to track people. They’re sort of anti-regulation, and they’re very anti-AI. They’re also happening in places like Discord that are both kind of open and closed. So I think there’s a whole new set of networks showing up. But I also think that, like, you know, LinkedIn just lodged in as the world’s resume. You still need that function. It works okay. I did think Microsoft would do more with it. I think what’s fascinating now, though, is every time you open up a product like Keynote on the Mac, or PowerPoint, there is just that little diamond for AI. It’s just kind of spackled over everything. Right.
Speaker A: And so, so annoying.
Speaker C: Yeah. But the cost of integrating LinkedIn to absolutely everything is approaching zero for Microsoft.
Speaker B: I recently reread Microserfs by Douglas Coupland, and it’s a satire about Microsoft culture. And I realize this is not the Microsoft of the late 90s and early aughts, but one of the things that comes up over and over again in that novel is that it’s a very siloed place and they don’t do well when they try to work across groups. And I get the impression that that’s still kind of the case, which would explain the lack of integration of LinkedIn.
Speaker C: I mean, every large org is completely siloed at that level. You just can’t be that big without being an enormous political superstructure. I think it’s better than it used to be. But yeah, it used to be pure pain. Like, just levels of punishment across groups as to who can put an icon in the ribbon or whatever. Right. There was all this drama. But I think that that’s what’s happening. Right. We had all this natural friction in the industry of: it’s hard to build an API, it’s hard to build a good API, it’s hard to build a good front end, it’s hard to build a good website, et cetera. And so we have to have discipline and practice, and every two weeks we have an Agile meeting and we do scrum. And there was sort of all this ceremony and process, because it was so expensive and it still often didn’t work out. That’s still true in a lot of cases. It’s still really hard to make good stuff. But elements of that friction and that culture are no longer as valid, because certain things can happen really quickly. And if you want to take a very conservative pass, it would just be like, everybody knows now AI can stand up a prototype. You may never put that code out into the world, but you can really look and feel and understand how a piece of software could work. And what used to take six months now might take a week or two, max. Right? And so that alone is a really, really disruptive change. And there’s tons of them. So the friction has gone out of a lot of this stuff. And friction and bureaucracy are handmaidens, and they all work together to keep things moving along slowly, in a way, so that people can predict their lives and have a sense of control while they do the big thing that makes money. And, I mean, it’s just eroding. I don’t even have to learn about LinkedIn. I can just say, hey, Claude, if LinkedIn has an API, tell me what it costs. And if it doesn’t cost too much, let’s go ahead and start doing things with it.
Speaker D: And so if I’m inside of Microsoft and I want to hack LinkedIn and it has an API, I can just kind of go to town now? Wait, what? You’re saying I can hack into LinkedIn and mess around with it, or that I can put LinkedIn into other stuff that I’m using?
Speaker C: Sure. It’s not hacking. LinkedIn has all these ways to kind of use it, both externally and internally. Right. Like, they’re all documented. It’s just kind of. You had to have a lot of meetings. You just don’t have to have as many meetings now. You can.
Speaker A: You mean if you’re hiring people and you can, like, you can sort through people who you might want to hire or something like that?
Speaker C: Oh, yeah, you can pay. You can get all kinds of data and insights out of LinkedIn if you pay. But if you’re Microsoft, you can really get access to all the good stuff. And now there are fewer barriers. Like, everyone can just kind of go to town on everybody else’s platform. There’s sort of always this promise. I remember once I was working at a company, and they were like, we are so open and everything we do talks to everything else, and you just ask us and we’ll give you access to that system. And I asked them, and they gave me access. And I hit like one piece of data, like a tiny request, and my phone rang literally 30 seconds later, like, hey, what are you doing? What are you doing? Like, these are very territorial environments, and this is a deterritorializing technology, which is rough, because we’re territorial animals that like to, you know, we’re primates, we like to kind of protect our tribes.
Speaker D: So now everyone can make their own software stuff. And I see everyone’s posts on the social networks being like, I just made a thing that does this and I made this app and blah, blah, blah. Has anyone monetized anything that they’ve vibe coded yet or using it on a large scale in a company? Are there examples we can point to of real world vibe coded things?
Speaker C: There’s a lot. There’s a ton of little stuff. People have been sending me stuff, but a lot of people don’t want it to be known. The easiest indicators here are, like, Claude commits code on your behalf on GitHub. So people are releasing all kinds of open source tools that have commits by Claude, and you can see the effects there. You can see it in the sort of way that certain things are accelerating. And if you go to Reddit, there are just lots of people. I mean, there’s a ton of internal projects, right? People are shipping apps. It’s one of these things where there’s no one distinct story I can point to and be like, oh, yeah. I mean, I’ve got a couple websites I’ve launched, so does everybody, right? It’s a lot of slightly blurry, indistinct stuff as people are just figuring out what it can do. So I think what’s really interesting about your question is that it points to a world in which things are discrete and large and take energy to launch. And now instead you’re just seeing thousands and thousands of things that are kind of cool experiments, apps. I built a piano practice manager, simply because I’m trying to learn piano. And I came up with the name: to-do lists, but spelled with L-I-S-Z-T, dot com. And I was like, that’s a good name.
Speaker A: I like that.
Speaker C: I know. And I was like, all right, well, maybe I got to build it. I took a swing.
Speaker A: Once you come up with a good name, you have to build the app.
Speaker C: That’s the thing I built the app because of the pun and then it kind of worked. And then I’m like, I really should just go ahead and just kind of practice piano every morning instead of like, it’s just another way to procrastinate.
Speaker A: Let me answer Emily’s question, because I think I have a good example. Well, can we answer your first question first? We all remember when Wordle launched, right? There was this guy called Josh Wardle in Britain, and he coded this cute little webpage called Wordle, and it became viral and eventually wound up getting bought by the New York Times. But you could see it was simple, and you could see how one person could make it, and then it could go viral because it was simple and lovely. And then the rest is history. Josh Wardle has now released a new game called Password, which is wonderful as well, and I like it just as much. But if you go to Password, you can see it’s just so much more complex as a sort of coding issue. There’s a lot more sort of intelligence and gnarliness built into it. And I can easily believe that Password took him no longer to build than Wordle did, just because there’s so much more power at his fingertips.
Speaker B: Now I have an example that actually literally answers your question.
Speaker D: Three tries.
Speaker B: Now, there was a vibe coder whose company’s called Base44 that sold to Wix for $80 million in cash. And it’s just one Israeli developer who vibe coded his entire product. So it can happen.
Speaker D: Okay. So everyone is saying that because it’s so easy now to code, everyone’s going to do it, and that’s going to destroy all these software companies in the so-called SaaS-pocalypse. But what I’m hearing from the three of you who tried to answer my question is that there’s a new, much more productive and simple way to code, and it’s creating more complex products, more products, more interesting products. And it just seems to me like it’s unleashing more interesting things that could potentially, like I guess what Felix was saying before, unleash more productivity into the economy. It may be different.
Speaker A: I’m not saying it’s not gonna destroy people’s jobs, but it does sound like it’s gonna make people more productive and do new and interesting things, and there is still value to this. Like, to Elizabeth’s point, you know, Base44 sold for large amounts of money. We just had Ben Affleck’s 16-person AI company sell for $600 million or something. Just because it’s easy to replicate in theory doesn’t mean that these things can’t be worth real cash.
Speaker D: Like when blogs happened and everyone had a blog, it did do a lot of damage to some traditional journalism, obviously. But then it made the new careers of the guy with the economist hat in the corner there and the lady up there and I mean, and you know, changed everything. You know, it wasn’t good or bad, it was destroying and creating. And isn’t that what AI is?
Speaker B: It’s the Jevons paradox. It’s where, you know, something that becomes more cost-effective actually increases demand instead of lowering it.
Speaker D: Right, I’ve heard that too.
Speaker C: I’m going to throw just a little bit of cold water in, but then probably agree because that’s just my personality.
Speaker A: It’s a nice guy. Bull.
Speaker C: I don’t think the new Wordle game was actually that simple. I think there actually are a number of people involved. I think it is pretty complicated. And I don’t think that Base44 was probably that easy. He couldn’t vibe code the whole thing, but he could code a lot of it, and then he could call out to LLMs and move along. We’re not totally at this tipping point yet where the really good products that you use can just be built with this stuff. That said, an internal business application can probably be built in a couple of weeks. The drier stuff, the database-backed stuff, the web-based stuff that just kind of needs to look okay and work pretty well, is going to be better than a lot of what people are using today. And so that stuff might move along very quickly. The really experience-based stuff, where you just kind of need to get a feeling and it’s going to have lots of users, is still going to have a lot of humans involved, but their jobs might be very different.
Speaker A: I want to switch gears again because you’ve been talking a bunch about the way that large organizations work and the way that AI has really changed the ability of large organizations to see into other parts of the enterprise and to operate. And we were using the example of Microsoft, but I feel like the really germane and interesting example is the Pentagon. And Claude managed to sort of take over the Pentagon in record time. And everyone loves it. The business of warfare, like the number one thing that large armies and defense departments do is deal with making sure that information gets to where it needs to be at the right time. It’s the core problem that they’ve been solving for millennia. AI seems to have really upended the way that the Pentagon works. Palantir is all up in there and is making huge amounts of money. And there does seem to be a vibe in the air that AI is truly transforming the way that modern warfare works, not only in terms of, like, it can control autonomous drones and that kind of thing, but really up there at the sort of org chart level. And I’d love to know how you’re seeing that and what you make of this, like, beef between Pete Hegseth and Anthropic.
Speaker C: Oh, well, you know, I’m always over there at the Pentagon talking to those guys. I’ll tell you what, there’s a wonderful formulation that your listeners might find useful. There’s a programmer and developer, a person named Laurie Voss, who works at AI companies, and they wrote something that sounded very reductive, but I come back to it all the time, which is that the really good usage of these tools is to take lots of text and turn it into less text. They’re just good at summarizing. And actually, when you look at how things are built on top of LLMs, it tends to be that they spin up a lot of text and then they summarize it. That’s what deep research is. It goes out to the web, gets a lot of text, summarizes it. We’ve all seen it make bullet points, right? Now, let me take you back to.
Speaker D: So is my job screwed?
Speaker C: Well, maybe you can make them faster, like John Henry. Just. We’re gonna. We’ll figure that out right after the podcast.
Speaker D: Okay.
Speaker B: Emily bot is now writing six newsletters instead of one.
Speaker C: I mean, the thing is, you can anyway, but you probably shouldn’t. Everybody remembers World War II. Always a bad one. What was Winston Churchill’s famous dictum about how he needed to get information every day? He wanted one piece of paper, and he begged for it, because everybody kept giving him these big briefs. So LLMs are good at that, right? So everybody can have their one-piece-of-paper summary. That can probably be as good as a person’s. Now, how you validate that, like, that’s a whole process.
Speaker D: No, it cannot be as good at summarizing as I am. No.
Speaker A: But it can definitely be better at summarizing than Pete Hegseth is.
Speaker C: Well. And Emily, it can be better at summarizing like, 10,000 pages of CIA briefings. Right?
Speaker D: Yeah, fair enough.
Speaker C: I think that’s true. And so that’s, I think, probably the most exciting thing. Also, Palantir is funny because they’ve got this core technology and they wave their hands around it, but it’s like kind of databases that are kind of cool with funny names like Foundry and Gotham and people like working in it.
Speaker A: I thought those were typefaces.
Speaker C: No, no, I know they might be. I get it all mixed up anyway. And they, you know, they have like a hundred clients. They’re not that big in terms of their world, but they just make a ton of money and they ship software. They go in where a big consulting firm had been for a year and a half and delivered nothing, and Palantir is like, I’m going to get you these things in four months. I have this, you know, starting point. And so you combine these worlds, right? You’ve got accelerated software delivery. They like AI, and their, I think they’re called forward-deployed engineers, are in there kind of helping you at the, you know, medium-sized city police department that got a grant to do something and get a drone monitoring system set up. And they’re all in it. They’re just going to get it for you. And over on the other side, you have the ability to finally summarize, when everybody else was refusing to give you the short bullet points because they wanted to show their value. Now you have this sort of pipeline of information coming through, so you feel really empowered and you feel that you don’t have to be as beholden to your team. And so it’s just a lot of those dynamics playing out. It’s just a bunch of people, right? The Pentagon is not a thing. It’s humans in a shape.
Speaker A: A pentagonal shape.
Speaker C: It is exactly right. And so information flows through that, but it’s used to make life or death decisions. And so we focus on it in a different way, but the core technologies are still the same.
Speaker A: Did we have a numbers round?
Speaker D: I don’t know. I’ve lost track of time. This is what AI does. I had a thing today. Well, Harvard had a thing about how it fries people’s brains.
Speaker C: But I like how you mix yourself up with Harvard. You’re like, ah, you know me, Harvard.
Speaker D: Well, I wrote about it, I summarized it and put bullet points on it. So I feel ownership.
Speaker C: Oh, absolutely.
Speaker D: You know how it is.
Speaker A: I think it’s time to do some numbers. Elizabeth, do you have a number?
Speaker B: Yeah, my number is four. And that’s the number of minutes it took me, using Claude, to vibe code an app I’m calling wrongabot. All it has is two fields. One says Elizabeth Says and the other says Felix Says. And then you click a button that says Who Is Wrong. Initially I programmed it so that the function completely ignored anything that you put in the fields and said Felix is wrong no matter what. But then somebody pointed out that it still says Felix is wrong if you put “I am wrong” in the Felix Says field. So I had to put in an exception, so that now “I am wrong” in that field, which is an unlikely use case, returns “For once, Felix is right.” That’s the promise of vibe coding. You can antagonize your colleagues.
Speaker A: So if I put in, like Felix says, that Elizabeth is always incredibly insightful and always right. That is still wrong.
Speaker B: Yeah, I was a little lazy. I didn’t put in an exception for that.
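[Editor’s note: the wrongabot logic Elizabeth describes boils down to a few lines. This is a sketch in Python, not her actual code; the function name and strings are invented for illustration.]

```python
def who_is_wrong(felix_says: str, elizabeth_says: str) -> str:
    """Wrongabot: mostly ignores its inputs, by design."""
    # The one exception Elizabeth added: Felix conceding defeat.
    if felix_says.strip().lower() == "i am wrong":
        return "For once, Felix is right."
    # Default: no matter what either field says, Felix is wrong.
    return "Felix is wrong."
```

Any other input, including Felix saying Elizabeth is always insightful and always right, falls through to the default, which is the gap Felix pokes at above.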
Speaker C: I mean, you can’t put this on the web because it would get exploited. But you could actually put a Felix simulator into that app and it could just.
Speaker B: That’s true.
Speaker D: My God, I miss Felix at work. So I could do that. I could just put it into Slack and then.
Speaker A: Yeah, you can have a Felix bot in your Slack, and then you can ask it questions and it’ll be like, but where’s the time series?
Speaker C: But four minutes to have a piece of software that was live on the web. Right.
Speaker D: Incredible.
Speaker B: Yeah.
Speaker C: And look, there were ways to do that kind of before, but not like that. That is new.
Speaker A: Why don’t you go, Emily?
Speaker D: Well, I have two and I can’t decide. So you decide for me. Is my number nine or is my number 145?
Speaker A: Let’s do nine.
Speaker D: Nine. Okay.
Speaker C: Nine.
Speaker D: Nine is the number of windows that Donna Kelce is replacing on her modest two-bedroom, two-bathroom home in Orlando, Florida. Donna Kelce is, as Felix I’m sure knows, the mother of Travis Kelce, who has a podcast and is marrying Taylor Swift. Swift, yes. Correct. And TMZ had a story earlier where they were like, exclusive: Donna Kelce is remodeling her modest two-bedroom home in Orlando, Florida. And because the Internet is still fun, they ran with it, and they were like, breaking. There is a war going on.
Speaker B: And Donna Kelce is remodeling her modest two bedroom home for energy efficiency.
Speaker D: Yeah. Just because there was some seepage from the air conditioning. So she’s doing the windows and the doors.
Speaker A: Well done, Donna. I like you.
Speaker D: Yay, Internet. You can write to me and guess what the 145 is. Because you might be able to.
Speaker A: Is that the time? I’m gonna clock off this afternoon and just go.
Speaker C: Probably.
Speaker D: Cause you’re so much more efficient now. Because I have.
Speaker A: Exactly. I’m much more efficient. My number is 14.5 million, which is the number of dollars that a guitar sold for at auction this week. This is a guitar called the Black Strat, which used to belong to David Gilmour. It is now the most expensive guitar ever. It was owned by the late Indianapolis Colts owner Jim Irsay, who put three of his guitars up for auction. And all three of them sold for more than the previous record, which was a Kurt Cobain guitar that had sold for 6 million. So one went for 6.9 million, and one went for 11 million, and one went for 14.5 million. Suddenly, guitars are an asset class. This comes in the wake of the Pokemon card that went for $20 million that was bought by Anthony Scaramucci’s son. And I feel like crazy expensive collectibles have suddenly become a thing. They’re the new meme stocks, or NFTs.
Speaker C: One of the first things I Vibe coded was a Pokemon card trading game for my 14 year old son. And I was like, this will be a fun way for us to interact with. Essentially you had to build your Pokemon trading empire. And I got real Pokemon pictures and I put a little work into it. We were on vacation. This is how I interact with my family. And I was like, check this out. And he just smashed the phone for two hours, refusing to make eye contact until he cornered the entire Pokemon card market.
Speaker B: Can you give me that game? My 10 year old would lose his mind.
Speaker C: I mean, I will. I absolutely will. It was funny because I thought I was breaking us out of the cycle.
Speaker A: Do you think he’s going to become a Pokemon trader IRL?
Speaker C: He’s really interested in it. He’s a little bit of a math kid. And so I’m not surprised that Pokemon is simultaneously about economics and building databases, which is why it’s so fun.
Speaker A: Is this true, Elizabeth? I thought it was about little yellow creatures.
Speaker B: It is kind of true. My kid was in a Pokemon club for a bit where they were supposed to actually play the game and then none of the kids were interested in playing the game, so they just traded. Like. I think they’re all just going to grow up to be little private equity people. So.
Speaker C: No, it’s real. It’s, you know, Pokénomics or something like that. It’s just little tiny children exchanging currency in the form of tiny animals.
Speaker A: Amazing. All right, Paul, come up with a number off the top of your head.
Speaker C: Oh, well, no, actually, you know, it’s funny. I’ll get you one. I get an automated mail every day from a Claude bot that reads something like 500 different news sources, summarizing the AI news of the world. Then I go and read the pieces and I pick good paragraphs from them and put those on LinkedIn. But it’s my feeder source. So hold on just a second, let me find it. It’s called Daily Briefing. And it gives me top stories and it gives me top numbers every day, because everybody likes numbers. Yeah.
Speaker A: Wait, so this is Slate Money’s first AI-generated number. I like this, boy.
Speaker C: I got a bunch of good ones. I got a little AMD, a little Cursor, some real nonsense. I mean, the one that leads, which is probably the one that makes the most sense, is 26 billion.
Speaker A: Is that a number of dollars?
Speaker C: Yes. And that is Nvidia’s planned investment in building open-weight AI models, because it wants to directly compete with the Claude Codes and the Claudes and the OpenAIs of the world, or whatever compete means when they’re all together. So it’s a big number; they’re going to double down. The thing with Nvidia, though, is they’re also very invested in things like world models, meaning, like, you know, more about making the robot behave than making language get emitted. So they’re kind of all over everything, and they’re so giant.
Speaker A: But Nvidia, but this is super interesting. If Nvidia is spending $26 billion, which is, I’m just looking up on my notepad here, Bloomberg Terminal, actually quite a large amount of money.
Speaker C: Oh, we forgot.
Speaker D: Wow.
Speaker A: Even by AI standards, that’s like quite a lot of money. They are doing the thing that OpenAI does, that anthropic does. They are building their own AI model and on some level, I guess they’re competing with those companies.
Speaker C: They are, even though those are some of their major buyers. They also obviously have some of the cheapest access to Nvidia chips of anybody. Probably more directly, they’re competing with Google, who have their own TPU chip, which is like the Nvidia chip that you need to do this kind of AI model building. These are all the things that use up all the water and burn up all the energy. And so Google is able to do more with its own hardware. And I’m guessing Nvidia is just like, no, we would also like to be a super mega growth, unbelievably large meta company as well. And so that’s.
Speaker A: So let me ask you about that, because obviously building AI models is ludicrously expensive, and in a capital-intensive industry like AI, the returns intuitively accrue to the places where the costs and the capital are cheapest. And that would be Google, Microsoft, Amazon, Nvidia, rather than equity-funded startups like Anthropic and OpenAI.
Speaker C: Yeah, but at this scale, do you really count them? I mean, they’re on, like, their Series Double-Q at this point, right? Do you really count them as equity-funded startups? They’re about to go public; they’re going to be sloshing in money.
Speaker A: Well, yeah, I do, because, I mean, yes, they can raise a lot of equity, but it’s still equity, and equity is still expensive. But more to the point, what they don’t have is the hundreds of billions of dollars of cash flow that Google and Microsoft and Nvidia have. Right? Nvidia can just sell chips, and Microsoft can sell Microsoft Office, and Amazon can sell junk on the Internet, and that will give them cash that they can use to fund these things. OpenAI and Anthropic are making a certain amount of money, but they’re not making that kind of money in terms of revenues.
Speaker C: There’s a strong case to be made that of everyone, especially now that they’re really catching up, Google is the one that can win. They couldn’t quite get it together on AI; like, Google sort of slopped it all over the page results. But the Gemini product is good, and it’s not quite as good at coding as Claude Code, but it could get there.
Speaker D: We’re talking on Friday, and today there was news that Meta is slowing or delaying or postponing, whatever it’s doing, its attempt at an AI model, and it is using Gemini. So I kind of feel like we’re going to see more of that.
Speaker C: I mean, I think Apple is too. I didn’t know that about Meta, but I think Apple is as well. They’re incredibly expensive, prohibitive to build. I think, look, where does this go? There’s a couple of different places. One is there is some evidence, just if you look at the scale of things, that we’re not going to get another exponential leap in LLM technology. They’re going to get iteratively smarter. Which is why Claude Code was such a compelling product, because it was a true product built up from that stuff, as opposed to just sort of another bunch of magic tricks that happen when you type in the prompt. It has a discrete way of working that you can understand as a human being, as opposed to just coming out of nowhere.
Speaker A: Right.
Speaker C: So it could be that we’re kind of getting close to an end with this stuff, and they’ll get better, but then we’ll have to build up from them and integrate that with the way that computers used to work. That could be exciting, because they could clean up all the mess that computers made before instead of just making a mess of their own. Then there is the, no, we need a whole lot of new models and we’re going to need to double down, because there could be a lot of exponential growth here. We need to get some more nuclear reactors, we need to get more GPUs. And this is kind of where the market seems to want to go, because there’s so much growth left. We haven’t figured out even the fundamentals. Let’s go. And so the huge incumbents have a lot of advantages there. And then there is sort of the DeepSeek wild card, which is, actually, no, hold on, we found some ways to do this a lot cheaper and a lot faster. We were able to build our own models much less expensively, but they do pretty well, and they use less energy, and it used less energy to build them. So that part of the cost went away. So you actually don’t need quite as much revenue to make improvements incrementally, or maybe even exponentially. You can use good old-fashioned know-how to do that. So there are all these different possible paths forward, and everyone can vociferously argue that one is correct. We’ve all been slapped across the face by this industry so much recently that I’m loath to predict anything.
Speaker A: On which note, I want to flag that we have a Slate Plus segment where I’m going to ask you about that big business debate, the one people either love or hate. And I want to talk to you a little bit about who loves it and who hates it. But for the time being, I think that is it for Slate Money this week. Thank you very much, Paul Ford, for coming on. It’s always amazing when you’re on the show.
Speaker C: It’s nice to see you my friends.
Speaker A: Thanks to Jessamyn, Molly and Merritt Jacob. But mostly thanks to you guys, you lovely listeners. We love you, especially when you send us emails at slatemoney@slate.com, so keep on doing that, and we’ll be back next week with more Slate Money.