AI Revisited
AI has moved fast, and so has 37signals’ thinking. This week, host Kimberly Rhodes talks to co-founder and CTO David Heinemeier Hansson about the progress AI has made over the last few months. David shares what’s changed, what’s actually useful now, and why his outlook has grown more optimistic.
Watch the full video episode on YouTube
Key Takeaways
- 00:11 - How today’s AI feels dramatically more powerful
- 05:05 - David’s real-world experience working with AI agents
- 23:36 - How AI can help non-programmers
- 31:05 - Why the upside of AI outweighs the potential risks
Links & Resources
- Fizzy – a new take on kanban
- O’Saasy License Agreement
- Record a video question for the podcast
- Books by 37signals
- 30-day free trial of HEY
- HEY World
- The REWORK Podcast
- Shop the REWORK Merch Store
- The 37signals Dev Blog
- 37signals on YouTube
- 37signals on X
Sign up for a 30-day free trial at Basecamp.com
Transcript
Kimberly (00:00): Welcome to REWORK, a podcast by 37signals about the better way to work and run your business. I'm Kimberly Rhodes, joined this week by David Heinemeier Hansson, co-founder and CTO of 37signals. Well, we have gotten a lot of questions over the years about AI and incorporating AI into our products. We've typically said, "We're just not quite there yet. We're seeing how things go," but I feel like there's been a little bit of a shift recently, internally, at least from announcements that I've seen, so I thought we would talk a little bit about that today. David, why don't you just jump right in? I know we've kind of been hesitant. I don't know if hesitant is the right word, but it seems like we're jumping in a bit more now.
David (00:40): Yeah. I think for me, AI really changed in the latter half of 2025. The models got better as they've been getting better for a while, but we gave them more powers, especially when it comes to development. There's a new, or not even that new, approach where instead of having the AI in your editor, auto completing things, trying to help you write something that you're in the process of writing, they moved out of that and into the terminal running in agent mode, which basically just means that they're running on their own from a script, from a plan where you tell them to do something and they go off using the tools available on the computer. They use your terminal, they run bash commands, they might even run some programming. They start looking things up on the web. They've been doing that for a while. But when you combine these things, where they have control over your computer, which sounds a little scary, but when you see it in action as an agent doing development alongside you, not sort of on top of you, it really changes the game, not just in how it feels, which to me was a really important part of it.
(01:55): I played around a fair bit with the auto completion paradigm of AI as it was for developers in the initial phase, and I didn’t like it at all. I did not like this idea that I have a thought in my head coming out through my keys and then boom, the AI tries to … Did you think of this? Did you think of that? Can you shut up for a second and just let me type the damn thought out of my head? Now, I know there are others who rather enjoyed that and saw some great benefit out of it. I didn’t like it. And that was where a lot of my opposition to using AI as a developer tool was coming from, that I could see the advantages, I could see the promise, at least I should say, but it didn’t feel like it was in a format that was compatible with how I wanted to work.
(02:42): I don't like working in IDEs, these heavy duty development machines. I like a text editor. I like to type my little code out by hand. And it felt like it was interfering with that. It was stepping on my toes. But this new way where they run in what's being called agent mode, where they're on their own doing stuff that you set them off to do, sometimes for 10 seconds, 30 seconds, other times for multiple minutes, really changes things. So you have this duality here where the mode of using them for me as a developer totally changed. It went from auto complete to agentic. I mean, I'm having a hard time even saying that word agentic because it's so damn buzzwordy at the moment that you almost have to be like, all right, make sure you don't say some bullshit here, because agentic sounds a little bit like bullshit.
(03:35): But I used the term anyway. There’s agentic mode where they’re doing work on their own, that was huge. And then the step up in capacity is really huge too. The initial phase where I tried to use AI, even in early ’25, I kept checking in every month through every new model drop. I’m like, all right, cool. Everyone’s getting excited about this. Let me try it for something real. Let me give it a real code base, a real problem, see if it can help me. And I kept seeing like, do you know what? There’s something here, but it’s not there yet. It’s not actually helping me. I can see the promise. I can be amazed that we’re even making computers do this, even if I don’t actually want to keep any of the work it’s doing. And again, here in the second half of 2025, now coming into ’26, it really just changed.
(04:24): The models with their mode of working became so good that when I asked them to help me, they were actually helpful in a way where I wanted to keep the majority of what they produced. And I wrote about this earlier this week about promoting AI agents, with sort of a double meaning. Both, hey, you should check this out. It's actually quite cool what's been going on. So if you've been, I don't know, using ChatGPT or using Cursor or any of the auto completion modes, try out this agentic thing. If you're working as a software developer or a designer working directly with code, you should check it out because it's really cool. But then for me personally, there's a promotion here where the AI that I've been using for the past several years just went from a text input box where I would ask it things, help me with APIs, maybe even double check something if I was uncertain or wanted a second opinion, to actually doing work where the product of that work could be kept.
(05:28): I'd still tweak it. I'd amend it. Sometimes I'd just chuck it out. It wasn't good enough, but I've been shocked how often it's like 80% there. And sometimes I could get the last 20% by just iterating with the agent. And more often I'd say, I just jump in and do the last 20%. If someone can take a problem that I'm facing and I want done and do 80% of it, that's amazing. If you could even just do 20% of it and not fill the rest of it with bullshit that would take me even longer to pull out, that's still amazing. And it's that personal experience that these agents, this AI that we have access to now, have really just taken a leap that has made me question my own relationship with the AI, how I feel about it, how involved I want it to be in my own work.
(06:21): And then what are we doing at 37signals? This part also came from Jeremy, who has been pioneering ways of putting these agents to work on the toil that we have. And we have a fair bit of toil, programmer work that isn't the most exciting thing in the world, but it's very important and needs to be done. Things like reviewing security reports. So we use a system called HackerOne where external researchers, and I use that term very, very broadly because researcher is a really big word to use for 90% of the reports we get that are more like bullshit. But nonetheless, when someone finds an actual security issue, something important, it's obviously of huge value, because if they find it before a hacker does, that's great for us. And that's why we participate in the HackerOne program where we have these bounties; that's what motivates these researchers to submit their findings to us.
(07:25): I think there are payouts of up to $5,000, or maybe it's even more at this point if you find something truly catastrophic that could give you access to an execution context or exfiltrate data or what have you. The vast, vast, vast majority of reports we get, even the good ones, they're like, oh yeah, I guess if you really string all this stuff together, you could cause some harm to our system in some way. But as I said, 90% of it is not of high quality, and it takes us quite a long time to review because we've got to check. You don't know in advance what's the great report where you're going to find something really important that needs to be fixed and what's just low effort sludge, as I called it, created by humans or maybe by not so good AI and submitted to us.
(08:14): So Jeremy came up with a system where we can use AI to give it access to the history of the reports we’ve gotten, give it access to sort of a scoring of the people submitting reports. Has this person been submitting nonsense in the past? Then maybe take that into consideration when you’re weighing whether this report merits more delicate inquiry. And it’s been a huge hit. It’s been a huge step forward for us that we can save a lot of time by having the toil of reviewing these reports pre-cooked by an AI agent that can go through it and then we can spend more of our time on the high value stuff and less time on the low value stuff, which in some ways is kind of just taking the concept of spam detection and applying it to a slightly different context. This isn’t spam, this is just low signal and AI turns out to be really good at that, right?
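To make the triage idea concrete, weigh the submitter's track record and the report itself, and only escalate what clears a bar, here is a purely illustrative Ruby sketch. The class names, weights, and thresholds are invented, not 37signals' actual system, and a real version would have an LLM score the report body instead of the crude length heuristic used here.

```ruby
# Hypothetical report triage: combine a reporter's historical
# signal-to-noise ratio with a score for the report itself.

Report = Struct.new(:reporter, :body, keyword_init: true)

class ReportTriage
  def initialize(history)
    # history: { "reporter" => { valid: 3, noise: 17 }, ... }
    @history = history
  end

  # Fraction of past reports that were valid; unknown reporters
  # get a neutral 0.5 rather than being penalized.
  def reputation(reporter)
    stats = @history.fetch(reporter, { valid: 0, noise: 0 })
    total = stats[:valid] + stats[:noise]
    return 0.5 if total.zero?
    stats[:valid].to_f / total
  end

  def priority(report)
    # Stand-in for an LLM scoring the report body.
    signal = report.body.length > 200 ? 0.6 : 0.3
    (reputation(report.reporter) * 0.5) + (signal * 0.5)
  end

  # Anything above the threshold gets a human's attention.
  def needs_human_review?(report)
    priority(report) > 0.4
  end
end
```

The point of the sketch is the shape, not the numbers: history pre-cooks the decision so humans spend their review time on the reports most likely to matter.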
(09:10): It's really good at filtering noise from signal. So that's one example of something that has really helped us. And then we've started looking into where else it can take some of this toil away. One thing we do, for example, is these things called console reviews where we, on a biweekly basis, review all the access logs when programmers have used access to our production systems and our production data in service of doing investigations or inquiries into customer reports. And we need to make sure that no one we have on staff is accessing data in any way other than what they have been granted permission to do by customers. So we review the logs and we make sure that people have got the right permissions and all of that stuff. And it's kind of tedious. And it's the same sort of thing where I don't know if we've … oh, that's not true.
(10:02): When we first started that program, we found some bad habits, some sort of sloppy ways of getting access to too much data at once, and we shaped those things up. But the last many, many reports that I've read from when we do this by hand have said, all right, here's 42 accesses to personal information that we have in these systems. They were all vetted, they were all good, they were all within jurisdiction. Now, that's the kind of work that does get a little cumbersome or boring at times when you've been doing the same thing for a long time and you keep finding nothing. Well, do you know what? AI is perfectly patient to do the same work over and over and over again, and you can train it quite well to look at some of these things and find the operation that's not quite right.
(10:49): So that's another part of it. And then finally, we've also had some good success with it assisting us on on-call or performance issues where we have some sort of degradation. We need to find out what's going on here. Can you look at the logs? Can you look at our exception systems, our performance monitoring systems? Tell me what's going on. And it's shocking at times that you give the AI access to these systems and it'll pull all sorts of stuff and do all sorts of queries and it comes up with a goddamn answer where you just have to sit back and go, that's pretty impressive. No, that's damn impressive. And how did it do it so quickly? So these are some of those areas, toil, investigations, inquiries that require pulling a bunch of things together, where we're finding that AI is starting to live up to just a fraction of the hype and the promise that we've been pumped full of for the last, what, two and a half, three years, right?
(11:46): It's finally starting to get to the point where this has gone from, oh, that's cool, to, oh, that's really useful. So both of those two experiences, as a programmer using AI to assist our work and as an organization employing these AI agents to help with some of this toil, have us finding a new enthusiasm for applying it. Now, it's interesting because both of these cases are actually internally facing. They're not about features. Some of the early exploration with AI that we didn't end up shipping was, how can we jam this into the product? And we tried certain things around search. We tried certain things around commands and auto completion, and we never really ended up with something that felt like a slam dunk. This is just amazing. Everyone is going to go, wow, I'm so glad you shipped this. A lot of it did have this tinge of, do you know what?
(12:45): If we launched this, people are going to say it's a little bit of AI slop. It's about putting a sticker on something where it's not uniformly better after we've introduced this, and that's not what we want to launch. So in that sense, I don't mind being a little later to the game. And then when we do show up, we ship something that's useful. Now, all of this work with our own personal toil reduction, our programmer systems, our design systems that we're getting out of our engagement with AI, as it has happened since probably even just the last three, four months of 2025, has given us a newfound zeal to try to find some of the areas where we can deliver that kind of value, that kind of unequivocally positive impact in the product itself. And we have a bunch of really neat ideas and I'm really excited about one of them specifically around Basecamp, but I don't want to sort of just…
Kimberly (13:42): No spoilers. …
David (13:43): Throw open the kimono just yet because we may not launch it either, right? This is the thing with AI. You can get excited about seeing these glimmers, the shimmering of, holy crap, we’re about to enter into a new world. And you get these anecdotes, you get these experiences, these peak experiences. I had one the other day where there was a bug in the latest version of Rails and it was producing this really weird artifact when you were opening a Rails console and I just sighed because I was like, do you know what? I don’t really want to do the deep dive right now to figure out what the problem is. And I gave it to, I think it was Opus, which is Claude’s latest model, which is one of those frontier models that really does feel like it’s a serious step forward, the Opus 4.5.
(14:30): I give it this task and the fucking thing starts pulling out a debugger and all sorts of tools that I know exist, but this is not something where I just go like, all right, I got 45 seconds here. Let me just crack open this toolbox right now. And I just had to sit there as it was thinking through, and I was seeing all the thinking. This is what I love about these new thinking models, you get to see how it reasons with itself. Whether half of it is just for our amusement and edification or whether it's actually what it's doing underneath, I don't really care. The show is spectacular. It's going through these motions like, oh, I think there might be a problem with this thing. Maybe it's something it's linking to. I should try to pull up the debugger and connect it to the process. And I'm like, wait, you can do that?
(15:21): How do you know how to do that? And then it keeps going through, and the first, I don't know, two hypotheses don't pan out. It's not what it thinks it is, but it just keeps going. And I'm just sitting there, oh wow, this is a show here. Let me see where it ends up. And it ends up just figuring it out, pinpoints exactly where the issue is, finds the commit where things went wrong, comes up with a patch that I could just put in the Basecamp code base to work around it until we fix it in the core of Rails. And I simply just had to sit back and say, do you know what? I wouldn't have done this.
(15:57): I don't know, maybe in theory I could have over some period of time, over some period of half hours or full hours dedicated to it, but I wouldn't have done it right now in that situation. I had other things I wanted to do. I would've left it aside and found a way to work around it. Now, AI is getting problems out of my way. Not everything's like this. I've had other times where I've gotten so excited from one of those positive experiences and I ask it for something that I actually think is rather simple and it goes and makes a mess of things. But the ratio is changing so fast between I ask the agent something and it makes a mess of things, and I ask the agent something and it either just solves the damn problem or at least gives me enough of a draft that I go like, oh, I see where you're going here.
(16:43): I’m going to wrap it up. Thank you very much for all the clues. This is exactly what I needed. Now I can polish it off or I can finish it. Or sometimes I even throw out its solution entirely and write it all from scratch again, but it’s given me the blueprint of where I need to go. Other times I don’t even just get one blueprint, I just get a bunch of drafts. I tried this with one job I had. I wanted to look into this MCP standard, which by the way now seems like maybe it’s kind of fading. Things do move quite fast with all this AI stuff, but I wanted to see like, all right, if I wanted to build this in a Rails kind of way, maybe we could have a new framework for it. Let me just see what a bunch of different agents can do.
(17:24): And this is where one of my favorite new tools is a TUI, a terminal user interface, called OpenCode. And OpenCode, which by the way we just pushed into Omarchy and now it's part of the default setup, is a way to use all of the models from all the providers at the same time, in the same interface. So you can use Claude Code, you can use Gemini, you can use OpenAI's Codex and a bunch of other commercial offerings. And then you can use all these open weight models, which are kind of like open source versions of AI that are being run by commodity hardware providers. And I got five drafts from five different models on this problem that I had, and they were all quite different, but you know what? They all worked, which shocked me to no end. I gave it a fairly open-ended task.
(18:20): It was not a detailed plan. Just build me this MCP connector. It's got these three tools. This is what I want to show. Here's how to authenticate, use a bearer token, blah, blah, blah. The agents go off and they spend between about two and a half and I think six minutes on the task. Each of them, every single one, was able to use the tools available, the tests that we already had, to come up with something that worked. Now in the end, I didn't actually like any of the five enough to keep them, and I wanted to really focus on the fluidity of the interface and so forth. And they weren't trying to do that, but I was so blown away by the fact that they all produced a working solution. Several of them had really good ideas that I hadn't actually thought about myself.
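As a rough illustration of the kind of task described here, a connector exposing a few named tools behind bearer-token auth, a toy Ruby sketch might look like this. The real MCP spec is JSON-RPC based and considerably more involved; the tool names, request shape, and token below are all made up for the example.

```ruby
# Toy "MCP-style" tool server: register named tool handlers,
# check a bearer token, then dispatch requests to them.

class ToolServer
  def initialize(token)
    @token = token
    @tools = {}
  end

  # Register a tool handler under a name.
  def tool(name, &handler)
    @tools[name.to_s] = handler
  end

  # request: { "auth" => "Bearer ...", "tool" => "...", "args" => {...} }
  def call(request)
    auth = request["auth"].to_s
    return { "error" => "unauthorized" } unless auth == "Bearer #{@token}"

    handler = @tools[request["tool"]]
    return { "error" => "unknown tool" } unless handler

    { "result" => handler.call(request.fetch("args", {})) }
  end
end

# Wire up a couple of illustrative tools.
server = ToolServer.new("secret-token")
server.tool("echo") { |args| args["message"] }
server.tool("add")  { |args| args["a"] + args["b"] }
```

Even this skeleton shows why the task is so tractable for agents: the spec reduces to a dispatch table plus an auth check, and the existing test suite tells them when they have it right.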
(19:10): And I could then take it all, put it together, and then come up with something of my own, but much faster, because I could get to running software. I could get something that worked, hook it up to my MCP debugger and see hello world come back out. It is one of the most exciting things I've seen in front of my eyes that a computer has ever done for me.
Kimberly (19:35): That’s a bold statement.
David (19:36): It really is. And it deserves to be up there. I will credit it right up there, like the peak experience of using AI in this way as a programmer, seeing it solve a hard debugging problem or produce a draft of some code that just works the first time. It is up there for me with the first time I sat down in front of a Commodore 64 to play Yie Ar Kung Fu with all the kids in the neighborhood when I was five. The mind blown experience of like, here I'm sitting with a joystick for the first time at age five or six and just going like, this is amazing. And then fast forward, I think ninth grade, '93, '94, it's got to be '94. Whenever the first version of Netscape Navigator, which wasn't called that, what was the first thing called?
(20:28): I forget. Andreessen's first version of a browser. I used that at a university, and amazingly they invited all these ninth grade kids in there and we got to make a webpage and we got to hit publish and it was live on the internet for the entire world to potentially see. None of them did, but just this knowledge like, wait a minute, are you telling me that I just published something that a person in South Africa or Japan or New York could see? What? That's crazy. That's the level we're at, or that's the level I'm at in my excitement for AI in this moment, which is kind of wild because I think a lot of people have been blown away for a while. I've been blown away by the conversations I've been able to have with ChatGPT or other models where I was just asking for information or whatever, treating it as a better Google. But to see it transformed into this agent form where it's doing work and just chugging along, asking sometimes for your feedback, other times not at all, just saying, I'm done. The problem with this is it's so easy to fall into these hyperbolic dreams of like, holy shit, if it does this now, after we've only known this technology in the broader consciousness for three years, where are we next year?
(21:51): And then people's brains start freewheeling into like, "They're going to change all the jobs tomorrow. They're going to kill us all." They're going to do all these things. And I get that. I really do, because the ramp has been so steep and the reality of where we even are today is so amazing that I forgive anyone for going, well, what is this going to be like a little further down the line, right? But I don't even want to focus on that because nobody knows, nobody knows where this is going. Nobody knows whether LLMs are going to tap out and we're actually almost near the top of the curve in terms of how much better they're going to be. Nobody knows if we're five minutes away from a breakthrough that gives us AGI, artificial general intelligence. Nobody knows if it's going to be another 10 years or it's going to be six months.
(22:43): So can we just focus on how good it is right now? Can we just focus on, if this was all it gave us, was this capacity in this moment, how truly incredible an achievement of mankind it is to have given birth to this? And you know, as soon as you start talking in these terms, you're like, The Matrix. That's what he said, right? Mankind celebrated itself as it gave birth to AI, and we marveled at our ingenuity, and then we set fire to the sky. So I'm trying to exist in some realm where I am endlessly fascinated and upbeat about what we're making computers do and not falling into the trap of hyperbolic extrapolations of what the future's going to be in either five minutes, five years or 50 years.
Kimberly (23:36): Okay. So let me ask you this because you have mentioned a couple times having these agents work for you, but you’re always reviewing, maybe an 80%, you’re finishing the last 20% or maybe even redoing it after you’ve gotten ideas. We get a lot of questions from people who maybe are designers and are using AI to program or are beginner programmers and are using AI. How important do you think it is, like you’re obviously expert level, for someone who is not as experienced, can they get this much satisfaction out of AI as you have where you’re clearly putting in kind of that final mile?
David (24:13): I think they can get more out of AI than I can get out of AI. All the things I’m asking AI to do, I know I could do given enough time, given enough dedication, attention, I could do it. I haven’t yet seen it do something for me where I just go like, I couldn’t have done that. No way, no how, any amount of time. That’s not what I’m experiencing. That is what non-programmers are experiencing when they’re asking the AI to build their idea and they interact with it and it produces the damn thing without them understanding a lick of what’s going on underneath. That kind of magic is closer to the magic of me playing Yie Ar Kung Fu for the first time. I didn’t know how to program a game. I was just marveling at the fact that someone did and it was this cool fighting game and the little characters were moving on the screen when I was moving the joystick and that was incredible.
(25:07): And we're democratizing the access to creativity in a way that very few moments in history have done so broadly. Now, I obviously look back at the history of computer programming and look at like, you know what? We've had other similarities here where we gave non-programmers the power to create programs. Excel is a wonderful example of that. There are companies everywhere to this day that run off monstrous Excel spreadsheets with cells doing math and all sorts of invocations, I see it all the time in racing, that are essentially small programs written by non-programmers to do business tasks of various kinds that they, in many cases, either couldn't afford, didn't have access to, or didn't think of having a professional programmer do for them, and they didn't have the skills themselves to do it. So Excel gave them this power. Now, what's funny is that Jason actually got started with his work in software using something sort of kind of similar, which was this program on the Mac, I believe, called FileMaker Pro.
Kimberly (26:16): Oh, I totally used to use FileMaker Pro. Yeah.
David (26:19): Oh, you did?
Kimberly (26:20): Yeah, back in the day.
David (26:21): Yes. And I'm sure you had that experience too, where I'm actually creating programs. I don't necessarily know how it's happening, how it's working. I know what I want and I can somehow string things together despite not having a full understanding of what that is. And I can get something that works, something that helps, something that does what I needed to do. And AI already today, not even thinking, well, what can it do three months from now, six months from now? No, no. Today it is doing this for tons of people, and they are justifiably extremely excited. Now, that's not to say that what these models are producing today has the level of quality where I'd say I'd want to put my personal data into it. I've seen enough horror examples of the AI making something that appears to work but is then leaking like a sieve or woefully insecure or what have you.
(27:16): But in many cases, it also just doesn’t matter. Not everything has the criticality of a pacemaker or a bank. There are lots of kinds of programs that exist at the early stages of the criticality ladder where even if it’s catastrophically wrong about what it’s doing and it loses everything, or even deletes your computer, do you know what? That’s a risk worth taking because cost of that is not that great. And the upside of getting programs that didn’t exist yesterday because you’re letting it do this is just worth it. Again, not in all situations, not in all cases, but this is where, especially these non-programmers who’ve had this experience building something they could not have built otherwise, that’s actually kind of good, that actually looks kind of great and that was made really quickly is accelerating and they start extrapolating. Well, okay, now maybe it’s producing something that’s not super secure and whatever.
(28:14): Well, hey, hello, innovator's dilemma. Is that not everything? Is that not everything that starts out feeling like a toy? Is that not everything where, well, it's not good enough for my enterprise-grade military demands and needs? No, maybe it's not. Well, not maybe. It's definitely not without supervision, but so what? That characterizes almost every single innovation we've had that completely upended the technology industry from day one. Everything starts out that way. Everything starts out being feeble, not enough, can't do the top end stuff, and then it grows into it. And I think this is why this very moment is so magic, because we can all see the progress. We can all see how quickly it's improving and no one knows where it's going to go. No one knows, are we going faster? Are we going hyperbolic now? Are we five steps away from the AI writing its own AI code?
(29:13): And then boom, that's how you get the singularity and all the either dystopia or utopia versions of the world that people like to project. I don't know. Holy shit, that's exciting. To be alive right now at this moment in human history, holy crap, what a privilege. There are very few other moments where you go like, in a two-year span, how did the world potentially completely rewrite itself? Even the internet, which really has rewritten the world in almost all the ways that a computer could, took quite a lot longer than that. From when I got involved with that first HTML page in ninth grade till the internet had rewritten society for everyone broadly, that was 10 years, 12 years, maybe something like that. We're barely three years into the public consciousness of AI existing. Now, I know there's a long history, literally going back to the 50s, of people doing research into all this stuff.
(30:17): So it’s not like, oh my god, overnight success. How did AI just happen? Well, it was just one guy in a shed like for two weeks. No, no, no. There’s a long history of that and obviously great respect to be paid to those people, but it all entered into the public consciousness when we got ChatGPT. And already going from that, the first thing, and especially on the image generation too, I’d say this is even more obvious where you see the first image generated by Midjourney V1.
Kimberly (30:42): With the seven fingers.
David (30:44): Seven fingers and mutant faces. And now we're at the point where there's no way a human could tell whether the latest AI models, when asked to generate hyperrealistic photos in the iPhone style, are legit or not. Boom, two and a half years, three years. So again, it's that velocity, it's that acceleration that's making people so excited, making me so excited, because I just love computers. I love when we make computers do new things. And I choose to be optimistic about us as a species being able to make those new capabilities do more good than harm. Because it's really easy to imagine all the worst things that can happen.
Kimberly (31:30): Oh yeah. There’s been many sci-fi movies telling us everything bad that’s gonna happen.
David (31:35): Yes. Well, actually, we don't even need to use our imagination. Just watch any sci-fi movie produced in the last, I don't know, 50 years. In fact, speaking of, I just watched 2001: A Space Odyssey, Stanley Kubrick's movie from 1968 or '64, I think it even is. '64, I believe. And you're like, motherfucker, how did he know? How did he know that HAL 9000 would say, "Sorry, Dave. I just can't do that, Dave." And now you see all these papers from the frontier labs going like, "Oh, when we've really tickled the AI, it actually tried to report us to the authorities, lock us out of our own computers and basically blackmail us into certain things." We were like, man, that was eerily spot on from '64. By the way, amazing movie, you should totally watch it, because in some ways it is the most timely depiction of all the fears we might have about AI.
(32:35): And on the other hand, it’s the most 64 movie you can imagine that starts with literally 15 minutes of apes running around, hitting each other over the top of the head with bones. You’re like, man, is there any content today that gets published where you just have a 15 minute scene that basically doesn’t go anywhere and have this one point at the end about an obelisk? That’s a really curious place to end up.
Kimberly (33:01): Clearly our attention span was different in 1964 than it is currently.
David (33:06): It was, which is why it's a good actual challenge, because I was watching it on the plane. And unfortunately, I'd say, the plane also had wifi. Therefore, at any one time I had the temptation that I could hop on something more immediately stimulating. And I was like, no, do you know what? In this moment, I'm going to sit down and watch a two and a half hour movie where the script could be summarized in about 90 seconds. It's not a complicated movie. It's an amazing movie. It's not a complicated movie. There's not a lot to keep track of. The plot lines, there's about five of them. So anyway, just to get to this point that we're at in this moment. Everything is amazing. Everything is scary. Everything is potential and reality at the same time. It is an incredible time to be alive.
(33:55): And I wish for more people to take that stance in the face of uncertainty because there’s a million reasons. You could spend your whole life being worried about all sorts of things. Do you know what? Fucking in ’64 when 2001 came out, we had the threat of nuclear war. At any given time, we could all be gone and a bunch of people, fewer then than now, worried their lives sick with something that never fucking happened. So could you just wait until the bomb drops? Okay, maybe there’s a few people who need to think about it and do some scenarios and some modeling. But if you’re not one of them, and if you don’t work at Goddamn DARPA or the Pentagon or research lab, don’t spend your time worrying about the end of the world. If it’s going to come, enjoy the time you have until then.
(34:45): Be optimistic, be happy until the final flash, when AGI shuts off the world and we have to light the sky on fire. That's my philosophy as a technologist too, even though I can see all the scenarios. I'm saying, do you know what? No, that's not how I'm going to go to bed tonight. I'm going to go to bed tonight going like, holy shit, this is amazing. Best time to be in love with computers and see what could happen, and wake up in the morning genuinely curious of where this is going to go.
Kimberly (35:18): I had another question, but I feel like we’re ending on such a high note. We should just wrap it up there. REWORK is a production of 37signals. You can find show notes and transcripts on our website at 37signals.com/podcast. Full video episodes are on YouTube. And if you have a question for Jason or David about a better way to work and run your business, leave us a video question. You can do that at 37signals.com/podcastquestion or send us an email to rework@37signals.com.