Look Back Less
“If you’re going to look back at all, you should glance back but not stare.”
— Jason Fried
In this episode of the REWORK podcast, host Kimberly Rhodes sits down with Jason Fried and David Heinemeier Hansson to dive into a compelling question: How much attention should businesses really give to the past?
Jason introduced this concept in his recent HEY World post called “Look Back Less,” where he urged businesses to redirect their gaze toward the future, steering clear of excessive retrospective analysis.
Join Jason and David as they explore the limitations of retrospection, highlighting the power of personal stakes, gut instincts, and a readiness to take risks, all under the sway of the market’s response.
Listen in as Jason and David navigate the intricacies of this business philosophy, touching on recent Twitter and YouTube comments that both echo and challenge their perspectives.
Tune in to discover why continuous forward movement may hold the key to sustained success.
Check out the full video episode on YouTube.
Key Takeaways
- Judgment calls vs. data-driven decisions: challenging the conventional wisdom of relying solely on past analysis to predict outcomes.
- Discover the power of founder-led decision-making, where personal stakes and a willingness to take risks fuel faster decisions and a culture of experimentation.
- Learn why the ability to admit mistakes is vital to maintaining agility and adaptability.
- Uncover the significance of gut instincts and unarticulated wisdom in decision-making.
- Explore the vital role of continuous forward movement in navigating the ever-evolving landscape of business.
REWORK is a production of 37signals. You can find show notes and transcripts on our website. Full video episodes are available on YouTube and X (formerly known as Twitter).
If you have a question for Jason or David about a better way to work and run your business, leave us a voicemail at 708-628-7850 or send us an email at rework@37signals.com, and we might answer it on a future episode.
Links & Resources
- From Jason’s HEY World: Look back less
- Books by 37signals
- HEY World
- The REWORK Podcast
- The REWORK Podcast on YouTube
- The 37signals Dev Blog
- 37signals on YouTube
- @37signals on X
- 37signals on LinkedIn
Sign up for a 30-day free trial at Basecamp.com
Transcript
Kimberly (00:00): Welcome to REWORK, a podcast by 37signals about the better way to work and run your business. I’m your host, Kimberly Rhodes, and as always, I’m joined by the co-founders of 37signals, Jason Fried and David Heinemeier Hansson. This week we’re talking about a post that Jason recently wrote on his HEY World blog called Look Back Less, where he argues that businesses need to look forward and not so much back looking at the postmortems and analyzing things that have happened previously. So Jason, I know a lot of people commented on Twitter about this post, but tell us a little bit about what you’re thinking and then I have a couple of YouTube comments and Twitter comments I want to read for you. So tell us where this started.
Jason (00:41): Well, I was actually doing a podcast a few days ago and this topic of retrospectives made its way into the conversation. And it just reminded me how much I think they’re pretty worthless in our sphere of work, which is, let’s call it product development, software writing, design, that kind of stuff. It definitely makes sense if you have a reproducible process, if you’re making widgets or there’s a very predictable output, like you’re running an electrical plant and all of a sudden something goes wrong. You can trace it back because you know what it should look like on the other side and what the inputs are. Or if you’re making widgets, I think the example is if you’re stamping circles and they come out slightly less than circular, something went wrong, you can trace that back, fix that problem, whatever. When you’re dealing with a bunch of people making software, which is this sort of ambiguous, amorphous thing, every time you do something, it’s different.
(01:37): The output’s different. We make this feature, we make this feature, we make this feature, we make this feature. The output is not the same. Yeah, it has to work, but a million different things are different about it than the last time you did it. I don’t think there’s a lot of lessons, frankly, to draw from things when they don’t go right. And the ones that you can draw are pretty fucking obvious, I think. And if they’re not, you’re not going to find them anyway. What you’re going to do is you’re going to find something that you can hang your hat on and go, it was because we didn’t communicate enough, or we should have brought QA in earlier. Had we only done that, then everything would’ve been fine. Or had we had another designer or whatever it was, you come up with these things, there’s no counterfactuals to prove them right or wrong, but you’re like, that was the reason.
(02:20): That was the reason, and next time… and I don’t think that that’s ever really what went wrong. Sometimes things just don’t go right and you don’t really know why, or they did go as right as they could have no matter what you would’ve done. And also the idea that people say, well, if I could have gone back and done something differently, would I have? You couldn’t have. You were in the same exact moment, the same history of your life and everything before that, the same people in the room. You would’ve made the exact same decision. So let’s not go back and think we could change time. And again, I don’t think a lot of those lessons play forward in a useful way: if they’re obvious, they will; if they’re not obvious, they won’t. So my point is, if you’re going to look back at all, you should glance back but not stare. And I think retrospectives and postmortems and these sorts of reviews are about staring back and trying to squint to find the thing that went wrong. Again, if it’s a process that’s very clear and repeatable, different story, but in the kinds of stuff that we do, I don’t think it’s repeatable and I don’t think there’s a lot of lessons to draw.
David (03:20): I think the reason why you can’t draw these lessons in a productive way is you don’t get to test your hypothesis. So let’s say you involved QA a little earlier next time on the next project. How is that a controlled variable? How do you know whether that intervention actually worked or didn’t work? And I think we talked about involving QA earlier. The worst part of this, the worst part of retrospectives, and what’s so tempting to do, is to introduce additional process. Next time we have to make sure that we X. Now there’s another fucking line on the checklist of things you’re supposed to do. You do that long enough and that list is going to get tedious. It’s going to be a box-ticking event. Now, if you’re launching a freaking rocket into space, I’m really glad that there are people who have checklists and that they go through it methodically and they check that the valve is tightened and all of this stuff, because it’s repeatable and you’ll literally know whether you did it right or not, because if you did it wrong, people may die. In product development it’s just not like that at all. You don’t get to run the same experiment. You don’t get to run the hypothesis, and more likely than not, you’re going to feel like you should do something. Oh, not everything went well, so we got to do something, right? We can’t just do the same thing again because then we’re going to get the same outcome. No, you’re not, because all the inputs are different. All the people are different. I think for us, we do retrospectives on things like outages, and it’s usually around repeatable processes. It is things like, we were supposed to switch over from this database to that database. When there is actually a checklist and it is a repeatable process, then it’s great, it’s wonderful. What’s funny about the other side of it is it actually took until Jason posted that post for me to really think back upon all the many postmortems and retrospectives we’ve done on product development and think, do you know what?
(05:15): I can’t think of a single thing, not one goddamn insight that had come out of any of it. And we’ve been doing these for 10, 15 years probably at least, where I go like, oh yeah, that was a good use of two hours worth of time. In fact, I think it’s worse than useless. It is giving you the false idea that if you just analyze it well enough, as Jason said, you could have gone back and you could have gotten different outputs from those kinds of insights. You know what? Probably not, right? And this is part of the understanding here is that when it comes to product development where you don’t actually know even what you want, there’s not an output that’s even defined. You’re trying to build a feature that people will like, well, what does that look like? It could be a million different things.
(06:01): That’s why you can change the scope and so on. That’s what the magic of product development is, that there’s not a defined output. So when that’s what you’re trying to build, the main organ that works for that is the intuition, is your gut. And that’s already been processed. You’ve already done the retrospective just going through the motions of building the thing you were building, and maybe it didn’t work out as perfectly as it should have. All those inputs, they’re already down here in the belly swirling around, ready for the next time. And trying to fully articulate that wisdom is a complete fool’s errand. You actually can’t articulate it because it’s all in these subtle weights. To me, I think, do you know what, this is one of the reasons why I think AI might actually be onto something here, because no one fucking understands how AI actually produces what it does, because it boils down to sort of a gut sense of what’s the next token, what’s the next word going to be on the basis of everything that I’ve read and know. That’s much the same with good product people, good product developers, good product designers.
(07:03): Can they deduce exactly why the button is like this or why we’re cutting this amount of scope? No, they can’t, they can’t articulate it. And if you force them to, they will come up with a bullshit story.
Jason (07:15): It’s funny, the next post I’m working on is all about how everything’s a judgment call. People are like, this is a data-driven decision. No, it’s not. If a human’s making it, it’s a judgment call. It’s always a judgment call in the end. You can have a bunch of different inputs, but it’s still a judgment call. Otherwise, let the computer make the decision. But if we’re unwilling to do that and we’re bringing a human in, then that human is bringing to bear a million different variables, and they don’t even know half of them, that point them in this direction or that direction. So anyway, that’s the next one. Maybe we’ll do a podcast on that when that post comes out, but…
Kimberly (07:50): Okay, so I’m curious how this kind of ties back to analytics as well because I know we don’t have a giant analytics team or someone who’s posting reports every week about how things are going. Part of me thinks that this kind of ties into that as well.
Jason (08:05): We used to, and what we found is that we’re just not an analytics-driven product company. Like David says, we do look at analytics and data and performance numbers when we’re looking at serving up pages and more technical infrastructure kind of stuff, to make sure we’re serving stuff up properly and it doesn’t take forever, and where are the long queries and where can we knock those down, and all that kind of stuff. But as far as product decisions, we’re not driven by the data. We’re curious about it. I’m curious about this and I’m curious about that, but data, of course, tells a story that already happened. It doesn’t talk about what could happen or what could be. If you make this change or that change, you’re still running blind. You do not know what’s going to happen until you do it. The market will tell you the truth.
(08:50): Everything else is an estimate and a guesstimate, which is fine. It’s not a bad thing to have some educated stuff behind your gut, but ultimately you just don’t know until you make the thing. And so we’d rather just make the thing and find out for real than to sit around and pontificate and think about it and spend weeks perhaps paralyzed by choice, because we’re not sure what to do, because we don’t know what’s going to happen. In the meantime, we could have built the thing and found out for real. I’d much rather just do that. That’s just the choice that we typically make. And I think that that ultimately leads to the honest output and the honest decision and the honest inputs in the end, versus the analysis of it. Analysis, again, does not tell you what’s going to happen. Anything that’s forward looking is a prediction, and certainly some predictions are more certain than others, and there’s higher probability in some ways.
(09:36): In some ways there’s not. It’s all based on what was, and things can be different, and who knows what really mattered in timing and the market and the world. I mean, I remember when we were going to launch HEY, we were about to launch HEY and then there was a massive social uprising in the United States, I think, at the time. And we said, we should hold off on this. Had we launched it then, or had Apple not given us a bunch of shit trying to… well, they didn’t give us a bunch of shit. They said, you can’t be in the App Store. We didn’t plan that. That was a huge thing that happened that helped us. We didn’t know that was going to happen. Then it happened, and then what do we do about it? Well, we play into it, but that’s always moving forward. All this stuff is about moving forward. What are you going to do with what you have? Not what you think you’re going to have, but what are you going to do with what you actually have? What presents itself and how do you work with that? That’s what I think is the skill to really build up and ultimately the only honest approach to doing this kind of work.
David (10:28): And I think it’s that honesty that has led us to the realization that we’d rather not have a full-time data analyst. And we’ve had three data analysts over the past, I want to say 15 years, and they’re wonderful, careful, diligent people who could dive into the data and produce a statistically significant result based on that data. And we chased all sorts of insights for years and years. Oh, can we deduce exactly why it is Basecamp customers convert, for example? That’s usually the holy grail for SaaS companies. If you can look at a trial account, what does it take for that trial account to become a paid customer? Can we do more of that stuff? And we would come out of it, after months of analysis, dozens and dozens of pages, lots of fancy calculations (and I mean fancy in a respectful way here, like rigorous mathematical inquiries), and the answer would be, oh, people buy Basecamp more if they can get others to use it with them.
(11:31): Okay? I mean, I could probably have told you that. And also, is that even actionable? What can we do? So we try to make it actionable. What if we somehow rejigger the invite flow and we ask you to invite people earlier on? Do you know what? It’s not that easy. These insights are not that actionable. This, again, is usually the holy grail of data analytics: you produce actionable insights, and then you can take those actionable insights and you plug 'em into the system and voila, the business goes up and to the right. Not my experience, not our experience. Again, that does not mean there aren’t times where this is quite valuable, especially on things where the process is quite repeatable. I remember one clear victory we had, which was we changed the marketing page on an old product we had called Highrise, where we had a given marketing page and we changed it to something else, and then we kind of just went like, all right, that is what it is.
(12:27): And then six months later we did a reverse analysis on whether it worked or it had not, and it didn’t. It did not work. It actually materially harmed the signup rate and we lost a fair amount of business on that. So I think the A/B testing of the incremental changes there can work, but even there, you can get sucked into a local maximum. Jason has changed the whole scope of how we present Basecamp multiple times, and usually when it’s a big shift, it’s got to go on the gut of it. We’re going to do this. The same thing happened when we were playing around with different pricing options for Basecamp. We had a lot of really sophisticated analysis go into it, and in the end, we had to rely on the fact that Jason’s gut told him this is the right way to go, because all the analysis in the world was not going to tell us what the market was going to do.
(13:16): And in that particular instance, it’s recent enough that it’s vivid. We really would’ve been better off if we’d just thrown shit against the wall. So this is one of those areas where if you have a function, if you have a full-time data analyst, you will feel obliged to do full-time data analysis, which in some cases perhaps is useful, on the repeatable processes, and we have a bunch of things, performance and dah, dah, dah, dah, great. But on these kinds of things: where should the business go? Who should we target? How should we go about it? There are as many pitfalls as there are trampolines in terms of getting the business to the next point, and I think most people underestimate just how much it can slow down decision making and how much of a false aura of certainty it can give you. And where I often see this is that when it’s not your own skin in the game, when it’s not a founder-led decision, people want data to back them up in such a way that they can say, do you know what? The data said it.
(14:17): I’m not out on a limb here. I’m not going out on the board. You can’t just hold me personally accountable for the decision. This is why there’s such power in having founder-led businesses. Jason can go, I think this is the right move. Do I know for sure? No, I don’t. If I’m wrong, my ass is on the line. Who’s going to fire me? That gives you the confidence and, to some extent, for a fancy word here, the courage to try risky things, and that always has to be measured against, like, okay, so that looks risky. Changing your entire pricing model on an existing product. What do we do with existing customers? Dah, dah, dah. Yeah, there’s risk in that. Do you know what there’s also risk in? Doing fucking nothing. Staying the course, traveling down a road that’s going to lead to a dead end at some point. It’s not like the fully analytical approach to life and business has this miraculously better outcome rate.
(15:10): Absolutely not. Most businesses that continue to innovate and do well, they are driven by people who will jump into the unknown, go like, yeah, I don’t know. We’re going to find out. We’re going to see what happens, and then hopefully I get enough chances to be right and hopefully my gut is attuned enough with the market, and I don’t exactly know how to articulate why it is I believe this is right or why this is a danger moment. I mean, this is one of those areas where the truth often comes out the day before launching, and it’s like you have this supercomputer in your gut and it’s computing all these things, and you’re about to push go live, and suddenly the supercomputer comes up and says, like, no, I don’t feel good about this. This doesn’t smell right. We’ve had this time and again, and you know what? We can have all the reams of data in the world, all the data analysis. It doesn’t hold a candle, most of the time, on those leaps of faith, to the supercomputer in your gut.
Jason (16:06): The other thing is, yeah, that’s why it’s always a judgment call. And it’s also why companies are just so disingenuous about this. They always talk about how everything’s… they’re very analytical. Yet when you hire an executive, and executives typically are put in charge to make decisions, what are they looking for in an executive? Experience. What is experience? It’s not reading a spreadsheet; anyone can read a spreadsheet. There’s not people who are better at reading spreadsheets than others. I mean, there are, but that bar is relatively low. It’s cleared and then that’s that. It’s experience, it’s judgment. They’ve seen a lot of things. They’ve absorbed a lot of things. They’ve been through a lot of things. These are human experiences. This is what judgment is. It’s a collection of human experiences, some that you know and some that you don’t know. We don’t know what colors every decision we make.
(16:49): We don’t know what made us think this or made us think that. We can think we know, we don’t really know, but it’s experience. So they’re not going to hire an executive with one year of experience. They’re going to hire an executive with 25 years of experience. That simply means that they’re 25 years ahead on human intuition on things because they’ve seen a lot. So companies actually thoroughly value experience and judgment, yet they, for whatever reason, don’t want to give it credit when making decisions, but ultimately that is the decision. That is how decisions are made, I believe. And the other thing, to David’s point about people being held accountable, is that it’s all about everyone searching for certainty and validation. We think the only certainty is to build the thing and put it out there.
(17:32): That’s how, if you want to know: make it, and make it fast and make it often, and then you’ll find out. I’d rather do that than wonder and guess and predict. Let’s just make the thing and see what happens. Now, we’re not making new products every two days. That would be ridiculous, but how many consequential decisions are you really making in a year anyway? Probably not that many consequential ones. So pick a lane, make something, put it out in the market and find out what happens, and then you can adjust from there based on real information. So pre-certainty is not really ever on the plate for us. We just want to feel good about it, and feel good enough about it. In fact, all we’re really curious about is how maybe is something, not like how certain is it, but how maybe is it? Is it very maybe? Is it a little maybe? That’s the best you can really do, and I think as long as you’re honest about that, I think you can move faster, and that’s kind of what we’re after.
David (18:22): This is one of the areas where the key heuristic I picked up from the Elon Musk biography was the intention that he looks at himself and his decision-making framework and goes, if I am not wrong, badly so, at least 10% of the time, when we’re making decisions about what goes into the product or what we can cut, how fast we can go, how optimistic a deadline could be, whatever, then I’m not close enough to the line. This is one of the things that really, I think, clicked for me. I’ve always thought about, all right, I know you have to go close to the line to sort of be able to squint and see where it is. I hadn’t thought about it in the sense of, you know what? You should think about it as: if not one out of 10 of your decisions is wrong, full on, has to be reversed,
(19:15): you’re not going fast enough, you’re not making quick enough decisions. You should move faster, make more of them, and know that the majority of decisions are reversible. I think what should really give you so much faith in your ability to make more decisions quicker, and make them more decisively, is that almost everything can be reversed. When I think back on the 20-year history we’ve had, I’m even struggling to think of the last time we made a fully permanent and irreversible decision where, if we were really, really wrong, there was just nothing we could do about it and it would suck tremendously. Versus, I can think of probably 10,000 decisions that fell in the other camp where, do you know what? We had some maybe, a lot of maybe, a little maybe, on the decision going in, and we knew that it was fine to trust that amount of maybe because we could always revert, we could always go back. And I think the main cost there is your ego. And this is where ego really comes in the way, when you’re hiring executives who feel like their situation is on the line if they make the wrong calls.
(20:30): Again, this is why this whole founder setup is so important to some extent, because you can put the ego out of the way. Not that there aren’t plenty of founders, including us, who occasionally or perhaps even a lot of the time have an ego that’s in the way, but at least you have a method of thinking about getting it out of the way. Do you know what? If I’m willing to be wrong, we can go faster. If I’m willing to be wrong 10% of the time, we can go really fast and we can revert the times when we are wrong and we can go like, yep, didn’t call that one right. So isn’t that the human condition? Didn’t call that one right? Who the fuck goes through the whole of it and goes like, yep, right on everything? Someone who made no decisions at all, which is the worst decision of all.
Kimberly (21:17): Okay, before we wrap up, I want to get your hot takes on this one tweet response that we got, because I think a lot of people just don’t get it. This idea of not looking back, it’s such a traditional business strategy. Someone on Twitter wrote, am I missing something? How can you focus on doing better next time without identifying what needs to improve? Without creating space to address problems, the main focus moment to moment is simply getting the work done, and issues fester unless addressed. Do you have thoughts on that?
Jason (21:46): Yeah. David, do you want to take that first? I’ll add a thought.
David (21:49): Yes. To me, it goes exactly to the parallel I tried to draw earlier with the AI model. We’ve already been through this. Yep. I did all the work. It didn’t pan out. That was probably weeks, maybe it was months of learning. That’s the real learning. It’s not sitting afterwards and trying to make a list of it. The real learning happened in those two months. That was the training data. The training data’s already in the gut. It’s already being fed into the supercomputer and the next time, do you know what? You will have all of those things in your mind. You don’t need to articulate them. Now, again, repeatable processes, blah, blah, blah. We have given all the caveats, but when it comes to product development, you do not need to articulate everything.
(22:31): You need to realize that the value was the experience, that it already comes pre-programmed with the set of lessons that you may not even know. Because what does it mean, let’s involve QA earlier? What is too early? Is it like five months in advance? Is it a little? You’re going to have that at your fingertips in the moment, your weights, which is what they talk about in AI, right? You’re training the weights of the language model. Your weights have been altered by going through the experience. It’s not like you can’t absorb that knowledge unless you sit down and talk in a circle for two hours. What utter fucking bullshit. I mean, how preposterous is it to think that it’s those two hours, that that’s where all the lessons and learnings are going to happen? What? No. Let me map it over to race car driving: I usually drive with professional drivers.
(23:27): I get the data where I can see exactly what they do on a split-second basis. I can see exactly how much throttle they have, exactly how much steering wheel. Does that mean I can replicate what they’re doing? No, it does not. The only way I get to be a better race car driver is I can look at that, but then I got to go out and do it, right? I got to actually get the experience. I got to get the muscle memory weights just right. I can’t just look at a piece of paper. I can’t just look at a set of conclusions and learn anything from it, and perhaps that was the greatest fallacy of all. I used to think that you could get second-order benefits from retrospectives, that you could have a group of people who had worked on a project and they come up with five things they learned or 10 things, and then other people could learn something from that. No, absolutely not. If you can’t learn from the two hours someone sat together and tried to compile this list, there’s no freaking way you’re going to learn anything by spending two minutes reading the main conclusions of something and now you have the wisdom that took them two months to acquire. Absolutely not. It is delusional in the worst sense of sort of the academic intellectual delusion that all wisdom is articulated. No, it’s not.
Jason (24:42): Yeah, I don’t have much more to add. That was a spectacular answer. The thing I would say though is that I just think that you reflect moving forward. You don’t reflect back. You learn by doing, and you learned by doing. You had the experience and you know what went wrong or you know what went right. And if you’re completely shocked, if you thought everything went beautifully well, and you put this thing out in the world and it’s an abject failure, and you’re floored and shocked by it, I don’t think you’re going to… you’re fucked anyway, if your judgment is that off, that completely off. And looking back and replaying the tape, which you don’t have anyway, so you’re remembering the tape. You’re not replaying any tape. There’s no tape. You’re remembering the things you could have done, and then you just pick and choose, because you have to, and you assume that these are the things that were wrong, and then next time you’ll do, you’ll do it better.
(25:30): I don’t believe that that’s true. I really, truly don’t. I think you’ve internalized, like David said, you’ve internalized the training data. You went through the process and you’re going to get better by doing more of it, not by reviewing it. It’s like learning to play an instrument. You get better by practicing more. Now, you can practice poorly, and there’s all this deliberate practice and whatnot. You can improve your practicing skills, but you improve your practicing skills ultimately by practicing more, going forward, not backwards. You don’t review your lesson with your teacher over two hours and go through all the things you did wrong on the guitar. He’s like, no, next time, let’s do this, let’s do this, and you try that and do that, and that’s how you learn. That’s how we learn everything in life: you learn by doing. You learn going forward.
(26:12): We’re not sitting around reviewing every last thing that we do. We’re making mistakes all the time. You go, that was stupid. I know that, I won’t do that again, or whatever. It’s all about forward. It’s all about doing. So yeah, I don’t think there’s a lot of gold nuggets sitting around that haven’t been mined, that are just sitting there where, if only you look harder, you’ll find them. Especially since the further away you get from the moment, the worse your memory is about it, and people start to make up stories about what it was and what happened and what could have been better. I just don’t buy any of that, frankly. So yeah, the thing is, we’re not suggesting you don’t learn. You have to learn, but you’ve already learned, and now you need to surface it, and you surface it by doing it again. That’s how you get better, so yeah.
Kimberly (26:55): Okay. Well, I will link to Jason’s original post in our show notes. REWORK is a production of 37signals. You can find show notes and transcripts on our website at 37signals.com/podcast. You can also see full video episodes on YouTube and Twitter, and if you have a question for Jason or David about a better way to work and run your business, leave us a voicemail at 708-628-7850. You can also text that number or send us an email to rework@37signals.com.