Leaving the Cloud: The Finale
You’ve no doubt heard the 37signals team talking about leaving the cloud. Well, now the transition is complete!
In this episode of Rework, 37signals co-founder David Heinemeier Hansson and Director of Operations Eron Nicholson sit down with host Kimberly Rhodes to discuss the unexpected speed of the move, the decisions and hurdles they faced, and a behind-the-scenes look at the meticulous process of moving their major applications, including those that had never before been run outside the cloud.
Tune in as they share the secrets behind their successful approach and the unexpected trick that allowed them to tackle individual components without risking major disruptions. Plus, they address questions from listeners, covering topics such as backups, physical resets, and the future of their infrastructure.
Listen in for an eye-opening conversation that challenges the prevailing narratives of the cloud era and delves into the untapped potential of running your own infrastructure.
[00:00] - Kimberly sits down with 37signals co-founder and CTO David Heinemeier Hansson and Director of Operations Eron Nicholson to discuss 37signals’ move away from the cloud.
[00:39] - David shares their surprise at the quick completion of the move and the challenges they faced in planning and technology decisions.
[01:26] - Out in six months: how the team, led by Eron, tackled the various aspects such as logging, failover, and managing two data centers while resolving numerous open questions along the way.
[03:06] - An opportunity to question existing principles and processes, resulting in a novel approach. It felt like a product launch!
[05:04] - Eron reflects on the accelerated timeline and how the entire ops and SIP team worked towards the same goal.
[06:32] - How the criticality ladder approach allowed for smoother progress to more complex applications like Basecamp Classic.
[08:47] - Eron explains the logistics and the unexpected trick that helped the team tackle individual components without risking significant disruptions.
[10:52] - Moving HEY, 37signals’ most critical and complex app, which had never before been run outside the cloud.
[11:23] - Kimberly opens the floor to a few user questions from Twitter, the first one from Pedro: "Did your company buy or already own your own data centers, or are you renting space in existing data centers? Is that even an option?"
[11:50] - Running your own data center requires a gargantuan scale and enormous investment, but renting space in data centers makes it easy and capital-efficient.
[14:29] - Amnesia of the pre-cloud era: using data centers is more accessible than most people think.
[15:38] - Eron shares how to make the data centers work for you and your company.
[16:39] - Kimberly shares a question from Moshi on Twitter: “Congrats on the move. Before deciding to move, did you try negotiating with any of the large clouds?”
[16:54] - David shares their unique advantage in cloud pricing negotiations.
[19:46] - Why the cloud math doesn’t work: the fundamental misalignment (and huge surprise costs) that led 37signals to leave the cloud and how running your own infrastructure makes those financial surprises disappear.
[21:11] - Kimberly shares a telling comment made by someone on David’s Twitter account.
[22:26] - How the cloud marketing campaign has successfully convinced people they’re dumber than they really are.
[24:49] - “If you possess the know-how and resources, it absolutely makes sense to manage your own infrastructure.” Eron shares the advantage that made the transition less daunting.
[25:32] - Kimberly shares a question from Demetro: “What about backups? How do you do it now? And what if the server needs a physical reset? How quickly can it be done?”
[25:47] - Eron explains the multiple layers of 37signals backup strategy involving multiple facilities.
[26:43] - Kimberly asks about the next undertaking for the 37signals team.
[26:52] - Eron shares the loose ends that need to be tied up and the next big question for the 37signals team.
[27:46] - David emphasizes the importance of taking a breather after the intense move to prepare for future challenges.
[28:36] - “Rework” is a production of 37signals. You can find show notes and transcripts on our website at 37signals.com/podcast. If you have a question for David and Jason about running a business, leave a voicemail at 708-628-7850 or email us your questions to have them answered on an upcoming episode.
Links and Resources:
Do you have a question for Jason and David? Leave us a voicemail at 708-628-7850.
Leaving the Cloud | REWORK
Leaving the Cloud Part 2 | REWORK
Sign up for a 30-day free trial at Basecamp.com
HEY World | HEY
37signals on YouTube
The REWORK podcast
The 37signals Dev Blog
@reworkpodcast on Twitter
@37signals on Twitter
Kimberly (00:00): Welcome to Rework, a podcast by 37signals about the better way to work and run your business. I’m your host, Kimberly Rhodes. If you’ve been following along with the podcast, you’ve heard us talking about leaving the cloud, that 37signals was making a move to host their products on their own servers instead of cloud services. Well, that move is complete. Here to talk about it: 37signals co-founder and CTO David Heinemeier Hansson, along with someone who’s joined us a couple times on the podcast, Director of Operations Eron Nicholson. David, I’m gonna start with you. I know that making this move happen was a big deal, and we got it done really quickly.
David (00:39): Yeah, it was surprisingly quick actually. When we started talking about doing this last year, we had a much longer process in mind. We thought, do you know what? This is gonna be something that potentially could take years, at least a year. And this is what’s so insidious about these big moves: before you start, it’s actually a little difficult to plan what velocity you can move forward with. And at that time, we weren’t even sure exactly what kind of technology we were gonna use, which didn’t make it easier. The only frame of reference we really had was when we moved into the cloud, and that literally took years. That was a huge operation of materially changing how the apps were deployed, putting everything into containers, and just figuring it all out. So that was the rule of thumb we had to go on, and I thought, yeah, okay, that’s what it’s gonna be, and it’s gonna be worth it.
(01:26): But it turned out quite differently. We, in six months, went from, okay, we’re doing it and we’re doing it this way, to being out with seven major applications. Six of them are these heritage applications, as we like to say, things we’re maintaining for existing customers but no longer selling. And then there’s the major flagship of this whole operation, HEY, our email service that was born in the cloud, where we actually had to figure out, how do we run that on our own servers? This was perhaps the most challenging of all the services we pulled back, but it also went way smoother than perhaps we could have imagined. And in this whole process, we built some of our own tooling to do this, and we built on top of a lot of tooling that’s already completely solid, that we can depend on. We’re building on top of KVM to turn these massive new machines that we ordered into smaller slices that we can use.
(02:20): And then we use Docker to run our applications inside of these containers. And then we use this new piece of tech that we built called Kamal to do the deployment: zero downtime deploys, so we can push out a new version of the app and no one is any wiser about the fact that it’s gone out. And all of that happened in just six months with the amazing operations team that Eron’s been leading, and a whole crew of people on that team figuring out how to do things from first principles. And to note here, not just figuring out how to get these containerized apps to run on our own servers, but figuring out how we do logging, how we do failover, how to deal with our two data centers. There were just so many open questions that we had to sort out that were kind of baked into the whole thing.
(03:06): We were gonna roll everything out in a novel, new way. So that’s always a great opportunity to question all your existing principles and processes and everything, and rejig what you always wished you could. This is one of the reasons why I likened this move to a product launch. When we launch a new product, when we work on a new product, as when we worked on HEY, it’s always that moment where you go, okay, so we’ve been building apps this way in the past. What do we wanna change? What would you do if you got to start with green field operations? And this was really green field. The servers we got were very green. They came shipped straight on pallets from Dell, brand spanking new machines. They were incredibly powerful, incredibly fast, I think 192 threads per machine with two CPUs in each machine, just a massive amount of computing power. Which, for example, meant that the whole VM slicing needed a new approach, because it was rare that we would throw a whole machine after a single service.
(04:08): Now we could fit multiple services on a single machine, all that stuff. And I think that to compress all of that into six months is beyond our wildest dreams. Now, there’s a slight asterisk at the end of that. We’re outta the cloud in terms of all the major applications, the databases, and so forth, but there’s a bit of mop up around. And then there’s the big thing, which is our file storage. We’re still storing eight petabytes of data in AWS’s S3 service. We’re leaving that one for next year. That’s a huge move in itself, and one that has tons of unknowns around it, but the major operations have been completed. It’s mop up, and we have Eron’s incredible team to thank for pulling that off.
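The zero-downtime deploy flow David describes, where a new version goes out and no one is any wiser, boils down to a simple pattern: boot the new container, health-check it, then cut traffic over and retire the old one. Here is a minimal Python sketch of that pattern; the containers and proxy are simulated in-process, and this is an illustration of the idea only, not Kamal’s actual implementation:

```python
# Conceptual sketch of a zero-downtime deploy: boot the candidate,
# verify it is healthy, then swap the live pointer so traffic cuts
# over atomically. Names here are illustrative, not Kamal's code.

class Container:
    def __init__(self, version):
        self.version = version
        self.healthy = False

    def boot(self):
        # In reality: docker run + app boot + health endpoint checks.
        self.healthy = True


class Proxy:
    """Stand-in for the traffic router in front of the app containers."""
    def __init__(self, live):
        self.live = live

    def route(self):
        return self.live.version


def deploy(proxy, new_version):
    candidate = Container(new_version)
    candidate.boot()
    if not candidate.healthy:
        # Old version stays live; nothing was swapped, so no downtime.
        raise RuntimeError("new version failed health check")
    old = proxy.live
    proxy.live = candidate  # cutover: requests now hit the new version
    old.healthy = False     # old container is drained and stopped
    return proxy


proxy = Proxy(Container("v1"))
proxy.live.boot()
deploy(proxy, "v2")
print(proxy.route())  # v2
```

The key property is that the swap happens only after the candidate passes its health check, so a bad build never takes traffic.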
Kimberly (04:51): Eron, so I’m gonna go to you because I know that when we first talked back in October of last year, it was like, this is gonna take years. As David mentioned, it took much less time. What do you think contributed to that, to speed up the process so much?
Eron (05:04): Yeah, it’s been incredible, and the whole team is really to thank for that. It took us two to three years to move all of this to the cloud, and so we didn’t think it’d take that long to come back, but our plan was to maybe do some small stuff within six months and then hopefully take on some bigger chunks and have HEY and the other big apps pushed out after a year. And we’ve definitely beaten that. I think part of it is the focus and the resources that we marshaled to get there. You know, we purchased a large amount of servers. We made a big splash there. And this has been one of the only times that I can remember in the decade plus that I’ve worked here that we’ve had the whole ops team and most of our SIP team pointed in the same direction at the same time. That really has given us the firepower to make this happen quickly. But it’s also just the fact that we reconsidered the way that we were building apps for the data center and decided to do things in a different way that’s much more streamlined, much faster. It’s all based on open source technology, there’s no vendor lock-in, and it’s been really incredible to see the team put each of these building blocks together that all combined to make the whole process run really, really well. It’s been great to see.
David (06:32): One of the things I’ve really liked about the process was the use of our criticality ladder, which is something we’ve used in the past. When we moved into the cloud, we started with the least critical application first, such that we could run quite fast, because we weren’t so scared that a minor hiccup, or even a minor outage, was gonna be catastrophic. So, as when we went into the cloud, the first application we pulled out was good old faithful Tada List, the free application we launched back in 2005 that has not been available for new signups in well over a decade, but still somehow has hundreds, if not just under a thousand, users every week that continue to use it. So that was the guinea pig that we pulled things out with, and that was really where we honed the process: how does the bit of new technology that we made ourselves, the Kamal stuff, actually work?
(07:22): How can we make it reliable? How do we set things up on the VM structure? We had some debates internally about how the database should run, for example. Should the database run inside of a VM? Should it run inside a container? Should it have its own dedicated host? And we got to settle all of those big questions on a relatively easy application with very low criticality, which meant that by the time we got to the more difficult stuff, the higher criticality stuff, when we got into Basecamp Classic, for example, we were ready to just put the pedal to the metal. We knew how we were gonna do it, mostly. Most of the great unknowns had been answered. So the minor hiccups we encountered with the major apps were just not a big deal, which gave us the faith to go quickly and also to parallelize the process.
(08:06): There were times where we were working, I think, on at most three apps at the same time, where we had one person or a person and a half or even two people on one thing, and then there’s another thing. And that helped really compress it, because we had arrived at a method for how we were gonna do the exit that really sped things up. But this general notion of a criticality ladder was something we even used when we moved HEY, because HEY didn’t move in one huge leap. It wasn’t like a master switch we pulled and, boom, everything is now on our own hardware and everything is off the cloud. We moved it in, I think, four or five or maybe even six different steps, where we took, oh, here’s some caching servers, we can move those over first.
(08:47): Here’s some database servers, we can move those. We have multiple database servers, we can move some of the lighter ones. We can move some job services. We can do all these things individually, piecemeal, where we can roll back if something goes wrong, or we can deal with slight degradation in the service on a miniature scale, so that it’s not so daunting. I think that’s how a lot of people think about it when they think of leaving the cloud: oh man, we have this large application, it has all these things going on, how can we do it piecemeal? And one of the tricks that we hadn’t even thought about, or I at least hadn’t thought about, was that we have stuff in the cloud and a data center that we’re co-locating in that’s really close. I think, Eron, correct me if I’m wrong, but like a millisecond apart, which meant that we could do these partial moves without having to inject like a 20 millisecond delay or something on database queries, which just wasn’t gonna fly; then you’d need that big bang move. So it is a technique I suggest people look into. If you’re in the cloud and you don’t even have a co-location for a data center yet, could you put the data center you’re gonna put your own stuff in quite close to the cloud? Then you give yourself a lot more options in terms of exiting in a graceful, measured manner.
Eron (10:04): Yeah, that’s correct. We’re in two facilities, one in Ashburn and one near Chicago, but our Ashburn facility is, I think, a half mile physical distance from Amazon’s US East One. So we have direct connections into Amazon between those two facilities, and yeah, it’s less than a millisecond to go between the two. So that made it such that we could sort of bridge between our facility and Amazon’s for a period of time to do that piecemeal move. Because the nice thing about going up the criticality ladder is we also went up the complexity ladder at the same time. Cuz Tada List is very simple. It doesn’t really have any dependencies on much other than a database. But as you keep going up that ladder, you get to apps that have Redis and have Resque and have other things that are needed.
(10:52): And the final destination was HEY, which is far and away our most critical and our most complex app, which had also never been run outside the cloud. So the decision by the team to move that a piece at a time was absolutely needed, because we did roll back several times. We brought mail over and we found some issues, and we sent it back to the cloud while we worked those out, and then we brought it back. It was all calm and collected. It took a little longer than flipping a big switch like we did with some of the other apps, but it was absolutely the right call.
Kimberly (11:23): And David, you mentioned Kamal, and I know you did a walkthrough of that on YouTube, so I’ll link to that in the show notes. But when you posted about leaving the cloud, or having left the cloud, you got a lot of comments on Twitter. I pulled some questions that I thought we could tee up for you guys. Since we were talking about data centers, this one came from Pedro: “Did your company buy or already own your own data centers, or are you renting space in existing data centers? Is that even an option?”
David (11:50): Yeah, I think this is one of those questions where initially I was very surprised to see the orders of magnitude people have in their heads about running your own hardware. They’re off by a substantial margin. No one is building their own data centers. That’s not just us. You need to be at such a gargantuan scale. Meta, okay, fine, they’re probably building their own data centers. Apple possibly could. I know they’re also renting a bunch of cloud stuff, but they could. That’s the level of scale. You need to be a multi-billion dollar operation in almost all cases to even contemplate building your own data centers. Building your own data center is hundreds of millions of dollars of investment, and then to operate it, very difficult. But there are thousands of data centers around the world operated independently, in some sense like storage facilities, where you can rent a closet or two, or in our case eight. We have eight so-called racks, four in each of the data centers, where what you’re renting is just a plot: you rent the power, you rent the bandwidth, and then you rent the physical location to put that stuff into.
(13:00): So that’s what we’ve done. We have these two data centers, as Eron mentioned, one in Ashburn, Virginia and one near Chicago. And we don’t even go there. That’s the other thing I think most people don’t realize. It’s not just that we’re renting this space, it’s that we’re not there physically. I posted when the hardware we had ordered, this huge half a million dollar order we had placed with Dell, arrived: it was two pallets, one for each data center. We never touched those pallets. We use a service company called Deft, this white glove service where a pallet can arrive and we tell Deft, hey, we got some new servers. They have people in the data center. We pay them to unwrap those servers, to put them into our racks, to connect them.
(13:48): And what we see then is an IP address that pops up, and we go, ping, all right, it’s ready to be set up, which is actually quite similar to how the cloud works. So I think this is some of the misconception. When we say, well, we’re moving outta the cloud, some people think, oh, you’re putting like a big computer under someone’s desk inside your office. That’s something I’ve heard, and it’s something that happened, right? People actually did that in the early days of the internet. You just had it on your physical premises, which I actually think is an interesting idea and possibly could work for someone. We’re a little, well, actually a lot beyond that scale. But this idea that you rent your space makes it very easy to approach. There’s not a big capital investment. You don’t have to think about…
(14:29): I’ve heard a lot, oh, then you have to have security guards. Are you hiring security guards? No, no, no. These data centers, they have security guards. They have someone managing the fire suppression system. They have someone dealing with the generators if the power goes out. You get all of that. So it is really a lot easier than I think most people think. And what’s fascinating to me is, this is something everyone knew five, six, seven years ago, cuz this is how all the internet ran. Everyone was in a setup like this. Whether they owned the servers directly or were renting them from someone, those servers sat in a co-located space owned by someone else, an Equinix or any of these big players that are out there. It’s only really the amnesia that’s set in in the last five, six, seven years. People have been born in the cloud and they just think, oh, what is this ancient technology that you’re using, these servers? It’s like trying to understand how the pyramids were built, as though the knowledge didn’t exist, as though there weren’t an entire body of experts and people who’d been doing this for decades. But yeah, that’s a fun one.
Eron (15:38): Yeah, it’s extra amusing to me because no one really builds data centers, certainly not companies at our scale. The hyperscalers, you know, Google and AWS, are building data centers a lot. But the amusing thing is that Deft, our data center facilities company, doesn’t build buildings. They rent rooms in a data center facility from someone else who builds the actual physical infrastructure. There are whole companies who only do that. And so, yeah, to even consider us building a facility to host our servers is ridiculous, cuz even if we spent hundreds of millions of dollars, that would be to build a pretty bad data center. You know, these are huge capital outlays to do well, and it requires a ton of expertise. But Deft will be more than happy to sell you a server, and you don’t have to manage anything beyond that. They will be happy to sell you a half rack or a quarter rack. Take what you want, get whatever resources you need for your company, and make 'em work for you.
Kimberly (16:39): Well, this is another question that came in on Twitter, and I feel like we’ve answered it a couple times, but it keeps coming up, so I’m gonna let you guys tackle it. Moshi wrote, “Congrats on the move. One question: before deciding to move, did you try negotiating with any of the large clouds?”
David (16:54): Yeah, I think this is a fun one, because we had perhaps the best cards on hand of any company our size that I can imagine when dealing with AWS. It is no secret that Jeff Bezos owns a minority stake in our company, which we were not exactly shy to bring up in any negotiation meeting. Whether that really mattered? It probably did. As everything does, there’s some influence in that, at least in making sure we got the best possible pricing for our tier. We’re never gonna get the pricing of a Netflix or someone else that just buys vast, vast capacity with these hyperscalers, but we got really good pricing for our size. And we also did everything on our end to optimize that budget. We signed up for long-term contracts. The vast majority of services that we were using, before we decided to exit, were signed up for at least a year’s worth of compute that we buy in advance on a contract that says, hey, even if you stop using it six months in (which is actually where we are a little bit now; we exited before some of those contracts expired), we’ll have to pay some of that out regardless of whether we’re actually using the services or not.
(18:03): And then for S3, all our eight petabytes of file storage, we signed up for a four-year contract to get the best possible pricing. So there’s all that, and then there’s the fact that we were on a schedule of monthly budget reviews for the cloud. We had someone, Fernando, comb over everything, add it all up in charts and graphs, making sure that nothing was running too long, nothing was sized too big, or whatever. So we’d done everything to boil that budget down. That’s how we ended up at, quote unquote, only $3.2 million. If we had had an unoptimized cloud budget, I’ve seen some of those, I’ve seen what people get, there’s no doubt in my mind we could easily have spent $10 million a year. And even a fairly optimized budget that wasn’t bound in as much could have spent double the amount that we had been spending.
(18:54): In fact, prior to us really putting a focus on it, we were overspending on the cloud, as most everyone does. I think this is one of those secrets of the cloud: so much of the profit that comes out of it is that people accidentally end up buying too much, not using the right thing, not tying in for what they need, and so forth. And that’s to the benefit of all the hyperscalers and to the detriment of everyone running in it, which is one of those financial aspects of this that I think is underappreciated. When we buy a bunch of servers, we do that analysis once. We sit down: how much do we need? How many vCPUs do we need? We’re really careful about spacing it out and getting the right stuff, and then once it’s bought, there’s no more risk. It’s not like we can suddenly run a query on a database that costs us tens of thousands of dollars, which is actually something that can happen in the cloud.
(19:46): If you run the wrong query on this massive network of machines, they’ll just bill you for what it costs to run, and it can be in the tens of thousands. And those kinds of surprise billings are completely normal for anyone in the cloud. That disappears when you buy your own stuff. You have fixed capacity, it’s known capacity, but you’ve already spent it and there’s nothing more to deal with. Now, all that being said, Eron and I still had a nice cordial conversation with the kind folks at AWS. We’ve been dealing with our account manager and a few others, and when we announced that we were leaving the cloud, of course AWS, as any good vendor would, wanted a conversation: hey, why are you leaving? Could we have done something different? What is it? But I think the thing is, the fundamentals just don’t line up for us.
(20:33): We have this stable business. We’ve been here for 20-plus years. We can afford a half a million dollar outlay on two pallets worth of servers. You’re never gonna make that math work in the cloud. The only way the cloud could make that work was essentially to give us stuff, right? Like, all right, we’re not gonna run our normal financial models, we’re just gonna give you a bunch of stuff. That’s not a business model and it doesn’t really work. And then any other customer of theirs would say, oh, 37signals is getting this bonkers deal, we want that too. It doesn’t compute. So there was nothing they could say, nothing they could do, to alter the underlying math of our exit.
Kimberly (21:11): I think it’s funny that that correlates with a comment on your Twitter. Someone wrote, “As someone who works on the ops back end of a major cloud provider, this is the way. Nothing we do can’t be done by a small team with some basic knowledge and a few racks. The services we sell are hideously profitable, relying on customer ignorance of how easily they could do it.”
David (21:30): I love that comment because it confirms my priors, so you always love those. But it also gives me this image of essentially the Mechanical Turk. People have this idea that the cloud is this highly sophisticated, battleship, galaxy brain construction that runs completely automated, right? Like there’s a bunch of robots just running around and there’s basically no one in the data centers. No, that’s not it. These hyperscalers employ thousands and thousands of people who run around doing all the same stuff that our team runs around and does: gotta work with the instances, gotta sometimes reboot a machine, something is stuck, something got locked. There are so many manual components inside the hyperscalers too that they’re keen to downplay in terms of their image. Like, no, this is so much more sophisticated and we have all this magic dust technology.
(22:26): No, you don’t. First of all, you’re running mainly on the same open source software as anyone could, and as we are. None of the fundamental paradigms, containerization or VMs or what have you, are different whether you run it on your own hardware or not. But the cloud marketing campaign has been exceptionally successful in convincing people that they’re dumber than they really are. That no one else has the capacity to manage their own hardware, which is just such a disappointing lesson for the industry to internalize when you know that’s not the truth, and we validated that it wasn’t the case for literally decades. The entire internet was built, and it scaled, and it got to where it is on everyone running their own hardware. And there are hundreds of thousands of highly qualified people who know how to do this.
(23:18): We have a team of about 10 of them. We needed that team when we ran in the cloud, and now we need that team when we run our own stuff, because the savings in terms of productivity are not actually that great. The work needs to happen regardless. There’s no internet service at scale that just runs itself. Maybe you can make something small run for a couple of years with no one tinkering with it, but something the size of HEY, for example, needs a bunch of people moving bits around all the time just to keep the thing alive. That’s true in the cloud too. You’re just paying other people. So the cloud is not just someone else’s computer, it’s also someone else’s ops staff. And you could just not do that: instead of renting computers and renting an ops staff, you could buy the computers and hire the ops staff.
(24:08): Now, that’s not a solution for everyone in all situations. You need a certain scale. I mean, we’re large enough to have a team of 10 people doing this stuff. If you’re just starting out, all the caveats we’ve gone over in this question are still present, ever more so. If you’re at the scale where you don’t even need one operations person, no, you should rent an operations person. You should rent like an eighth of a slice of an operations person that’s running inside the hyperscaler. But as soon as you go into whole quantities, I need a whole computer, I need a whole operations person, then that starts adding up, and it’s gonna be so much more expensive to rent that from someone else.
Eron (24:49): The one advantage we had as well is we never stopped running things in the data center. We never moved Basecamp 4 out of the data center. We just didn’t get that far along in our cloud journey, to the point where everything was in the cloud. We always had our data center facilities. We moved one of them during this whole process, which was also fun, but we never lost the expertise. And that is a high bar for some organizations: if you don’t have a staff who has done this, who has the expertise and knows how to do it, then yes, it can be a little daunting, and the cloud is much easier at that scale. But if you have the people who know how to do it, and you have the facility and the computers in place to do it, then it absolutely makes sense to do it on your own.
Kimberly (25:32): Okay, Eron, I have a quick question for you before we wrap up. This one comes from Demetro: "What about backups? How do you do it now? In the cloud, it's a one-button click. Do you use RAIDs?" I don't even know what that is. "What if the server needs a physical reset? How quickly can it be done?"
Eron (25:47): So there are a couple of things there. Yes, of course we have backups. We have backups in multiple facilities, in multiple layers. For the databases, for instance, we take backups that live on a backup server in our racks, and then we also ship those to Amazon. We ship those to S3 and store them there to have them offsite in cold storage. The S3 ones aren't very useful in an emergency because it would take a long time to get them back out of there, but that's the extreme worst-case backup scenario: we have something there. In terms of turning a computer on or off, that's what we have the technicians at Deft for, and we lean on them pretty heavily to do anything that requires actual hands-on work, although that's fairly rare. We have ways to get into the servers if they're misbehaving. We have out-of-band access to all the servers and the network gear and everything like that. So we mostly just need the data center technicians for, you know, times when we need a disk replaced, or a cable that went bad replaced, or something like that.
Kimberly (26:43): Okay. So before we wrap up, what's next? I mean, this was a huge undertaking. Eron, I think you've said there's still some cleanup to do. What's next for the ops team?
Eron (26:52): Yeah, we have moved out of the cloud with an asterisk, and there's a fair amount of work left to do within that asterisk. All the apps that customers use have moved, but we still have some components. We have a search cluster for Basecamp that needs to be moved, and we have a pretty large set of logging facilities that are also still in the cloud. Those are running on Amazon's OpenSearch platform, which has been pretty good but is pretty expensive, certainly compared to what it's going to cost us to run it on our own. So that is going to be the work over the next, hopefully, six weeks. We can tackle those, but they might go a little bit longer; we'll just have to see. And then, yeah, the big question beyond that is what we mentioned earlier with S3. We'll have to figure out what to do with the eight petabytes of our customers' most important data, and we'll figure that out next summer for a 2025 exit from the cloud yet again.
David (27:46): I think also just taking a breather. I mean, this was an absolute, I don't know if sprint is the right word. There were certainly moments of this that were a sprint, and I think getting everything back means just consolidating our gains, making sure that everything we've set up is set up in the best way, that every shortcut we might have taken along the way is paved and proper, and letting the whole team take a breather as well. I mean, we always do this with product launches: there's a push to get the launch done, then there's a follow-up phase afterwards, and then you need a bit of time to recuperate and make sure you don't just rush into the next sprint. So getting that done, and getting the team and the processes fully solid, as good as they can be, and then looking at the next challenge.
Kimberly (28:36): Okay. Well, Eron, thanks for joining us again. You're a podcast regular working on your five-timer jacket. We appreciate you being here. Rework is a production of 37signals. You can find show notes and transcripts on our website at 37signals.com/podcast. Full video episodes are available on YouTube and Twitter. And as always, if you have a specific question about a better way to work and run your business, leave us a voicemail at 708-628-7850, and we just might answer your question on an upcoming show.