Bring your AI agents to Basecamp
Opening things up changes how people use your product. In this episode, David Heinemeier Hansson walks through a new feature that makes Basecamp accessible to AI agents through a command-line interface (CLI). He shares how this update can lead to entirely new ways of working for those who are technically minded.
Watch the full video episode on YouTube
Key Takeaways
- 00:10 - Redefining product accessibility
- 03:27 - Opening the door to AI agents
- 10:07 - Starting small and seeing where it leads
- 13:37 - The future for agent accessibility with other 37signals products
Links & Resources
- Bring your own AI agents to Basecamp
- Record a video question for the podcast
- Watch The REWORK Podcast on YouTube
- Sign up for Basecamp for free
- 30-day free trial of HEY
- Fizzy – a new take on kanban
- Books by 37signals
- Jason Fried on X
- David Heinemeier Hansson on X
Transcript
Kimberly (00:00): Welcome to Rework, a podcast by 37signals about the better way to work and run your business. I’m Kimberly Rhodes from the 37signals team, joined by Jason Fried and David Heinemeier Hansson. We are talking about tech things and AI things: Basecamp has recently been made agent friendly, so your AI agents can work within Basecamp. I’m not an expert, so I will let you guys talk about it. David, do you want to kick us off?
David (00:25): Sure. I’ll start by quibbling with the term, as we did just before hitting record. Agent friendly I think is a fine term, but to me it’s actually more about accessibility. And this was the contention we had: what does that mean? Traditionally in web design, accessibility means making it easier for folks who might be blind, might have limited vision, might be colorblind, or have any of these factors that make it difficult for them to use a website or an application that has not been designed with those factors in play. So then you do accessibility work. You do contrast testing. Oh, if you have this color next to this color, that doesn’t work very well if someone is colorblind or can’t see low contrast well. So you change it. You get some different colors introduced, you increase the contrast, you do all those things.
(01:17): For full-on blind users, you make sure that keyboard accessibility is really high, that you can do everything with a keyboard. When you tab from one element to the other, it goes in a flow and a rhythm that makes sense; it doesn’t just jump all over the place in a way that doesn’t make sense. So that’s actually how I think about the work we’ve been doing for agent accessibility. You have these AI agents that are incredibly smart at a ton of things, and they still kind of stumble when they have to use a website. They can just barely do it, and they’re really slow. I did a big test a while back, about a month ago, when we set off to do this work, where it’s like, I wonder if we even need to do anything. Can the agents we have right now, the models that are out there, the harnesses that people run, Claude Code and OpenCode and all the other ones, can they actually just do it?
(02:11): Do they even need any help? And I was shocked by how successful the agents were at using just a browser. I could tell one of these agents to sign up for Basecamp, to introduce itself in Campfire. The same thing with Fizzy, the same thing with HEY. But it was really slow. It was one of those things where I can see the future, but I can’t see when it’s going to arrive. This looks like something that might be at least a year out. It could be two, it could be five. Who knows? If we want to make Basecamp a great place to work with agents today, we’ve got to make it fast. No one’s going to have the patience to sit around and wait minutes and minutes for agents to do work when, in a chatbot interface, you ask them something and they’re just spitting out facts.
(03:00): That kind of pace needs to be captured when you’re working with them in Basecamp. So that’s what all this agent accessibility work is for. It’s sort of like little ramps, right? Like wheelchair accessibility. Maybe you could get up with your wheelchair some other way, but it’d be very cumbersome and maybe even a little dangerous. But if we just add a little ramp, you can roll right in. That’s what we’ve done for Basecamp here by creating a command line interface, a CLI tool that the agents can use, because what we found over just the last six to nine months is that AI becomes super powerful when you give these agents tools inside of a terminal. That’s what all this explosion of productivity has been about, especially the last three or four months of agent-accelerated development work: programmers and designers suddenly being able to do way more stuff because they’re not just asking an LLM that’s been trained on the whole internet and getting an answer back.
(03:58): No, they’re asking the agent to do stuff, and the agent will try to do something and, just like a human, will often realize it doesn’t know how to do it. It calls the tool wrong. It does all these things. But if it can stay in a feedback loop where it can try something, adjust its approach, try again, and then succeed, it can really quickly move on. But that loop requires that the tools are fast to use. And as impressed as I was that the agents could use a browser, and use a browser really well, you also realize, once you know how they do it, why it’s so slow. Right now, the agents are literally taking a screenshot of the screen and then running it through an image analyzer to break down what all the elements are, what all the fields are, and then trying to reason about it.
(04:49): Very impressive, as I said, but also very cumbersome and very slow. Versus if they do all this work through a CLI, a command line interface, it’s just text. That’s literally what these LLMs have been trained on: trillions of tokens of text, text, text. What’s the next command? What’s the next token? They’re so good and so fast at that. So if they can stay in that little loop, you can get the tokens-per-second rate quite close to when it’s just writing you a silly story. And the difference between those things is everything, right? Speed, as we often say, is a feature. And when it comes to agents being able to do things for you, and you having some patience to wait for it, speed is everything. It’s the difference between “I can’t be bothered” and “it’s easier to ask my agent to do it.”
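The try/adjust/retry loop described here can be sketched in a few lines of Python. Everything in this sketch is hypothetical for illustration: the command syntax and the `--project` flag are made up, not the real Basecamp CLI, and a real harness would have the model rewrite the command based on the error text rather than hard-coding the fix.

```python
# Minimal sketch of an agent tool loop: run a command, read the
# text output, adjust, retry. The speed of this loop is why a
# text-based CLI beats screenshot-driven browser automation.

def run_tool(command):
    """Simulated CLI tool (hypothetical syntax): fails until the
    agent supplies the required --project flag."""
    if "--project" not in command:
        return 1, "error: missing required --project flag"
    return 0, "ok: to-do created"

def agent_loop(initial_command, max_attempts=3):
    """Try a command; on failure, revise it from the error text and retry."""
    command = initial_command
    transcript = []  # (command, exit status, output) per attempt
    for _ in range(max_attempts):
        status, output = run_tool(command)
        transcript.append((command, status, output))
        if status == 0:
            return transcript  # success: exit the loop quickly
        # A real agent would ask the model to revise the command
        # from the error text; here the "fix" is hard-coded.
        if "--project" in output:
            command += " --project launch"
    return transcript

history = agent_loop("todo create 'Draft launch post'")
```

Because every step is plain text in and plain text out, the model stays in the token-prediction regime it is fastest at, which is the point being made above.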
(05:36): And I’ve been experimenting a bunch with our new agent accessibility on-ramps here, our CLI. And I’ve been mind-blown by how quick it is, especially with the fastest agents like Kimi K2.5, which I’ve been using recently. It’s one of these super-fast open-weight models that, if you just ask it to write a story or solve a problem logically, can do 200 tokens a second. That’s a hundred-plus words a second. Super fast, right? And then I asked it, just for the heck of it, to design me a Basecamp project that spells out the launch campaign for Basecamp 5. Of course I didn’t even give it all the context, so it wasn’t going to give me the launch campaign we’re going to ship with, but the pace by which it filled out the to-do list: it broke it all down, it added comments, it added messages, it added items on the schedule.
(06:28): Well, if you want to launch in four weeks, you got to start doing this the week before, you got to do this. I was just like, holy smoly, this is incredible. And it’s so funny because when you think about it, why does that feel special? If I had asked ChatGPT-3 18 months ago about just creating a text document with a launch plan for Basecamp, it would have done that. And I would probably also have been surprised like, that’s neat, but then what am I going to do with it? That’s the magic of this agent accessibility stuff. You take all the wisdom, the insight, the summarization, all the things we like about LLMs, and then you make it usable. It’s not just a block of text. No, it’s an entire project. And here’s some to-dos. Some of them are assigned to you, some of them are assigned to the agent, some of them are assigned to Jason or whoever is working on this stuff, and suddenly all that intelligence becomes actionable.
(07:26): I mean, it’s so funny. When you talk about these things, you’ve really got to be careful not to sound like an airport advertisement for Salesforce or something. Intelligence becomes actionable, boom. But some of these revelations you actually sit with, and you go, oh, wow. Yes. Much the same way as I’ve had it with programming with these agents. For a long time, I was very curious. I was very fascinated. I loved asking it stuff. And then I went off and I wrote my little code by myself, with my two little hands here. And then suddenly things flipped, because the agents could do stuff. They could use the terminal, they could run code, they could run tests, they could rewrite code that failed the tests, they could get into that whole loop. And suddenly I went from, uhhh, this is neat, but I’ll write it myself, to, this is freaking incredible.
(08:19): You go ahead. I’ll intervene if I need to. And if we can get to that for everything else, for setting up a project, for figuring out how to divide the work, for checking in on things and checking up on things, then we can have those kinds of superpowers at our fingertips, because it’s all in Basecamp, because it’s all shared in Basecamp. It’s not just that I have a personal little conversation here with an agent. No, the whole company, the whole project, everyone can suddenly collaborate around this stuff. And not necessarily even with one agent, but with multiple. We’ve started doing this extensively already at Basecamp, where we are driving some of these internal processes that we have with a bunch of different agents. Some people have a personal agent that they’re just using to act on their behalf. And then we have some automated agents that do all of this stuff too and chime in and follow up.
(09:09): And I know this kind of explosion is going on at a fair number of companies on the leading edge, but then there’s the rest of the world, the 98% who may have used ChatGPT but who haven’t done anything with agents, who are not running in the CLI. And if you’re very plugged into X and Twitter and wherever else people are discovering and sharing all this stuff at high velocity, you’d think that this is how the entire world is already working. No, they’re not. Absolutely not. And all this work that we’re doing right now, the agent-accessible version of Basecamp that uses the CLI and the skills and so on, even that is not going to reach the broad masses. We are missing a step yet where this becomes as easy as using ChatGPT, which hundreds of millions of people have used. That has actually become very mainstream very quickly, but we’ve got to skate to where the puck is going.
(10:03): And I think that’s what’s exciting about this work. Over the last almost two years, we’ve had a bunch of experiments internally with AI-infused features. These are the things where you try to bake the AI into the product itself: well, maybe it can summarize some things, or it can suggest some things, or it can help you in other ways. And the track record for most companies on that work is, let’s say, mixed. And if you’re less charitable, completely fucked, right? In fact, it has been so bad that an entire contingent of consumers have come to equate AI with product degradation, with bullshit. Microsoft literally last week had to come out with a mea culpa saying, “Sorry, we shoved AI crap into Paint, into Notepad, into all these crevices of Windows. We hear you, you don’t want that.” And of course they don’t, because it was tacked on and it wasn’t actually helpful enough.
(11:07): And you’ve got to contrast that with the consumer excitement for something that’s obviously good, like ChatGPT and the other chat-based interfaces. Hundreds of millions of people are using it, they’re considering it integral to their daily lives, they’re paying for it, all these things. So it’s not that people hate AI. What they hate is shitty AI globbed on because you want a sticker on your product, right? Now, there are ways to do this well. There are ways to put AI into a product and have it be something that feels fully native and integral to it, something people wouldn’t reject, but that’s quite hard. We’ve tried, as I said, for a couple of years to see if we could find a bunch of angles on this, and we tried many things and we shipped virtually none of it, because it didn’t pass muster, because we didn’t want the Notepad and Paint backlash of shipping something crappy.
(12:03): So we continue to work on that, and I’m sure we’re going to nail it. We’re going to find some ways where this really fits and it really makes a lot of sense. But until then, we can give everyone the agent-accessible version of Basecamp, the agent-friendly version of Basecamp, that they can use with these tools they already love. No one who’s using Claude Code or OpenCode on a daily basis is going to tell you, “Well, I don’t want that. I don’t want to be able to just command Basecamp with my fingertips and tie it together with GitHub or Sentry or anything else.” They’re going to say, “Yes, please,” because this is such an obvious win. So that’s also a bit of a recommendation I’d actually make for anyone else who’s trying to figure out how AI plays a role in their product. Do you know what?
(12:47): Until you figure that out, if you haven’t already, just make it accessible. Just make it easier for agents to use your product. This is the glue that unlocks all the promise that we told customers for literally 20 years was going to be there with APIs, and 99.9% of them never touched our APIs because it required a programmer. It required all sorts of stuff to do it. It was just too expensive. Agent accessibility, CLIs, skills, this whole bucket takes all of that promise and then delivers it, not quite on a silver platter (we’re not quite there yet), but in a slightly less cumbersome box. Quite a few people can open that, and then, by the end of the year would be my guess, this is actually going to go mainstream, because it’s going to be shipped in bulk through these interfaces that everyone uses.
Kimberly (13:36): Okay. So Basecamp is now accessible to agents. What are we thinking for our other products? HEY, Fizzy, are those on the horizon, or is it just a Basecamp solution right now?
David (13:48): Oh, it’s coming for everything. In fact, we already have some pretty good accessibility for Fizzy because our friend Rob put out an open source CLI for it. In fact, we ended up hiring Rob Zolkos to work with us on the Basecamp CLI because he did such a great job on the Fizzy CLI. We’re going to apply a bunch of the lessons that we’ve taken from really polishing the Basecamp CLI. This is one of those AI inception moments: we’re trying to make Basecamp more accessible to the agents, so we’re making the CLI, and the agent is actually making the bulk of the CLI. The vast majority of the code that goes into this CLI was written by an agent, and it got like 65% of it done in no time at all. And then we spent weeks and weeks and weeks and weeks getting it to, if not 100%, then 97%.
(14:36): Now we can take all of those lessons, apply them to the work that Rob already did on Fizzy, and put out an official, fully polished Fizzy CLI and skill. And then after that, 100%, I want this for HEY. I want this for my email. I want my email to be tied together with my Fizzy when I’m using that, or my Basecamp when I’m using that. Bringing all these things together and having this executive agent that I can just tell to do stuff. I mean, one of the things I use HEY for a lot, for example, is travel, and having an agent be able to just go in and look these things up when I just want a piece of information out of my email. I don’t want to trawl through it. I don’t want the email. I just want the fact that’s in the email.
(15:19): I’m really excited for having full accessibility for HEY, both on the email side and on the calendar side, and tying all these things together. So we’ll do it for all of it. And I think for anything going forward, this is going to be baked in. It’s going to be table stakes that your application is agent accessible, both for direct use with that application, but just as much because it plugs into this broader ecosystem. So many successful applications over the years, Slack is a good example, were successful because they ended up with this ecosystem that was kind of proprietary to Slack and the integrations with it. What agent accessibility is doing is basically bringing that to everyone. All those moats just come tumbling down, because there’s no specific advantage to having something inside of one application. In fact, the whole game here is that your agent can talk to anything you use, can access anything you use, wherever your data is, and can move it back here or everywhere and tie it all together. That’s super exciting.
Kimberly (16:19): Okay. Well, you can read more about what we’re doing at basecamp.com/agents. This has been a production of 37signals. You can find show notes and transcripts on our website at 37signals.com/podcast. Full video episodes on YouTube. And if you have a question for Jason or David about a better way to work and run your … what’s the line?
David (16:37): Run your agent.
Kimberly (16:38): I was literally about to say that. I was like, wait, I need an agent to say this for me. If you have a question for Jason or David about a better way to work and run your business, leave us a voicemail at 37signals.com/podcastquestion, or you can send us an email to rework@37signals.com. That is not… Is that right? You guys, that’s it.
David (17:00): Is the email address right? I think it is.
Kimberly (17:02): Oh my gosh. I don’t know why.
David (17:03): I’d have an agent check it in post.
Kimberly (17:05): Literally, I need an agent to do the exit.