As part of our ongoing Nuts & Bolts series I asked readers for questions about the kinds of things they’d like to see covered. One of the topics that came up several times was how we manage our database servers.

All of our applications, with the exception of Basecamp, follow a pretty similar model: We take a pair of Dell R710 servers, load them up with memory and disks, and set up a master/slave pair of MySQL servers. We use the excellent Percona Server for all of our MySQL instances and couldn’t be happier with it.
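For those curious what that looks like in practice, here’s a minimal sketch of the replication-related settings on such a pair. The server IDs and log names are illustrative assumptions, not our actual values:

# master (illustrative values)
[mysqld]
server-id = 1
log-bin   = mysql-bin

# slave (illustrative values)
[mysqld]
server-id = 2
relay-log = relay-bin
read_only = 1

A slave configured this way is typically pointed at the master with CHANGE MASTER TO and left read-only so that application traffic can’t accidentally write to it.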

Here’s an example of one of our MySQL servers. In this case, the Highrise master database server:

  • Dell R710
  • 2 x Intel Xeon E5530 Processors
  • 96GB RAM
  • 6 x 146GB 15,000 RPM SAS drives

For the disk configuration we take the first two drives and put them into a RAID1 volume that is shared between the root filesystem and MySQL binary logs. The remaining drives are placed into a RAID10 volume which is used for the InnoDB data files.
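To make the layout concrete, here’s a rough sketch of how the two volumes might map onto MySQL’s paths. The mount points and file names are assumptions for illustration rather than our actual ones:

[mysqld]
# binary logs live on the RAID1 volume, alongside the root filesystem
log-bin = /var/log/mysql/mysql-bin
# InnoDB data files live on the RAID10 volume, assumed to be mounted at /var/lib/mysql
datadir = /var/lib/mysql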

We only use RAID controllers that have a battery backup for the cache, and on those controllers we disable read-ahead caching and turn on write-back caching. With this setup we’re able to configure MySQL to immediately flush all writes to the disk rather than relying on the operating system to periodically write the data to the drives. In reality, the writes are staged to the controller’s cache, but the battery backup protects us from the data loss that an unexpected power outage could otherwise cause. In addition, since the controller is caching the writes in memory, it can optimize the order and number of writes it makes to the physical disks, which dramatically improves performance.

As far as MySQL itself is concerned, our configuration is pretty standard. The most important tips are to make the InnoDB buffer pool as large as you can and to make sure you have a BBU-enabled RAID card for writes. There are other important configuration options, but if you do those two things you’re probably 75% of the way to having a performant MySQL server.

Here are some of the most important configuration options in the Highrise MySQL config file:

# flush the binary log to disk after every commit
sync_binlog = 1
# give each InnoDB table its own tablespace file
innodb_file_per_table
# flush and sync the InnoDB log at every transaction commit
innodb_flush_log_at_trx_commit = 1
# bypass the OS page cache and write data files straight to the controller
innodb_flush_method = O_DIRECT
# the buffer pool gets the lion's share of the 96GB of RAM
innodb_buffer_pool_size = 80G

I’m not going to talk much about backups other than to say you should be using XtraBackup, also from our friends at Percona. It is far and away the best way to do backups of MySQL.

For Basecamp, we take a somewhat different path. We are on record about our feelings on sharding. We prefer to scale our databases with hardware for as long as we can, deferring the complexity involved in partitioning them for as long as possible, and with any luck, indefinitely.

With that in mind, we went looking for an option to host the Basecamp database, which is becoming a monster. As of this writing, the database is 325GB and handles several thousand queries per second at peak times. At Rackspace, we ran this database on a Dell R900 server with 128GB of RAM and fifteen 15,000 RPM SAS drives in a Dell MD3000 storage array.

We considered building a similar configuration in the new datacenter, but were concerned that we were hitting the limits of I/O performance with that kind of setup. We could have added more storage arrays or even considered SAN options, but SSD storage seemed like a much better long-term answer.

We explored a variety of options, from commodity SSD drives to PCI Express-based flash memory cards. In the end, we decided to purchase a pair of MySQL appliances produced by Schooner Information Technology. They produce a pretty awesome appliance that is packed with a pair of Intel Nehalem processors, 64GB of RAM, 4 x 300GB SAS drives, and 8 x Intel X25-E SSD drives. Beyond the hardware, Schooner has done considerable work optimizing the I/O path from InnoDB all the way down through the system device drivers. The appliances went into production a few weeks ago and the performance has been great.

I sat down with Jeremy Cole of Schooner a few weeks ago and recorded a couple of videos that go into considerably more detail about our evaluation process and some thoughts on MySQL scaling. You can check them out here and here.