Oct 1, 2011

Open Source Hardware

Back in 2010 I stopped buying test servers from Dell and began building them from components using Intel i7 processors, X58-based motherboards, and modular power supplies from Ultra.  It was a good way to learn about hardware.  Besides, it was getting old to pay for Dell desktop systems with Windows, which I would then wipe to install Linux.  Between the educational value of understanding the systems better, selecting the exact components I wanted, and being able to fix problems quickly, it has been one of the best investments I have ever made.  And it didn't cost any more than equivalent Dell servers.

For this reason, a couple of recent articles about computer hardware caught my attention.  First, Dell is losing business as companies like Facebook build their own customized servers.  Open source database performance experts like Peter Zaitsev have been talking about full-stack optimization, including hardware, for years.  Google built its original servers from off-the-shelf parts.  Vertical integration of applications and hardware has since gone mainstream.  If you deploy the same application(s) on many machines, balancing characteristics like cost, performance, and power utilization is no longer a specialist activity but a business necessity.  It's not just about cutting out the Microsoft tax; there are many other optimizations as well.

Second, developments in hardware itself are making custom systems more attractive to a wide range of users.  A recent blog post by Bunnie Huang describes how the flattening curve of CPU clock speed improvements means you can now get better cost/performance by building optimized, application-specific systems than by waiting for across-the-board gains.  Stable standards also drive down the difficulty of rolling your own.  Components on mid-range servers are sufficiently standardized that it is easier to build a basic system from parts than to put together a bicycle from scratch.  Try building your own wheels sometime if you don't believe this.
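
To make the trade-off concrete, here is a back-of-the-envelope sketch in Python.  The 1.5x speedup and the improvement rates are hypothetical placeholders, not numbers from Bunnie's post; the point is simply that as across-the-board improvement slows, waiting takes longer and longer to match even a modest one-time gain from customization.

    import math

    def years_to_match(custom_speedup, annual_improvement):
        """Years of across-the-board improvement needed to equal a one-time
        speedup from a customized, application-specific system."""
        return math.log(custom_speedup) / math.log(1.0 + annual_improvement)

    # Hypothetical 1.5x gain from tailoring hardware to the workload:
    print(years_to_match(1.5, 0.50))  # about 1 year when CPUs improved ~50%/year
    print(years_to_match(1.5, 0.10))  # about 4.3 years at ~10%/year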

Easily customizable hardware has important consequences.  At a business level, Dell and other mainline hardware vendors will adapt to lower margins, but the market for generic, mid-range appliances has evaporated.  Starting around 2005 there was a wave of companies trying to sell open source databases, memcached, and data marts on custom hardware.  Most have since moved away from hardware (like Schooner) or folded entirely (like Gear6 and Kickfire).  The long-term market for such appliances, to the extent it exists, is in the cloud.

The other consequence is potentially far more significant.  The traditional walls that encapsulated hardware and software design are breaking down.  Big web properties and large ISPs like Rackspace run lean design teams that integrate hardware with open source software deployment.  This is not just a matter of software engineers learning about hardware or vice-versa.  It is the tip of a much bigger iceberg.  Facebook recently started the Open Compute Project, a community-based effort to design server infrastructure.  In their own words:
By releasing Open Compute Project technologies as open hardware, our goal is to develop servers and data centers following the model traditionally associated with open source software projects. That’s where you come in.
Facebook and others are opening up data center design.  Gamers have been building their own systems for years.  Assuming Bunnie's logic is correct, open hardware will apply to a wide range of devices, from phones up to massive clusters.  Community-based, customized system designs are no longer an oddity but part of a broad movement that will change the way all of us think about building and deploying applications on any kind of physical hardware.  It will upset current companies but also create opportunities for new kinds of businesses.  The "cloud" is not the only revolution in computing.  Open source hardware has arrived.

6 comments:

Greg Smith said...

Part of what's enabling this shift is companies architecting for "lots of small servers" instead of the more traditional "one giant redundant server". Once you've made that sort of shift, it becomes a lot easier to fall into a fully build-your-own approach.

I've been building PCs from parts for 15 years, and I make a lot of specific hardware recommendations for database servers. The main thing that determines whether that's appropriate is how many such systems you're willing to build. If you have a single server in a data center, troubleshooting problems when they appear is impossible. Companies in that position find a Dell service contract money well spent the first time they have a problem and someone with a pile of spare parts arrives to sort out which one broke.

If you're Facebook and deploying a giant number of identical servers, you don't have this problem. You have the staff, expertise, and similar systems to do your own troubleshooting. All of the build-my-own systems I put together come in very similar pairs for just this reason. I recovered from the last failure I saw in a few minutes; I just swapped components with the working box until I confirmed the power supply was the bad part. The savings per unit also help pay for a good spare parts bin.

Robert Hodges said...

100% agree. Oracle Exadata and IBM mainframes are pretty safe from hardware hackers because they deploy in very small numbers. The recent appliance companies tried to scale the "lots of small servers" model at the same time that the companies who should have been their market were doing the same thing for themselves.

To make things work locally, it helps enormously to build similar systems, exactly as you say. I buy motherboards, video cards, and the like in pairs at this point for that reason. It's analogous to parity bits, just expressed in a different way.

Bruce Momjian said...

Wow, that is very interesting. I had not connected the move from monolithic servers to several smaller servers with the ability to let a single server go down and use parts from the other servers to find the faulty component. It certainly makes support easier; for example, we never had a spare VAX sitting around for parts because it was much too expensive.

Robert Hodges said...

@Bruce, part of the secret sauce is keeping not just the hardware but the apps simpler as you spread them across more hosts. Problems with hardware often present as app failures and vice-versa. (Example: occasional flipped bits coming back from a flaky disk.) Simple apps also run on simpler hardware that is easier to optimize.

Sticking with the same versions of everything from base hardware/firmware up to the apps is also a good idea. Rob Wultsch made the same point in a talk he gave yesterday at Oracle OpenWorld. It's definitely not the way we used to run enterprise apps a few years ago.

Khan said...

Robert,

Can you talk a bit about your experiences with building your own hardware?

I'm a strictly user-level customer for PostgreSQL; I'm using it for location-based services built on polygons, serving mobile apps, and am having a really difficult time trying to estimate the hardware needs I'll have to take into account before going to market.

My present setup is a 2.13 GHz Core 2 Duo server with 2 GB RAM, which suffices for about 30 queries/second, but I fear/hope that I'll have 10,000 customers, of which some substantial fraction could theoretically be connected simultaneously.

Having some guidelines for how CPU/RAM scale with PostgreSQL benchmark throughput would be really helpful.

Robert Hodges said...

@Khan, you are exactly right that you need benchmarks first and foremost. You can learn a lot about system performance using tools like pgbench as well as sysbench. Sysbench is aimed at MySQL, but it has nice tests for I/O and CPU performance that work on any host.
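
To give a flavor of what I mean, here is a rough Python sketch that sweeps pgbench client counts and records throughput. It assumes pgbench is on the PATH and that a test database named benchdb has already been initialized (for example with "pgbench -i -s 100 benchdb"); the database name, scale, and client counts are placeholders you would adjust for your own data.

    import re
    import subprocess

    def run_pgbench(clients, seconds=60, dbname="benchdb"):
        """Run a read-only pgbench test and return transactions per second."""
        out = subprocess.check_output(
            ["pgbench", "-S",        # select-only workload
             "-c", str(clients),     # concurrent client connections
             "-j", "2",              # pgbench worker threads
             "-T", str(seconds),     # run for a fixed duration in seconds
             dbname])
        match = re.search(r"tps = ([\d.]+)", out.decode())
        return float(match.group(1)) if match else None

    for clients in (1, 4, 8, 16, 32):
        print(clients, run_pgbench(clients))

You can wrap sysbench's CPU and file I/O tests the same way to compare raw hosts before PostgreSQL even enters the picture.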

Your application is going to need much beefier hardware than what you described. I have been very pleased with Intel i7-920 CPUs and X58 motherboards from Intel and ASUS. These give you 4 cores with hyperthreading (= 8 usable cores for Linux). However, this is really a gaming platform, which focuses on CPU speed; most gamers also install very capable video cards that soak up most of the power used by the whole host.

For databases, the really big issues are I/O and memory. For active servers you want as much data as possible in RAM. Also, when you do write to storage you want it to be really fast. If you really get as busy as you describe, there's no way you can work off 7200 RPM SATA drives. You'll want a RAID array with SAS drives or SSDs.

So... Figure out what your dataset size is going to be and how many users will be active at once. You can estimate things roughly with pgbench to see whether you can get by with this type of architecture or whether you need to go to a dual-socket board that gives you more cores and more memory. It will also let you vet the I/O subsystem. You'll need to build at least a couple of hosts linked with replication to make your site HA, so factor that in as well.
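
Here is the kind of back-of-the-envelope arithmetic I have in mind, written as a small Python sketch. Every input is an assumption (the 5% concurrency, query rate, row size, and row count are guesses for illustration); replace them with your own measurements before drawing conclusions.

    # Back-of-the-envelope capacity estimate; all inputs are assumptions.
    customers          = 10000    # total user base (Khan's estimate)
    concurrent_pct     = 0.05     # guess: 5% of customers active at peak
    queries_per_user_s = 0.5      # guess: one query every 2 seconds per active user
    row_bytes          = 2000     # guess: average polygon row size including indexes
    rows               = 5000000  # guess: total rows in the geo tables

    peak_qps   = customers * concurrent_pct * queries_per_user_s
    dataset_gb = rows * row_bytes / 1e9
    print("Peak load: %.0f queries/sec" % peak_qps)                      # ~250
    print("Dataset:   %.1f GB (ideally this fits in RAM)" % dataset_gb)  # ~10

Compare the first number against what pgbench reports on candidate hardware and the second against the RAM you can afford; if the working set won't fit in memory, budget for the fast storage I mentioned above.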