SSD and servers

SSDs, or Solid State Drives, are one of the next hot tech fads… alongside multicore CPUs/GPUs, cloud computing, virtualization, Britney Spears's comeback, Mel Gibson's hot new young girlfriend…

Heh, sorry, I let my hands type faster than my brain.

My brothers and I were discussing the merits of adding SSDs to servers.  One of the things discussed was whether a new interface (connector, bus, whatnot) is needed to make the most efficient use of SSDs in servers.  People were looking at adding a dedicated bus, slot, or whatever to the motherboard so you can get the highest possible throughput from the SSD.

My argument is that this is not really needed, except for jobs that demand the highest possible speed.  And even then, it should really be decided on a sliding scale: how much is each additional percent of speed actually worth to you?
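To make that sliding scale concrete, here is a rough back-of-the-envelope sketch.  The prices and throughput figures are made-up placeholders, not real benchmarks or quotes; plug in numbers for the drives you are actually considering.

```python
# Rough sketch of the "sliding scale" argument.
# All prices and throughput numbers are hypothetical placeholders.

options = {
    # name: (price_usd, sequential_throughput_mb_per_s)
    "SATA SSD": (400, 250),
    "PCIe SSD card": (1200, 700),
}

for name, (price, throughput) in options.items():
    print(f"{name}: ${price / throughput:.2f} per MB/s of throughput")

# Marginal cost of the extra speed the add-on card buys you:
sata_price, sata_tp = options["SATA SSD"]
pcie_price, pcie_tp = options["PCIe SSD card"]
extra_tp = pcie_tp - sata_tp
extra_cost = pcie_price - sata_price
print(f"Extra {extra_tp} MB/s costs ${extra_cost} "
      f"(${extra_cost / extra_tp:.2f} per additional MB/s)")
```

If that marginal dollar-per-MB/s figure looks steep for your workload, the cheaper interface wins.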

I think that for most people, the existing SATA/SAS (I/II/III) interfaces are “good enough”.  Yes, you do not get the best possible speed, but the benefits still make SSDs worth using, and the low cost will induce people to adopt them.
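For a sense of how much the interface itself caps you, here is a tiny sketch comparing the nominal SATA generation ceilings (roughly 150/300/600 MB/s for SATA I/II/III) against a hypothetical SSD's sequential throughput.  The drive figure is made up purely for illustration.

```python
# Nominal SATA ceilings (after encoding overhead) vs. a hypothetical SSD.
# The 250 MB/s drive number is an illustrative assumption, not a benchmark.

sata_limits_mb_s = {"SATA I": 150, "SATA II": 300, "SATA III": 600}
ssd_sequential_mb_s = 250  # hypothetical drive

for gen, limit in sata_limits_mb_s.items():
    delivered = min(ssd_sequential_mb_s, limit)
    pct = 100 * delivered / ssd_sequential_mb_s
    print(f"{gen} ({limit} MB/s max): drive delivers ~{delivered} MB/s, "
          f"{pct:.0f}% of its potential")
```

Unless the drive can actually outrun the port, the “slower” interface loses you little or nothing.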

SATA interfaces come for free on all modern motherboards, and most servers support hot-swap SATA drives; far fewer support hot-swap PCI/PCI-X/PCI-e slots.  With hot-swap SATA/SAS drives, you can swap a drive without opening up the case or powering down the system.  If you have more than a handful of servers to upgrade, you will appreciate this.

The next step up from SATA is SAS, which is mainly used in I/O-intensive applications (such as DB servers, LDAP servers, etc.).

Don’t forget, too, that if you go with add-on cards (PCI, PCI-X, PCI-e), you will most likely also have to deal with drivers.  Your OS is not going to automatically make use of these add-on SSD cards without them.