The Many Places You Can Go with SSD Caching Software

SSDs are a disruptive technology in that they are clearly changing the enterprise storage market

Disruptive technology is a term used to describe an idea or invention that disrupts an existing market, often completely displacing an earlier technology.  Sometimes disruption is great (e.g., digital cameras and cell phones), and sometimes it is not so great (e.g., laser video discs).  SSDs are a disruptive technology in that they are clearly changing the enterprise storage market.  The jury is still out on just how disruptive SSDs will be; as with many disruptive technologies, it is going to take some time to figure out exactly what to do with them.  But a clear front-runner among SSD deployments is to use them as cache, which takes advantage of their incredibly high speeds while minimizing the cost and hassle of implementation.

One of the key design decisions during the early development of VeloBit HyperCache was the optimal location of an SSD-based cache.  By “location,” I am referring to the placement of the SSD within a computer’s storage architecture.  To understand VeloBit’s place in the storage stack, it is first necessary to understand the basic layout of a typical computer storage system.

Figure 1 shows a simplified IT system architecture model in which the application software sits atop the operating system (OS) and its file system.  The file system interfaces to HDDs and SSDs through a block device driver, which performs the actual IO with the HDDs and SSDs.  Hard disks in enterprise environments typically sit behind storage controllers, which contain their own (typically volatile) cache along with an array of disks.  SSDs can also sit behind these storage controllers, although the VeloBit model allows you to attach an SSD directly to your server, without the need to purchase expensive specialty SSD arrays.

Additionally, in typical deployments, file systems sit atop a volume virtualization layer. In Linux, this is typically LVM, the Logical Volume Manager.  This technology gives system administrators the flexibility of storage virtualization (such as RAID, snapshots, thin provisioning, and disk expansion) without the added cost of SAN-side virtualization platforms.  Finally, it’s possible to deploy SSD caching as part of this virtualization layer, as discussed later in this article.


Figure 1: Simplified IT System Architecture

Back to deploying SSDs as cache: there are two components to any SSD caching system: (1) the SSD itself, and (2) the SSD caching software.  The SSD can be installed either directly in the server or inside the storage array.  The SSD caching software is more flexible and can be installed in several different locations:

  1. In the storage controller
  2. Between the volume virtualization layer and the block device layer
  3. As a part of the volume virtualization layer itself
  4. On top of the file system

These locations are marked on the diagram in Figure 1 with the corresponding circled numbers.
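Regardless of which of the four locations the software sits in, its core job is the same: serve hot blocks from the SSD and fall back to the HDD on a miss. A minimal read-through sketch in Python (hypothetical names; a dict stands in for each storage tier, and LRU eviction is used for simplicity):

```python
from collections import OrderedDict

class ReadThroughCache:
    """Toy read-through cache: the SSD is modeled as an LRU dict, the HDD as a backing dict."""
    def __init__(self, backing_store, capacity):
        self.backing = backing_store          # slow tier (HDD)
        self.capacity = capacity              # number of blocks the "SSD" holds
        self.ssd = OrderedDict()              # fast tier, kept in LRU order
        self.hits = self.misses = 0

    def read(self, lba):
        if lba in self.ssd:                   # cache hit: serve from the SSD
            self.ssd.move_to_end(lba)         # refresh its LRU position
            self.hits += 1
            return self.ssd[lba]
        self.misses += 1                      # miss: fetch from the HDD, populate the SSD
        data = self.backing[lba]
        self.ssd[lba] = data
        if len(self.ssd) > self.capacity:     # evict the coldest block
            self.ssd.popitem(last=False)
        return data

hdd = {lba: f"block-{lba}" for lba in range(10)}
cache = ReadThroughCache(hdd, capacity=4)
for lba in [0, 1, 2, 0, 1, 3, 4, 5, 0]:
    cache.read(lba)
print(cache.hits, cache.misses)               # → 2 7
```

The same hit/miss/evict loop applies whether the code runs in a storage controller’s firmware or in a host-side driver; what differs between the four locations is which layer intercepts the IO.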

Let’s talk about the pros and cons of each location.

SSD Caching Software in the Storage Controller
Location 1 in Figure 1 shows the SSD caching software residing in the storage controller.  These controllers contain dedicated processors that manage all IO operations to the storage array, and their algorithms determine which data should reside on the slower, higher-capacity disks and which data should be sent to SSD for high-speed access. This solution:
•    Is easy to install, assuming you’re already purchasing a storage array
•    Provides very high performance
•    Can be incredibly expensive, since it comes along with an enterprise storage system
•    Is usually hardware dependent and typically vendor specific (resulting in vendor lock-in)

Examples of this solution would be SSD cards from High Point and LSI.
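The promotion logic such a controller runs can be sketched as a simple access-frequency policy (hypothetical names and threshold; real controllers use far more sophisticated heat maps):

```python
from collections import Counter

class TieringController:
    """Toy tiering policy: promote a block to the SSD tier after N accesses."""
    def __init__(self, promote_after=3):
        self.promote_after = promote_after
        self.access_counts = Counter()        # per-block access frequency
        self.on_ssd = set()                   # blocks currently on the fast tier

    def record_access(self, lba):
        self.access_counts[lba] += 1
        if self.access_counts[lba] >= self.promote_after:
            self.on_ssd.add(lba)              # hot block: move it to SSD

    def tier_of(self, lba):
        return "SSD" if lba in self.on_ssd else "HDD"

ctl = TieringController(promote_after=3)
for lba in [7, 7, 9, 7, 9]:                   # block 7 is accessed 3 times, block 9 twice
    ctl.record_access(lba)
print(ctl.tier_of(7), ctl.tier_of(9))         # → SSD HDD
```

Because this logic lives inside the array, it sees every IO, which is what gives location 1 its performance; it is also why the policy is tied to one vendor’s hardware.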

SSD Caching Software Above the Block Layer
A device driver is typically software developed to control specific hardware at a very low level.  In this case, I am grouping the SSD together with the SSD caching software and calling the combination a device driver, because together they act as a transparent device driver for an application accessing primary storage.  This is shown as the dashed line at location 2 in Figure 1. The SSD/caching software combination:
•    Is easy to install – this implementation is completely transparent to the lower-level storage as well as to all file systems and applications
•    Provides high performance
•    Is very hard to develop – such a low-level driver requires intimate knowledge of the operating system and block device drivers
•    Is hardware independent – since there’s no direct interaction with storage (storage access is abstracted by the block device driver), this solution works with any primary storage and SSD
•    Is application independent – by inserting itself just above the block device driver, this type of SSD cache requires no file system or application changes

VeloBit HyperCache SSD caching software is an example of this solution.
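The “transparent device driver” idea can be illustrated as a thin wrapper that exposes the same read/write interface as the block device beneath it, so file systems above need no changes (a sketch with hypothetical names, write-through for simplicity):

```python
class BlockDevice:
    """Stand-in for the real block device driver (the HDD path)."""
    def __init__(self):
        self.blocks = {}
    def read(self, lba):
        return self.blocks.get(lba)
    def write(self, lba, data):
        self.blocks[lba] = data

class CachingBlockDevice:
    """Drop-in replacement: same interface, but reads are served from an 'SSD' cache."""
    def __init__(self, lower):
        self.lower = lower                    # the unmodified block device below
        self.ssd = {}                         # toy SSD cache (unbounded here)
    def read(self, lba):
        if lba not in self.ssd:
            self.ssd[lba] = self.lower.read(lba)   # miss: go down to the HDD
        return self.ssd[lba]
    def write(self, lba, data):
        self.ssd[lba] = data                  # write-through: update cache and HDD
        self.lower.write(lba, data)

disk = BlockDevice()
dev = CachingBlockDevice(disk)                # the file system would now talk to `dev`
dev.write(42, b"hello")
print(dev.read(42), disk.read(42))            # both tiers stay consistent
```

Because `CachingBlockDevice` presents the identical `read`/`write` surface, nothing above it needs to know the cache exists; that interface fidelity is exactly what makes this location hard to get right in a real kernel.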

SSD Cache in the Volume Virtualization Layer
The use of SSD caching software at location 3 in Figure 1 means the caching software works inside the volume virtualization layer, which requires changes to the existing hard drive and SSD volume configuration.  SSD caching software at this location:
•    Provides moderate performance – performance can be limited by the virtualization layer itself
•    Is easy to develop – this caching method reuses tools that already exist inside the virtualization layer
•    Is very difficult to install – since the cache is built as a new virtual volume, your entire volume management setup must change to use the cache
•    Is hardware independent

FlashCache from Facebook is an example of this solution.

SSD Caching Software on Top of the File System
The use of SSD caching software at location 4 in Figure 1 means the caching software works at a higher level – the file system level. By sitting above, or inside, the file system, a user can specify precisely which files (and, by association, which applications) to cache.  SSD caching software at this location:
•    Provides low performance
•    Is very easy to develop
•    Is very easy to install
•    Is hardware independent
•    Is environment specific – many databases bypass the file system to achieve maximum performance, so file system-based caching won’t work in those environments.  Additionally, while most Windows installations use NTFS, many file systems exist for Linux, and it isn’t practical to support them all

CacheWorks from Nevex is an example of this solution.
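Because this layer sees file paths rather than anonymous blocks, the cache policy can be expressed per file. A sketch (hypothetical names) of caching only files under configured path prefixes:

```python
class FileLevelCache:
    """Toy file-level cache: only files under configured path prefixes are cached."""
    def __init__(self, cached_prefixes):
        self.prefixes = tuple(cached_prefixes)
        self.ssd = {}                          # path -> cached file contents

    def should_cache(self, path):
        return path.startswith(self.prefixes)  # str.startswith accepts a tuple

    def read(self, path, filesystem):
        if not self.should_cache(path):        # e.g. paths we were told to skip
            return filesystem[path]
        if path not in self.ssd:
            self.ssd[path] = filesystem[path]  # first read populates the SSD
        return self.ssd[path]

fs = {"/var/www/index.html": "<html>", "/db/raw": "rows"}
cache = FileLevelCache(cached_prefixes=["/var/www"])
cache.read("/var/www/index.html", fs)
cache.read("/db/raw", fs)
print(sorted(cache.ssd))                       # → ['/var/www/index.html']
```

This per-path selectivity is the main attraction of location 4 – and, as noted above, it is useless for workloads that never go through the file system at all.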

Conclusion
The table below summarizes the SSD caching software locations discussed above.  If vendor lock-in is not a concern, running SSD caching software in the storage controller offers the best combination of features and performance.  However, vendor lock-in is expensive and limits your product choices.  Using SSD caching software in conjunction with the SSD as a device driver for the HDD (location 2) offers all the benefits of installing in the storage controller without the problem of vendor lock-in.

Table: SSD Caching Software Locations

Location                           Performance   Installation     Development      Hardware/vendor dependence
1. Storage controller              Very high     Easy             Vendor supplied  Vendor specific (lock-in)
2. Above the block layer           High          Easy             Very hard        Independent
3. Volume virtualization layer     Moderate      Very difficult   Easy             Independent
4. On top of the file system       Low           Very easy        Very easy        Independent (environment specific)


More Stories By Peter Velikin

Peter Velikin has 12 years of experience creating new markets and commercializing products in multiple high tech industries. Prior to VeloBit, he was VP Marketing at Zmags, a SaaS-based digital content platform for e-commerce and mobile devices, where he managed all aspects of marketing, product management, and business development. Prior to that, Peter was Director of Product and Market Strategy at PTC, responsible for PTC’s publishing, content management, and services solutions. Prior to PTC, Peter was at EMC Corporation, where he held roles in product management, business development, and engineering program management.

Peter has an MS in Electrical Engineering from Boston University and an MBA from Harvard Business School.