Performance: The Key to Data Efficiency
By Jered Floyd, Permabit

Data efficiency encompasses a variety of different technologies that enable the most effective use of space on a storage device

Data efficiency - the combination of technologies including data deduplication, compression, zero elimination and thin provisioning - transformed the backup storage appliance market in well under a decade. Why has it taken so long for the same changes to occur in the primary storage appliance market? The answer can be found by looking back at the early evolution of the backup appliance market, and understanding why EMC's Data Domain continues to hold a commanding lead in that market today.

Data Efficiency Technologies
The term "data efficiency" encompasses a variety of different technologies that enable the most effective use of space on a storage device by both reducing wasted space and eliminating redundant information. These technologies include thin provisioning, which is now commonplace in primary storage, as well as less extensively deployed features such as compression and deduplication.

Compression is the use of an algorithm to identify data redundancies within a small distance, for example, finding repeated words within a 64 KB window. Compression algorithms often take other steps to increase the entropy (or information density) of a set of data, such as more compactly representing parts of bytes that change rarely, like the high bits of a piece of ASCII text. These sorts of algorithms always operate "locally", within a data object like a file, or more frequently on only a small portion of that data object at a time. As such, compression is well suited to provide savings on textual content, databases (particularly NoSQL databases), and mail or other content servers. Compression algorithms typically achieve a savings of 2x to 4x on such data types.
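
To make the "local window" idea concrete, here is a minimal sketch in Python using the standard zlib library. The 64 KB window, the helper name, and the sample data are purely illustrative assumptions, not any particular product's implementation.

```python
import zlib

WINDOW = 64 * 1024  # 64 KB compression window, as in the example above

def compress_in_windows(data: bytes) -> float:
    """Compress each 64 KB window independently; return the overall savings ratio."""
    compressed = 0
    for offset in range(0, len(data), WINDOW):
        compressed += len(zlib.compress(data[offset:offset + WINDOW]))
    return len(data) / compressed if compressed else 1.0

# This artificial sample is extremely repetitive, so it compresses far better
# than the 2x to 4x typical of real text or database content.
sample = b"INSERT INTO users VALUES ('alice', 'engineering');\n" * 2000
print(f"compression ratio: {compress_in_windows(sample):.1f}x")
```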

Deduplication, on the other hand, identifies redundancies across a much larger set of data, for example, finding repeated 4 KB blocks across an entire storage system. This requires both more memory and much more sophisticated data structures and algorithms, so deduplication is a relative newcomer to the efficiency game compared to compression. Because deduplication has a much greater scope, it has the opportunity to deliver much greater savings - as much as 25x on some data types. Deduplication is particularly effective on virtual machine images as used for server virtualization and VDI, as well as development file shares. It also shows very high space savings in database environments as used for DevOps, where multiple similar copies may exist for development, test and deployment purposes.
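
As a rough illustration of the difference in scope, the sketch below (again Python, with SHA-256 as an assumed block fingerprint) deduplicates fixed 4 KB blocks across many copies of the same data. The block size, the in-memory dictionary, and the cloned "image" are simplifications for illustration, not a description of any shipping product.

```python
import hashlib
import os

BLOCK = 4 * 1024  # 4 KB deduplication block, as in the example above

def dedupe(data: bytes):
    """Count how many 4 KB blocks arrive versus how many unique blocks are stored."""
    index = {}  # fingerprint -> stored block; a real index must scale far beyond DRAM
    for offset in range(0, len(data), BLOCK):
        block = data[offset:offset + BLOCK]
        index.setdefault(hashlib.sha256(block).digest(), block)
    total = (len(data) + BLOCK - 1) // BLOCK
    return total, len(index)

# Ten identical "clones" of a small pseudo-image: the blocks repeat across
# copies, so only one instance of each block needs to be stored.
image = os.urandom(16 * 1024)          # 16 KB image = 4 distinct blocks
total, unique = dedupe(image * 10)
print(f"{total} blocks written, {unique} stored ({total // unique}x savings)")
```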

The Evolution of Data Efficiency
In less than ten years, data deduplication and compression shifted billions of dollars of customer investment from tape-based backup solutions to purpose-built disk-based backup appliances. The simple but incomplete reason for this is that these technologies made disk cheaper to use for backup. While this particular aspect enabled the switch to disk, it wasn't the driver for the change.

The reason customers switched from tape to disk was that backup to disk, and especially restore from disk, is much, much faster. Enterprise environments were facing increasing challenges in meeting their backup windows, recovery point objectives, and (especially) recovery time objectives with tape-based backup systems. Customers were already using disk-based backup in critical environments, and they were slowly expanding the use of disk as the gradual price decline of disk allowed.

Deduplication enabled a media transition for backup by dramatically changing the price structure of disk-based versus tape-based backup. Disk-based backup is still more expensive, but deduplication has made the faster, better medium affordable.

It's also worth noting that Data Domain, the market leader early on, still commands a majority share of the market. This can be partially explained by history, reputation and the EMC sales machine, but other early market entrants including Quantum, Sepaton and IBM have struggled to gain share, so this doesn't fully explain Data Domain's prolonged dominance.

The rest of the explanation is that deduplication technology is extremely difficult to build well, and Data Domain's product is a solid solution for disk-based backup. In particular, it is extremely fast for sequential write workloads like backup, and thus doesn't compromise performance of streaming to disk. Remember, customers aren't buying these systems for "cheap disk-based backup;" they're buying them for "affordable, fast backup and restore." Performance is the most important feature. Many of the competitors are still delivering the former - cost savings - without delivering the golden egg, which is actually performance.

Lessons for Primary Data Efficiency
What does the history of deduplication in the backup storage market teach us about the future of data efficiency in the primary storage market? First, we should note that data efficiency is catalyzing the same media transition in primary storage as it did in backup, on the same timeframe - this time from disk to flash, instead of tape to disk.

As was the case in backup, cheaper products aren't the major driver for customers in primary storage. Primary storage solutions still need to perform as well as (or better than) systems without data efficiency, under the same workloads. Storage consumers want more performance, not less, and technologies like deduplication enable them to get that performance from flash at a price they can afford. A flash-based system with deduplication doesn't have to be cheaper than the disk-based system it replaces, but it does have to be better overall!

This also explains the slow adoption of efficiency technologies by primary storage vendors. Building compression and deduplication for fully random access storage is an extremely difficult and complex thing to do right. Doing this while maintaining performance - a strict requirement, as we learn from the history of backup - requires years of engineering effort. Most of the solutions currently shipping with data efficiency are relatively disappointing and many other vendors have simply failed at their efforts, leaving only a handful of successful products on the market today.

It's not that vendors don't want to deliver data efficiency on their primary storage; it's that they have underestimated the difficulty of the task and simply haven't been able to build it yet.

Hits and Misses (and Mostly Misses)
If we take a look at primary storage systems shipping with some form of data efficiency today, we see that the offerings are largely lackluster. The reason that offerings with efficiency features haven't taken the market by storm is that they deliver the same thing as the less successful disk backup products - cheaper storage, not better storage. Almost universally, they deliver space savings at a steep cost in performance, a tradeoff no customer wants to make. If customers simply wanted to spend less, they would buy bulk SATA disk rather than fast SAS spindles or flash.

Take NetApp, for example. One of the very first to market with deduplication, NetApp proved that customers wanted efficiency - but those customers were also quickly turned off by the limitations of the ONTAP implementation. Take a look at NetApp's Deduplication Deployment and Implementation Guide (TR-3505). Some choice quotes include, "if 1TB of new data has been added [...], this deduplication operation takes about 10 to 12 hours to complete," and "With eight deduplication processes running, there may be as much as a 15% to 50% performance penalty on other applications running on the system." Their "50% Virtualization Guarantee* Program" has 15 pages of terms and exceptions behind that little asterisk. It's no surprise that most NetApp users choose not to turn on deduplication.

VNX is another case in point. The "EMC VNX Deduplication and Compression" white paper is similarly frightening. Compression is offered, but it's available only as a capacity tier: "compression is not suggested to be used on active datasets." Deduplication is available as a post-process operation, but "for applications requiring consistent and predictable performance [...] Block Deduplication should not be used."

Finally, I'd like to address Pure Storage, which has set the standard for offering "cheap flash" without delivering the full performance of the medium. They represent the most successful of the all-flash array offerings on the market today and have deeply integrated data efficiency features, but they struggle to meet a sustained 150,000 IOPS. Their arrays deliver a solid win on price over all of the flash arrays without optimization, but that level of performance is not going to tip the balance for primary storage in the same way Data Domain did for backup.

To be fair to the products above, there are plenty of other vendors that must have tried to build their own deduplication and simply failed to deliver something that meets their exacting business standards. IBM, EMC VMAX, Violin Memory and others have surely attempted their own efficiency features, and have even announced delivery promises over the years, but none have shipped to date.

Finally, there are some leaders in the primary efficiency game so far! Hitachi is delivering "Deduplication without Compromise" on their HNAS and HUS platforms, providing deduplication (based on Permabit's Albireo™ technology) that doesn't impact the fantastic performance of the platform. This solution delivers savings and performance for file storage, although the block side of HUS still lacks efficiency features.

EMC XtremIO is another winner in the all-flash array sector of the primary storage market. XtremIO has been able to deliver outstanding performance with fully inline data deduplication capabilities. The platform isn't yet scalable or dense in capacity, but it does deliver the required savings and performance necessary to make a change in the market.

Requirements for Change
The history of the backup appliance market makes the requirement for change in the primary storage market clear. Data efficiency simply cannot compromise performance, which is the reason why a customer is buying a particular storage platform in the first place. We're seeing the seeds of this change in products like HUS and XtremIO, but it's not yet clear who will be the Data Domain of the primary array storage deduplication market. The game is still young.

The good news is that data efficiency can do more than just reduce cost; it can also increase performance - making a better product overall, as we saw in the backup market. Inline deduplication can eliminate writes before they ever reach disk or flash, and deduplication can inherently sequentialize writes in a way that vastly improves random write performance in critical environments like OLTP databases. These are some of the requirements for a tipping point in the primary storage market.
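
A toy sketch of that write path (my own illustration in Python, not any vendor's design) shows both effects: a duplicate block is resolved to a reference before it ever touches the media, and the blocks that are genuinely new are appended sequentially even when the incoming logical addresses are random.

```python
import hashlib

class InlineDedupeLog:
    """Toy inline-deduplicating write path: duplicates never reach the media,
    and unique blocks are appended sequentially to a log."""

    def __init__(self):
        self.log = bytearray()   # stands in for the sequentially written media
        self.index = {}          # block fingerprint -> offset of the stored copy
        self.block_map = {}      # logical block address -> offset in the log

    def write(self, lba: int, block: bytes) -> bool:
        """Return True only if the block actually had to be written to the media."""
        digest = hashlib.sha256(block).digest()
        if digest in self.index:
            self.block_map[lba] = self.index[digest]  # duplicate: record a reference only
            return False
        offset = len(self.log)
        self.log += block                             # sequential append, even for random LBAs
        self.index[digest] = offset
        self.block_map[lba] = offset
        return True

store = InlineDedupeLog()
media_writes = sum(store.write(lba, b"\0" * 4096) for lba in range(100))
print(f"{media_writes} of 100 incoming writes reached the media")  # 1 of 100
```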

Data efficiency in primary storage must deliver uncompromising performance in order to be successful. At a technical level, this means that any implementation must deliver predictable inline performance, a deduplication window that spans the entire capacity of the existing storage platform, and performance scalability to meet the application environment. The current winning solutions provide some of these features today, but it remains to be seen which product will capture them all first.

Inline Efficiency
Inline deduplication and compression - eliminating duplicates as they are written, rather than with a separate process that examines data hours (or days) later - is an absolute requirement for performance in the primary storage market, just as we've previously seen in the backup market. By operating in an inline manner, efficiency operations provide immediate savings, deliver greater and more predictable performance, and allow for greatly accelerated data protection.

With inline deduplication and compression, the customer sees immediate savings because duplicate data never consumes additional space. This is critical in high data change rate scenarios, such as VDI and database environments, because non-inline implementations can run out of space and prevent normal operation. In a post-process implementation, or one using garbage collection, duplicate copies of data can pile up on the media while waiting for the optimization process to catch up. If a database, VM, or desktop is cloned many times in succession, the storage rapidly fills and becomes unusable. Inline operation prevents this bottleneck, one called out explicitly in the NetApp documentation quoted above, where at most about 2 TB of new data can be processed per day; in a post-process implementation, a heavily utilized system may never catch up with the new data being written.
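
A back-of-the-envelope model makes the "never catches up" failure mode concrete. The roughly 2 TB/day optimization rate follows from the guide quoted earlier (1 TB in 10 to 12 hours); the 3 TB/day ingest rate is a hypothetical heavy-cloning workload chosen purely for illustration.

```python
# Post-process backlog under assumed rates: ingest outpaces optimization.
ingest_tb_per_day = 3.0     # hypothetical workload with heavy cloning
optimize_tb_per_day = 2.0   # ~1 TB per 10-12 hours, per the quoted guide

backlog_tb = 0.0
for day in range(1, 8):
    backlog_tb += ingest_tb_per_day - optimize_tb_per_day
    print(f"day {day}: {backlog_tb:.0f} TB of unoptimized duplicate data on the media")
# The backlog grows by 1 TB every day and never shrinks: the system never catches up.
```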

Inline operation also provides for the predictable, consistent performance required by many primary storage applications. In this case, deduplication and compression occur at the time of data write and are balanced with the available system resources by design. This means that performance will not fluctuate wildly as it does with post-process operation, where a 50% impact (or more) can be seen on I/O performance because optimization occurs long after the data is written. Additionally, optimization at the time of data write means that the effective size of DRAM or flash caches can be greatly increased, so more workloads can fit in these caching layers and accelerate application performance.

A less obvious advantage of inline efficiency is the ability for a primary storage system to deliver faster data protection. Because data is reduced immediately, it can be replicated immediately in its reduced form for disaster recovery. This greatly shrinks recovery point objectives (RPOs) as well as bandwidth costs. In comparison, a post-process operation requires either waiting for deduplication to catch up with new data (which could take days to weeks), or replicating data in its full form (which could also take days to weeks of additional time).
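
To put rough numbers on the replication benefit (the figures here are my own assumptions, not measurements from the article): the time to replicate a day's worth of new data over a modest WAN link shrinks in direct proportion to the reduction achieved before the data leaves the array.

```python
# Illustrative arithmetic with assumed numbers: 10 TB of new data replicated
# over a 1 Gbps WAN link, unreduced versus already reduced inline at 10:1.
new_data_tb = 10.0
link_tb_per_hour = 1.0 / 8 * 3600 / 1024   # 1 Gbps is roughly 0.44 TB per hour

for label, reduction in (("full form", 1), ("reduced 10:1 inline", 10)):
    hours = new_data_tb / reduction / link_tb_per_hour
    print(f"{label}: ~{hours:.0f} hours to replicate")
```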

Capacity and Scalability
Capacity and scalability of a data efficiency solution might seem like obvious requirements, but they're not evident in the products on the market today. As we've seen, a storage system incorporating deduplication and compression must be a better product, not just a cheaper product. This means that it must support the same storage capacity and performance scalability as the primary storage platforms that customers are deploying today.

Deduplication is a relative newcomer to the data efficiency portfolio, largely because the system resources it requires, in terms of CPU and memory, are much greater than those of older technologies like compression. The amount of CPU and DRAM in modern platforms means that even relatively simple deduplication algorithms can now be implemented without substantial hardware cost, but such implementations are still quite limited in the amount of storage they can address and the data rate they can accommodate.
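
A rough calculation (with an assumed per-entry size, not vendor data) shows why the simple approaches stay limited to small capacities: a naive in-memory index for 4 KB blocks needs tens of bytes per block, which balloons quickly at primary storage scale.

```python
# Naive in-DRAM deduplication index sizing under assumed parameters.
BLOCK_BYTES = 4 * 1024   # 4 KB deduplication block
ENTRY_BYTES = 32         # assumed per-block index entry (fingerprint + location)

for capacity_tb in (2, 100, 1000):
    blocks = capacity_tb * 1024**4 // BLOCK_BYTES
    index_gib = blocks * ENTRY_BYTES / 1024**3
    print(f"{capacity_tb:5d} TB addressed -> ~{index_gib:,.0f} GiB of index in DRAM")
```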

For example, even the largest systems from all-flash array vendors like Pure and XtremIO support well under 100 TB of storage capacity, far smaller than the primary storage arrays being broadly deployed today. NetApp, while it supports large arrays, only identifies duplicates within a very small window of history - perhaps 2 TB or smaller. To deliver effective savings, duplicates must be identified across the entire storage array, and the storage array must support the capacities that are being delivered and used in the real world. Smaller systems may be able to peel off individual applications like VDI, but they'll be lost in the noise of the primary storage data efficiency tipping point to come.

Shifting the Primary Storage Market to Greater Efficiency
A lower cost product is not sufficient to substantially change customers' buying habits, as we saw from the example of the backup market. Rather, a superior product is required to drive rapid, revolutionary change. Just as the backup appliance market is unrecognizable from a decade ago, the primary storage market is on the cusp of a similar transformation. A small number of storage platforms are now delivering limited data efficiency capabilities with some of the features required for success: space savings, high performance, inline deduplication and compression, and capacity and throughput scalability. No clear winner has yet emerged. As the remaining vendors implement data efficiency, we will see who will play the role of Data Domain in the primary storage efficiency transformation.

About the Author: Jered Floyd

Jered Floyd, Chief Technology Officer and Founder of Permabit Technology Corporation, is responsible for exploring strategic future directions for Permabit’s products and providing thought leadership to guide the company’s data optimization initiatives. He previously established Permabit’s software development methodologies and was responsible for developing the core protocol and the initial server and system architectures of Permabit’s products.

Prior to Permabit, Floyd was a Research Scientist on the Microbial Engineering project at the MIT Artificial Intelligence Laboratory, working to bridge the gap between biological and computational systems. Earlier at Turbine, he developed a robust integration language for managing active objects in a massively distributed online virtual environment. Floyd holds Bachelor’s and Master’s degrees in Electrical Engineering and Computer Science from the Massachusetts Institute of Technology.
