

VMware or Microsoft? Agentless Backup for Virtual Environments

Today's post comes to us courtesy of an old friend and coworker, who is still a friend but who now works for Veeam. I'm talking about none other than Chris Henley.

Thank you, Chris, for this excellent write-up on backing up virtual environments!


One of the most important things to remember when talking about backup best practices in virtual environments is that virtual environments are not physical environments. I know that sounds silly, but it is really quite important, because physical environments have a different architecture than virtual environments. In a physical environment you have one hardware set, one disk, and one operating system, and that one-to-one relationship demands a specific architectural design from the software used to back it up. Backup designers met that demand with an architecture that relied heavily on agents to handle the interaction between the backup software and the physical hardware being protected. This agent-based approach was incredibly successful for a very long time. Decades! It is still successful in physical environments today, and probably represents the best possible backup solution for the physical environment.

The problem is that virtual environments are not physical environments, and the world of IT is headed for the virtual environment. Virtual environments differ because the hypervisor, whether that is VMware or Hyper-V, provides a layer of abstraction between the underlying hardware and the operating systems that run above it. The important consequence of this architectural change is that the agent-based approach of the past just doesn't work in the virtual world. Not because you couldn't force the old model in, adding an agent to every virtual machine and then monitoring, managing, administering, and maintaining all of those agents, but because doing so demands a dramatic amount of extra work to get backup operations running properly, and frankly it is not necessary. VMware and Microsoft, the two major players in the hypervisor space with ESX and Hyper-V respectively, each recommend that we do not use agents in the virtual machines! Instead, the recommendation is to use an open set of APIs, and to connect to those APIs in standard ways that let backup software interact with a virtual machine through its underlying host. The host provides the tracking mechanisms needed for data protection. Agentless data protection is a big deal.
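
To make the host-side idea concrete, here is a minimal sketch of my own (not from Chris's post) showing what "talk to the host, not the guest" can look like on Hyper-V: a small C++ program that uses the host's WMI virtualization namespace to list the virtual machines it is running, with nothing installed inside any guest. The namespace and class names are the standard Hyper-V WMI v2 ones (root\virtualization\v2, Msvm_ComputerSystem) found on Windows Server 2012 and later; error handling is trimmed for readability.

// Sketch: enumerate Hyper-V VMs from the host via WMI, with no in-guest agent.
// Assumes the Hyper-V WMI v2 namespace (root\virtualization\v2); build with a
// Windows SDK and link against wbemuuid.lib. HRESULT checks are omitted.
#define _WIN32_DCOM
#include <comdef.h>
#include <wbemidl.h>
#include <iostream>
#pragma comment(lib, "wbemuuid.lib")

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
                         RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
                         nullptr, EOAC_NONE, nullptr);

    IWbemLocator* locator = nullptr;
    CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                     IID_IWbemLocator, reinterpret_cast<void**>(&locator));

    // Connect to the Hyper-V management namespace on the local host.
    IWbemServices* services = nullptr;
    locator->ConnectServer(_bstr_t(L"ROOT\\virtualization\\v2"), nullptr, nullptr,
                           nullptr, 0, nullptr, nullptr, &services);
    CoSetProxyBlanket(services, RPC_C_AUTHN_WINNT, RPC_C_AUTHZ_NONE, nullptr,
                      RPC_C_AUTHN_LEVEL_CALL, RPC_C_IMP_LEVEL_IMPERSONATE,
                      nullptr, EOAC_NONE);

    // Msvm_ComputerSystem also describes the host itself; the Caption filter
    // keeps only the virtual machines.
    IEnumWbemClassObject* results = nullptr;
    services->ExecQuery(_bstr_t(L"WQL"),
                        _bstr_t(L"SELECT ElementName FROM Msvm_ComputerSystem "
                                L"WHERE Caption = 'Virtual Machine'"),
                        WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
                        nullptr, &results);

    IWbemClassObject* vm = nullptr;
    ULONG returned = 0;
    while (results->Next(WBEM_INFINITE, 1, &vm, &returned) == S_OK && returned) {
        VARIANT name;
        vm->Get(L"ElementName", 0, &name, nullptr, nullptr);  // the VM's display name
        std::wcout << name.bstrVal << std::endl;
        VariantClear(&name);
        vm->Release();
    }

    results->Release();
    services->Release();
    locator->Release();
    CoUninitialize();
    return 0;
}

A backup product does essentially this, at much greater depth, to discover VMs, their virtual disks, and their configuration files before ever asking for a snapshot.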

The Agentless Backup Approach

When we think about Hyper-V we want to make certain that we take an agentless approach to backup, replication, restoration, monitoring, and management, so that we maximize the capabilities Microsoft has built into the hypervisor while minimizing the resource impact that data protection has on the virtual machines themselves. The Microsoft VSS process allows for the imaging of virtual machines in their entirety, along with the associated binary, configuration, XML, snapshot, settings, and any other associated virtual machine files, which lets you make a very complete copy of a virtual machine and its data for backup or other data protection uses. The cool thing is that all of this happens without any installed agent inside the virtual machine. Of course, it all relies on a standards-based approach, where the tools you use are built to work directly with VSS and with the way that Hyper-V itself is built.
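
As a quick illustration of my own (not from Chris's post) of how a host exposes its virtual machines to backup tools, the sketch below uses the VSS requestor API described in the next section to ask a host which VSS writers are registered and print their names; on a Hyper-V host the list includes the Hyper-V writer that fronts the running VMs. It uses only the documented vsbackup.h COM interfaces, and HRESULT checks are omitted for brevity.

// Sketch: list the VSS writers registered on a host (run elevated).
// Build with a Windows SDK and link against VssApi.lib.
#include <windows.h>
#include <vss.h>
#include <vswriter.h>
#include <vsbackup.h>
#include <iostream>

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IVssBackupComponents* vss = nullptr;
    CreateVssBackupComponents(&vss);
    vss->InitializeForBackup();

    IVssAsync* async = nullptr;
    vss->GatherWriterMetadata(&async);    // every registered writer reports in
    async->Wait();
    async->Release();

    UINT count = 0;
    vss->GetWriterMetadataCount(&count);
    for (UINT i = 0; i < count; ++i) {
        VSS_ID instanceId = {};
        IVssExamineWriterMetadata* meta = nullptr;
        vss->GetWriterMetadata(i, &instanceId, &meta);

        VSS_ID idInstance = {}, idWriter = {};
        BSTR name = nullptr;
        VSS_USAGE_TYPE usage;
        VSS_SOURCE_TYPE source;
        meta->GetIdentity(&idInstance, &idWriter, &name, &usage, &source);
        std::wcout << name << std::endl;  // the Hyper-V writer shows up on a Hyper-V host
        SysFreeString(name);
        meta->Release();
    }

    vss->FreeWriterMetadata();
    vss->Release();
    CoUninitialize();
    return 0;
}

An image-level backup of a VM is, at its core, a request made against that Hyper-V writer.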

When we think about agentless systems we don't necessarily mean that no agents are used anywhere in the architecture. What we mean is that no agents are installed in the virtual machines. In most cases the software that provides data protection for a virtual environment running Hyper-V will have some kind of interactive component that is installed or configured on the Hyper-V host. These "agents", and I use the term loosely, run alongside the Windows operating system that supports the Hyper-V host, generally in the form of drivers and/or services, and they are really not agents in the traditional sense. The key is that the installed components do not go into the virtual machines, which means there is no additional overhead on a running virtual machine, its application workloads, or its services, and no additional administrative time and resources spent updating and managing in-guest agents.

The VSS process

Microsoft has this really cool process called the Volume Shadow Copy Service, and it is the basis for agentless backup of VMs in Hyper-V. The Volume Shadow Copy Service is not new; in fact, it has been around since 2003. Microsoft introduced it with Windows Server 2003, and initially it was designed to provide just what its name suggests: shadow copies, or previous-version copies, of existing documents inside the Windows Server operating system. Today we rely on that same functionality, and in fact the same VSS service, to make image copies of virtual machines in Hyper-V. It's important that you have a brief understanding of the Volume Shadow Copy Service, so let's talk about it now.

The Volume Shadow Copy Service is made up of three essential components: the VSS service itself, the VSS requestor, and the VSS writer.

The VSS service is responsible for taking requests from a VSS requestor and fulfilling them. In this case the requests are for image copies of virtual machines. The VSS service is installed with every version of Windows Server.

The VSS requestor formulates requests to the VSS service for a specific image to be created of a specific virtual machine. The VSS requestor is not written by Microsoft; instead it is a piece of software written by a third party to formulate a request that is then passed to the VSS service. You can make your own VSS requestor with a little help from Microsoft, which provides code samples and guidance for those interested in writing one.

The VSS writer is the application-aware component that prepares the data being requested so that a consistent image copy can be taken. Depending on exactly what is requested, a number of different VSS writers might be involved. For example, if you wanted to make an image of a virtual machine running on Hyper-V, the Volume Shadow Copy Service would use the Hyper-V VSS writer to quiesce the virtual machine and identify its files, so that the image the requestor asked for can be captured.
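
To show how those three pieces fit together, here is a bare-bones sketch of my own (not from Chris's post, and not how any particular vendor does it) of the call sequence a homegrown VSS requestor makes against the public VSS COM API (vss.h and vsbackup.h, linked against VssApi.lib). The D:\ volume is just an assumed example of where the Hyper-V VM files live, and HRESULT checks and component selection are omitted so the sequence stays readable.

// Sketch of a minimal VSS requestor: snapshot the volume that holds the VM files.
#include <windows.h>
#include <vss.h>
#include <vswriter.h>
#include <vsbackup.h>

static void WaitAndRelease(IVssAsync* async) {
    async->Wait();      // block until the asynchronous VSS phase finishes
    async->Release();
}

int main() {
    CoInitializeEx(nullptr, COINIT_MULTITHREADED);

    IVssBackupComponents* vss = nullptr;
    CreateVssBackupComponents(&vss);              // this object *is* the requestor
    vss->InitializeForBackup();                   // start a backup session
    vss->SetBackupState(false, true, VSS_BT_FULL, false);

    IVssAsync* async = nullptr;
    vss->GatherWriterMetadata(&async);            // writers (including the Hyper-V
    WaitAndRelease(async);                        // writer) describe the files they own

    VSS_ID snapshotSetId = {}, snapshotId = {};
    WCHAR volume[] = L"D:\\";                     // assumed location of the VM files
    GUID anyProvider = {};                        // zero GUID: let VSS pick the provider
    vss->StartSnapshotSet(&snapshotSetId);
    vss->AddToSnapshotSet(volume, anyProvider, &snapshotId);

    vss->PrepareForBackup(&async);                // writers get ready...
    WaitAndRelease(async);
    vss->DoSnapshotSet(&async);                   // ...quiesce, and the shadow copy is taken
    WaitAndRelease(async);

    // Here the backup software would copy the VM files out of the shadow copy,
    // then tell the writers the backup is finished:
    vss->BackupComplete(&async);
    WaitAndRelease(async);

    vss->Release();
    CoUninitialize();
    return 0;
}

A commercial backup product wraps this same sequence in far more machinery, including writer status checks, change tracking, transport, and retries, so administrators never touch the API directly, but the agentless principle is exactly the one shown here.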


For more information on the VSS process, see this TechNet article: http://technet.microsoft.com/en-us/library/cc785914(v=WS.10).aspx

Fast recovery

Agentless backup is cool, the VSS process is cool, and new ways to implement the 3-2-1 rule (three copies of your data, on two different types of media, with one copy off site) are cool, but none of it really makes any difference if we can't get that data back quickly. The defining point in any disaster recovery plan is the ability to recover the data. When we think about recovering data, it is important not only that we understand where the data is located, but also that we know and can work with the format in which it is stored, and that we can extend new capabilities to enable advanced recovery options at a moment's notice. Virtual machines are built to run application workloads, and those workloads support lots of individual users. A virtual machine running Microsoft Exchange is providing e-mail services to the users in an organization, and those users do not want downtime of the virtual machine that supports their email. In the event of data loss, small or large scale, we as administrators need a way to recover e-mail items directly from the backup into the running virtual machine that supports the Exchange application. The data protection market has changed dramatically over the past two years, with companies focusing more and more on application-specific tools and less and less on legacy methods of data restoration.

With innovative tools like Veeam's Explorer for Exchange, an organization might receive a request from a user who needs to recover an erroneously deleted email message and its attachment. The tool allows the Exchange .edb database to be mounted from within the backup file. Once it is mounted, the helpdesk professional can search for the desired email, or simply select the user's mailbox and browse to it. At that point the email can be restored to the running Exchange VM, emailed directly back to the user, or saved as an .msg or .pst file. All of this is done in seconds, while the user is on the phone and while the Exchange server is still running and providing services to the rest of the network.

This new paradigm of agentless data protection at the application level is changing the way we think about data protection and disaster recovery in virtual environments. Best of all, it's free!

Get the Veeam Backup Free Edition tools at http://www.veeam.com


More Stories By Kevin Remde

Kevin is an engaging and highly sought-after speaker and webcaster who has landed several times on Microsoft's top 10 webcast list, and has delivered many top-scoring TechNet events and webcasts. In his past outside of Microsoft, Kevin has held positions such as software engineer, information systems professional, and information systems manager. He loves sharing helpful new solutions and technologies with his IT professional peers.

A prolific blogger, Kevin shares his thoughts, ideas and tips on his “Full of I.T.” blog (http://aka.ms/FullOfIT). He also contributes to and moderates the TechNet Forum IT Manager discussion (http://aka.ms/ITManager), and presents live TechNet Events throughout the central U.S. (http://www.technetevents.com). When he's not busy learning or blogging about new technologies, Kevin enjoys digital photography and videography, and sings in a band. (Q: Midlife crisis? A: More cowbell!) He continues to challenge his TechNet Event audiences to sing Karaoke with him.
