Virtual Strategy - Virtually Right

With a private cloud strategy and dynamic data center you can quickly respond to rapid business fluctuations

With a private cloud strategy and dynamic data center you can quickly respond to rapid business fluctuations. But how do you get there?

This post was originally published as a Thanksgiving weekend special at virtual-strategy.com.
In the article I discussed approaches for building a dynamic data center that not only addresses complexity and reduces cost, but also accelerates business response time, ensuring that the organization realizes the true promise of cloud computing: business agility and customer responsiveness.

Cloud computing presents an appealing model for offering and managing IT services through shared and often virtualized infrastructure. It’s great for new business start-ups that don’t want the risk of a large on-premises technology investment, or organizations that can’t easily predict future demand for their services. But for most of us with existing infrastructure and resources, the picture is very different. We want to capitalize on the benefits of the cloud ― on-demand, low-risk, affordable computing ― but we’ve spent years investing in rooms stacked high with hardware and software to run our daily mission-critical jobs and services.

So how do organizations in this situation make the shift from straightforward server consolidation to a dynamic, self-service virtualized data center? How do they reach the peak of standardized IT service delivery and agility that is in step with the needs of the business? Many virtualization deployments stall as organizations stop to deal with challenges like added complexity, staffing requirements, SLA management, or departmental politics. This “VM stall” tends to coincide with different stages in the virtualization maturity lifecycle, such as the transition from tier 2/3 server consolidation to mission-critical tier 1 applications, and from basic provisioning automation to a private/hybrid cloud approach.

The virtualization maturity lifecycle
The simple answer is to take it step-by-step, learning as you go, building maturity at every step. This will earn you the skills, knowledge, and experience needed to progress from an entry-level virtualization project to a mature dynamic data center and private cloud strategy.

It’s called the virtualization maturity lifecycle, and it builds in four steps. Just like pilots start their training on small planes (going full cycle from take-off to landing) before they move onto large commercial jets, it is advisable for organizations to implement these virtualization maturity steps iteratively. For example, start a full maturity cycle on test and development servers before moving to mission critical servers and applications.
Start easy by consolidating servers to increase utilization and reduce your current carbon footprint. To ensure deep insight and continuity in support of the migration from physical to virtual, you might want to leverage image backup and physical-to-virtual restore tools that let you move your physical IBM, Dell and HP images directly to ready-to-run VM images for VMware, Sun, Citrix and Microsoft.
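
As a minimal sketch of that consolidation pass, assuming hypothetical capture_image and convert_to_vm helpers that wrap whatever image-backup and P2V restore tooling you actually use (the host names and hypervisor target below are placeholders, not a specific vendor API):

```python
# Illustrative physical-to-virtual (P2V) consolidation pass.
# capture_image() and convert_to_vm() are hypothetical placeholders for
# whatever image-backup and P2V restore tooling is actually in place.

PHYSICAL_SERVERS = ["db-01", "app-02", "file-03"]   # hosts selected for consolidation
TARGET_HYPERVISOR = "vmware"                        # could equally be citrix, microsoft, ...


def capture_image(host: str) -> str:
    """Take a full image backup of a physical host; returns an image path (placeholder)."""
    return f"/backups/{host}.img"


def convert_to_vm(image_path: str, hypervisor: str) -> str:
    """Restore a captured image as a ready-to-run VM on the target hypervisor (placeholder)."""
    name = image_path.rsplit("/", 1)[-1].removesuffix(".img")
    return f"{hypervisor}://vms/{name}"


if __name__ == "__main__":
    for host in PHYSICAL_SERVERS:
        image = capture_image(host)
        vm = convert_to_vm(image, TARGET_HYPERVISOR)
        print(f"{host}: captured {image}, restored as {vm}")
```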

The next step involves optimizing the infrastructure. Apart from maintaining consistency, efficiency, and compliance across the virtual resources (which is fast proving to be even more complex in virtual environments than in physical ones), we analyze, monitor, (re-)distribute and tune our applications and services.

While optimizing, we also discover and document the rules we will automate in the next phase: rules about which applications fit best together, which areas are suitable for self-service, and which types of services are most important. As you can imagine, the answers to that last question will be very different for a nuclear plant (safety first) than for an online video rental service (customers first), which is why this is such an important step. If you skip this stage and go straight into automation, you’ll likely end up in the same situation that you’re in today, just automated.
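
One way to make those discovered rules explicit before automating them is to write them down as data rather than leave them in people's heads. The sketch below is purely illustrative, with invented service names, priorities and affinity hints; the point is that a nuclear plant operator and a video rental service would fill in the same structure very differently.

```python
# Illustrative rule catalogue captured during the optimization phase.
# Service names, priorities and affinity hints are invented for the example.

SERVICE_RULES = {
    "reactor-monitoring": {"priority": 1, "self_service": False, "co_locate_with": []},
    "billing":            {"priority": 2, "self_service": False, "co_locate_with": ["reporting"]},
    "video-frontend":     {"priority": 3, "self_service": True,  "co_locate_with": ["video-cache"]},
    "test-environment":   {"priority": 9, "self_service": True,  "co_locate_with": []},
}


def placement_order(rules):
    """Services ordered by business priority (lowest number = most important)."""
    return sorted(rules, key=lambda name: rules[name]["priority"])


def self_service_catalog(rules):
    """Services that can safely be exposed for self-service in the automation phase."""
    return [name for name, rule in rules.items() if rule["self_service"]]


if __name__ == "__main__":
    print("Provisioning order:", placement_order(SERVICE_RULES))
    print("Self-service catalog:", self_service_catalog(SERVICE_RULES))
```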

A successful cloud strategy is all about agility and flexibility, and the next step in the virtualization maturity lifecycle takes care of automation and the orchestration of your (now) virtual services. You can empower users to help themselves ― industrialize processes ― without calling IT for every service request. Automation has many advantages here. It is the catalyst to standardize your virtual infrastructure, integrate and orchestrate processes across IT silos, and accelerate the provisioning of virtual cloud services. Once the industrialized provisioning process is live, automation technologies can also be used to monitor demand volumes, utilization levels and application response times, and to support the root-cause analysis that helps isolate and remediate virtual environment issues.
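
To illustrate that last point, once provisioning is automated the same machinery can watch utilization and response times and flag candidates for remediation. A rough sketch, where the thresholds and the metrics feed are invented for the example:

```python
# Illustrative post-provisioning check: flag virtual services whose utilization
# or response time drifts outside agreed bounds. The metrics dictionary stands
# in for whatever monitoring feed is actually available; thresholds are invented.

UTILIZATION_CEILING = 0.85    # fraction of allocated capacity
RESPONSE_TIME_SLA_MS = 500    # example response-time threshold

observed_metrics = {
    "video-frontend": {"utilization": 0.91, "response_ms": 620},
    "billing":        {"utilization": 0.40, "response_ms": 180},
}


def remediation_candidates(metrics):
    """Services breaching either the utilization ceiling or the response-time SLA."""
    return [
        service
        for service, m in metrics.items()
        if m["utilization"] > UTILIZATION_CEILING or m["response_ms"] > RESPONSE_TIME_SLA_MS
    ]


if __name__ == "__main__":
    print("Remediation candidates:", remediation_candidates(observed_metrics))
```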

The final stage is the centerpiece of a cloud strategy, the position that allows you to manage the definition, demand, and deployment of IT services: the dynamic data center. Your now agile infrastructure, delivered from a secure, highly available data center, enables you to quickly respond to rapid business fluctuations. To reach a dynamic data center, you need to automate the entire process of service delivery from request to fulfillment. This includes centralized service requests, automating the approval process so that department heads can quickly approve or reject requests, a standard and repeatable provisioning process, and standard configurations.
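
A compressed sketch of that request-to-fulfillment chain, with the approval rule, configuration catalogue and provisioning call all reduced to placeholders; a real dynamic data center would plug its service desk, approval workflow and orchestration engine into these hooks.

```python
# Illustrative request-to-fulfillment flow: centralized request, departmental
# approval, then provisioning from a standard, repeatable configuration.
# All names, limits and the approval rule are placeholders.

STANDARD_CONFIGS = {
    "small-web": {"cpus": 2, "ram_gb": 4},
    "large-db":  {"cpus": 8, "ram_gb": 32},
}


def approve(request):
    """Stand-in for the department head's approval step (here: a simple budget rule)."""
    return request["monthly_budget"] >= 100


def provision(config_name):
    """Stand-in for the orchestration engine; returns an identifier for the new service."""
    config = STANDARD_CONFIGS[config_name]
    return f"vm({config_name}, cpus={config['cpus']}, ram={config['ram_gb']}GB)"


def fulfill(request):
    """Run one service request through approval and provisioning."""
    if not approve(request):
        return "rejected"
    return provision(request["config"])


if __name__ == "__main__":
    print(fulfill({"config": "small-web", "monthly_budget": 250}))
```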

This goes much further than the traditional dream of a “lights out” data center, which was basically a static, conveyor-belt-like factory where all labor was automated away. The dynamic data center is like a modern car factory, where robots perform almost all tasks, but in ever-changing sequences and configurations, guided by supply-chain-led orchestration.

The new normal
As we all know, technology changes fast. This advancement in technology is creating a “new normal” where relationships with customers are increasingly in a digital form and technology is no longer an enabler or accelerator of the business ― it has become the business.

This is a theme picked up by Peter Hinssen, one of Europe's thought leaders on the impact of technology on our society. He evangelizes this new normal, arguing that in a digital world there will be new rules that define what is acceptable for IT, including zero tolerance for digital failure, an era of “good enough” functionality (60% functionality in six weeks rather than 90% in six months), and the need to move your architectures―including your new cloud architecture―from “built to last” to “designed to change”.
The lifecycle approach described earlier may be just what you need to align your IT organization with what Hinssen calls the new normal. First you determine where opportunities exist for consolidation and rationalization across your physical and virtual environments ― assessing what you have in your data center environment and establishing a baseline for making decisions that take you to the next stage. Next, to achieve agility, you have to automate the provisioning and de-provisioning of virtualized resources, including essential elements such as identities and other management policies such as access rights.
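
To make the de-provisioning half of that point concrete: the same automation that creates a virtual resource should also retire the identities and access rights created with it. The sketch below pairs the two steps; the in-memory sets are stand-ins, not a real directory or policy store.

```python
# Illustrative paired provision/de-provision routine: every resource that is
# created also gets an identity and access rights, and tearing it down removes
# both. The in-memory sets stand in for a real directory and policy store.

active_resources = set()
identities = set()


def provision_resource(name, owner):
    """Create the virtual resource plus the identity and access rights that go with it."""
    active_resources.add(name)
    identities.add(f"{owner}@{name}")


def deprovision_resource(name, owner):
    """Remove the resource and everything that was granted with it."""
    active_resources.discard(name)
    identities.discard(f"{owner}@{name}")


if __name__ == "__main__":
    provision_resource("test-vm-17", "alice")
    deprovision_resource("test-vm-17", "alice")
    assert not active_resources and not identities   # nothing orphaned
```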

The next step in delivering an on-time, risk-free (zero-failure) cloud computing strategy is service assurance. You need to manage IT service quality and delivery based on business impact and priority — top-to-bottom and end-to-end. That includes, for example, delivering a superior online end-user experience with low-overhead application performance management, and end-to-end visibility into traffic flows and device performance. The new normal also needs to be secure. IT security management technologies must be applied in line with current regulations and end-user needs, enabling the virtual layer to be more secure.
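
One way to read “based on business impact and priority” is that the same technical symptom is handled very differently depending on which business service it touches. A toy illustration, with invented services and weights:

```python
# Illustrative impact-ranked alert queue: identical technical symptoms are
# ordered by the business weight of the affected service. Services and weights
# are invented for the example.

BUSINESS_WEIGHT = {"online-checkout": 10, "reporting": 3, "internal-wiki": 1}

alerts = [
    {"service": "internal-wiki",   "symptom": "slow response"},
    {"service": "online-checkout", "symptom": "slow response"},
    {"service": "reporting",       "symptom": "slow response"},
]


def triage(alerts):
    """Order alerts so the highest business-impact service is handled first."""
    return sorted(alerts, key=lambda a: BUSINESS_WEIGHT.get(a["service"], 0), reverse=True)


if __name__ == "__main__":
    for alert in triage(alerts):
        print(alert["service"], "->", alert["symptom"])
```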

All these factors combined ultimately lead to agile IT service delivery. With agility, you can build and optimize scalable, reliable resources and entire applications quickly. By embarking on the virtualization maturity roadmap, you can move closer to a dynamic data center and successful cloud strategy.

Any shortcuts?
This evolutionary approach may sound very procedural (and safe). You may also be thinking: is this the only way? What if I need it now? Is there no revolutionary approach to help me get to a private cloud much more quickly? Just as some developing countries skipped the wired POTS phone system and moved directly to a 100% wireless infrastructure, a revolutionary approach does exist. The secret lies in the fact that, in addition to the application itself, the infrastructure required to deploy an application can be virtualized: load balancers, firewalls, NAS gateways, monitoring tools, and so on. This entire entity (the application plus the infrastructure it needs to be successfully deployed) can then be managed as a single object. Want to deploy a copy of the application? Simply load the object and all of the associated virtual appliances are automatically loaded, networked, secured and made ready. This is called an application-centric cloud.

With traditional virtualization, the servers are the parts that are virtualized, but afterward these virtual servers, networks, routers, load balancers and more still need to be managed and configured to work with the other parts of the data center, a task as complex and daunting as it was before. This is the infrastructure-centric cloud. With a fully application-centric cloud, the whole business service (with all its involved components) is virtualized, becoming a virtual service instead of a bunch of virtual servers, which significantly reduces the complexity of managing these services.

As a result, application-centric clouds can model, configure, deploy and manage complex, composite applications as if they were a single object. This enables operators to use a visual model of an application and the required infrastructure, and to store that model in an integrated repository. Users or customers can then pull that model out of the repository, reuse it and deploy it to any data center around the world with the click of a button. Interestingly, users can deploy these services to a private cloud or to an MSP, depending on which happens to offer the best conditions at that moment. Sound too futuristic? Far from it. Several innovative service providers, like DNS Europe, Radix Technologies, and ScaleUp, are already doing exactly this on a daily basis.
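
To make the “single object” idea tangible, here is a sketch of what such an application model might look like as plain data, with a one-call deploy standing in for pulling the object from the repository and bringing up every associated virtual appliance. All component names and the target notion are invented for the example.

```python
# Illustrative application-centric model: the application and all of the
# infrastructure it needs (load balancer, firewall, monitoring, ...) are
# described as one object and deployed as one unit. Everything here is a
# placeholder for a real model repository and orchestration engine.

WEB_SHOP_MODEL = {
    "name": "web-shop",
    "components": [
        {"role": "load-balancer", "image": "lb-appliance"},
        {"role": "firewall",      "image": "fw-appliance"},
        {"role": "app-server",    "image": "shop-app", "replicas": 3},
        {"role": "database",      "image": "shop-db"},
        {"role": "monitoring",    "image": "monitor-appliance"},
    ],
}


def deploy(model, target):
    """Deploy every component of the model to the chosen data center or MSP."""
    deployed = []
    for component in model["components"]:
        for i in range(component.get("replicas", 1)):
            deployed.append(f"{target}/{model['name']}/{component['role']}-{i}")
    return deployed


if __name__ == "__main__":
    # The same model could go to a private cloud or to an MSP, whichever
    # offers the best conditions at that moment.
    print(deploy(WEB_SHOP_MODEL, target="private-cloud-eu"))
```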

For many enterprises, governments and service provider organizations, the mission for IT today is no longer just about keeping the infrastructure running. It’s about the critical need to quickly create new services and revenue streams and improve the competitive position of their organization.
Some parts of your organization may not have time to evolve into a private cloud. For them, taking the revolutionary (or green field) approach may be best, while for other existing revenue streams, an evolutionary approach, ensuring investment protection, may be best.  In the end, customers will be able to choose the approach that best fits the task at hand, finding the right mix of both evolutionary and revolutionary to meet their individual needs.


More Stories By Gregor Petri

Gregor Petri is a regular expert and keynote speaker at industry events throughout Europe and wrote the cloud primer “Shedding Light on Cloud Computing”. He was also a columnist at ITSM Portal, a contributing author to the Dutch book “Over Cloud Computing”, and a member of the Computable expert panel; his LeanITmanager blog is syndicated across many sites worldwide. Gregor was named by Cloud Computing Journal as one of The Top 100 Bloggers on Cloud Computing.

Follow him on Twitter @GregorPetri or read his blog at blog.gregorpetri.com
