Is Your Application Infrastructure Architecture Based on the Postal Service Delivery Model?

If it is, you might want to reconsider how you’re handling security, acceleration, and delivery of your applications before users “go postal” because of poor application performance.

Sometimes wisdom comes from the most unexpected places. Take Jason Rahm’s status update on Facebook over the holidays. He’s got what is likely a common complaint regarding the delivery model of the US postal service: the inefficiency of where postage due is determined. Everyone has certainly had the experience of sending out a letter (you know, those paper things) and having it returned a week or more later with a big stamp across it stating: Returned – Postage Due.

As Jason points out, the US postal service doesn’t determine whether postage may be due or not until the package arrives at its destination. If the addressee isn’t willing/able to pay that postage due, the package is of course returned via the delivery service, which incurs round-trip costs of transportation and handling at every point along the way.

If this sounds anything like your application infrastructure architecture, then you might want to reconsider how you’re handling the delivery of applications and where you’re applying policies that may affect the delivery process.


STRATEGIC POINTS of CONTROL

Every architecture has them: strategic points of control. These are the points at which decisions can – and should – be made regarding the delivery of applications. Such points of control range from routing to admission control (security and identity management functions) to application-specific authorization. Myriad policies govern access to and delivery of applications, and each one is most efficiently applied at a different point in the infrastructure. If every function – admission control, delivery optimization, application authorization – is applied at the application itself, you end up with a postal service architecture in which the same costs (both monetary and in performance) are incurred for every request and response, regardless of whether the requests were legitimate or ever fulfilled.
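As a concrete sketch of admission control at such a point (not from the original post; the upstream address and the specific checks here are illustrative assumptions), a minimal Go reverse proxy might reject obviously bad requests at the edge, before they consume any downstream resources:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// admit rejects obviously illegitimate requests at the edge, before
// they consume bandwidth, CPU, or log space anywhere downstream.
func admit(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// Illustrative checks only: a real policy would also cover
		// identity, rate limits, and application-specific authorization.
		if r.ContentLength > 1<<20 { // reject oversized payloads early
			http.Error(w, "payload too large", http.StatusRequestEntityTooLarge)
			return
		}
		if strings.Contains(r.URL.Path, "..") { // crude path-traversal check
			http.Error(w, "bad request", http.StatusBadRequest)
			return
		}
		next.ServeHTTP(w, r) // only legitimate traffic enters "the delivery system"
	})
}

func main() {
	// Hypothetical upstream application server.
	upstream, err := url.Parse("http://app.internal:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	log.Fatal(http.ListenAndServe(":8000", admit(proxy)))
}
```

A request rejected here never touches the web, application, or storage tiers – which is exactly the postage-checked-at-the-first-window model the analogy argues for.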

If the postal service were cost-conscious, it would examine the package at the first strategic point of control, determine the cost from the destination and the package variables, and settle any postage due right there, rather than shipping that happy box of caffeine off only to have it returned – days or weeks later – for lack of postage it could have determined in the first place.

The postal service – and you – likely have all the data needed at the first point of entry into your application to determine whether a request is legitimate and which optimizations should be applied before the package enters “the delivery system”, a.k.a. the infrastructure. Incurring processing, storage, and risk costs on requests that could already have been detected as malicious or illegitimate is a terrible waste of infrastructure, on the scale of the waste associated with the postal service.

Why apply compression to data on the application server when that data may need to be examined by other components in the architecture on the way back to the user, and when compressing that early may, in fact, degrade performance rather than improve it? Why not apply compression at the last possible point: the strategic point of control that sits between your infrastructure and the “rest of the world”, i.e. the user and their network? Why are requests not examined for validity at the first possible strategic point of control? Why allow a potentially dangerous and malicious request to pass through the infrastructure, where it is processed by every component in the architecture and can wreak havoc throughout the data center? Why not examine the request at the first possible point and accept or reject it before the organization incurs the processing costs and the risk?
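To make the compression point concrete, here is a minimal sketch – again in Go, again with assumed names – of compressing responses at the edge proxy rather than on the application server, so intermediate components still see the uncompressed response. It assumes the upstream sends its responses uncompressed:

```go
package main

import (
	"compress/gzip"
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// gzipWriter compresses the response body as it is written to the client.
type gzipWriter struct {
	http.ResponseWriter
	zw *gzip.Writer
}

func (g gzipWriter) Write(b []byte) (int, error) { return g.zw.Write(b) }

func (g gzipWriter) WriteHeader(code int) {
	g.Header().Del("Content-Length") // length changes once compressed
	g.ResponseWriter.WriteHeader(code)
}

// compressAtEdge applies compression at the last possible point, on the
// way out to the user, only when the client advertises gzip support.
func compressAtEdge(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !strings.Contains(r.Header.Get("Accept-Encoding"), "gzip") {
			next.ServeHTTP(w, r) // client can't handle gzip; pass through
			return
		}
		w.Header().Set("Content-Encoding", "gzip")
		zw := gzip.NewWriter(w)
		defer zw.Close()
		next.ServeHTTP(gzipWriter{w, zw}, r)
	})
}

func main() {
	// Hypothetical upstream that does no compression of its own.
	upstream, err := url.Parse("http://app.internal:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)
	log.Fatal(http.ListenAndServe(":8000", compressAtEdge(proxy)))
}
```

The application servers and everything behind this proxy work with plain responses; only the hop to the user’s network carries compressed data.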

All this additional processing of illegitimate and malicious requests places a burden on the entire infrastructure. In the case of web and application servers especially, that burden translates into reduced performance for legitimate users, as well as additional costs in the form of unnecessary increases in the resource capacity required to support illegitimate traffic alongside legitimate traffic.

You can’t eliminate all the costs, of course, but you can significantly reduce them by applying application delivery policies at the most strategic point possible in your architecture. That means scrubbing web application traffic and e-mail at the outer edges of your network, preventing spam and illegitimate requests from using up bandwidth and processing power across your network, application network, storage, and application infrastructure. It means smaller logs, which makes correlation and reporting easier, faster, and less of a chore for the IT personnel who comb through gigabytes of data daily, hunting needles in haystacks to help application developers track down errors in application code. And it means reducing the overall cost of delivering applications to users while improving the performance and reliability of your entire architecture.

Very few IT architects would point to the US postal service as an ideal delivery model. So if your infrastructure looks anything like the postal service, maybe it’s time to take another look at how you’re applying policies and processing requests, and to move toward a more cost-effective, efficient service delivery model.

More Stories By Lori MacVittie

Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
