Eating Our Own Dog Food: dynaTrace Does Continuous APM

How dynaTrace does continuous APM internally in development

I sat down with Stefan Frandl, Test Automation Lead in dynaTrace's R&D Lab in Linz, Austria, to discuss how dynaTrace does Continuous APM in Development. Obviously dynaTrace takes performance very seriously, as we preach to our clients that Continuous Application Performance Management is a critical component across the Application Lifecycle. The earlier in the lifecycle you get performance under control, the fewer problems you have to worry about later, when you ship your product.

In the discussion I had with Stefan he talked about how dynaTrace transitioned from traditional performance management to where we are now – which means: “eat our own dog food” and “live the dynaTrace Continuous APM message”.

In this article we learn that it is not enough to simply plug in an APM solution and expect all your performance problems to be detected automatically. It is about building a robust continuous integration environment with meaningful functional and performance tests. It is about having "buy-in" from your engineers. It is about figuring out what needs to be measured, what can be measured, which measures you can actually trust, and which measures indicate your performance health status.

What are the problems that performance management in development solves?

I've been a developer for many years, so I have no problem ranting a bit about an attitude many of us share: we always "think" our code is fast enough. And that might be true on my local dev machine, with enough RAM and CPU power to easily handle the single-user load of testing the newly implemented feature or recent bug fix.
In addition to this problem Stefan listed the following areas:

  • Continuous changes on the codebase by different people over a longer period of time increase the probability of small problems sneaking in and accumulating over time into big problems
  • Multiple “New Features” or “Bug Fixes” across the code base from different developers impacting each other
  • Different hardware reveals different problems – especially in multi-threaded environments
  • Other software running on the target machine impacts your application performance

Executing performance tests only at the end of a sprint/iteration, or as the very last step before a product release, uncovers all the small accumulated problems and environment-related problems at once. Finding all these problems late means additional effort for the dev team to analyze them (going back in the change log, getting back into the code, ...) and jeopardizes the project schedule.

Furthermore, a developer cannot verify whether his improvements are real improvements or whether they only improve the product in his local test setup.

Therefore: focus early and continuously on the performance aspect of your code.

Why the traditional approach failed

Prior to “eating our own dog food” we approached performance management in two traditional ways:

Using Profilers
Developers used profilers on their local machines to identify hot spots in manually executed test cases, e.g. clicking through the main use case of a new feature. This is of course a valid approach and identifies general performance problems such as non-optimized algorithms, non-performing usage of collections, or "wasted" memory.

The usage of a profiler is limited to low-load environments. Why is that? Because profilers – in order to capture all this information – have a significant impact on application performance and don't work well under heavy load. That means that problems that happen outside the "one user" test scenario are harder to catch with a profiler. That's not to say it's impossible, as you can run profilers in modes that lower the overhead – but then you often don't get the detailed data you need. So there is a big trade-off here. Concurrency problems often only occur in high-load scenarios which can't be covered by profilers, so these problems remain undiscovered.

Using manual timings
Adding custom timers to the code is another approach that was used. Developers added their own time-measuring statements to what they believed were the critical methods in their code (see the sketch after the list below). This approach works better in high-load environments, but it brings three problems with it:

  1. it requires code changes and is limited to your own code
  2. you have to manually dig through the collected information and try to make sense of it; timings alone often don't help, so you need additional logging to capture things like method arguments, ...
  3. it is very hard to compare results collected from different machines (e.g. different hardware)
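
To make this concrete, here is a minimal sketch of what such hand-rolled instrumentation typically looked like. All class and method names are hypothetical, chosen only for illustration – this is not dynaTrace code:

```java
import java.util.logging.Logger;

// Hypothetical example of the "manual timings" approach described above.
public class DumpWriter {
    private static final Logger LOG = Logger.getLogger(DumpWriter.class.getName());

    public void writeDump(String dumpId, int sizeInMb) {
        long start = System.nanoTime();  // timer added by hand, a code change per method
        try {
            // ... the actual work being measured ...
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Timings alone rarely explain a regression, so arguments get logged
            // as well -- and this still only covers our own code, not frameworks.
            LOG.info("writeDump(" + dumpId + ", " + sizeInMb + " MB) took " + elapsedMs + " ms");
        }
    }
}
```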

The manual effort on the one side and the inability to manage performance under heavy load on the other made the traditional approaches fail.

Challenge with accurate and stable measuring

Measuring execution time is not too hard, given all the options that the runtime, application server, or operating system provides. But it is not easy to measure the right thing, and to measure it accurately. Here are some of the problems Stefan ran into when measuring execution time:

Garbage Collection
In a managed environment like Java or .NET, the Garbage Collector plays a big role in application performance. GC runs and their impact on performance are unpredictable across runs: even though you run the same test in the same environment with the same parameters, it doesn't mean that the GC runs consistently.

Why does that matter for measurement? When you take a timestamp at the beginning and at the end of a method, the difference is not necessarily the pure execution time. If the GC kicked in while your method was executing, it inflates your measured time. To get an accurate execution time it is therefore necessary to subtract the GC collection time. dynaTrace supports this use case by measuring execution time both including and excluding GC. It is still very important to monitor GC times and GC activations, though: a fast implementation that produces a lot of garbage, and therefore adds high load to the CPU, may degrade the overall performance of the system by starving other threads.
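
The same idea can be approximated in plain Java with the standard JMX beans. The following is only a sketch, not dynaTrace's implementation: it subtracts the accumulated collector time reported by the JVM from the measured wall-clock time. Note that getCollectionTime() is a coarse, VM-wide counter, so this is only an approximation of per-method GC suspension:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcAdjustedTimer {
    /** Sums accumulated GC time (in ms) across all collectors of this JVM. */
    private static long totalGcTimeMs() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();  // -1 if the collector doesn't report it
            if (t > 0) total += t;
        }
        return total;
    }

    public static void main(String[] args) {
        long gcBefore = totalGcTimeMs();
        long start = System.nanoTime();

        // Code under test -- here just some allocation-heavy busy work
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1_000_000; i++) sb.append(i);

        long wallClockMs = (System.nanoTime() - start) / 1_000_000;
        long gcMs = totalGcTimeMs() - gcBefore;
        System.out.println("including GC: " + wallClockMs
                + " ms, excluding GC: " + (wallClockMs - gcMs) + " ms");
    }
}
```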

dynaTrace captures the runtime suspension time per method and per transaction

Intel SpeedStep
Some of the testing machines that Stefan uses showed very volatile test results: the same test on the same code base did not return stable results. One of the biggest challenges is to come up with a test environment that produces accurate and stable measures. In this particular case it turned out that Intel SpeedStep caused the unpredictable performance behaviour. This might not be true for all of you out there, but it is a good data point that will hopefully help some of you when trying to find a stable test environment.

CPU Timings under Windows
Besides execution time – taking a timestamp at the beginning and at the end of the method call – it is possible to get the actual time the executing thread spent on the CPU. This is a very valuable measure: the difference between CPU time and execution time is explained by time waiting for I/O, the database, a remote call, or synchronization.
Another hint from Stefan: CPU timings on Windows operating systems can be very inaccurate and are therefore ignored for short runs, as they don't give us enough value.
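
In Java, this distinction between wall-clock time and CPU time can be observed with ThreadMXBean. Here is a small sketch, assuming a JVM where thread CPU timing is supported; the granularity of these counters is OS-dependent, which is exactly the Windows caveat Stefan mentions:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsWallClock {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isCurrentThreadCpuTimeSupported()) {
            System.out.println("thread CPU timing not supported on this JVM/OS");
            return;
        }

        long cpuStart = threads.getCurrentThreadCpuTime(); // ns actually spent on the CPU
        long wallStart = System.nanoTime();                // ns of elapsed wall-clock time

        Thread.sleep(200);  // waiting: wall clock advances, CPU time barely moves
        long x = 0;
        for (int i = 0; i < 50_000_000; i++) x += i;  // busy work: both advance

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        // The gap (wallMs - cpuMs) is time spent waiting: I/O, DB, sync, sleep.
        System.out.println("wall: " + wallMs + " ms, cpu: " + cpuMs + " ms (x=" + x + ")");
    }
}
```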

dynaTrace APM is used to manage application performance. Built-in features for separating GC time from execution time, the ability to capture CPU timings (on operating systems where these values make sense), and the fact that dynaTrace traces individual transactions across tiers down to the method level with almost no overhead enable us to use this data for performance management.

If you want to get stable results you have to have stable tests

We learned that there are certain aspects to consider when collecting execution-time measures, e.g. extracting GC times. To get stable results it is not only necessary to have a stable environment and a stable way to measure timing – the key is to have stable tests with realistic test data in a realistic environment.
dynaTrace uses two types of load tests: unit tests to performance-test certain features, and SilkPerformer tests to load-test our event collection and sending, by putting applications under load that are actively traced and monitored.

Getting Stable and realistic Unit Tests
The unit tests are executed in all the different environments that the dynaTrace software runs on. The key to getting stable results is that every test method has a setup and teardown to ensure that everything is cleaned up before the next test case executes. The test method itself then runs through several "Warm-Up" runs, followed by multiple test runs that are taken for performance measurement. The warm-up phase is necessary to rule out performance impacts of a JVM/CLR that has just started, heap spaces that have not yet reached their "normal" utilization level, or caches that are not yet fully initialized. How long does the warm-up phase last? The indication used here is an execution-time volatility of < 5%, meaning that the warm-up phase runs until execution times have stabilized. Multiple test runs follow, and average execution values across these runs are taken to validate the performance of the tested feature.
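
A minimal sketch of such a warm-up/measurement loop is shown below. The window size, run counts, and volatility metric (relative standard deviation) are assumptions for illustration; the home-grown framework mentioned next is not public:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class WarmedUpBenchmark {
    private static final int WINDOW = 10;               // recent runs used to judge stability
    private static final double MAX_VOLATILITY = 0.05;  // the "< 5%" criterion described above
    private static final int MEASURED_RUNS = 20;        // runs averaged after warm-up
    private static final int MAX_WARMUP_RUNS = 1000;    // safety cap so we never loop forever

    /** Warms up until the last WINDOW timings vary by less than 5%, then measures. */
    public static double measureAvgMs(Runnable workload) {
        Deque<Long> recent = new ArrayDeque<>();
        for (int i = 0; i < MAX_WARMUP_RUNS; i++) {      // warm-up phase
            recent.addLast(timeOnceNs(workload));
            if (recent.size() > WINDOW) recent.removeFirst();
            if (recent.size() == WINDOW && volatility(recent) < MAX_VOLATILITY) break;
        }
        long totalNs = 0;                                // measurement phase
        for (int i = 0; i < MEASURED_RUNS; i++) totalNs += timeOnceNs(workload);
        return totalNs / 1_000_000.0 / MEASURED_RUNS;
    }

    private static long timeOnceNs(Runnable workload) {
        long start = System.nanoTime();
        workload.run();
        return System.nanoTime() - start;
    }

    /** Relative standard deviation (std dev / mean) of the collected timings. */
    private static double volatility(Iterable<Long> samples) {
        int n = 0; double mean = 0, m2 = 0;
        for (long s : samples) {                         // Welford's online algorithm
            n++;
            double delta = s - mean;
            mean += delta / n;
            m2 += delta * (s - mean);
        }
        return n > 1 ? Math.sqrt(m2 / (n - 1)) / mean : Double.MAX_VALUE;
    }
}
```

A test would then call something like measureAvgMs(() -> featureUnderTest.run()) and compare the result against its baseline.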

The dynaTrace Test Automation Team has invested a lot into a home-grown testing framework that allows the execution and the measurement of the above-described approach.

Warm-Up phase runs until tests produce stable results

Getting Stable Load Testing Results
The concept of a warm-up phase followed by a measured testing phase is not new. The SilkPerformer load tests we use work the same way: SilkPerformer has a built-in warm-up and measurement period, which made it easy to apply this process to these kinds of load tests as well.

So Stefan – How often do you run your tests and what happens if things go wrong?

We use QuickBuild as our build/continuous integration server. Every time a build is triggered, all functional unit tests are executed, giving us immediate feedback about the functional correctness of the build. A broken build or failed unit tests trigger alerts to the developers who checked in code to the respective code base since the last successful build. This gives us the chance to immediately fix functional regressions.

Twice a day we also execute the performance unit tests described above. In case of a performance regression the same alerting mechanism is triggered, meaning that the developers who made code modifications are automatically notified about the problem. Larger-scale performance tests for critical features are executed once a day, as it is not feasible to run them more often.
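
As a hypothetical sketch of the kind of threshold check such a CI job might apply – the 10% tolerance and the baseline handling are illustrative assumptions, not dynaTrace's actual alerting logic:

```java
/** Hypothetical regression gate a CI job could run after the performance tests. */
public class RegressionGate {
    public static void main(String[] args) {
        double baselineMs = Double.parseDouble(args[0]); // from the last known-good build
        double currentMs = Double.parseDouble(args[1]);  // from this build's test run
        double tolerance = 0.10;                         // assumed: alert above +10%

        if (currentMs > baselineMs * (1 + tolerance)) {
            System.err.printf("Performance regression: %.1f ms vs baseline %.1f ms%n",
                    currentMs, baselineMs);
            System.exit(1);  // fail the build -> CI notifies the committing developers
        }
        System.out.println("Performance within tolerance.");
    }
}
```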

Providing the information to the developer
In addition to the unit test results – whether we are talking about functional or performance tests – we capture transactional tracing data (PurePaths) with dynaTrace Continuous APM (this is where we "eat our own dog food"). dynaTrace runs on the continuous integration environment and traces all tests, starting from the test method through the components being tested.

Not only do we use dynaTrace to capture transaction-based information such as executed methods, method arguments, SQL statements, and exceptions – we also use dynaTrace dashboards to make the data easily accessible to everybody. The developer is notified via automatically triggered email alerts if a threshold is violated. Afterwards he can look at our dashboards, which show execution times of individual test cases over time. This is great for spotting performance regressions at the entry-point level. From here we can drill down to a triage dashboard to identify the root cause of the regression, e.g. unnecessary exceptions or method calls causing overhead. All this is available at the fingertips of the developer or architect who needs to look at the details.

A Real-Life Example on how to prevent problems from shipping with the product

Now let's look at a real-life example. For version 3.1 of dynaTrace we changed the way we read and write memory dumps. We expected huge performance improvements from that change. In order to verify this, and to compare it against the existing implementation, we created a set of load tests that exercised the memory-dump feature with different sizes of memory dumps.
The following graph shows the load-testing results for the different dump sizes.

Performance results over time for different test use cases

We executed the tests back in May against the existing implementation to establish a baseline. On May 19th we ran the first test with the new implementation. We observed that most of the tests ran faster – but some were significantly slower. It turned out this was due to an incorrect internal cache strategy that was really fast for large amounts of data but slow for small dumps. Once this problem was fixed, we got much better times overall with the new implementation.

Further down the road, two bug fixes that were not intended to "hurt" performance actually impacted the performance of a single use case dramatically. With the tests in place, and with the detailed results collected, it was easy to identify the problem caused by the fix and solve it in no time.
The graph also shows further performance improvements made in early June, verifying that they actually improved the new version.

What was necessary to catch this problem?
It was necessary to have the correct test cases in place. In this case it was essential to run the same test in different environments with different input data; otherwise the problem would have remained undetected.
Running these tests twice a day allowed the developer who committed the fix to go back into code he was still very familiar with and fix the problem for those scenarios that were not tested on his local machine but were tested in the CI environment.
dynaTrace Continuous APM is used to analyze test executions. It provides accurate timings and in-depth tracing information, including method execution times, argument values, database statements, exceptions, ... This information is essential for analyzing the root cause of a problem quickly, in order to bring performance back on track.

Conclusion

dynaTrace uses dynaTrace Continuous APM internally to live what we believe is the correct approach to Continuous Application Performance Management. We learned that an APM solution is only one piece of the puzzle in a development environment. In order to do performance management, it is essential to have good, stable test cases that are executed continuously. The Continuous APM solution then enables you to identify regressions early on, providing your developers the in-depth information they need to minimize bug-fixing time.

Here is the dynaTrace Test Automation checklist for successful performance changes:

  • Create performance tests before improving a feature's performance
  • Run them at least 5 times to make sure they are stable
  • Implement the performance improvement
  • Rerun the tests and check whether your assumptions are correct

Related reading:

  1. Continuous Performance Management in Development
  2. Performance Management in Continuous Integration
  3. 5 Steps to Automate Browser Performance Analysis with Watir and dynaTrace AJAX Edition
  4. Visual Studio Team System for Unit-, Web- and Load-Testing with dynaTrace
  5. Do more with Functional Testing – Take the Next Evolutionary Step

More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
