Eating Our Own Dog Food: dynaTrace Does Continuous APM

How dynaTrace does continuous APM internally in development

I sat together with Stefan Frandl, Test Automation Lead in dynaTrace’s R&D Lab in Linz, Austria, to discuss how dynaTrace does Continuous APM in Development. Obviously dynaTrace takes performance very seriously, as we preach to our clients that Continuous Application Performance Management is a critical component across the Application Lifecycle. The earlier in the lifecycle you get performance under control, the less you have to worry about actual problems later on, when you ship your product.

In the discussion I had with Stefan, he talked about how dynaTrace transitioned from traditional performance management to where we are now – which means we “eat our own dog food” and live the dynaTrace Continuous APM message.

In this article we learn that it is not simply a matter of plugging in an APM solution and having all your performance problems detected automatically. It is about building a robust continuous integration environment with meaningful functional and performance tests. It is about having buy-in from your engineers. It is about figuring out what needs to be measured, what can be measured, which measures you can actually trust, and which measures indicate your performance health status.

What are the problems that performance management in development solves?

I’ve been a developer for many years – so I have no problem ranting a bit about an attitude that many of us have: we always “think” that our code is fast enough. And that might be true on my local dev machine, with enough RAM and CPU power to easily handle the single-user load when I test the newly implemented feature or recent bug fix.
In addition to this problem, Stefan listed the following areas:

  • Continuous changes on the codebase by different people over a longer period of time increase the probability of small problems sneaking in and accumulating over time into big problems
  • Multiple “New Features” or “Bug Fixes” across the code base from different developers impacting each other
  • Different hardware reveals different problems – especially in multi-threaded environments
  • Other software running on the target machine impacts your application performance

Executing performance tests only at the end of a sprint/iteration, or as the very last step before a product release, uncovers all the small accumulated problems and environment-related problems at once. Finding all these problems late means additional effort for the dev team to analyze them (going back through the change log, getting back into the code, …) and jeopardizes the project schedule.

Furthermore, a developer cannot verify whether his improvements are real improvements or only improve the product in his local test setup.

Therefore: focus early and continuously on the performance aspect of your code.

Why the traditional approach failed

Prior to “eating our own dog food” we approached performance management in two traditional ways:

Using Profilers
Developers used profilers on their local machines to identify hot spots in manually executed test cases, e.g. clicking through the main use case of a new feature. This is of course a valid approach and identifies general performance problems like non-optimized algorithms, inefficient usage of collections, “wasted” memory, and so on.

The usage of a profiler is limited to low-load environments. Why is that? Because profilers – in order to capture all this information – have a significant impact on application performance and don’t work well under heavy load. That means problems that happen outside the “one user” test scenario are harder to catch with a profiler. I’m not saying it’s impossible, as you can run profilers in different modes that lower the overhead – but then you often don’t get the detailed data you need. So there is a big trade-off here. Concurrency problems often only occur in high-load scenarios, which can’t be covered by profilers, so these problems remain undiscovered.

Using manual timings
Adding custom timers to the code is another approach that was used. Developers added their own time-measuring statements to what they believed were the critical methods in their code. This approach works better in high-load environments – but it brings three problems with it (a minimal sketch of the approach follows the list):

  1. it requires code changes and it is limited to your own code
  2. you have to manually dig through the collected information and try to make sense of it – timings alone often don’t help, so you need additional logging to capture things like method arguments, …
  3. it’s very hard to compare results collected from different machines (e.g. different hardware)
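
Purely as an illustration of the manual-timing approach described above – this is not dynaTrace code, and the class, method and workload are made up – a minimal Java sketch might look like this:

```java
// Illustrative only -- not dynaTrace code. A hand-rolled timer of the kind
// described above: every measured method needs a code change, and extra
// logging is required to make the raw numbers meaningful.
public class ManualTimingExample {

    public static void main(String[] args) {
        processOrders(500); // hypothetical workload
    }

    static void processOrders(int count) {
        long start = System.nanoTime();
        try {
            for (int i = 0; i < count; i++) {
                // ... the actual work being measured ...
            }
        } finally {
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            // Timings alone rarely explain a regression, so arguments get
            // logged as well -- and the manual instrumentation keeps growing.
            System.out.println("processOrders(count=" + count + ") took " + elapsedMs + " ms");
        }
    }
}
```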

The manual effort on the one hand and the inability to manage performance under heavy load on the other made the traditional approaches fail.

Challenges with accurate and stable measurement

Measuring execution time is not hard, given all the options the runtime, application server or operating system provides. But it is not easy to measure the right thing and to measure it accurately. Here are some of the problems Stefan ran into when measuring execution time:

Garbage Collection
In managed environments like Java and .NET, the Garbage Collector plays a big role in application performance. GC runs and their impact on performance are unpredictable: even if you run the same test in the same environment with the same parameters, the GC does not necessarily behave consistently. Measuring execution times of methods in Java and .NET applications therefore requires extracting the time the Garbage Collector “suspended” a method from execution. What does that mean? When you take a timestamp at the beginning and at the end of a method, the difference is not necessarily the pure execution time; if the GC kicked in while your method was executing, it inflates your measured time. To get accurate execution times it is therefore necessary to subtract the GC collection time. dynaTrace supports this use case by measuring execution time both including and excluding GC. It is still very important to monitor GC times and GC activations, though: a fast implementation that produces a lot of garbage, and therefore puts high load on the CPU, may degrade the overall performance of the system by starving other threads.
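
To make the “subtract GC time” idea concrete, here is a minimal Java sketch that uses the standard GarbageCollectorMXBean API to report execution time including and excluding GC. It illustrates the principle only and is not how dynaTrace implements it; the class name and the workload are made up:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Minimal illustration of the principle described above: measure wall-clock
// time around a piece of work and subtract the time the JVM spent in GC
// during the same window. Not dynaTrace code -- just the standard JMX beans.
public class GcAdjustedTimer {

    // Sum of accumulated collection times (ms) reported by all collectors so far.
    private static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime(); // -1 if not supported
            if (t > 0) {
                total += t;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        long gcBefore = totalGcTimeMillis();
        long start = System.nanoTime();

        // ... code under test; this dummy workload produces garbage on purpose ...
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 1_000_000; i++) {
            sb.append(i);
        }

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        long gcMs = totalGcTimeMillis() - gcBefore;

        System.out.println("Execution time incl. GC: " + elapsedMs + " ms");
        System.out.println("Execution time excl. GC: " + (elapsedMs - gcMs) + " ms");
    }
}
```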

dynaTrace captures the runtime suspension time per method and per transaction

Intel SpeedStep
Some of the testing machines that Stefan uses showed very volatile test results: the same test on the same code base did not return stable results. One of the biggest challenges is to come up with a test environment that produces accurate and stable measures. In this particular case it turned out that Intel SpeedStep caused the unpredictable performance behaviour. This might not be true for all of you out there – but it’s a good data point that will hopefully help some of you when trying to set up a stable test environment.

CPU Timings under Windows
Besides execution times – meaning taking a timestamp at the beginning and at the end of the method call to measure execution time – it is possible to get the actual time spent on the CPU for the executing thread. This is a very valuable measure. The difference between CPU and Execution Time is explained by time waiting for I/O, the database, a remote call or time waiting on synchronization.
Another hint from Stefan: CPU timings on Windows Operating Systems can be very inaccurate and are therefore ignored for short runs as they don’t give us enough value.
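
The difference between CPU time and execution time can be observed with the JDK’s standard ThreadMXBean. The following is an illustrative sketch, not dynaTrace instrumentation; the class name and the workload are made up:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Illustrates the difference between elapsed (wall-clock) time and CPU time
// for the current thread. Work that waits -- on I/O, the database, a remote
// call or a lock -- burns elapsed time but little CPU time.
public class CpuVsElapsedTime {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isCurrentThreadCpuTimeSupported()) {
            System.out.println("CPU timing not supported on this JVM/OS");
            return;
        }

        long cpuStart = threads.getCurrentThreadCpuTime(); // nanoseconds
        long wallStart = System.nanoTime();

        // Some CPU-bound work ...
        long sum = 0;
        for (int i = 0; i < 50_000_000; i++) {
            sum += i;
        }
        // ... followed by simulated waiting (e.g. for I/O or a remote call).
        Thread.sleep(200);

        long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;

        System.out.println("Elapsed time: " + wallMs + " ms, CPU time: " + cpuMs
                + " ms (difference = waiting; sum=" + sum + ")");
    }
}
```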

We use dynaTrace APM to manage application performance. Built-in features that separate GC time from execution time, the ability to capture CPU timings (on operating systems where these values make sense), and the fact that dynaTrace traces individual transactions across tiers down to the method level with almost no overhead enable us to use this data for performance management.

If you want to get stable results you have to have stable tests

We learned that there are certain aspects to consider when collecting execution time measures, e.g. extracting GC times. In order to get stable results it is not only necessary to have a stable environment and a stable way to measure timing – the key is to have stable tests with realistic test data in a realistic environment.
dynaTrace uses two types of load tests: Unit tests that performance-test certain features, and SilkPerformer tests that load-test our event collection and sending by putting the applications that are actively traced and monitored under load.

Getting Stable and realistic Unit Tests
The Unit tests are executed in all of the different environments that the dynaTrace software runs on. The key to getting stable results is that every test method has a setup and teardown phase to ensure that everything is cleaned up before the next test case executes. The test method itself then runs through several “Warm-Up” runs followed by multiple Test Runs that are taken for the performance measurement. The Warm-Up phase is necessary to rule out the performance impact of a JVM/CLR that has just started, of heap spaces that have not yet reached their “normal” utilization level, or of, e.g., not yet fully initialized caches. How long does the Warm-Up phase last? The indication used here is an execution time volatility of less than 5%: the warm-up phase runs until the execution times have stabilized. Multiple Test Runs follow, and the average execution values across these runs are taken to validate the performance of the tested feature.

The dynaTrace Test Automation Team has invested a lot into a home-grown testing framework that allows the execution and the measurement of the above-described approach.
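
That framework is internal, but purely as an illustration of the warm-up/measurement pattern described above – warm up until execution time volatility drops below 5%, then average several measured runs – a minimal Java sketch could look like this (all names, the window size and the run counts are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the warm-up/measurement pattern described above: repeat the test
// until the last few execution times vary by less than ~5%, then average a
// fixed number of measured runs. Illustrative only -- the home-grown dynaTrace
// framework mentioned in the article is not shown here.
public class WarmUpHarness {

    interface TestCase {
        void setUp();
        void run();
        void tearDown();
    }

    static double timeOneRunMillis(TestCase test) {
        test.setUp();
        long start = System.nanoTime();
        test.run();
        double millis = (System.nanoTime() - start) / 1_000_000.0;
        test.tearDown();
        return millis;
    }

    // Warm up until the relative spread of the last `window` runs is below 5%.
    static void warmUp(TestCase test, int window, int maxRuns) {
        List<Double> recent = new ArrayList<>();
        for (int i = 0; i < maxRuns; i++) {
            recent.add(timeOneRunMillis(test));
            if (recent.size() > window) {
                recent.remove(0);
            }
            if (recent.size() == window) {
                double min = recent.stream().mapToDouble(Double::doubleValue).min().getAsDouble();
                double max = recent.stream().mapToDouble(Double::doubleValue).max().getAsDouble();
                if (max > 0 && (max - min) / max < 0.05) {
                    return; // execution times have stabilized
                }
            }
        }
    }

    static double measureAverageMillis(TestCase test, int runs) {
        double total = 0;
        for (int i = 0; i < runs; i++) {
            total += timeOneRunMillis(test);
        }
        return total / runs;
    }

    public static void main(String[] args) {
        TestCase test = new TestCase() {
            public void setUp() { /* prepare clean state */ }
            public void run() {
                long sum = 0;
                for (int i = 0; i < 5_000_000; i++) { sum += i; } // dummy workload
            }
            public void tearDown() { /* clean up */ }
        };
        warmUp(test, 5, 100);
        System.out.println("Average after warm-up: " + measureAverageMillis(test, 10) + " ms");
    }
}
```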

Warm-Up phase runs until the tests produce stable results

Getting Stable Load Testing Results
The concept of a Warm-Up phase followed by an actual measured testing phase is not new. The SilkPerformer load tests we use work in the same way: SilkPerformer has a built-in Warm-Up and Measurement Period, which made it easy for us to apply this process to these kinds of load tests as well.

So Stefan – How often do you run your tests and what happens if things go wrong?

We use QuickBuild as our build/continuous integration server. Every time a build is triggered, all functional Unit tests are executed, giving us immediate feedback about the functional correctness of the build. A broken build or failed Unit tests trigger alerts to those developers who checked code into the respective code base since the last successful build. This gives us the chance to fix functional regressions immediately.

Twice a day we also execute the performance Unit tests described above. In case of a performance regression the same alerting mechanism is triggered, meaning that the developers who made code modifications are automatically notified about the problem. Larger-scale performance tests for critical features are executed once a day, as it is not feasible to run them more often.
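
One way to picture such a performance regression gate is a simple comparison of the averaged execution time against a stored baseline with a tolerance. The following is a hypothetical illustration, not the actual QuickBuild/dynaTrace integration; the names, numbers and tolerance are made up:

```java
// Hypothetical illustration of a performance regression gate of the kind
// described above: compare the averaged execution time of a test against a
// stored baseline and fail (which triggers the CI alert) if it regresses by
// more than a tolerance. Not the actual QuickBuild/dynaTrace integration.
public class RegressionGate {

    static void checkAgainstBaseline(String testName, double measuredMs,
                                     double baselineMs, double tolerance) {
        double allowed = baselineMs * (1.0 + tolerance);
        if (measuredMs > allowed) {
            // In the real pipeline this failure would notify the developers
            // who committed code since the last successful run.
            throw new AssertionError(testName + " regressed: " + measuredMs
                    + " ms > allowed " + allowed + " ms (baseline " + baselineMs + " ms)");
        }
    }

    public static void main(String[] args) {
        // Example: baseline 120 ms, 15% tolerance, measured 128 ms -> passes.
        checkAgainstBaseline("MemoryDumpReadSmall", 128.0, 120.0, 0.15);
        System.out.println("No performance regression detected");
    }
}
```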

Providing the information to the developer
In addition to the Unit test results – whether we are talking about functional or performance tests – we capture transactional tracing data (PurePaths) with dynaTrace Continuous APM (this is where we “eat our own dog food”). dynaTrace runs in the Continuous Integration environment and traces all tests, starting from the test method down through the components being tested.

Not only do we use dynaTrace to capture transaction-based information like executed methods, method arguments, SQL statements, exceptions, … – we also use dynaTrace Dashboards to make the data easily accessible to everybody. If a threshold is violated, the developer is notified via an automatically triggered email alert. He can then look at our dashboards, which show execution times of individual test cases over time. This is great for seeing performance regressions at the entry-point level. From there we can drill down to a Triage Dashboard to identify the root cause of the regression, e.g. unnecessary exceptions or method calls causing overhead. All of this is at the fingertips of the developer or architect who needs to look at the details.

A Real-Life Example of how to prevent problems from shipping with the product

Now let’s look at a real-life example. For version 3.1 of dynaTrace we changed the way we read and write memory dumps, and we expected huge performance improvements from that change. In order to verify this, and to compare it against the existing implementation, we created a set of load tests that exercised the memory dump feature with memory dumps of different sizes.
The following graph shows the load-testing results for the different dump sizes.

Performance results over time for different test use cases

We executed the tests back in May against the existing implementation to establish a baseline. On May 19th we ran the first test with the new implementation. We observed that most of the tests ran faster – but some were significantly slower. It turned out this was due to an incorrect internal cache strategy that was really fast for large amounts of data but slow for small dumps. Once this problem was fixed we got much better times overall with the new implementation.

Further down the road, two bug fixes that were not intended to “hurt” performance actually impacted the performance of a single use case dramatically. With the tests in place and with detailed results being collected, it was easy to identify the problem caused by the fix and solve it in no time.
Further performance improvements made in early June can also be seen in the graph and verified to have actually improved the new version.

What was necessary to catch this problem?
It was necessary to have the correct test cases in place. In this case it was essential to run the same test in different environments with different input data; otherwise the problem would have remained undetected.
Running these tests twice a day allowed the developer who committed the fix to go back into code he was still very familiar with and fix the problem for those scenarios that were not covered on his local machine but were tested in the CI environment.
dynaTrace Continuous APM is used to analyze the test executions – it provides accurate timings and in-depth tracing information, including method execution times, argument values, database statements, exceptions, … This information is essential for analyzing the root cause of a problem quickly in order to bring performance back on track.

Conclusion

dynaTrace uses dynaTrace Continuous APM internally to live what we believe is the correct approach to Continuous Application Performance Management. We learned that an APM solution is only one piece of the puzzle in a development environment. In order to do performance management it is essential to have good, stable test cases that are executed continuously. The Continuous APM solution then enables you to identify regressions early on, providing your developers the in-depth information they need to minimize bug-fixing time.

Here is the dynaTrace Test Automation checklist for successful performance changes:

  • Create performance tests before improving the performance of a feature
  • Run them at least 5 times to be sure that they are stable
  • Implement the performance improvement
  • Rerun the tests and check whether your assumptions are correct


More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
