Eating Our Own Dog Food: dynaTrace Does Continuous APM

How dynaTrace does continuous APM internally in development

I sat down with Stefan Frandl, Test Automation Lead in dynaTrace's R&D Lab in Linz, Austria, to discuss how dynaTrace does Continuous APM in development. dynaTrace obviously takes performance very seriously, as we preach to our clients that Continuous Application Performance Management is a critical component across the application lifecycle. The earlier in the lifecycle you get your performance under control, the less you have to worry about actual problems later on when you ship your product.

In our discussion, Stefan talked about how dynaTrace transitioned from traditional performance management to where we are now: we "eat our own dog food" and live the dynaTrace Continuous APM message.

In this article we learn that it is not simply a matter of plugging in an APM solution and having all your performance problems detected automatically. It is about building a robust continuous integration environment with meaningful functional and performance tests. It is about having buy-in from your engineers. It is about figuring out what needs to be measured, what can be measured, which measures you can actually trust, and which measures indicate your performance health status.

What are the problems that performance management in development solves?

I've been a developer for many years, so I have no problem ranting a bit about an attitude many of us share: we always "think" that our code is fast enough. And that might be true on my local dev machine, with enough RAM and CPU power to easily handle the single-user load when testing the newly implemented feature or recent bug fix.
In addition to this problem, Stefan listed the following areas:

  • Continuous changes on the codebase by different people over a longer period of time increase the probability of small problems sneaking in and accumulating over time into big problems
  • Multiple “New Features” or “Bug Fixes” across the code base from different developers impacting each other
  • Different hardware reveals different problems – especially in multi-threaded environments
  • Other software running on the target machine impacts your application performance

Executing performance tests only at the end of a sprint/iteration, or as the very last step before a product release, uncovers all the small accumulated problems and environment-related problems at once. Finding all these problems late means additional effort for the dev team to analyze them (going back through the change log, getting back into the code, ...) and jeopardizes the project schedule.

Furthermore, a developer cannot verify whether his improvements are real improvements or only improve the product in his local test setup.

Therefore: focus early and continuously on the performance aspect of your code.

Why the traditional approach failed

Prior to “eating our own dog food” we approached performance management in two traditional ways:

Using Profilers
Developers used profilers on their local machines to identify hot spots in manually executed test cases, e.g. clicking through the main use case of the new feature. This is of course a valid approach and identifies general performance problems like non-optimized algorithms, inefficient usage of collections, and "wasted" memory.

The usage of a profiler is limited to low-load environments. Why is that? Because profilers, in order to capture all this information, have a significant impact on application performance and don't work well under heavy load. That means that problems which happen outside the "one user" test scenario are harder to catch with a profiler. That's not to say it's impossible, as you can run profilers in different modes that lower the overhead, but then you often don't get the detailed data you need. So there is a big trade-off here. Concurrency problems often only occur in high-load scenarios which can't be covered by profilers, so these problems remain undiscovered.

Using manual timings
Adding custom timers in the code is another approach that was used. Developers added their own time-measuring statements to what they believed were the critical methods in their code (a minimal sketch of this approach follows the list below). This approach works better in high-load environments, but it brings three problems with it:

  1. it requires code changes and is limited to your own code
  2. you have to manually dig through the collected information and try to make sense of it; timings alone often don't help, so you need additional logging to capture things like method arguments, ...
  3. it is very hard to compare results collected from different machines (e.g. different hardware)
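
To make this concrete, here is a minimal sketch of the manual-timing approach. The class and method names are hypothetical, not from the dynaTrace code base: a timing statement is hand-coded around the suspected hot method, and because the timing alone rarely explains anything, arguments end up in the log as well.

```java
import java.util.logging.Logger;

public class CheckoutService {
    private static final Logger LOG = Logger.getLogger(CheckoutService.class.getName());

    // Hand-instrumented business method (hypothetical example)
    public void processOrder(String orderId) {
        long start = System.nanoTime();           // manual timing statement
        doProcessOrder(orderId);                  // the code we suspect is slow
        long durationMs = (System.nanoTime() - start) / 1_000_000;
        // Timings alone rarely explain a regression, so arguments get logged too (problem 2)
        LOG.info("processOrder(" + orderId + ") took " + durationMs + " ms");
    }

    private void doProcessOrder(String orderId) {
        // placeholder for the real business logic
    }
}
```

Every method worth measuring needs the same boilerplate, and comparing the resulting log lines across differently sized machines is left entirely to the reader, which is exactly why this approach does not scale.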

The manual effort on the one side and the inability to manage performance under heavy load on the other made the traditional approaches fail.

The challenge of accurate and stable measurement

Measuring execution time is not hard, given all the options that the runtime, application server, or operating system provides. But it is not easy to measure the right thing, and to measure it accurately. Here are some of the problems Stefan ran into when measuring execution time:

Garbage Collection
In a managed environment like Java or .NET, the Garbage Collector plays a big role in application performance. GC runs and their impact on performance are unpredictable across runs: even though you run the same test in the same environment with the same parameters, it doesn't mean the GC runs consistently. Measuring execution times of methods in Java and .NET applications therefore requires extracting the time the Garbage Collector "suspended" a method from execution. What does that mean? When you take a timestamp at the beginning and at the end of the method, the difference is not necessarily the pure execution time. If the GC kicked in while your method was executing, it inflates your measured time. To get an accurate execution time it is therefore necessary to subtract the GC collection time.

dynaTrace supports this use case by measuring execution time both including and excluding GC. It is still very important to monitor GC times and GC activations: a fast implementation that produces a lot of garbage, and therefore adds high load to the CPU, may degrade the overall performance of the system by starving other threads.

dynaTrace captures the runtime suspension time per method and per transaction
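
dynaTrace does this transparently per method and per transaction, but the underlying idea can be illustrated with the standard JMX beans. The following is only a minimal sketch, not dynaTrace's implementation: it approximates "execution time excluding GC" by subtracting the process-wide GC collection time that accrued while the measured code ran (workload is a hypothetical stand-in).

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcAwareTiming {

    // Sum of collection times (ms) across all garbage collectors in this JVM
    private static long totalGcTimeMs() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long t = gc.getCollectionTime();
            if (t > 0) total += t;   // -1 means this collector does not report times
        }
        return total;
    }

    public static void main(String[] args) {
        long gcBefore = totalGcTimeMs();
        long start = System.nanoTime();

        workload();   // hypothetical method under test

        long wallMs = (System.nanoTime() - start) / 1_000_000;
        long gcMs = totalGcTimeMs() - gcBefore;
        System.out.println("including GC: " + wallMs
                + " ms, excluding GC: " + (wallMs - gcMs) + " ms");
    }

    private static void workload() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 2_000_000; i++) sb.append(i);   // allocates heavily, provoking GC runs
        if (sb.length() == 0) System.out.println("never happens");  // keep the work observable
    }
}
```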

Intel SpeedStep
Some of the test machines that Stefan uses showed very volatile test results: the same test on the same code base did not return stable results. One of the biggest challenges is to come up with a test environment that produces accurate and stable measures. In this particular case it turned out that Intel SpeedStep, which dynamically adjusts the CPU clock speed, caused the unpredictable performance behaviour. This might not be true for all of you out there, but it is a good data point that will hopefully help some of you when trying to find a stable test environment.

CPU Timings under Windows
Besides execution time (taking a timestamp at the beginning and at the end of the method call), it is possible to get the actual time the executing thread spent on the CPU. This is a very valuable measure: the difference between CPU time and execution time is the time spent waiting for I/O, the database, a remote call, or synchronization.
Another hint from Stefan: CPU timings on Windows operating systems can be very inaccurate and are therefore ignored for short runs, as they don't give us enough value.
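
On the JVM this difference is directly observable via ThreadMXBean. The sketch below is my own illustration, not a dynaTrace internal: it times a thread that first waits and then burns CPU, and the gap between wall time and CPU time is exactly the waiting portion Stefan describes.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class CpuVsWallTime {
    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isCurrentThreadCpuTimeSupported()) {
            System.out.println("CPU timing not supported on this platform");
            return;
        }

        long wallStart = System.nanoTime();
        long cpuStart = threads.getCurrentThreadCpuTime();   // nanoseconds actually spent on the CPU

        Thread.sleep(200);                                   // waiting: wall time grows, CPU time barely moves
        long x = 0;
        for (int i = 0; i < 50_000_000; i++) x += i;         // busy work: both clocks advance

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        System.out.println("wall: " + wallMs + " ms, cpu: " + cpuMs
                + " ms, waiting: " + (wallMs - cpuMs) + " ms (checksum " + x + ")");
    }
}
```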

dynaTrace APM is used to manage application performance. Built-in features that separate GC time from execution time, the ability to capture CPU timings (on operating systems where these values make sense), and the fact that dynaTrace traces individual transactions across tiers down to the method level with almost no overhead enable us to use this data for performance management.

If you want to get stable results you have to have stable tests

We learned that there are certain aspects to consider when collecting execution-time measures, e.g. extracting GC times. To get stable results it is not only necessary to have a stable environment and a stable way to measure timing; the key is to have stable tests with realistic test data in a realistic environment.
dynaTrace uses two types of performance tests: unit tests that performance-test certain features, and SilkPerformer tests that load-test our event collection and transmission by putting the applications that are actively traced and monitored under load.

Getting Stable and realistic Unit Tests
The unit tests are executed in all the different environments that the dynaTrace software runs on. The key to stable results is that every test method has a setup and teardown to ensure everything is cleaned up before the next test case executes. The test method itself then runs through several "warm-up" runs, followed by multiple test runs that are taken for performance measurement. The warm-up phase is necessary to rule out performance impacts of a JVM/CLR that has just started, heap spaces that have not yet reached their "normal" utilization level, or caches that are not yet fully initialized. How long does the warm-up phase last? The criterion used here is an execution-time volatility of < 5%: the warm-up phase runs until execution times have stabilized. Multiple test runs follow, and average execution values across these runs are taken to validate the performance of the tested feature.

The dynaTrace Test Automation team has invested a lot in a home-grown testing framework that executes tests and takes measurements following the approach described above.

Warm-Up phase runs until tests produce stable results
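
Since the actual framework is home-grown, the following is only a sketch of the described logic under my own assumptions (the sliding-window size and the iteration cap are mine): keep re-running the test case until the volatility, measured as the coefficient of variation over a window of recent timings, drops below 5%, then average the actual measurement runs.

```java
import java.util.ArrayList;
import java.util.List;

public class WarmUpRunner {
    private static final double MAX_VOLATILITY = 0.05;   // the < 5% criterion described above
    private static final int WINDOW = 10;                // assumed sliding-window size
    private static final int MAX_WARMUP_RUNS = 1000;     // safety cap so unstable tests still terminate

    /** Warms up until timings stabilize, then returns the average of the measured runs in ms. */
    public static double measure(Runnable testCase, int measuredRuns) {
        List<Long> window = new ArrayList<>();
        for (int run = 0; run < MAX_WARMUP_RUNS; run++) {             // warm-up phase
            window.add(timeNs(testCase));
            if (window.size() > WINDOW) window.remove(0);
            if (window.size() == WINDOW && volatility(window) < MAX_VOLATILITY) break;
        }
        long totalNs = 0;                                             // measurement phase
        for (int i = 0; i < measuredRuns; i++) totalNs += timeNs(testCase);
        return totalNs / (double) measuredRuns / 1_000_000.0;
    }

    private static long timeNs(Runnable r) {
        long start = System.nanoTime();
        r.run();
        return System.nanoTime() - start;
    }

    // Coefficient of variation: standard deviation relative to the mean
    private static double volatility(List<Long> timings) {
        double mean = timings.stream().mapToLong(Long::longValue).average().orElse(1.0);
        double variance = timings.stream()
                .mapToDouble(t -> (t - mean) * (t - mean))
                .average().orElse(0.0);
        return Math.sqrt(variance) / mean;
    }
}
```

A call like WarmUpRunner.measure(() -> myFeature(), 10) then yields a number that is comparable from run to run, which is the whole point of the warm-up phase.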

Getting Stable Load Testing Results
The concept of a warm-up phase followed by a measured testing phase is not new in general. The SilkPerformer load tests we use work the same way: SilkPerformer has a built-in warm-up and measurement period, which made it easy to apply this process to these kinds of load tests as well.

So Stefan, how often do you run your tests and what happens if things go wrong?

We use QuickBuild as our build/continuous integration server. Every time a build is triggered, all functional unit tests are executed, giving us immediate feedback about the functional correctness of the build. A broken build or failed unit tests trigger alerts to the developers who checked code into the respective code base since the last successful build. This gives us the chance to fix functional regressions immediately.

Twice a day we also execute the performance unit tests described above. In case of a performance regression, the same alerting mechanism is triggered: the developers who made code modifications are automatically notified about the problem. Larger-scale performance tests for critical features are executed once a day, as it is not feasible to run them more often.

Providing the information to the developer
In addition to the unit test results, whether we are talking about functional or performance tests, we capture transactional tracing data (PurePaths) with dynaTrace Continuous APM (this is where we "eat our own dog food"). dynaTrace runs in the Continuous Integration environment and traces all tests, starting from the test method down through the components being tested.

Not only do we use dynaTrace to capture transaction-based information like executed methods, method arguments, SQL statements, and exceptions; we also use dynaTrace dashboards to make the data easily accessible to everybody. Developers are notified via automatically triggered email alerts if a threshold is violated. They can then look at dashboards that show execution times of individual test cases over time, which is great for spotting performance regressions at the entry-point level. From there we can drill down to a triage dashboard to identify the root cause of the regression, e.g. unnecessary exceptions or method calls causing overhead. All of this is available at the fingertips of the developer or architect who needs to look at the details.

A real-life example of how to prevent problems from shipping with the product

Now let's look at a real-life example. For version 3.1 of dynaTrace we changed the way we read and write memory dumps and expected huge performance improvements from that change. To verify this, and to compare it against the existing implementation, we created a set of load tests that exercised the memory dump feature with different sizes of memory dumps.
The following graph shows the load testing results for the different dump sizes.

Performance results over time for different test use cases

We executed the tests back in May against the existing implementation to establish a baseline. On May 19th we ran the first test with the new implementation. Most of the tests ran faster, but some were significantly slower. It turned out this was due to an incorrect internal cache strategy that was really fast for large amounts of data but slow for small dumps. Once this problem was fixed we got much better times overall with the new implementation.

Further down the road, two bug fixes that were not intended to "hurt" performance actually impacted the performance of a single use case dramatically. With the tests in place and detailed results collected, it was easy to identify the problem caused by the fix and solve it in no time.
The further performance improvements made in early June can also be seen in the graph and verified as real improvements to the new version.

What was necessary to catch this problem?
It was necessary to have the correct test cases in place. In this case it was essential to run the same test in different environments with different input data; otherwise the problem would have remained undetected.
Running these tests twice a day allowed the developer who committed the fix to go back into code he was still very familiar with and fix the problem for those scenarios that were not tested on his local machine but were tested in the CI environment.
dynaTrace Continuous APM is used to analyze the test executions. It provides accurate timings and in-depth tracing information, including method execution times, argument values, database statements, exceptions, and more. This information is essential for analyzing the root cause of a problem quickly and bringing performance back on track.

Conclusion

dynaTrace uses dynaTrace Continuous APM internally to live what we believe is the correct approach to Continuous Application Performance Management. We learned that an APM solution is one piece of the puzzle in a development environment. To do performance management, it is essential to have good, stable test cases that are executed continuously. The Continuous APM solution then enables you to identify regressions early on, providing your developers with the in-depth information they need to minimize bug-fixing time.

Here is the dynaTrace Test Automation checklist for successful performance changes (a minimal sketch of such a performance test follows the list):

  • Create the performance tests before improving the feature's performance
  • Run them at least 5 times to be sure that they are stable
  • Implement the performance improvement of the feature
  • Rerun the tests and check whether your assumptions are correct
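
As a sketch of how the first two checklist items can look in practice, a performance unit test is written before the optimization and kept as a guard afterwards. Everything here is hypothetical (the feature stub, the baseline number, JUnit 4 as the harness); a real setup would use baselines gathered with the warm-up approach described earlier.

```java
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class MemoryDumpPerformanceTest {
    private static final int RUNS = 5;              // checklist: run at least 5 times
    private static final long BASELINE_MS = 1200;   // assumed baseline from the old implementation
    private static final double TOLERANCE = 1.10;   // allow 10% measurement noise

    @Test
    public void readDumpIsNotSlowerThanBaseline() {
        long totalMs = 0;
        for (int i = 0; i < RUNS; i++) {
            long start = System.nanoTime();
            readMemoryDump();                       // feature under test
            totalMs += (System.nanoTime() - start) / 1_000_000;
        }
        long averageMs = totalMs / RUNS;
        assertTrue("average " + averageMs + " ms exceeds baseline "
                + BASELINE_MS + " ms", averageMs <= BASELINE_MS * TOLERANCE);
    }

    private void readMemoryDump() {
        // hypothetical stand-in for the real memory dump feature
    }
}
```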


More Stories By Andreas Grabner

Andreas Grabner has been helping companies improve their application performance for 15+ years. He is a regular contributor within Web Performance and DevOps communities and a prolific speaker at user groups and conferences around the world. Reach him at @grabnerandi
