Eating Our Own Dog Food: dynaTrace Does Continuous APM

How dynaTrace does continuous APM internally in development

I sat down with Stefan Frandl, Test Automation Lead in dynaTrace’s R&D Lab in Linz, Austria, to discuss how dynaTrace does Continuous APM in Development. dynaTrace obviously takes performance very seriously, as we preach to our clients that Continuous Application Performance Management is a critical component across the Application Lifecycle. The earlier in the lifecycle you get your performance under control, the less you have to worry about actual problems later on when you ship your product.

In the discussion I had with Stefan he talked about how dynaTrace transitioned from traditional performance management to where we are now – which means: “eat our own dog food” and “live the dynaTrace Continuous APM message”.

In this article we learn that it is not simply a matter of plugging in an APM solution and having all your performance problems detected automatically. It is about building a robust continuous integration environment with meaningful functional and performance tests. It is about getting “buy-in” from your engineers. It is about figuring out what needs to be measured, what can be measured, which measures you can actually trust, and which measures indicate your performance health status.

What are the problems that performance management in development solves?

I’ve been a developer for many years – so I have no problem ranting a bit about an attitude many of us share: we always “think” that our code is fast enough. And that might be true on my local dev machine, with enough RAM and CPU power to easily handle the single-user load when testing the newly implemented feature or recent bug fix.
In addition to this problem, Stefan listed the following areas:

  • Continuous changes on the codebase by different people over a longer period of time increase the probability of small problems sneaking in and accumulating over time into big problems
  • Multiple “New Features” or “Bug Fixes” across the code base from different developers impacting each other
  • Different hardware reveals different problems – especially in multi-threaded environments
  • Other software running on the target machine impacts your application performance

Executing performance tests only at the end of a sprint/iteration or as the very last step before a product release uncovers all the small accumulated problems and environment-related problems at once. Finding all these problems late means additional effort by the dev team to analyze them (going back in the change log, getting back into the code, …) and jeopardizes the project schedule.

Furthermore, a developer cannot verify whether his improvements are real improvements or whether they only improve the product in his local test setup.

Therefore: focus early and continuously on the performance aspect of your code.

Why the traditional approach failed

Prior to “eating our own dog food” we approached performance management in two traditional ways:

Using Profilers
Developers used profilers on their local machines to identify hot spots in manually executed test cases, e.g. clicking through the main use case of the new feature. This is of course a valid approach and identifies general performance problems like non-optimized algorithms, non-performing usage of collections, “wasted” memory and so on.
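
To make “non-performing usage of collections” concrete, here is a minimal, hypothetical Java example of the kind of hot spot a profiler surfaces even in a single-user run; the class and the data are made up for illustration only.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical example of a collection hot spot: a linear List.contains() inside a
// loop turns de-duplication into an O(n^2) operation; a Set does it in roughly O(n).
public class CollectionHotSpot {

    static List<Integer> dedupSlow(List<Integer> input) {
        List<Integer> result = new ArrayList<>();
        for (Integer value : input) {
            if (!result.contains(value)) {   // scans the whole result list every time
                result.add(value);
            }
        }
        return result;
    }

    static List<Integer> dedupFast(List<Integer> input) {
        Set<Integer> seen = new HashSet<>();
        List<Integer> result = new ArrayList<>();
        for (Integer value : input) {
            if (seen.add(value)) {           // constant-time membership check
                result.add(value);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> data = new ArrayList<>();
        for (int i = 0; i < 50_000; i++) {
            data.add(i % 10_000);            // plenty of duplicates
        }
        long t0 = System.nanoTime();
        dedupSlow(data);
        long t1 = System.nanoTime();
        dedupFast(data);
        long t2 = System.nanoTime();
        System.out.println("slow: " + (t1 - t0) / 1_000_000 + " ms, fast: " + (t2 - t1) / 1_000_000 + " ms");
    }
}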

The usage of a profiler is limited to low-load environments. Why is that? Because profilers – in order to capture all this information – have a significant impact on application performance and don’t work well under heavy load. That means that problems that happen outside the “one user” test scenario are harder to catch with a profiler. I’m not saying it’s impossible, as you can run profilers in different modes that lower the overhead – but then you often don’t get the detailed data you need. So there is a big trade-off here. Concurrency problems often only occur in high-load scenarios, which can’t be covered by profilers, so these problems remain undiscovered.

Using manual timings
Adding custom timers to the code is another approach that was used. Developers added their own time-measuring statements to what they believed were critical methods in their code. This approach works better in high-load environments – but brings three problems with it (see the sketch after this list):

  1. it requires code changes and it is limited to your own code
  2. you have to manually dig through the collected information and try to make sense of it -> timings alone often don’t help you either so you need to add additional logging to get things like method arguments, …
  3. it’s very hard to compare results collected from different machines (e.g. different hardware)
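
As a hedged illustration of the manual-timing approach – and of why it gets unwieldy – here is a minimal Java sketch; the class, method and log messages are made up for this example and are not taken from the dynaTrace code base.

import java.util.logging.Logger;

// Hypothetical example class, not from the dynaTrace code base.
public class ManualTimingExample {

    private static final Logger LOG = Logger.getLogger(ManualTimingExample.class.getName());

    // Manual instrumentation means touching the code itself, and it only covers
    // our own code – framework and third-party calls stay invisible.
    public void processRecords(int count) {
        long start = System.nanoTime();
        try {
            doWork(count);
        } finally {
            long durationMs = (System.nanoTime() - start) / 1_000_000;
            // Timings alone rarely explain a regression, so arguments get logged as well –
            // and numbers collected on different machines remain hard to compare.
            LOG.info("processRecords(" + count + ") took " + durationMs + " ms");
        }
    }

    private void doWork(int count) {
        long sum = 0;
        for (int i = 0; i < count; i++) {
            sum += i;
        }
        LOG.fine("checksum=" + sum);
    }
}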

The manual effort on the one side and the inability to manage performance under heavy load on the other made the traditional approaches fail.

Challenge with accurate and stable measuring

Measuring execution time is not too hard to do, given all the options that the runtime, application server or operating system provides. But it is not easy to measure the right thing and to measure it accurately. Here are some of the problems Stefan ran into when measuring execution time:

Garbage Collection
In a managed environment like Java or .NET, the Garbage Collector plays a big role in application performance. GC runs and their impact on performance are unpredictable across runs. Even though you run the same test in the same environment with the same parameters, it doesn’t mean that the GC runs consistently. Measuring execution times of methods in Java and .NET applications therefore requires extracting the time the Garbage Collector “suspended” a method from execution. What does that mean? When you take a timestamp at the beginning and at the end of the method, the difference is not necessarily the pure execution time. If the GC kicked in while your method was executing, it impacts your measured time. In order to get accurate execution times it is therefore necessary to subtract the GC collection time. dynaTrace supports this use case by measuring execution time both including and excluding GC. But it is still very important to monitor GC times and GC activations: a fast implementation that produces a lot of garbage and therefore adds high load to the CPU may degrade the overall performance of the system by starving other threads.

dynaTrace captures the runtime suspension time per method and per transaction
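
To illustrate the “subtract GC time” idea outside of dynaTrace, here is a rough, hypothetical Java sketch using the standard GarbageCollectorMXBean API. Note that the JVM reports collection time process-wide, not per method or per transaction, so this is only a coarse approximation of what the paragraph above describes.

import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Hypothetical sketch: approximate a code block's execution time excluding GC
// by subtracting the growth of the JVM's accumulated collection time.
public class GcAdjustedTiming {

    private static long totalGcTimeMillis() {
        long total = 0;
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            long time = gc.getCollectionTime();   // -1 if the collector does not report it
            if (time > 0) {
                total += time;
            }
        }
        return total;
    }

    public static void main(String[] args) {
        long gcBefore = totalGcTimeMillis();
        long start = System.nanoTime();

        // Work that produces garbage and may trigger collections.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 500_000; i++) {
            sb.append("sample-").append(i).append(' ');
        }

        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        long gcMs = totalGcTimeMillis() - gcBefore;
        System.out.println("elapsed (incl. GC): " + elapsedMs + " ms, "
                + "GC during run: ~" + gcMs + " ms, "
                + "approx. excl. GC: " + (elapsedMs - gcMs) + " ms");
    }
}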

Intel SpeedStep
Some of the testing machines that Stefan uses showed very volatile test results. The same test on the same code base could not return stable results. One of the biggest challenges is to really come up with a test environment that produces accurate and stable measures. In his particular case it turned out that Intel SpeedStep caused the unpredictable performance behaviour. This might not be true for all of you out there – but it’s a great data point that hopefully helps some of you when trying to find a stable test environment.

CPU Timings under Windows
Besides execution times – meaning taking a timestamp at the beginning and at the end of the method call to measure execution time – it is possible to get the actual time spent on the CPU for the executing thread. This is a very valuable measure. The difference between CPU and Execution Time is explained by time waiting for I/O, the database, a remote call or time waiting on synchronization.
Another hint from Stefan: CPU timings on Windows Operating Systems can be very inaccurate and are therefore ignored for short runs as they don’t give us enough value.
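
The difference between CPU time and execution (wall-clock) time can be observed with the standard ThreadMXBean API. The following is a small, hypothetical Java sketch of that comparison, not the way dynaTrace captures these values.

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

// Hypothetical sketch: compare wall-clock time with CPU time for the current thread.
// A large gap points to time spent waiting – on I/O, the database, a remote call
// or synchronization.
public class CpuVsWallClock {

    public static void main(String[] args) throws InterruptedException {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        if (!threads.isCurrentThreadCpuTimeSupported()) {
            System.out.println("CPU time is not supported on this JVM/OS");
            return;
        }

        long wallStart = System.nanoTime();
        long cpuStart = threads.getCurrentThreadCpuTime();

        // Mixed workload: some computation plus a simulated wait.
        long sum = 0;
        for (int i = 0; i < 5_000_000; i++) {
            sum += i;
        }
        Thread.sleep(200);   // stands in for I/O or a remote call

        long wallMs = (System.nanoTime() - wallStart) / 1_000_000;
        long cpuMs = (threads.getCurrentThreadCpuTime() - cpuStart) / 1_000_000;
        System.out.println("wall clock: " + wallMs + " ms, CPU: " + cpuMs
                + " ms, waiting: ~" + (wallMs - cpuMs) + " ms (checksum " + sum + ")");
    }
}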

dynaTrace APM is used to manage application performance. Its built-in ability to separate GC time from execution time, to capture CPU timings (on operating systems where these values make sense), and to trace individual transactions across tiers down to the method level with almost no overhead enables us to use this data for performance management.

If you want to get stable results you have to have stable tests

We learned that there are certain aspects to consider when collecting execution time measures, e.g. extracting GC times. In order to get stable results it is not only necessary to have a stable environment and a stable way to measure timing – the key thing is to have stable tests with realistic test data in a realistic environment.
dynaTrace uses two types of performance tests: Unit tests to performance-test certain features, and SilkPerformer tests to load-test our event collection and sending by putting the applications that are actively traced and monitored under load.

Getting Stable and realistic Unit Tests
The Unit tests are executed in all the different environments that the dynaTrace software runs on. The key to getting stable results is that every test method has a setup and teardown to ensure that everything is cleaned up before the next test case executes. The test method itself then runs through several “Warm-Up” runs followed by multiple Test Runs that are taken for performance measurement. The Warm-Up phase is necessary to rule out performance impacts of a JVM/CLR that has just started, heap spaces that have not yet reached their “normal” utilization level, or, for example, caches that are not fully initialized. How long does the Warm-Up phase last? The indication used here is an execution time volatility of < 5%: the warm-up phase runs until the execution times have stabilized. Following that are multiple Test Runs, and average execution values across these test runs are taken to validate the performance of the tested feature.
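
The following is a conceptual Java sketch of that warm-up-until-stable idea: keep running the scenario until the last few execution times vary by less than roughly 5%, then average a fixed number of measurement runs. It is not the dynaTrace test framework; all names and constants are made up for illustration.

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the warm-up idea described above, not the dynaTrace framework.
public class WarmUpHarness {

    private static final int WINDOW = 5;              // runs used to judge stability
    private static final double MAX_VOLATILITY = 0.05; // ~5% coefficient of variation
    private static final int MEASUREMENT_RUNS = 10;
    private static final int MAX_WARMUP_RUNS = 200;

    public static double measure(Runnable scenario) {
        Deque<Long> recent = new ArrayDeque<>();
        for (int i = 0; i < MAX_WARMUP_RUNS; i++) {
            recent.addLast(timeOnce(scenario));
            if (recent.size() > WINDOW) {
                recent.removeFirst();
            }
            if (recent.size() == WINDOW && volatility(recent) < MAX_VOLATILITY) {
                break;                                 // results have stabilized
            }
        }
        long total = 0;
        for (int i = 0; i < MEASUREMENT_RUNS; i++) {
            total += timeOnce(scenario);
        }
        return total / (double) MEASUREMENT_RUNS;      // average in microseconds
    }

    private static long timeOnce(Runnable scenario) {
        long start = System.nanoTime();
        scenario.run();
        return (System.nanoTime() - start) / 1_000;
    }

    private static double volatility(Deque<Long> samples) {
        double mean = samples.stream().mapToLong(Long::longValue).average().orElse(0);
        double variance = samples.stream()
                .mapToDouble(s -> (s - mean) * (s - mean))
                .average().orElse(0);
        return mean == 0 ? 0 : Math.sqrt(variance) / mean;   // coefficient of variation
    }

    public static void main(String[] args) {
        double avgMicros = measure(() -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 10_000; i++) {
                sb.append(i);
            }
        });
        System.out.println("average execution time: " + avgMicros + " µs");
    }
}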

The dynaTrace Test Automation Team has invested a lot in a home-grown testing framework that executes the tests and collects the measurements following the approach described above.

The Warm-Up phase runs until the tests produce stable results

Getting Stable Load Testing Results
A warm-up phase followed by an actual measured testing phase is not a new concept in general. The SilkPerformer load tests we use work in the same way: SilkPerformer has a built-in Warm-Up and Measurement Period feature, which made it easy for us to apply this process to these kinds of load tests as well.

So Stefan – how often do you run your tests, and what happens if things go wrong?

We use QuickBuild as our Build/Continuous Integration Server. Every time a build is triggered, all functional unit tests are executed, giving us immediate feedback about the functional correctness of the build. A broken build or failed unit tests trigger alerts to those developers who checked code into the respective code base since the last successful build. This gives us the chance to immediately fix functional regressions.

Twice a day we also execute the performance Unit tests described above. In case of a performance regression the same alerting mechanism is triggered – meaning that the developers who modified the code are automatically notified about the problem. Larger-scale performance tests for critical features are executed every day, as it is not feasible to execute them more often.
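
The regression check behind that alerting boils down to comparing the current averages against a stored baseline. Here is a minimal, hypothetical Java sketch of that comparison; the tolerance, numbers and names are invented for illustration and are not dynaTrace’s actual thresholds.

// Hypothetical sketch of a baseline comparison: flag anything slower than an
// agreed tolerance. Thresholds and numbers are made up for this example.
public class RegressionCheck {

    private static final double TOLERANCE = 0.10;   // allow 10% slowdown before alerting

    static boolean isRegression(double baselineMs, double currentMs) {
        return currentMs > baselineMs * (1 + TOLERANCE);
    }

    public static void main(String[] args) {
        double baselineMs = 120.0;   // e.g. the average from the last accepted build
        double currentMs = 141.5;    // the average from this run

        if (isRegression(baselineMs, currentMs)) {
            // In the real setup this is the point where the CI server notifies the
            // developers who committed changes since the last successful build.
            System.out.println("ALERT: regression from " + baselineMs + " ms to " + currentMs + " ms");
        } else {
            System.out.println("OK: " + currentMs + " ms is within tolerance of " + baselineMs + " ms");
        }
    }
}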

Providing the information to the developer
In addition to the Unit test results – whether we are talking about functional or performance tests – we capture transactional tracing data (PurePaths) with dynaTrace Continuous APM (this is where we “eat our own dog food”). dynaTrace runs in the Continuous Integration Environment and traces all tests, starting from the test method through the components that are being tested.

Not only do we use dynaTrace to capture transaction-based information like executed methods, method arguments, SQL statements, Exceptions, … – we also use dynaTrace Dashboards to make the data easily accessible to everybody. The developer is notified via automatically triggered email alerts if a threshold is violated. Afterwards he can have a look at our dashboards, which show execution times of individual test cases over time. This is great for spotting performance regressions at the entry-point level. From there we can drill deeper into a Triage Dashboard in order to identify the root cause of the identified regression, e.g. unnecessary exceptions or method calls causing overhead. All this is available at the fingertips of the developer or architect who needs to look at the details.

A Real-Life Example of how to prevent problems from shipping with the product

Now let’s look at a real-life example. For version 3.1 of dynaTrace we changed the way we read and write memory dumps. We expected huge performance improvements from that change. In order to verify this and to compare it against the existing implementation, we created a set of load tests that exercised the memory dump feature with different sizes of memory dumps.
The following graph shows the load testing results for the different dump sizes.

Performance results over time for different test use cases

We executed the tests back in May for the existing implementation to establish a baseline. On May 19th we ran the first test with the new implementation. We can observe that most of the tests ran faster – but some were significantly slower. It turned out this was due to an incorrect internal cache strategy that was really fast for large amounts of data but slow for small dumps. Once this problem was fixed, we got much better times overall with the new implementation.
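
As a purely hypothetical illustration – not the actual dynaTrace code – of how a cache tuned for large payloads can penalize small ones, consider a strategy that always pays a fixed per-entry cost:

// Purely hypothetical sketch: a cache that always allocates fixed-size chunks.
// The fixed overhead is negligible for a 1 GB dump but dominates for a 2 MB one.
public class ChunkedCacheSketch {

    private static final long CHUNK_SIZE = 64L * 1024 * 1024;   // fixed 64 MB chunks

    static long chunksFor(long dumpSizeBytes) {
        // Always rounds up to whole chunks, so even a tiny dump pays for a full chunk.
        return (dumpSizeBytes + CHUNK_SIZE - 1) / CHUNK_SIZE;
    }

    public static void main(String[] args) {
        long smallDump = 2L * 1024 * 1024;          // 2 MB
        long largeDump = 1024L * 1024 * 1024;       // 1 GB
        System.out.println("small dump: " + chunksFor(smallDump) + " chunk(s), "
                + (chunksFor(smallDump) * CHUNK_SIZE / smallDump) + "x its own size reserved");
        System.out.println("large dump: " + chunksFor(largeDump) + " chunk(s), "
                + (chunksFor(largeDump) * CHUNK_SIZE / largeDump) + "x its own size reserved");
    }
}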

Further down the road, two bug fixes that were not intended to “hurt” performance actually impacted the performance of a single use case dramatically. With the tests in place and with detailed results being collected, it was easy to identify the problem caused by the fix and solve it in no time.
Further performance improvements made in early June can also be seen in the graph, and we could verify that they actually improved the new version.

What was necessary to catch this problem?
It was necessary to have the correct test cases in place. In this case it was essential to run the same test in different environments with different input data; otherwise the problem would have remained undetected.
Running these tests twice a day allowed the developer who committed the fix to go back into code he was still very familiar with and fix the problem for those scenarios that were not tested on his local machine but were tested in the CI Environment.
dynaTrace Continuous APM is used to analyze test executions – it provides accurate timings and in-depth tracing information including method execution times, argument values, database statements, exceptions, … This information is essential for analyzing the root cause of the problem fast in order to bring performance back on track.

Conclusion

dynaTrace uses dynaTrace Continuous APM internally to live what we believe is the correct approach to Continuous Application Performance Management. We learned that an APM solution is only one piece of the puzzle in a development environment. In order to do performance management it is essential to have good, stable test cases that are executed continuously. The Continuous APM solution then enables you to identify regressions early on, providing your developers with the in-depth information they need to minimize bug-fixing time.

Here is the dynaTrace Test Automation checklist for successful performance changes:

  • Create performance tests before improving feature performance
  • Run them at least 5 times to be sure that they produce stable results
  • Implement the performance improvement of the feature
  • Rerun the tests and check whether your assumptions are correct

More Stories By Andreas Grabner

Andreas Grabner has more than a decade of experience as an architect and developer in the Java and .NET space. In his current role, Andi works as a Technology Strategist for Compuware and leads the Compuware APM Center of Excellence team. In this role he influences the Compuware APM product strategy and works closely with customers in implementing performance management solutions across the entire application lifecycle. He is a frequent speaker at technology conferences on performance and architecture-related topics, and regularly authors articles offering business and technology advice for Compuware’s About:Performance blog.
