Observability, Monitoring and Testing
Observability blurs the line between testing and monitoring. The T.O.A.D. concept can help us think through their relationship and extend it to DevOps.
Are observability and monitoring part of testing?
The short answer is no. The longer answer is that, when done well, monitoring and observability provide capabilities that support testing.
Monitoring
Testing is a search for information. Testers form a hypothesis and then design experiments to uncover that information.
Monitoring is about capturing our application data and then building dashboards, reports, and alerts on top of it. We can watch for known metrics and errors, and we can gather insights into a system's behavior and performance.
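To make that concrete, here is a minimal sketch of the recording side using the prometheus_client Python library. The metric names, the failure rate, and the checkout scenario are my own illustration, not from any particular system:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

# Known metrics we decide to watch ahead of time.
CHECKOUT_ERRORS = Counter(
    "checkout_errors_total", "Count of failed checkout attempts"
)
CHECKOUT_LATENCY = Histogram(
    "checkout_latency_seconds", "Time taken to complete a checkout"
)

def handle_checkout():
    """Pretend request handler that records the metrics above."""
    with CHECKOUT_LATENCY.time():
        time.sleep(random.uniform(0.05, 0.2))  # simulated work
        if random.random() < 0.02:             # simulated 2% failure rate
            CHECKOUT_ERRORS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for a scraper to poll
    while True:
        handle_checkout()
```

A monitoring stack can then scrape the /metrics endpoint, build dashboards over the values, and alert when a threshold is crossed, but only for the signals we knew to record up front.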
Back in 2007, Ed Keyes gave a talk at GTAC claiming that sufficiently advanced monitoring is indistinguishable from testing. He was saying a few things:
- Running automated tests in production is a good idea, and it is even better when you monitor application state throughout the user journey (a sketch follows this list).
- Watching users move through the application and asserting on state can lead to understanding those users better. You can then create new tests based on what they actually do.
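As a rough sketch of that first point, a synthetic production check might walk a user journey and assert on application state at each step. The URLs, credentials, and response fields below are hypothetical, used only to show the shape of the idea:

```python
import requests

BASE_URL = "https://example.com/api"  # hypothetical production endpoint

def check_order_journey():
    """Synthetic production check: walk a user journey and assert on
    application state at each step, not just on availability."""
    session = requests.Session()

    # Step 1: log in as a dedicated synthetic user.
    resp = session.post(f"{BASE_URL}/login",
                        json={"user": "synthetic-monitor", "password": "..."})
    assert resp.status_code == 200, f"login failed: {resp.status_code}"

    # Step 2: the synthetic user's seeded order should always be present.
    resp = session.get(f"{BASE_URL}/orders")
    assert resp.status_code == 200
    orders = resp.json()
    assert any(o["id"] == "seed-order-1" for o in orders), "seeded order missing"

if __name__ == "__main__":
    check_order_journey()
    print("journey check passed")
```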
I've never seen monitoring advanced enough to replace testing, let alone be indistinguishable from it. I have, however, used it as a place to gather data and information about my experiments.
Observability
Observability tools allow us to be more active in our search for information. Monitoring is for known unknowns; observability is for unknown unknowns. Observability means we can continually design and test new hypotheses. It allows us to test the internal state of an application while watching its external output. If that sounds a bit like testing, I agree.
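One common way this shows up in code is emitting wide, structured events rather than only predefined counters, so the data can be sliced by questions we haven't thought of yet. Here is a minimal sketch using Python's standard logging; the event and field names are illustrative:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
logger = logging.getLogger("checkout")

def handle_checkout(user_id, cart):
    """Emit one wide, structured event per request. Because every field
    is queryable after the fact, we can ask questions we didn't
    anticipate, e.g. 'are slow checkouts correlated with large carts?'"""
    start = time.monotonic()
    event = {
        "event": "checkout",
        "request_id": str(uuid.uuid4()),
        "user_id": user_id,
        "cart_size": len(cart),
    }
    try:
        # ... real checkout logic would go here ...
        event["outcome"] = "success"
    except Exception as exc:
        event["outcome"] = "error"
        event["error"] = repr(exc)
        raise
    finally:
        event["duration_ms"] = round((time.monotonic() - start) * 1000, 2)
        logger.info(json.dumps(event))

handle_checkout("u-123", ["book", "pen"])
```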
This interplay of testing and observability reminds me of a concept called T.O.A.D.
T.O.A.D.
T.O.A.D. is a catchy acronym put forward years ago by Noah Sussman and built on by Chris McMahon. It stands for Testing, Observability and DevOps.
The idea behind T.O.A.D. is that there is a common thread running through all three concepts, and when you focus on them, you are able to better understand your application. DevOps facilitates the development process from desktop to production. Observability tells us what has happened on the system in great detail. Testing describes what the system should do. (Taken from McMahon's blog.)
The interplay of testing, observability, and DevOps makes sense. Let's use an example:
- We create well-designed automated tests to help us show our application works.
- We add them to a pipeline to check changes throughout development.
- We observe whether the changes the tests exercise in the application are acceptable or whether they create new problems (see the sketch after this list).
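Here is a sketch of how the testing and observability halves of that loop might meet in a pipeline step. It assumes a hypothetical staging deployment exposing both the feature and the /metrics endpoint from the earlier monitoring sketch; the URLs and metric name are illustrative:

```python
import requests

STAGING = "https://staging.example.com"  # hypothetical deployment under test

def test_checkout_works():
    """The 'testing' half: the new build should do what we said it would."""
    resp = requests.post(f"{STAGING}/api/checkout", json={"cart": ["book"]})
    assert resp.status_code == 200
    assert resp.json()["outcome"] == "success"

def test_no_new_error_signal():
    """The 'observability' half: after exercising the change, check the
    telemetry the application emitted for problems the assertions above
    would not catch."""
    metrics = requests.get(f"{STAGING}/metrics").text
    for line in metrics.splitlines():
        parts = line.split()
        if parts and parts[0] == "checkout_errors_total":
            # Prometheus text format: '<name> <value>'
            assert float(parts[-1]) == 0, f"errors recorded: {line}"
```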
What's the common thread?
If monitoring is robust and thorough enough, it can reveal information about the system's functionality, reliability, and other characteristics that are traditionally assessed during testing. Observability takes us further by letting us understand what is happening to our application(s) under test while we test. This blurring of the lines is captured well by T.O.A.D.
The Association for Software Testing is crowd-sourcing a book, Navigating the World as a Context-Driven Tester, which aims to provide responses to common questions and statements about testing from a context-driven perspective.
It's being edited by Lee Hawkins, who is posing questions on Twitter, LinkedIn, Slack, and the AST mailing list and then collating the replies, with a focus on practice over theory. This was my reply.