A typical day of Testing (circa 2018)
Recently I found myself repeatedly describing how I approach my testing role on a “typical day,” and afterwards I thought it would be fun to capture some of what I said so I can see how it evolves over time:
Background
- At Laurel & Wolf our engineering team works in 2 week sprints and we try to deploy to production on a daily basis.
- We follow a GitHub workflow, which essentially means our changes happen in feature branches. When those branches are ready they become Pull Requests (PRs); each PR automatically runs against our CI server, gets code reviewed, and is then ready for final testing by me and our other QA.
- Every JIRA story is attached to at least one PR; how many depends on the repositories / systems being updated.
- Everyone on the engineering team has a fully loaded MacBook Pro with Docker containers running our development environment for ease of use.
Daily
- I’ll come in and, if I’m not already working on something, take the highest-priority item in the QA column of JIRA, our tracking system. (I try to share the responsibility and heavy lifting evenly with my fellow tester, although I do love to jump into big, risky changes.)
- Review the GitHub PR to see how big the code changes are and what files were impacted. You never know when this will come into play, and at the very least it’s another source of input for what I might test. I’ll also do some basic code review, looking for unexpected file changes or obvious challenges.
- Check out the branch locally, pull in the latest changes, and get everything running.
- If the story is large and/or complex, I’ll open up Xmind to visually model the system and changes mentioned in the story.
- Import the acceptance criteria into the map as requirements. (An important input to testing, but not the sole oracle.)
- Pull in relevant mockups or other sources cited in the story, its subtasks, or other supporting documentation.
- Use a few testing mnemonics like SFDIPOT and MIDTESTD from the Heuristic Test Strategy Model to come up with test ideas.
- Pull in relevant catalogs of data, cheat sheets, and previous mindmaps that might contain useful test ideas to mix in. (When I do this I often update old test strategies with new ideas!)
- Brainstorm relevant test techniques to apply based on the collected information, and outline those tests or functional coverage areas at a high level. Depending on the technique (or what I brainstormed using mnemonics) I might select the data I’m going to use, make a table or matrix, or build a functional outline; it depends on the situation. All of these can be included in or linked from the map.
- If the story is small or less complex, I’ll use a notebook / Moleskine for this same purpose, but on a smaller scale. Frankly, I often use the notebook in conjunction with the mindmap anyway.
- I’ll list out any relevant questions I want to answer during my exploration, and note any follow-up questions that come up during testing.
- I don’t typically time-box sessions but I almost always have a charter or two written down in my notebook or mindmap.
- Start exploring the system and the changes outlined in the story (or the outline in my mindmap), generating more test ideas and marking down which things matched the acceptance criteria and my modeled system. Anything that doesn’t match, I follow up on. Depending on the change I might:
- Watch web requests in my browser and local logs to make sure requests are succeeding and the data is what I expect it to be.
- Inspect a page with the browser’s DevTools or with the React / Redux developer tools.
- Reference the codebase, kick off workers to make something happen, manipulate or generate data, reference third-party systems, etc.
- Backend or API changes are usually tested in isolation, then with the features they were built to work with, and then as part of the larger system (see the sketch after this list).
- Look for testability improvements and, hopefully, address them during this time.
- Add new JIRA stories for potential automation coverage.
- I repeat this process until I’m finding fewer bugs, raising fewer questions, and/or generating fewer and fewer test ideas that seem valuable. I may also stop testing after a certain amount of time, depending on the desired ship date (unless something blocking has been found) or a set time box.
- When bugs are found, a few things can happen:
- If they are small, I’ll probably try to fix them myself. Otherwise:
- I’ll let the developer know directly (Slack message or PR comment) so they can begin working while I continue to test,
- or I’ll pair with them so we can work through hard-to-reproduce issues.
- If they are non-blocking issues, I’ll file a bug report to address later.
- Run our e2e tests to try to catch known regressions and make sure we didn’t break any of the tests.
- This doesn’t happen often; our tests are fairly low maintenance.
- Once changes are promoted to Staging, I’ll do a smaller amount of testing, including testing on mobile devices. We do some of this automatically with our e2e tests.
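To make the backend / API bullet above a bit more concrete, here’s the flavor of isolated check I might run before exercising an endpoint through the UI. This is only a sketch: the endpoint, port, payload fields, and the check_create_project helper are hypothetical stand-ins (not our actual API), and I’ve picked Python’s requests library purely for illustration.

```python
import requests

# Hypothetical local endpoint; our real API and fields look different.
BASE_URL = "http://localhost:3000/api"

def check_create_project():
    """Exercise one endpoint in isolation before testing it alongside
    the features that consume it."""
    payload = {"name": "Living room refresh", "style": "mid-century"}
    resp = requests.post(BASE_URL + "/projects", json=payload, timeout=10)

    # The request should succeed and echo back what we sent, with an id.
    assert resp.status_code == 201, "expected 201, got %s" % resp.status_code
    body = resp.json()
    assert body["name"] == payload["name"]
    assert "id" in body, "new projects should come back with an id"
    return body["id"]

if __name__ == "__main__":
    print("created project", check_create_project())
```

Once a check like this passes in isolation, I move on to exercising the same change through the features built on top of it, and then as part of the larger system.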
Semi-Daily
- Pick up a JIRA story for automation coverage and start writing new tests (a rough sketch follows this list).
- Investigate or triage a bug coming from our design community (an internal team that works directly with our interior designers).
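For flavor, here’s roughly the shape such a new automated test might take. I’m deliberately not naming our actual e2e framework here; this sketch assumes Selenium WebDriver in Python purely as an illustration, and the URL, selectors, and credentials are all made up.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

# Hypothetical staging URL and selectors; stand-ins for the real thing.
STAGING_URL = "https://staging.example.com/login"

def test_login_lands_on_dashboard():
    """New regression check: valid credentials land the user on a dashboard."""
    driver = webdriver.Chrome()
    try:
        driver.get(STAGING_URL)
        driver.find_element(By.NAME, "email").send_keys("qa@example.com")
        driver.find_element(By.NAME, "password").send_keys("not-a-real-password")
        driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

        # Wait for a dashboard marker instead of sleeping a fixed amount.
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located(
                (By.CSS_SELECTOR, "[data-test=dashboard]")
            )
        )
    finally:
        driver.quit()
```

A test like this would then join the regression suite mentioned in the Daily section, so the new coverage pays off on future changes.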
circa 2018
I’m probably forgetting a ton of things, but this is roughly what a “typical day” looks like for me. I expect this to evolve over time as data analysis becomes more important and as our automation suites grow.
What does your typical day look like?