Summary: How Google Tests Software
As a software tester, I try to learn as much as I can about how other companies test software. It just so happens that, through the Google Testing Blog, James Whittaker has been outlining exactly how Google does it.
If you're interested in learning more, I'd recommend reading the five-part series on the Google Testing Blog directly, but feel free to check out my summary and the things I found interesting:
Google's organizational structure is such that they don't have a dedicated testing group. Instead, the company has more of a project-matrix structure: testers sit in a group called Engineering Productivity, where they report directly to a manager but are loaned out to individual product groups like Android, Gmail, etc. This lets them move around to different groups inside the company depending on the project and gain broader experience. Engineering Productivity also develops in-house tools and maintains knowledge disciplines, in addition to loaning out engineers.
Google has a saying: "you build it, you break it". They have essentially three engineering roles: Software Engineers (SWEs), Software Engineers in Test (SETs), and Test Engineers (TEs). SWEs write code and design documentation, and they are responsible for the quality of anything they touch in isolation. SETs focus on testability: they write code that allows SWEs to test their features, refactor code, and write unit testing frameworks and automation. SETs are responsible for the quality of the features. TEs are the opposite of SETs and are focused on user-facing testing. They write some code in the form of automation scripts and usage scenarios, and they coordinate and test with other TEs. These descriptions are a bit overgeneralized, but you get the idea.
It's interesting to note that in all of the companies I've worked for, the SWEs and SETs are the same people, and the TEs are usually focused on the low-hanging fruit. Google instead blends development and testing to prevent bugs and lapses in quality, rather than trying to catch them later when they are more expensive and harder to fix.
As a rule, Google tries to ship products as soon as they provide some benefit to the user. Instead of bundling new updates and features into large releases, Google tries to release, get feedback, and iterate as fast as possible. This means less time in-house and more time hearing from their customers. Still, in order to get out to production, a build has to move through five channels: Canary, Dev, Test, Beta, and Production. The Canary channel holds experiments and code that isn't ready to be released. The Dev channel is where the day-to-day work gets done. The Test channel is used for internal dogfooding and holds potential beta candidates. The Beta and Production channels hold builds that will get external exposure, assuming they have passed the applicable testing and real-world exposure.
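Just to picture the flow, here's a toy sketch of the channels as an ordered pipeline, where a build only advances one step after passing the checks that apply at its current channel. This is my own illustration in Python (the CHANNELS list and promote function are made up for this post), not anything Google publishes:

# Hypothetical sketch: the five channels as an ordered pipeline.
CHANNELS = ["canary", "dev", "test", "beta", "production"]

def promote(current_channel, passed_checks):
    """Return the channel a build moves to next.

    A build only advances one step at a time, and only if it has
    passed the checks that apply at its current channel.
    """
    index = CHANNELS.index(current_channel)
    if not passed_checks or index == len(CHANNELS) - 1:
        return current_channel  # stay where it is
    return CHANNELS[index + 1]

# Example: a dogfooded build in the Test channel that passed its
# checks becomes a Beta candidate; a failing Canary build stays put.
assert promote("test", passed_checks=True) == "beta"
assert promote("canary", passed_checks=False) == "canary"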
Finally, Google breaks down their types of testing into three broad categories that include both manual and automated testing: Small Tests, Medium Tests, and Large Tests. Small tests are written by SWEs and SETs and are usually focused on single functions or modules. Medium tests involve two or more features and cover the interactions between those features; SETs are mostly responsible for Medium tests. Large tests involve three or more features and represent real user scenarios as best as they can be represented. The mix of manual and automated testing depends on what is being tested. James reiterates that it doesn't matter how you label the tests, as long as everyone in the company is on the same page.
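To make the size distinction concrete, here's a rough sketch of what a small test versus a medium test might look like. This is my own illustration in Python, with made-up PricingService and ShoppingCart classes rather than anything from Google's codebase: the small test exercises one function in isolation, while the medium test checks the interaction between two features.

import unittest


# Hypothetical feature code, just for illustration.
class PricingService:
    """Looks up unit prices for items."""

    PRICES = {"apple": 0.50, "bread": 2.00}

    def price_of(self, item):
        return self.PRICES[item]


class ShoppingCart:
    """Holds items and computes a total using a PricingService."""

    def __init__(self, pricing):
        self.pricing = pricing
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        return sum(self.pricing.price_of(i) for i in self.items)


class SmallTest(unittest.TestCase):
    """Small test: a single function or module in isolation."""

    def test_price_lookup(self):
        self.assertEqual(PricingService().price_of("apple"), 0.50)


class MediumTest(unittest.TestCase):
    """Medium test: the interaction between two features."""

    def test_cart_total_uses_pricing(self):
        cart = ShoppingCart(PricingService())
        cart.add("apple")
        cart.add("bread")
        self.assertAlmostEqual(cart.total(), 2.50)


if __name__ == "__main__":
    unittest.main()

A large test would layer real end-to-end user scenarios across three or more features on top of this, which is harder to show in a few lines.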
And there you have it, roughly: how Google tests software. You can see they spend a great deal of time preventing bugs from ever coming up, so they can focus their Test Engineers on bigger potential problems and less on the low-hanging fruit, which completely makes sense. Now, how you and I apply these things to our own testing framework is the real challenge!