That’s right. I said it. If you are a software developer, then you are also a software tester. If you care about the quality of the software you are developing, then you should be spending a significant amount of your development effort ensuring and verifying that quality. The only way to do that is to exercise your source code, preferably in a consistent and repeatable way so that your time is well spent.
When I was job searching at the end of my college career, my CS advisor made a strong recommendation that I never mention testing on my resume. “You definitely don’t want to be mistaken for a tester,” he emphatically pointed out. Manual regression testing and test plan writing have always seemed like mundane tasks to me, and I have worked very hard to avoid them. Unfortunately, this mindset of avoiding the process of testing did not empower me to create high-quality code or to develop a strong and agile workflow.
I think this mindset was indicative of the testing methods available at the time. Most testing was done with rigorous, monotonous, manual testing scripts and was not a very engaging or rewarding process. Today we have tools and frameworks to help us automate our testing and to make it elegant and repeatable.
I have come to believe that by spending the time to test, a developer can play an active role in greatly improving the quality of their work. This improved quality also comes with a secondary benefit: because the testing is repeatable, it increases agility and the ability to respond to feature requests. I don’t mean that as a developer you should be writing and executing manual test plans. What I do mean is that for every unit of code you write there should be a set of associated tests ensuring that unit works correctly. For every system that you create there should also be a set of tests that fully exercises it through its given interface (whether the project is a web service, UI, console, etc.). These tests should be written in a way that is consistent, repeatable, and maintainable (and not brittle).
Test Driven Development => Quality by Design
It’s no secret that I’m a huge fan of TDD. The most important reason for unit testing as you build functionality is that it improves the quality of your design and code. Writing these tests as you code forces you to minimize dependencies (decreasing coupling) and to view the unit of code from the point of view of a consumer (increasing cohesion and usability). It’s very difficult to test badly written code that doesn’t follow good SOLID principles.
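To make the coupling point concrete, here is a minimal sketch in Python of writing the test first. The names (`PriceCalculator`, the tax policy collaborator) are hypothetical examples, not from any project in this post; the point is that the test forces the dependency to be injected rather than constructed internally.

```python
from unittest.mock import Mock

class PriceCalculator:
    """The unit under test. The tax policy is injected rather than
    created inside the class, which keeps coupling low and lets the
    test exercise this unit in isolation."""
    def __init__(self, tax_policy):
        self.tax_policy = tax_policy

    def total(self, subtotal):
        return subtotal + self.tax_policy.tax_for(subtotal)

def test_total_includes_tax():
    # The collaborator is mocked, so only this unit is exercised.
    policy = Mock()
    policy.tax_for.return_value = 7.0
    calc = PriceCalculator(policy)
    assert calc.total(100.0) == 107.0
    policy.tax_for.assert_called_once_with(100.0)

test_total_includes_tax()  # raises AssertionError on failure
```

Notice that writing the test first makes a hard-coded tax lookup painful to test, so the design naturally drifts toward dependency injection.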
UI Testing => Consistent and Deployable Quality
On previous projects, the only tests we created were unit tests (through TDD) for our server-side code. Although these tests led to good design and greatly improved code quality, they did not decrease the need for rigorous manual regression testing at each deployment. This regression testing consumed days’ worth of time, made deployments much more painful, and as a result decreased the agility of our projects. These testing requirements made it impossible to deploy our code on a frequent basis because we had no way to quickly and reliably verify the correctness of our project at deployment time.
System or Integration Testing for Deployable Units
In an attempt to decrease the friction and increase the reliability of our deployments, we have begun creating integration or system tests for our deployable units. The terms ‘system’ and ‘integration’ are a little vague here, but deciding where the system boundaries occur and scoping tests to those boundaries is key to creating maintainable tests. If the system boundary is scoped correctly, then the tests are easy to create, easy to maintain, and effective. If the scope is too broad, the tests are usually brittle and difficult to maintain; if it is too narrow, they don’t provide confidence in the correctness of the system under test. The right boundary is specific to the given system and technology, but a good rule is that you should at least split the system boundaries wherever network boundaries are involved.
An example would be a single page application (SPA) which makes calls to a web service. In this case there are really two systems (the SPA and the web service). Although the SPA requires the web service for operation, both can be tested in isolation using mocks, and these tests can provide a high degree of confidence in the correctness of the system. Testing them as one unit greatly increases the complexity and decreases the efficiency of the testing: the tests run slowly and are much more brittle.
All Tests Should be Maintainable
As alluded to earlier, the simple act of creating tests is not enough. The tests need to be maintainable and should not be brittle. Brittle tests will be neglected and ignored. Ignored or disabled tests are worse than no tests. They provide you no benefit and you have wasted quite a bit of time creating and maintaining them.
We have been testing the client-side interaction of our web pages using a web testing framework named Selenium (there are other frameworks available, and you should research and choose the one that is best for your project). For any feature added to the project, the developers adding the functionality are responsible for creating and maintaining the appropriate unit and interface tests. This effort is greatly improving the quality of the product we are delivering to our users, and our hope is that it will decrease the friction of our deployments. There is something immensely satisfying about watching your automated tests drive your application. It’s almost mesmerizing, and you can feel the quality of your work improving with each successful test result.
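One pattern often used with Selenium to keep UI tests maintainable is the page object: each page’s selectors live in one class, so a markup change touches one place instead of every test. Here is a minimal sketch in Python; a stub stands in for a real Selenium WebDriver so the example is self-contained, and the page name, selectors, and banner text are all hypothetical.

```python
class LoginPage:
    """Page object: the only place that knows this page's selectors."""
    def __init__(self, driver):
        self._driver = driver

    def log_in(self, username, password):
        # Selector details are encapsulated here, not in the tests.
        self._driver.type("#username", username)
        self._driver.type("#password", password)
        self._driver.click("#submit")
        return self._driver.text(".welcome-banner")

class StubDriver:
    """Stand-in for a browser driver, for demonstration only.
    It records actions and returns a canned banner text."""
    def __init__(self):
        self.actions = []
    def type(self, selector, value):
        self.actions.append(("type", selector, value))
    def click(self, selector):
        self.actions.append(("click", selector))
    def text(self, selector):
        return "Welcome, alice"

def test_login_shows_welcome_banner():
    driver = StubDriver()
    page = LoginPage(driver)
    assert page.log_in("alice", "secret") == "Welcome, alice"
    assert ("click", "#submit") in driver.actions

test_login_shows_welcome_banner()
```

With a real WebDriver in place of the stub, the tests read the same way, and a renamed form field means editing one page object rather than dozens of brittle tests.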
Automated Testing Increases Productivity
I know that testing is something developers have traditionally avoided, but it should be a standard part of our development effort. Automating the testing of your application greatly improves its quality and ultimately makes the team more efficient and effective. Delivering a higher-quality product and deploying more rapidly improves both the engagement of your team and your customers’ satisfaction with your product.