+ Readings:
- Introduction - Software Testing Techniques by Boris Beizer
- Google’s Innovation Factory: Testing, Culture, And Infrastructure by Patrick Copeland
+ Response:
When we were first introduced to testing, I thought, “Oh good, I always test my code - I’ve been doing this for a while now!” I did not realize how different writing test cases and testing against requirements is from just putting a couple of “System.out.println()” lines in my program to prove to myself that it is running the way I thought it was.
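To make that difference concrete, here is a minimal sketch of a real test case next to what I used to do. The class, method, and expected values are hypothetical, and I am assuming JUnit 5 is available; the point is only that the test states its expected output and fails on its own, instead of relying on me to read console output.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under test, used only for illustration.
class TemperatureConverter {
    static double celsiusToFahrenheit(double celsius) {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

class TemperatureConverterTest {
    // Instead of printing the result and eyeballing it, each test states the
    // expected output and fails automatically if the code ever stops matching it.
    @Test
    void freezingPointConverts() {
        assertEquals(32.0, TemperatureConverter.celsiusToFahrenheit(0.0), 0.0001);
    }

    @Test
    void boilingPointConverts() {
        assertEquals(212.0, TemperatureConverter.celsiusToFahrenheit(100.0), 0.0001);
    }
}
```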
This practice has left me spending a lot of time in the toy-program equivalent of a software tester’s Mental Phase Two, where I discover that my code doesn’t work and then have to make changes. Using the IDE’s debugger was a bit of a game-changer when it came to testing my code, but it didn’t change the fact that my programs have a lot of bugs, even when they run the way they are supposed to. I can’t even imagine the number of bugs in a full software system.
In the Introduction of Boris Beizer’s Software Testing Techniques (I am pretty sure this is the source of our class packet), it seems like we have been testing our programs so far with Kiddie Testing, where we just run the program and see what comes out (come on, everybody does it!), but we have also been using a form of testing that relies on trusted Existing Programs (Beizer 24). We build programs from the professor’s requirements, and they run our programs and check their reliability against a program they themselves have already created (hopefully!).
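That second style amounts to comparing my program’s output with a trusted reference. Below is a minimal sketch of the idea; the mySum method under test, the stand-in for the professor’s known-good version, and the random inputs are all hypothetical and only there to show the comparison loop.

```java
import java.util.Arrays;
import java.util.Random;

public class ReferenceComparisonTest {

    // My implementation under test (hypothetical).
    static int mySum(int[] values) {
        int total = 0;
        for (int v : values) {
            total += v;
        }
        return total;
    }

    // Stand-in for the trusted existing program, e.g. the professor's known-good version.
    static int trustedSum(int[] values) {
        return Arrays.stream(values).sum();
    }

    public static void main(String[] args) {
        Random random = new Random(42);
        for (int trial = 0; trial < 1000; trial++) {
            // Generate a random input and check both programs agree on it.
            int[] input = random.ints(10, -100, 100).toArray();
            int expected = trustedSum(input);
            int actual = mySum(input);
            if (expected != actual) {
                throw new AssertionError("Mismatch on trial " + trial
                        + ": expected " + expected + " but got " + actual);
            }
        }
        System.out.println("All trials matched the trusted reference.");
    }
}
```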
Beizer claims that testing is “done to catch bugs” (Beizer 1). A benefit of testing is that it can be done by someone who has no idea how the program works, whereas debugging can only be done by someone who knows where to look and what could possibly be causing the failing test case.
I also took a look at “Google’s Innovation Factory: Testing, Culture, And Infrastructure”, an article Google’s Patrick Copeland wrote for the 2010 IEEE International Conference on Software Testing, Verification, and Validation. Copeland claims that “Development teams write good tests because they care about the products, but also because they want more time to spend writing features and less on debugging” (Copeland 2). This is similar to Beizer’s approach, where “The first and most obvious reason [for testing] is we want to reduce the labor of testing” (Beizer 6). Writing clear, intelligent tests is important because it means more focus goes toward developing the software.
It’s also clear that software can never be shown to be bug free. Beizer elaborates on this by saying, “If the objective of testing were to prove that a program is free of bugs, then testing not only would be practically impossible, but also would be theoretically impossible” (Beizer 24). Testing cannot guarantee that no bugs remain; it is entirely possible that the tests simply never exercise the buggy input, and if that input has a small enough chance of ever being run, no one will ever know the bug is there.
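Here is a minimal sketch of such a hidden bug, using a hypothetical method. Every test input that seems reasonable passes, but one rarely chosen input is still wrong (the overflow behavior of Math.abs on Integer.MIN_VALUE is real; everything else is made up for illustration).

```java
// A bug hiding in input the tests never exercise.
public class HiddenBugExample {

    // Intended to return the magnitude of a value, but Math.abs(Integer.MIN_VALUE)
    // overflows and returns Integer.MIN_VALUE itself, which is still negative.
    static int magnitude(int value) {
        return Math.abs(value);
    }

    public static void main(String[] args) {
        // The obvious test inputs all pass, so the bug goes unnoticed.
        check(magnitude(5) == 5);
        check(magnitude(-5) == 5);
        check(magnitude(0) == 0);

        // The buggy input: almost never chosen by hand, so no one may ever know.
        System.out.println(magnitude(Integer.MIN_VALUE)); // prints -2147483648
    }

    static void check(boolean condition) {
        if (!condition) {
            throw new AssertionError("test failed");
        }
    }
}
```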
I do think there are a couple of important ways of writing code that decrease the likelihood of bugs. Copeland refers to Frederick Brooks's No Silver Bullet when he states that “While there is no magic bullet, there is a pragmatism that can be applied to software development that seeks to balance the art form of creating software with the needs for repeatability, efficiency, and quality” (Copeland 4). Techniques like minimizing human error in running software, reusing code that has proven reliable, and, of course, testing are all good practices.