The Free On-line Dictionary of Computing defines quality as:
The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs. Not to be mistaken for "degree of excellence" or "fitness for use" which meet only part of the definition.
Put more simply, quality can be defined as how well something measures up to a standard.
But what if the standard sucks? What if the quality of the standard is low? Example: the standard for an app says it only has to run 1 out of 5 times when you double-click its icon. If you test for that and the app meets the standard, you can say it's of good quality, right? ...right?
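Just to make the absurdity concrete, here's a hypothetical sketch of what a test against that standard could look like. Nothing in it is real--launch_app() is a made-up stand-in for double-clicking the icon, hard-coded to start the app only once in every five tries, which is an app most users would call broken--but the test still passes, and by the letter of the standard we get to check the "good quality" box.

```python
# Hypothetical sketch of a test that encodes the "1 out of 5 launches" standard.
# launch_app() is a made-up stand-in for double-clicking the icon; here it is
# hard-coded to succeed only on the first of every five attempts.

def launch_app(attempt):
    # Pretend launcher: starts successfully only on attempts 0, 5, 10, ...
    return attempt % 5 == 0

def test_meets_launch_standard(attempts=5, required_successes=1):
    successes = sum(1 for a in range(attempts) if launch_app(a))
    # Passes with flying colors -- so by the agreed standard,
    # this app is officially "good quality."
    assert successes >= required_successes

if __name__ == "__main__":
    test_meets_launch_standard()
    print("Standard met. Ship it?")
```

The test is doing its job perfectly; it's the standard it encodes that's the problem.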
It reminds me of testing apps whose functionality was written without specifications. When I test the app and find something that doesn't make sense or "doesn't work", Development can say that it's not a bug--it's working as intended. ...which is totally relative, but totally true. It's just the intention that was flawed. ...and the only way to coerce a change is to make some great and sneaky argument, or bring in some sort of exterior, already-defined standard that all of a sudden makes the given functionality look like crap.
In the no-spec example above, the developer had set his own standard, one that I thought wouldn't be up to the standards of the end user. In the example of the 1/5 standard, if you're like me, your brain makes a judgment call on the standard itself--probably without you realizing it. In essence, and in the context of this post, you're impelled to hold the standard to some implied standard--a standard that's sort of like a code of ethics that pervades a culture. You know that cutting in line at the Post Office is just a no-no, not because there are any signs there that say so--you just know. Same idea. In the case of the developer working without a spec, my code of ethics was just different from his.
So in order for Dev and QA teams to work efficiently, a general practice is to have both departments agree on what's acceptable and what's not. They define the standard as they see fit for their organization and customers.
But the trick is: how do we know when the standards we've set are good standards? Where does the standard's standard come from? How do you know right off the bat that a requirement saying the app only has to run 1 out of 5 times really sucks?
Some correlation can be found when looking at the study of Truth. People have been studying that concept for a couple thousand years, as opposed to the drop in the bucket we've spent studying software. And there are probably just as many theories on software development and testing practices as there are on Truth. So without going too in-depth on the topic, I believe there are some interesting discoveries to be made when considering the various theories of Truth.