21 February 2008

Thinking Out Loud: Requirements vs. Criteria

What's the difference?  I can't find any good resources for comparison on the web... I'm thinking:
  • Requirements are a list of things that must get done for a project, regardless of their outcome.
  • Criteria are the assessment of the results of the activities done to meet those requirements.
You set up requirements so you know what you have to do.  Ex.:
MyApp must run/startup on Vista.
...This lets you know that you (at least) have to write tests that cause MyApp to try to run/start on Vista, then run those tests. You set criteria so you know the level of "quality" to which that requirement was met.  In order to determine the level of quality, you look at the results of running the tests.  Ex.:
Criteria = MyApp must run/startup 100% of the time, in at least 1000 attempts, on each edition of Vista.  Result: MyApp ran/started up on 5 different Vista editions, a total of 1200 times each, and started 99.999% of the time.
...Criteria not met. Hmm... I think I like it....
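
If I had to sketch the split in code (a toy example; the edition names and numbers here are invented, not real results), the requirement tells me which tests I have to write and run, and the criteria are the bar I hold the results to:

REQUIRED_ATTEMPTS = 1000    # criterion: at least 1000 startup attempts per edition
REQUIRED_PASS_RATE = 1.0    # criterion: 100% of those attempts must start up

# hypothetical results per Vista edition: (attempts, successful startups)
results = {
    "Home Basic": (1200, 1200),
    "Home Premium": (1200, 1200),
    "Business": (1200, 1199),   # the one that ruins the 100%
    "Enterprise": (1200, 1200),
    "Ultimate": (1200, 1200),
}

for edition, (attempts, passes) in results.items():
    met = attempts >= REQUIRED_ATTEMPTS and passes / attempts >= REQUIRED_PASS_RATE
    print("%s: %d/%d -> criterion %s" % (edition, passes, attempts, "met" if met else "NOT met"))

The requirement got satisfied as soon as the tests existed and were run; the criteria are what tell me whether the outcome was good enough.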

20 February 2008

Mountain Wingsuit

I want to know who tests this equipment...

13 February 2008

FIFA.com - Football - Test Criteria

It's always refreshing to switch contexts for a subject you're immersed in. I ran across FIFA's criteria for testing their balls, which is just good to take a look at. Here's an excerpt from the write-up:
FIFA Quality Concept

The FIFA Quality Concept for Footballs is a test programme for Outdoor, Futsal and Beach Soccer footballs. Manufacturers have the possibility to enter into a licensing agreement for the use of the prestigious 'FIFA APPROVED' and 'FIFA INSPECTED' Quality Marks on footballs which have passed a rigorous testing procedure. As an alternative there is the possibility to use the wording 'IMS International Matchball Standard'. Footballs bearing this designation have passed the same quality requirements as 'FIFA INSPECTED' footballs. The use of this designation is however not subject to a licence fee and any association with FIFA is prohibited.

There are two levels of criteria for the three designations. Footballs applying for the category 'FIFA INSPECTED' or the technically equivalent 'IMS - International Matchball Standard' must pass the following six rigorous laboratory tests:

  1. Weight
  2. Circumference
  3. Sphericity
  4. Loss of Air Pressure
  5. Water Absorption (replaced with a 'Balance' test for the testing of Futsal balls)
  6. Rebound
Footballs applying for the higher 'FIFA APPROVED' mark must pass the six tests at an even more demanding level and must undergo an additional test:
  7. Shape and Size Retention (Shooting Test)
FIFA.com - Football - Test Criteria

11 February 2008

Truth and quality I

The Free On-line Dictionary of Computing defines quality as:
The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs.  Not to be mistaken for "degree of excellence" or "fitness for use" which meet only part of the definition.
Put more simply, quality can be defined as how well something measures up to a standard. But what if the standard sucks? What if the quality of the standard is low? Example: the standard for an app says it only has to run 1 out of 5 times when you double-click its icon. If you test for that and the app meets the standard, you can say it's of good quality, right? ...right?

It reminds me of testing apps whose functionality was written without specifications. When I test the app and find something that doesn't make sense or "doesn't work", Development can say that it's not a bug--it's working as intended. ...which is totally relative, but totally true. It's just the intention that was flawed. ...and the only way to coerce a change is to make some great and sneaky argument, or bring in some sort of exterior, already-defined standard that all of a sudden makes the given functionality look like crap. In that case, the developer had set his own standard, which I thought wouldn't be up to the standards of the end user.

In the example of the 1/5 standard above, if you're like me, your brain makes a judgment call on the standard itself--probably without you realizing it. In essence, and in the context of this post, you're impelled to hold the standard to some implied standard--a standard that's sort of like a code of ethics that pervades a culture. You know that cutting in line at the Post Office is just a no-no, not because there are any signs that say so--you just know. Same idea. In the case of the developer working without a spec, my code of ethics was just different than his.

So in order for Dev and QA teams to be efficient, a general practice is to have both departments agree on what's acceptable and what's not. They define the standard as they see fit for their organization and customers. But the trick is: how do we know when the standards we've set are good standards? Where does the standard's standard come from? How do you know right off the bat that the requirement that the app only has to run 1/5th of the time really sucks?

Some correlation can be found in the study of Truth. People have been studying these concepts for a couple thousand years, as opposed to the drop in the bucket we've spent studying software. And there are probably just as many theories on SW development and testing practices as there are on Truth. So without going too in-depth into the topic, I believe there are some interesting discoveries to be made when considering the various theories of Truth.

07 February 2008

Let's get mathy

A picture of OA(4,2,3)

Pairwise testing is a technique that I was introduced to about 6 months ago and have really looked forward to getting the chance to use since then. I ran across (again) a list of tools that help generate lists of test cases to run according to the possible input data (variants). When the time is right, I wanna try some of these out.
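
For my own reference, here's a rough sketch of the greedy idea behind those tools (the parameter names and values below are made up, and real generators use smarter algorithms than this):

from itertools import combinations, product

def pairwise_tests(parameters):
    # Greedy all-pairs generator: returns a list of test cases (dicts) such that
    # every value-pair across any two parameters shows up in at least one case.
    # It's a covering set, not guaranteed to be minimal.
    names = list(parameters)
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add((a, va, b, vb))

    tests = []
    while uncovered:
        # seed the new case with one still-uncovered pair so every loop makes progress
        a, va, b, vb = next(iter(uncovered))
        case = {a: va, b: vb}
        for name in names:
            if name in case:
                continue
            best_value, best_gain = None, -1
            for value in parameters[name]:
                # how many uncovered pairs would this value close out, given what's assigned so far?
                gain = sum(
                    1 for (pa, pva, pb, pvb) in uncovered
                    if (pa == name and pva == value and pb in case and case[pb] == pvb)
                    or (pb == name and pvb == value and pa in case and case[pa] == pva)
                )
                if gain > best_gain:
                    best_value, best_gain = value, gain
            case[name] = best_value
        # drop every pair this new case covers
        uncovered = {
            (pa, pva, pb, pvb) for (pa, pva, pb, pvb) in uncovered
            if not (case[pa] == pva and case[pb] == pvb)
        }
        tests.append(case)
    return tests

if __name__ == "__main__":
    params = {
        "OS": ["XP", "Vista"],
        "Browser": ["IE7", "Firefox"],
        "Locale": ["en-US", "de-DE", "ja-JP"],
    }
    for case in pairwise_tests(params):
        print(case)

Against those three parameters it should only need a handful of cases rather than the full 2 x 2 x 3 = 12 combinations, while still exercising every pair of values at least once.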

Metrics

wtfm.jpg (source)

06 February 2008

What's a "feature"? Really...

I've been searching all over the interweb for someone software-y to define "feature" and can find a bazillion uses of the word, but no definitions. Strange. Lots of assumptions made in this field...

ISO 9001 vs CMM

A basic high-level comparison between the two industry quality standards: What do ISO 9001 and CMM mean to your organization? (courtesy techrepublic.com)

ISO 9126 and Reliability

I've been reading through the ISO 9126 standard doc (in case you don't have ISO numbers memorized, it's Software engineering - Product quality), and amongst other interesting things thus far, I encountered a section titled "Quality model for external and internal quality" which outlines areas of an application that should be tested; test types, in essence. Of those test types, one was Reliability.  It defined Reliability as:
The capability of the software product to maintain a specified level of performance when used under specified conditions.
...but what really grabbed me was the first note under that definition:
NOTE 1  Wear or ageing does not occur in software.  Limitations in reliability are due to faults in requirements, design, and implementation.  Failures due to these faults depend on the way the software product is used and the program options selected rather than on elapsed time.
This blatantly points the finger at requirements, design, and implementation when dealing with reliability. I think to some this might seem like a "duh" statement, but that whole note is quite enabling for a software test engineer.

Philosophia I

From wikipedia.com:

Philosophy is the discipline concerned with questions of how one should live (ethics); what sorts of things exist and what are their essential natures (metaphysics); what counts as genuine knowledge (epistemology); and what are the correct principles of reasoning (logic).[1][2] The word is of Greek origin: φιλοσοφία (philosophía), meaning love of wisdom.[3]

I recently had a conversation with a friend of mine who's teaching a JC class on Logic, which is how we landed on that subject. We found our way to it, however, by discussing a real-life situation that seemed to violate the logic of foundational, core moral standards. Things about the situation were utterly perplexing; it seemed the path from the reality of yesteryear could not lead to today's reality. Yet, as I seem to hear so often lately: "It is what it is." Funny though... I couldn't help but notice that the logic we tried to apply to my friend's situation was quite similar to the logic we try to apply to engineering a piece of software:
  • We take this timeframe (the SDLC) and try to mesh different sorts of tasks into virtual compartments of the timeframe--sort of like Play Dough into cookie cutters--to help us manage intricacies that our brains can't really handle all at once. The SDLC is really just a tool that some smart people came up with to help us get stuff done.
  • Logic is really just some framework that helps us explain certain things (!=all things) away in reality. Also derived by smart people.
Sometimes the Play Dough just doesn't all fit.