29 May 2008
Vista & Its Own Mind #2
Yet again, Vista presents the pleasant surprise of closing out all my work and rebooting, just to apply updates it never told me about. I'm appalled for the second time now.
Update:
And Outlook didn't even save the emails I had written but not yet sent... So freaking annoying.
28 May 2008
QC's: Maturity v. Stability
ISO 9126 presents the concept of "quality characteristics", which is proving to be both helpful and confusing here at work. We've had our longest group process meetings on this topic--just trying to figure out and agree on what each one of these means. One of the big problems is figuring out what some of these are not. We use many of these words loosely in the testing/development world, thus setting ourselves up for confusion when we talk about this stuff.
Technical Malapropisms
Now, no one likes a vocal grammar Nazi around, but sometimes it really is important to draw distinctions between the things you talk about on a day-to-day basis.
One of the many bits of confusion came when talking about "Stability". Stability falls (as a sub-characteristic) under the main characteristic of "Maintainability", which can make sense if you take a minute to ponder it. When we started trying to come up with examples, however, most of us came up with examples of "Maturity", which falls under "Reliability." It seems that in non-ISO conversations, stability is talked about in a way that encompasses both of these sub-characteristics. I did some brief Googling to see if we were the only ones with this "problem" and found an abstract in Ecological Modelling: An inverse relationship between stability and maturity in models of aquatic ecosystems. This article on software maturity models suggests that stability is a factor that makes up maturity. ISO says otherwise:
Maturity:
The capability of the software product to avoid failure as a result of faults in the software.
Stability:
The capability of the software product to avoid unexpected effects from modifications of the software.
Maturity deals with "failure as a result of faults"; Stability deals with "unexpected effects from modifications."
Failures v. Unexpected Effects
Maturity deals with failures; Stability deals with unexpected effects. What's the difference? ...in terms of test cases passing, I say: nothing. Both show up as tests not passing; it doesn't matter what you call it. ...unless this means to say that a product is unstable if changes in module A unexpectedly affect the behavior of module B. One could then interpret "failure" as module A failing to meet module A's specification when testing module A; "unexpected effects" are then module B failing to meet module B's specification when testing module B after module A was changed.
Faults v. Modifications
Maturity deals with faults in the software; Stability deals with modifications of the software. What's the difference? The key differences become, if anything, more confusing when you take a look at the definition and notes of the parent characteristic for each sub-characteristic. Note 1 for "Reliability" (parent of Maturity) says:
"Wear or aging does not occur in software. Limitations in reliability are due to faults in requirements, design, and implementation. Failures due to these faults depend on the way the software product is used and the program options selected rather than on elapsed time."Interesting. A fault can be in the requirements. A fault can be in the design. A fault can be in the implementation. Lack of maturity, then, is failures in one of these fault areas. The 2nd sentence in the definition of "Maintainability" (parent of Stability) says:
"Modifications may include corrections, improvements or adaptation of the software to changes in environment, and in requirements and functional specifications."A modification requires that something exist already; a new "version" of that thing comes about after modifications are made. Stability, then, is strictly tied to how the application behaves in comparison to how it behaved in the last version. Lack of stability is evident, then, when the same test activities are conducted in two different versions of the application, both in the same environment, and failures occur in unexpected areas of the application.
Conclusions
So one might be led to conclude that when testing version 2 of some app, you really could run test A and essentially be testing for both stability and maturity. But like many of these other ISO characteristics, the line is quite fine. The only way I can see it, it comes back to one thing: what's affected. If you test A after changes were made in A, then you're testing the maturity of A. If you test [B-Z] after making changes in A, you're testing the stability of the app.
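To make that concrete, here's a minimal sketch in Python (the modules and numbers are hypothetical, purely for illustration--none of this comes from the ISO text):

    import unittest

    # Module A was just modified in "version 2" of the app.
    def discount_price(price, rate):
        # v2 change: result is now rounded to 2 decimal places
        return round(price * (1 - rate), 2)

    # Module B is unchanged since version 1, but it calls module A.
    def invoice_total(prices, rate):
        return sum(discount_price(p, rate) for p in prices)

    class TestMaturityOfA(unittest.TestCase):
        # Testing A after changing A: a failure here is a maturity problem.
        def test_discount_meets_spec(self):
            self.assertEqual(discount_price(10.00, 0.25), 7.50)

    class TestStabilityViaB(unittest.TestCase):
        # Testing B after changing A: an unexpected failure here is a
        # stability problem--a side effect of the modification.
        def test_invoice_total_unchanged(self):
            self.assertEqual(invoice_total([10.00, 20.00], 0.25), 22.50)

    if __name__ == "__main__":
        unittest.main()

Run the same suite against version 1 and version 2 in the same environment: a new failure in TestStabilityViaB is exactly the kind of "failure in an unexpected area" described above.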
Labels: iso, maturity, quality, reliability, stability
14 May 2008
Teach me to test
In seeking to appease my recent desire to get back into writing code, I flirted with the idea of purchasing a book on Xcode and Objective-C so I could get crackin' on some Mac dev. With my other recent desire of cutting back on spending, I decided to check out cocoalab.com's free eBook on the topic. While it covers the uber-basics of getting into development, it also covers the uber-basics of Xcode, which is what I really need. It also does this on Leopard--something no books on the market can tout yet (so I've read).
So I blew through the first 6 chapters before having to attend my roomie's bday party, and am excited to get back to it ASAP.
It just occurred to me, though, that while the book talks about debugging in Xcode, it barely talks about testing (well, so far). And then it occurred to me: most development books that I've ever read don't really make many suggestions about testing at all--much less about how or when to test the code you wrote. I realize that Test-driven Development is just one suggested technique, but it seems to me that if developers at least followed its concepts, they would find more success. Thus, if books taught how to test your code in equal doses as they taught how to write code, we might see a reversal in the economy. :-)
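For the unfamiliar, the core loop of TDD is simply: write a failing test first, then write the least code that makes it pass. A tiny sketch in Python (the slugify function is made up for the example; the book itself is about Objective-C):

    import unittest

    # Step 1: write the test first. It fails, because slugify() doesn't exist yet.
    class TestSlugify(unittest.TestCase):
        def test_lowercases_and_hyphenates(self):
            self.assertEqual(slugify("Teach Me To Test"), "teach-me-to-test")

    # Step 2: write the simplest code that makes the test pass.
    def slugify(title):
        return "-".join(title.lower().split())

    if __name__ == "__main__":
        unittest.main()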
Labels: development-testing, discipline
05 May 2008
A test case is a scientific procedure
...so treat it as such! After reading through hundreds of test cases at work in the past couple of weeks, I'm getting extremely frustrated seeing test cases that state neither their point nor how to tell whether what happened when the test ran was good or bad (pass/fail). I've seen countless functional test case summaries that state how to do something (instead of what the point of the test is); ...countless test descriptions that state only the object under test (instead of how acting on that object should result in some expected outcome); ...and countless "expected results" sections with sentences that aren't even complete sentences, let alone describe how to determine if the test passed or failed.
We all did experiments in junior high, right? In its simplest form, a test case is just like that--make your hypothesis, plan out the steps you need to "run" in order to try to prove your hypothesis, get your gear together, run through your steps, then get a poster board and some spatter paint, write up your results (plus charts & graphs) and stick them to the board (make sure not to sniff the glue or you'll get in trouble! jk...).
Without even knowing a product, I should at least be able to understand what the point of a test case is when reading it. Next, the test case must have a goal of achieving something explicit, so that when I run through the steps I won't have to make any new decisions about injecting any new data from outside of my initial plan (the initial plan is a direct result of my hypothesis, so injecting new data may not coincide with what I'm trying to prove in the first place). Lastly, I should never see multiple possible expected outcomes as the result for a test--if there are 2 possible paths through a certain component, each path needs its own test case.
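Here's what that looks like in practice--a minimal sketch in Python with a made-up login() function (hypothetical names, just for illustration). Two paths through the component means two test cases, each with exactly one expected outcome:

    import unittest

    # Hypothetical function under test.
    def login(username, password):
        return username == "admin" and password == "s3cret"

    class TestLoginValidCredentials(unittest.TestCase):
        # Hypothesis: valid credentials grant access.
        def test_valid_credentials(self):
            # Expected result: login() returns True. Pass/fail is unambiguous.
            self.assertTrue(login("admin", "s3cret"))

    class TestLoginWrongPassword(unittest.TestCase):
        # Hypothesis: a wrong password denies access.
        def test_wrong_password(self):
            # Expected result: login() returns False.
            self.assertFalse(login("admin", "wrong"))

    if __name__ == "__main__":
        unittest.main()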
Be explicit with one (and only one) point to the test case. Please.