26 September 2008
Politics should be like Software Development
25 July 2008
Vista auto-reboot-for-updates #4
12 June 2008
Vista auto-reboot-for-updates #3
11 June 2008
RANT: writing 1000 test cases is not that bad
04 June 2008
Words (for all developers) To Live By
29 May 2008
Vista & Its Own Mind #2
28 May 2008
QC's: Maturity v. Stability
ISO 9126 presents the concept of "quality characteristics", which is proving to be both helpful and confusing here at work. We've had our longest group process meetings on this topic--just trying to figure out and agree on what each one of these means. One of the big problems is figuring out what some of these are not. We use many of these words loosely in this world of testing/development, thus setting ourselves up for confusion when talking about this stuff.
Technical Malapropisms
Now, no one likes a vocal grammar Nazi around, but sometimes it really is important to draw distinctions between the things you talk about on a day-to-day basis.
One of the many bits of confusion came when talking about "Stability". Stability falls (as a sub-characteristic) under the main characteristic of "Maintainability", which can make sense if you take a minute to ponder. When we started trying to come up with examples of this, however, most of us came up with examples of "Maturity", which falls under "Reliability." It seems that in non-ISO conversations, stability is talked about in a way that encompasses both of these sub-characteristics. I did some brief googling to see if we were the only ones with this "problem" and found an abstract in Ecological Modelling: An inverse relationship between stability and maturity in models of aquatic ecosystems. This article on software maturity models suggests that stability is a factor that makes up maturity. ISO says otherwise:
Maturity:
The capability of the software product to avoid failure as a result of faults in the software.
Stability:
The capability of the software product to avoid unexpected effects from modifications of the software.
Maturity deals with "failure as a result of faults"; Stability deals with "unexpected effects from modifications."
Failures v. Unexpected Effects
Maturity deals with failures; Stability deals with unexpected effects. What's the difference? ...in terms of test cases passing, I say: nothing. Both are the absence of passing; it doesn't matter what you call it. ...unless this means to say that a product is unstable if changes in module A unexpectedly affect the behavior in module B. One could then interpret "failure" as the inability of module A to meet module A's specification when testing module A; "unexpected effects" are then the inability of module B to meet module B's specification after modifications were made to module A.
Faults v. Modifications
Maturity deals with faults in the software; Stability deals with modifications of the software. What's the difference? The key differences become clearer (or possibly more confusing) when you take a look at the definition and notes of the main characteristic behind each sub-characteristic. Note 1 for "Reliability" (parent of Maturity) says:
"Wear or aging does not occur in software. Limitations in reliability are due to faults in requirements, design, and implementation. Failures due to these faults depend on the way the software product is used and the program options selected rather than on elapsed time."Interesting. A fault can be in the requirements. A fault can be in the design. A fault can be in the implementation. Lack of maturity, then, is failures in one of these fault areas. The 2nd sentence in the definition of "Maintainability" (parent of Stability) says:
"Modifications may include corrections, improvements or adaptation of the software to changes in environment, and in requirements and functional specifications."A modification requires that something exist already; a new "version" of that thing comes about after modifications are made. Stability, then, is strictly tied to how the application behaves in comparison to how it behaved in the last version. Lack of stability is evident, then, when the same test activities are conducted in two different versions of the application, both in the same environment, and failures occur in unexpected areas of the application.
Conclusions
So one might be led to conclude that when testing version 2 of some app, you really could run test A and essentially be testing for both stability and maturity. But like many of these other ISO characteristics, the line is quite fine. The only way I can see to draw it comes back to one thing: what's affected. If you test A after changes were made in A, then you're testing the maturity of A. If you test [B-Z] after making changes in A, you're testing the stability of the app.
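To make that rule a bit more concrete, here's a minimal sketch in Python (entirely illustrative--the module names and the idea of tracking "changed modules" are my own assumptions, not anything from ISO 9126) of how a failing test could be binned as a maturity finding or a stability finding:

```python
# Hypothetical sketch: bin a failure as evidence against Maturity or
# Stability, based on where the modifications were made this version.
# Module names and data structures are illustrative assumptions only.

CHANGED_MODULES = {"A"}  # modules modified in this version

def classify_failure(test_module: str) -> str:
    """Say which sub-characteristic a failure in `test_module` speaks to."""
    if test_module in CHANGED_MODULES:
        # Testing A after changes in A: the fault lives in the changed
        # code itself, so the failure counts against Maturity.
        return "Maturity"
    # Testing B-Z after changes in A: an unexpected effect of the
    # modification, so the failure counts against Stability.
    return "Stability"

failing_modules = ["A", "C", "F"]  # modules whose tests failed in version 2
for module in failing_modules:
    print(f"Failure in module {module}: counts against {classify_failure(module)}")
```

The point is just the split above: failures where the change was made speak to maturity; failures anywhere else speak to stability.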
21 May 2008
14 May 2008
Teach me to test
05 May 2008
A test case is a scientific procedure
21 April 2008
Testing is an Engineering discipline
Engineering is the discipline and profession of applying scientific knowledge and utilizing natural laws and physical resources in order to design and implement materials, structures, machines, devices, systems, and processes that realize a desired objective and meet specified criteria. More precisely, engineering can be defined as "the creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property."
Engineering - Wikipedia, the free encyclopedia
...and notice the word "discipline". Development Engineers can (and should) be hounded for bad formatting and for not commenting their code--along those same lines of discipline, let the rest of us know what you're trying to accomplish when writing your tests! Be explicit about what you're trying to test for, show the steps for how to test for it, and be concise about what to expect. Just as a screwy, uncommented piece of code that someone else wrote can drive a Development Engineer nuts, a screwy test can have the same effect on your fellow Test Engineers. Please, learn to be disciplined in your work--you'll gain respect from your co-workers and probably realize that you missed a few things along the way. It's not hard--it just takes discipline!
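As a purely illustrative sketch of that discipline (Python-flavored; the "app" here is a trivial stand-in I made up, not a real test harness), here's what it looks like when the intent, the steps, and the expected result are all written down where the next Test Engineer can find them:

```python
# Illustrative sketch only: FakeApp is a stand-in for the application
# under test, because the point is the discipline of the test itself.
# Intent, steps, and expected result are written down, not left in the
# test author's head.

class FakeApp:
    """Stand-in for the application under test (an assumption, not a real API)."""
    def __init__(self):
        self.started = False

    def launch(self):
        self.started = True  # a real harness would spawn the actual process
        return self

    def main_window_visible(self):
        return self.started

def test_app_starts_with_clean_profile():
    """Verify the app starts when launched with no existing user profile.

    Steps:
      1. Start from a clean state (no profile).
      2. Launch the application.
      3. Check that the main window appears.

    Expected:
      The main window is visible and no error is raised.
    """
    app = FakeApp()                   # step 1: clean state
    app.launch()                      # step 2: launch
    assert app.main_window_visible()  # step 3: expected result

if __name__ == "__main__":
    test_app_starts_with_clean_profile()
    print("test_app_starts_with_clean_profile passed")
```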
16 April 2008
US FDA/CDRH: General Principles of Software Validation; Final Guidance for Industry and FDA Staff
Software verification and validation are difficult because a developer cannot test forever, and it is hard to know how much evidence is enough. In large measure, software validation is a matter of developing a "level of confidence" that the device meets all requirements and user expectations for the software automated functions and features of the device. Measures such as defects found in specifications documents, estimates of defects remaining, testing coverage, and other techniques are all used to develop an acceptable level of confidence before shipping the product. The level of confidence, and therefore the level of software validation, verification, and testing effort needed, will vary depending upon the safety risk (hazard) posed by the automated functions of the device.
There's also a great section called "Software is Different From Hardware", which points out some great subtle-but-huge differences between the two. Section 5, "Activities and Tasks", has some good practical info on planning and test tasks--both Black Box and White Box (although not explicitly so). US FDA/CDRH: General Principles of Software Validation; Final Guidance for Industry and FDA Staff
10 April 2008
Vista annoyance
Apple's Characteristics of Great Software
- 2 out of 7 (Ease of Use, Attractive Appearance) are related to ISO's Usability
- 1 out of 7 (Reliability) is related to ISO's Reliability
- 1 out of 7 (High Performance) is related to ISO's Efficiency
- 1 out of 7 (Interoperability) is related to ISO's Functionality
- 1 out of 7 (Adaptability) is related to ISO's Portability
- Mobility seems to be a hybrid of ISO's Usability, Functionality, and Efficiency
28 March 2008
27 March 2008
The BS7925-2 Standard for Software Component Testing
- 2.1.1.8 mentions the order in which test activities should be done--it includes Component Test Specification (sandwiched between Planning and Execution)
- 2.3 "Component test specification". In brief, tests are written here using the techniques that were determined doing Planning (read: tests weren't written during Planning, but the techniques were chosen then)
- 3 "Test Case Design Techniques". Concisely describes popular and useful techniques for creating test cases for a component.
- 4 "Test Measurement Techniques". These aren't criteria, but rather methods for helping to figure out progress--and maybe setting criteria based on this info. They show how to do this for each test case technique type in section 3 (most are pretty obvious, but it's still nice to see on paper).
26 March 2008
softwaretestingsucks.com
I'm not exactly sure how this got started, but there are actually a few decent articles/pages of info in here on the basics of testing:
- What is Quality?
- Life Cycle testing
- Testing Types
- Testing Techniques
- Testing Tools
- Certification Programs
- Testing Jokes
19 March 2008
Testing Removable Media
14 March 2008
Negative test cases
21 February 2008
Thinking Out Loud: Requirements vs. Criteria
- Requirements are a list of things that must get done for a project, regardless of their outcome.
- Criteria are the assessment of the results of the activities done to meet those requirements.
MyApp must run/startup on Vista.
...This lets you know that you (at least) have to write tests that cause MyApp to try to run/start on Vista, then run those tests. You set criteria so you know the level of "quality" to which that requirement was met. In order to determine the level of quality, you look at the results of running the tests. Ex.:
Criteria = MyApp must run/startup 100% of the time, in at least 1000 attempts, on each edition of Vista.
Result = MyApp ran/started-up on 5 different Vista Editions, a total of 1200 times each, and started 99.999% of the time.
...Criteria not met. Hmm... I think I like it....
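A minimal sketch of that split in Python (the numbers mirror the example above; the result data and helper names are my own assumptions): the requirement tells you what to exercise, and the criteria are an explicit pass/fail bar applied to the results of exercising it.

```python
# Illustrative sketch: evaluating criteria against test results.
# The requirement ("MyApp must run/startup on Vista") says what to
# exercise; the criteria below are the explicit bar the results must clear.

REQUIRED_SUCCESS_RATE = 1.0    # criterion: 100% of attempts must succeed...
REQUIRED_MIN_ATTEMPTS = 1000   # criterion: ...over at least 1000 attempts per edition

# Hypothetical results: edition -> (attempts, successes)
results = {
    "Vista Home Basic":   (1200, 1200),
    "Vista Home Premium": (1200, 1200),
    "Vista Business":     (1200, 1199),  # one failed startup
    "Vista Enterprise":   (1200, 1200),
    "Vista Ultimate":     (1200, 1200),
}

def criterion_met(attempts: int, successes: int) -> bool:
    return (attempts >= REQUIRED_MIN_ATTEMPTS
            and successes / attempts >= REQUIRED_SUCCESS_RATE)

for edition, (attempts, successes) in results.items():
    verdict = "met" if criterion_met(attempts, successes) else "NOT met"
    print(f"{edition}: {successes}/{attempts} startups -> criteria {verdict}")
```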
20 February 2008
13 February 2008
FIFA.com - Football - Test Criteria
FIFA Quality Concept (FIFA.com - Football - Test Criteria):
The FIFA Quality Concept for Footballs is a test programme for Outdoor, Futsal and Beach Soccer footballs. Manufacturers have the possibility to enter into a licensing agreement for the use of the prestigious 'FIFA APPROVED' and 'FIFA INSPECTED' Quality Marks on footballs which have passed a rigorous testing procedure. As an alternative there is the possibility to use the wording 'IMS International Matchball Standard'. Footballs bearing this designation have passed the same quality requirements as 'FIFA INSPECTED' footballs. The use of this designation is however not subject to a licence fee and any association with FIFA is prohibited.
There are two levels of criteria for the three designations. Footballs applying for the category 'FIFA INSPECTED' or the technically equivalent 'IMS - International Matchball Standard' must pass the following six rigorous laboratory tests:
- Weight
- Circumference
- Sphericity
- Loss of Air Pressure
- Water Absorption (replaced with a 'Balance' test for testing of Futsal balls)
- Rebound
Footballs applying for the higher 'FIFA APPROVED' mark must pass the six tests at an even more demanding level and must undergo an additional test:
- Shape and Size Retention (Shooting Test)
12 February 2008
11 February 2008
Truth and quality I
The totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs. Not to be mistaken for "degree of excellence" or "fitness for use" which meet only part of the definition.
Put more simply, quality can be defined as how well something measures up to a standard. But what if the standard sucks? What if the quality of the standard is low? Example: the standard for an app says it only has to run 1 out of 5 times when you double-click its icon. If you test for that and the app meets the standard, you can say it's of good quality, right? ...right?
It reminds me of testing apps whose functionality was written without specifications. When I test the app and find something that doesn't make sense or "doesn't work", Development can say that it's not a bug--it's working as intended. ...which is totally relative, but totally true. It's just the intention that was flawed. ...and the only way to coerce a change is to make some great and sneaky argument, or bring in some sort of exterior, already-defined standard that all of a sudden makes the given functionality look like crap. In the example above, the developer had set his own standard, which I thought wouldn't be up to the standards of the end user.
In the example of the 1/5 standard above, if you're like me, your brain makes a judgment call on the standard itself--probably without you realizing it. In essence, and in the context of this post, you're impelled to hold the standard to some implied standard--a standard that's sort of like a code of ethics that pervades a culture. You know that cutting in line at the Post Office is just a no-no, not because there are any signs there that say so--you just know. Same idea. In the case of the developer working without a spec, my code of ethics was just different than his.
So in order to have Dev and QA teams be efficient, a general practice is to have both departments agree on what's acceptable and what's not. They define the standard as they see fit for their organization and customers. But the trick is: how do we know when the standards that we've set are good standards? Where does the standard's standard come from? How do you know right off the bat that the requirement that the app only has to run 1/5th of the time really sucks?
Some correlation can be found when looking at the study of Truth. People have been studying concepts here for a couple thousand years, as opposed to the drop in the bucket we've spent on studying software. And there are probably just as many theories on SW development and testing practices as there are on Truth. So without going too in-depth on the topic, I believe there are some interesting discoveries to be made when considering the various theories of Truth.
07 February 2008
Let's get mathy
06 February 2008
What's a "feature"? Really...
ISO 9001 vs CMM
ISO 9126 and Reliability
The capability of the software product to maintain a specified level of performance when used under specified conditions.
NOTE 1 Wear or ageing does not occur in software. Limitations in reliability are due to faults in requirements, design, and implementation. Failures due to these faults depend on the way the software product is used and the program options selected rather than on elapsed time.
Philosophia I
I recently had a conversation with a friend of mine who's teaching a JC class on Logic--one such subject. We found ourselves on said topic, however, by discussing a real-life situation that seemed to violate the logic of foundational core moral standards. Things about the situation were utterly perplexing; the path from the reality of yesteryear seemed like it could not lead to today's reality. Yet, as I seem to hear so often lately: "It is what it is." Funny though... I couldn't help but notice that the logic we tried to apply to my friend's situation was quite similar to the logic we try to apply to engineering a piece of software:
Philosophy is the discipline concerned with questions of how one should live (ethics); what sorts of things exist and what are their essential natures (metaphysics); what counts as genuine knowledge (epistemology); and what are the correct principles of reasoning (logic).[1][2] The word is of Greek origin: φιλοσοφία (philosophía), meaning love of wisdom.[3]