13 February 2010

I think I moved blogs?

Hi all... I've been trying out tumblr.com for a month or so now for my blogging-ish outlet and have been really digging it. For my 7 followers out there, I think I've made the decision to post solely over there--check it at http://musicismyeverest.tumblr.com. I've actually moved all my posts from here to there too!


16 January 2010

Seven Steps to Test Automation Success


In starting a new test automation group at work, I've been doing some industry research to gather do/don't ideas. I really, really dislike the idea of GUI x,y-coordinate automation; the GUI object-aware automation tools seem pricey, still look like tons of maintenance, and feel really distanced from testing the real meat of most apps. Most info I've found on the topic deals either with the GUI level (probably since that's "easiest" to implement, despite being difficult to maintain) or with the unit level--while those levels certainly have their merits, I was looking for info on the level in between; Google calls this the "medium" level.
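To make that "medium" level a little more concrete for myself, here's a rough sketch (in Ruby, just because that's what I reach for) of what a check at that level might look like--driving the app through an HTTP interface just under the GUI. The host, endpoint, and expected status code are all made up for illustration, not from any real app:

    require 'test/unit'
    require 'net/http'
    require 'uri'

    # A "medium" level check: exercise the app through its HTTP interface,
    # below the GUI but above the unit level. Host, path, and status code
    # are hypothetical.
    class LoginServiceTest < Test::Unit::TestCase
      BASE_URL = 'http://localhost:8080'

      def test_login_rejects_a_bad_password
        uri = URI.parse("#{BASE_URL}/login")
        response = Net::HTTP.post_form(uri, 'user' => 'alice', 'password' => 'wrong')
        assert_equal '401', response.code, 'bad credentials should be rejected'
      end
    end

No GUI coordinates, no unit-level mocking--just the app's own interface, which is the layer I'd like my team to live at.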

I found this article by Bret Pettichord called "Seven Steps to Test Automation Success", where he talks about the difficulties in automating tests, and I really think he hits a lot of nails on the head. He argues that in developing automated tests you should really treat your work like software development, then walks through his seven steps:

  1. Improve the Testing Process
  2. Define Requirements
  3. Prove the Concept
  4. Champion Product Testability
  5. Design for Sustainability
  6. Plan for Deployment
  7. Face the Challenges of Success
In each step, Bret focuses on laying a lot out on the table--often things that I hadn't really processed as things that needed to be on the table. I like how he consistently hints at making the app under test testable, thus driving ease of use not just for automators and testers, but for users as well. I think he does a wonderful job describing how test automation teams can bridge the gap between development and Black Box testing, and that's definitely a challenge I'm going to face in building my team.

All in all, there's tons of meat in this article, so take it in sections or get a cup of coffee and get comfy, but either way I think this is going to be a reference of mine for quite some time now.


Ruby Net::HTTP Cheat Sheet

I think this will prove to be a good resource--especially since I don't use this library that often. Hope you find it useful...
Net::HTTP Cheat Sheet
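In case the link ever dies, here's the flavor of what it covers--a bare-bones GET and form POST with Net::HTTP (example.com is just a placeholder host):

    require 'net/http'
    require 'uri'

    # Simple GET against a placeholder host
    uri = URI.parse('http://example.com/index.html')
    response = Net::HTTP.get_response(uri)
    puts response.code          # e.g. "200"
    puts response.body[0, 80]   # first 80 characters of the body

    # Simple form POST against the same placeholder host
    post_uri = URI.parse('http://example.com/search')
    response = Net::HTTP.post_form(post_uri, 'q' => 'ruby', 'max' => '10')
    puts response.code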


04 November 2009

Clean IRCing on OSX

I've been IRCing at work quite a bit lately, which has been pretty great. Every now and then I find it exciting to return to old tech and start using it again--it's usually simple and complicated all at the same time, but for some reason I find that stimulating.


In any case... I used Colloquy for a while, which is a great modern IRC client, but after dealing with some annoying bugs, downloading the source, rebuilding, and not getting any love, I decided to search for something new. And I found irssi. irssi is totally old school, but totally new school. I love it. There are a ton of getting started guides out there, but it was nice to find a Mac-specific one: A Guide to Efficiently Using Irssi and Screen | quadpoint.org; definitely worth checking out.


My irssi'ing got even better when I combined it with Visor from Blacktree (makers of Quicksilver). Visor is a simple HUD-style SIMBL plugin for Terminal.app that allows key-combo hiding/unhiding. This is great for moving my IRC client (Terminal.app + irssi) on/off screen quickly, keeping my desktop clean so I can focus on my work.


Just thought I'd share with y'all...


19 May 2009

GTAC 2008 Keynote Address: The Future of Testing

Quite a thought-provoking speech. I was fortunate to see this one in person! Take an hour and check it out--it's totally worth it.

04 March 2009

Density

This ended my late-night work marathon on a happy note:

23 February 2009

Hitler's nightly build fails

Oh man this is good. We should hire (no, not "heil") Hitler.... I love the part where the lady consoles the other lady saying, "Don't worry, he didn't mean that about Scrum." Genius.

26 September 2008

Politics should be like Software Development

I'm watching the presidential debates.  Each candidate claims the other said "this", then the other retorts with, "No, I said 'that'."  Back and forth.  So how do you know who's telling the truth, who's lying, who's twisting the truth, and who's misunderstood whom?

How great would it be to have a list of the things each candidate stated at any given time, so that when one of them gets elected, we can all see if they've actually done what they said they'd do.  It'd be kinda like SW requirements for some project--you define them, your teams implement them in the software, then other teams make sure they were implemented as stated, then you all get together and talk about (and tell the rest of the company) the good things you did and the bad things you did.

Who in the public actually keeps our president accountable?  I sure couldn't tell you what Dubya claimed he did and whether he did it.  It just doesn't seem like there's much accountability for the things these guys say they're going to do.

Also, it seems like these guys don't know if the other is talking about strategy, tactics, or post-mortem topics.  One attacks the other's strategies, the other justifies with their tactics, then the first attacks back by talking about the results of everything.  Is it that unclear?  It just seems like candidates thrive (or are told they'll thrive) on attacking the other, and in order to retort "politically", they work around the issues and talk about something else.

It's just annoying how much of a TV show these campaigns have become--candidates can say whatever they want to say as long as it entertains the public.  They work on our emotions, using planned-out catch phrases and directed topics.  But once the winner gets in office, who's there to keep them accountable?

25 July 2008

Vista auto-reboot-for-updates #4

*sigh*  I guess I'm used to this now, so it's not so surprising, although just as annoying.  I was just setting up a meeting in Outlook when all of a sudden Outlook started going in and out of focus, then *poof* everything closed and it was rebooting.  Once again, thanks, POS Vista, for not considering that I was actually working on something before you decided to reboot--reboot without ever having asked me if that's what I wanted.  OS X assumes a lot for me, and I actually like that, but never does it assume that I want to throw my work to the wind.  I really just don't get how this is good behavior.  Judging by the frequency of my blog posts on the topic, this has happened about once every 3-6 weeks since April.  Amazing.  My opinion of MS continues to plummet...

12 June 2008

Vista auto-reboot-for-updates #3

Yet again, I'm in the middle of my morning emailing, and all of a sudden, Vista decides to reboot without warning.  How the hell could M$ ever decide this was a good idea?  I really can't think of anything more frustrating than losing work.  And oh ya... it's been "Configuring updates" for the past 10 minutes now... awesome.  *sigh*

11 June 2008

RANT: writing 1000 test cases is not that bad

Over the past few months at work I've heard things like: "...but if we do it like that, that means I'll be writing test cases forever!"  Testers: do you think developers say that when they're assigned to engineer some feature?  If they do, they probably won't be keeping their jobs long.  When assigned to engineer a feature, a developer goes through some process to figure out what it is they need to do/not do, then writes the code that makes the thing do that, one line of code at a time, until the thing does what they think it should do.  What's so craaaazy about testing those lines of code in their different permutations??  Sure, there's merit in testing efficiently, but that's beside the point.  The point is, you do what it takes to get the job done--if that means writing 1000 new test cases in the next month, then that's what it means.

04 June 2008

Words (for all developers) To Live By

In coming up with a presentation for work on characteristics of quality, I've done a lot of expanding my brain.  Lots of defining, clarifying, quantifying, and a fair amount of reading.  This morning I took another look at Apple's ADC guide for Human Interface Guidelines--particularly the part on Characteristics of Great Software.  I'm pretty sure they'd updated it since the last time I'd been there, as I noticed a link on that page to another page called "Know Your Audience".  I followed.

First nugget: "It is useful to create scenarios that describe a typical day of a person who uses the type of software product you are designing."

Second nugget: "Develop your product with people and their capabilities—not computers and their capabilities—in mind."

Third nugget: "It is not your needs or your usage patterns that you are designing for, but those of your (potential) customers."

Read it.  Enough said.

29 May 2008

Vista & Its Own Mind #2

Yet again, Vista presents the pleasant surprise of exiting all my work and rebooting, just to apply updates that it never told me about. I'm appalled for the second time now. Update: And Outlook didn't even save the emails I had written but not yet sent... So freaking annoying.

28 May 2008

QC's: Maturity v. Stability


ISO 9126 presents the concept of "quality characteristics", which is proving to be both helpful and confusing here at work. We've had our longest group process meetings on this topic--just trying to figure out and agree on what each one of these means. One of the big problems is figuring out what some of these are not. We use many of these words loosely in this world of testing/development, thus setting ourselves up for confusion when talking about this stuff.

Technical Malapropisms
Now, no one likes having a vocal grammar Nazi around, but sometimes it really is important to draw distinctions between the things you talk about on a day-to-day basis.

One of the many bits of confusion came when talking about "Stability". Stability falls (as a sub-characteristic) under the main characteristic of "Maintainability", which can make sense if you take a minute to ponder. When we started trying to come up with examples of it, however, most of us came up with examples of "Maturity", which falls under "Reliability." It seems that in non-ISO conversations, stability is talked about in a way that encompasses both of these sub-characteristics. I did some brief google-ing to see if we were the only ones with this "problem" and found an abstract in Ecological Modelling: An inverse relationship between stability and maturity in models of aquatic ecosystems. This article on software maturity models suggests that stability is a factor that makes up maturity. ISO says otherwise:

Maturity:
The capability of the software product to avoid failure as a result of faults in the software.

Stability:
The capability of the software product to avoid unexpected effects from modifications of the software.

Maturity deals with "failure as a result of faults"; Stability deals with "unexpected effects from modifications."

Failures v. Unexpected Effects
Maturity deals with failures; Stability deals with unexpected effects. What's the difference? ...in terms of test cases passing, I say: nothing. Both are the absence of passing; it doesn't matter what you call it. ...unless this means to say that a product is unstable if changes in module A unexpectedly affect the behavior of module B. One could then interpret "failure" as module A failing to meet module A's specification when testing module A; "unexpected effects" are then module B failing to meet module B's specification when testing module B after modifications were made to module A.

Faults v. Modifications
Maturity deals with faults in the software; Stability deals with modifications of the software. What's the difference? The key differences possibly become more confusing when you take a look at the definition and notes of the parent characteristic of each sub-characteristic. Note 1 for "Reliability" (parent of Maturity) says:
"Wear or aging does not occur in software. Limitations in reliability are due to faults in requirements, design, and implementation. Failures due to these faults depend on the way the software product is used and the program options selected rather than on elapsed time."
Interesting. A fault can be in the requirements. A fault can be in the design. A fault can be in the implementation. Lack of maturity, then, is failure caused by faults in one of those areas. The 2nd sentence in the definition of "Maintainability" (parent of Stability) says:
"Modifications may include corrections, improvements or adaptation of the software to changes in environment, and in requirements and functional specifications."
A modification requires that something exist already; a new "version" of that thing comes about after modifications are made. Stability, then, is strictly tied to how the application behaves in comparison to how it behaved in the last version. Lack of stability is evident, then, when the same test activities are conducted in two different versions of the application, both in the same environment, and failures occur in unexpected areas of the application.

Conclusions
So one might be led to conclude that when testing version 2 of some app, you really could run test A and essentially be testing for both stability and maturity. But like many of these other ISO characteristics, the line is quite fine. The only way I can see to draw it comes back to one thing: what's affected. If you test A after changes were made in A, then you're testing the maturity of A. If you test [B-Z] after making changes in A, you're testing the stability of the app.
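A quick Ruby-flavored sketch of that distinction, with completely made-up module names (changed_modules is just whatever got touched in the new version):

    # Hypothetical modules in some app; only "checkout" was modified in v2.
    all_modules     = ['checkout', 'search', 'billing', 'reports']
    changed_modules = ['checkout']

    maturity_targets  = all_modules & changed_modules   # re-test what changed: Maturity
    stability_targets = all_modules - changed_modules   # re-test what didn't: Stability

    puts "Maturity run  (failures here point to faults in the changed code): #{maturity_targets.join(', ')}"
    puts "Stability run (failures here are unexpected effects of the change): #{stability_targets.join(', ')}"

Same test activities, same environment--the only thing that changes is which part of the app you're pointing them at, and therefore which characteristic a failure speaks to.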

21 May 2008

Sneaky


14 May 2008

Teach me to test

In seeking to appease my recent desire to get back into writing code, I flirted with the idea of purchasing a book on Xcode and Objective-C so I could get crackin' on some Mac dev.  With my other recent desire of cutting back on spending, I decided to check out cocoalab.com's free eBook on the topic.  While it covers the uber-basics of getting into development, it also covers the uber-basics of Xcode, which is what I really need.  It also does this on Leopard--something no books on the market can tout yet (so I've read).  So I blew through the first 6 chapters before having to attend my roomie's bday party, and am excited to get back to it ASAP.

It just occurred to me, though, that while the book talks about debugging in Xcode, it barely talks about testing (well, so far).  And then it occurred to me: most development books that I've ever read don't really make many suggestions about testing at all--much less about how or when to test the code you wrote.  I realize that Test-driven Development is really a suggested technique, but it seems to me that if developers at least followed these concepts, they would find more success.  Thus, if books taught how to test your code in equal doses as they taught how to write code, we might see a reversal in the economy.  :-)

05 May 2008

A test case is a scientific procedure

...so treat it as such!  After reading through hundreds of test cases at work in the past couple of weeks, I'm getting extremely frustrated seeing test cases that don't state their point, nor how to know whether what happens when running the test was good or bad (pass/fail).  I've seen countless functional test case summaries that state how to do something (instead of what the point of the test is); countless test descriptions that state only the object under test (instead of how acting on that object will result in some expected outcome); and countless "expected results" sections with sentences that aren't even complete sentences, let alone describe how to determine if the test passed or failed.

We all did experiments in junior high, right?  In its simplest form, a test case is just like that--make your hypothesis, plan out the steps you need to "run" in order to try to prove your hypothesis, get your gear together, run through your steps, then get a poster board and some spatter paint, write up your results (plus charts & graphs), and stick them to the board (make sure not to sniff the glue or you'll get in trouble!  jk...).

Without even knowing a product, I should at least be able to understand what the point of a test case is when reading it.  Next, the test case must have a goal of achieving something explicit, so that when I run through the steps I won't have to make any new decisions about injecting new data from outside of my initial plan (the initial plan is a direct result of my hypothesis, so injecting new data may not coincide with what I'm trying to prove in the first place).  Lastly, I should never see multiple possible expected outcomes as the result for a test--if there are two possible paths through a certain component, each path needs its own test case.  Be explicit, with one (and only one) point to the test case.  Please.
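For what it's worth, the same discipline carries straight over to automated tests. Here's a tiny, self-contained Ruby example (the Cart class is a made-up stand-in for whatever's under test): the test name states the point, the steps are the plan, and there's exactly one expected result.

    require 'test/unit'

    # A toy class standing in for the thing under test.
    class Cart
      def initialize
        @prices = []
      end

      def add(price_in_cents)
        @prices << price_in_cents
      end

      def total_in_cents
        @prices.inject(0) { |sum, p| sum + p }
      end
    end

    class CartTotalTest < Test::Unit::TestCase
      def test_total_is_the_sum_of_all_added_item_prices
        cart = Cart.new        # hypothesis setup: an empty cart
        cart.add(1000)         # the planned steps: add two items (prices in cents)
        cart.add(250)
        assert_equal 1250, cart.total_in_cents,
                     'expected the total to equal the sum of the item prices'
      end
    end

One point, one expectation. If there were a second path (say, an empty cart), it would get its own test case.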

21 April 2008

Testing is an Engineering discipline

In fact, as I understand it, in many companies Testing is the department that imposes discipline on Development--which shouldn't be the case.  SW Development Engineers have a set of steps they go through when developing an application/component/whatever, which is different from the steps that SW Test Engineers go through--but each has its purpose.  It is not Test's job to ensure that SW Development Engineers have gone through their steps properly; it is, however, Test's job to ensure that the product of Development's efforts is suitable for what it was intended for.

Next, SW Test Engineers--please please pleeeeeeaaassseee take some accountability for your job title!!  Check out Wikipedia's entry for Engineering:
Engineering is the discipline and profession of applying scientific knowledge and utilizing natural laws and physical resources in order to design and implement materials, structures, machines, devices, systems, and processes that realize a desired objective and meet specified criteria. More precisely, engineering can be defined as “the creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilizing them singly or in combination; or to construct or operate the same with full cognizance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.”
(Engineering - Wikipedia, the free encyclopedia)

Notice the word "discipline".  Development Engineers can (and should) be hounded for bad formatting and for not commenting their code--along those same lines of being disciplined, let the rest of us know what you're trying to accomplish when writing your tests!  Be explicit about what you're trying to test for, show the steps for how to test for it, and be concise about what to expect.  Just as a screwy, uncommented piece of code that someone else wrote can drive a Development Engineer nuts, a screwy test can have the same effect on your fellow Test Engineers.  Please, learn to be disciplined in your work--you'll gain respect from your co-workers and probably realize that you missed a few things along the way.  It's not hard--it just takes discipline!

16 April 2008

US FDA/CDRH: General Principles of Software Validation; Final Guidance for Industry and FDA Staff

I ran across this link while looking for a specific page in my del.icio.us test links, and while the products this document is geared towards are a little different from what I deal with on a daily basis, the concepts are great.  I particularly liked this passage--especially its first sentence--from the Verification & Validation section:
Software verification and validation are difficult because a developer cannot test forever, and it is hard to know how much evidence is enough. In large measure, software validation is a matter of developing a "level of confidence" that the device meets all requirements and user expectations for the software automated functions and features of the device. Measures such as defects found in specifications documents, estimates of defects remaining, testing coverage, and other techniques are all used to develop an acceptable level of confidence before shipping the product. The level of confidence, and therefore the level of software validation, verification, and testing effort needed, will vary depending upon the safety risk (hazard) posed by the automated functions of the device.
There's also a great section called "Software is Different From Hardware", which points out some subtle-but-huge differences between the two.  Section 5, Activities and Tasks, has some good practical info on planning and test tasks--both Black Box and White Box (although not explicitly so).  US FDA/CDRH: General Principles of Software Validation; Final Guidance for Industry and FDA Staff

10 April 2008

Vista annoyance

I love how Vista just decides to reboot and apply some updates while I'm in the middle of writing an email.  No warnings, nothing.  Just shuts down as if I said "Shutdown and kill all my apps in the process"...  Nicely done M$.  I love it when my OS gives me the finger.