There’s a lot of discussion going on in the software community about “testing” (see for example here, here and here). We think there are too many superstitions and assumptions in it. Very little of it helps in your day-to-day work, and it leaves you still wondering how to reliably and automatically test, for example, that crazy SQL query, your UI or that complex batch job.
We think you don’t need to plow through 800+ pages to get going. Testing, at its core, is very straightforward.
Let us explain. We do a fair amount of consulting, and some of us have mainly been working on payment processing systems for the past couple of years. Whenever you, let’s say, top up your e-wallet account with a credit card, a lot of stuff happens in the background, across a variety of distributed systems.
In short: a lot of room for failure, and thus plenty of opportunities for angry customers. And people *do* go absolutely nuts – very rightly so – if 5€/$/XXX go missing, are not credited instantly to their accounts, and they cannot complete the payment for their daughter’s Harry Potter doll on eBay. Now imagine that pain multiplied if parts of your system break down on Black Friday, even just for 2 minutes. So you probably want to be able to very quickly, automatically and repeatedly become *very* confident that your whole payment chain is up and running.
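To make “repeatably confident the payment chain works” concrete, here is a minimal sketch of an automated smoke check. Everything in it is a hypothetical stand-in – `WalletService`, `top_up` and the in-memory dict are placeholders for whatever your real system exposes; in practice you would point a check like this at a sandbox environment, not fake the service.

```python
from decimal import Decimal

class WalletService:
    """Hypothetical stand-in for the real payment chain.

    In production this would talk to a sandbox environment, not an
    in-memory dict -- the point is the shape of the automated check.
    """

    def __init__(self):
        self.balances = {}

    def top_up(self, account: str, amount: Decimal) -> Decimal:
        if amount <= 0:
            raise ValueError("top-up amount must be positive")
        self.balances[account] = self.balances.get(account, Decimal("0")) + amount
        return self.balances[account]


def smoke_test_topup(service: WalletService) -> bool:
    """Repeatable end-to-end check: top up 5 EUR, verify it is credited."""
    before = service.balances.get("smoke-account", Decimal("0"))
    after = service.top_up("smoke-account", Decimal("5.00"))
    return after == before + Decimal("5.00")


if __name__ == "__main__":
    assert smoke_test_topup(WalletService())
    print("payment chain smoke test: OK")
```

Because the check is a plain function, you can run it on every deploy, every hour on Black Friday, or in a loop – that’s the “quickly, automatically, repeatedly” part.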
It is of course a bit different if you are working in a two-man team on some smaller webpage/app of yours. It might well be that your modus operandi is to just hit F5 and redo the whole test workflow manually in the browser, even though it would very likely be much simpler to automate. But you might not know exactly how, and you really doubt it’s worth the effort for that small(ish) audience you are targeting.
After a couple of weeks you even automated that darn 15-field user registration with a tiny browser plugin! Plus, your product owner/business analyst/boss/crazy friend next to you just shouts over new requirements all day long, which not only change again from one day to the next, but are rarely more specific than “we need an XYZ mega feature, now!”. Under these circumstances, you might very well not care that your user registration throws exceptions on some crazy Chinese characters. Let those exceptions spam the logs; even the customers don’t care – they don’t even seem to notice the random downtime your webapp has every once in a while. We are in no way advocating this, but it is simply stuff you see over and over again.
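For what it’s worth, catching that Chinese-characters bug is a one-assertion affair once the check lives in code instead of in your F5 finger. The `register` function below is a made-up stand-in for your 15-field form handler, just to show the shape of such a check:

```python
def register(username: str, email: str) -> dict:
    # Hypothetical stand-in for a real registration handler. A naive
    # implementation might call username.encode("ascii") somewhere deep
    # down and blow up on non-Latin input -- exactly the bug described
    # above, which manual clicking through the happy path never hits.
    if not username.strip():
        raise ValueError("username required")
    return {"username": username, "email": email}


# One automated check exercising the input manual testing tends to skip:
user = register("王小明", "wang@example.com")
assert user["username"] == "王小明"
print("registration handles non-Latin usernames: OK")
```

Whether that check is worth writing for your small(ish) audience is exactly the trade-off discussed above – but the cost side of it is a few lines, not a testing framework.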
In any case, unless you are living in some sort of parallel universe, you don’t do technology for technology’s sake; in the end you should deliver some business value. And a certain degree (but probably not always pacemaker-grade) of confidence that your stuff works.
So for us, these are the only questions you should ask yourself regarding testing:
- Does my software work? Which parts of my software do work?
- At what point in time can I say my software works? Can I make that repeatable, how often? Do I need to?
- How much time do I need to show someone that my software works? Does it have to be super-fast?
- A funny one: do I actually have to be able to show that my software/script/webpage works most of the time? Is it ok if it crashes?
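The questions above translate almost directly into code: a smoke suite answers “does it work?” and, if it times itself, “how fast can I show someone?”. A minimal sketch – the two `check_*` functions are obviously placeholders for your own checks (an HTTP ping, that crazy SQL query, the batch job):

```python
import time

def check_homepage() -> bool:
    # Placeholder -- swap in a real check, e.g. an HTTP GET on "/".
    return True

def check_payments() -> bool:
    # Placeholder -- e.g. the top-up smoke test against a sandbox.
    return True

def run_smoke_suite(checks) -> dict:
    """Run every check, report pass count and elapsed time."""
    start = time.monotonic()
    results = {check.__name__: check() for check in checks}
    elapsed = time.monotonic() - start
    # "At what point can I say my software works?" -- right here:
    print(f"{sum(results.values())}/{len(results)} checks passed "
          f"in {elapsed:.2f}s")
    return results

results = run_smoke_suite([check_homepage, check_payments])
```

If the suite runs in seconds, “can I make that repeatable, how often?” answers itself: as often as you like.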
What you should absolutely not focus on:
- The random testing framework of the month™ and your assumption that it wouldn’t have happened with Haskell in the first place™
- Testing first/after/in between, at night, in the morning
- The amount of buzz words you can spit out in your QA group meetings
- Your scrum master’s insistence that, even though you don’t have solid requirements, you just develop something, hand it to QA and they will “done” it
Now you might be saying: “Erm, you are just as vague, high-level and non-practical as the other guys out there! I still don’t know how to test my GUI and my database queries. Or anything in between! It’s all just blah blah to me!”
Fear not: we will follow up with a short, no-bullshit series on how to test, with practical code examples (think: let’s build a small “PayPal”). If you want to get notified, hit the big blue button below.