Test Automation — How to Design Teams and Drivers
For nearly three years I did testing and test automation. The systems I tested and saw continually regression-tested (from a black-box point of view, i.e. via the GUI) were two very large ones. The most important lessons I learned were the following:
- Never automate tests the way suggested by the vendors selling test automation tools:
These tools want to force you to have test driver scripts doing two things at the same time — driving the test and checking the tested software's reaction for problems. However, if test driver code is interlaced with error-detection code, chances are that you will detect only a rather narrow class of errors; conceptual mistakes, for example, will go undetected this way.
Even worse: this way you cannot do more than the most trivial checking, because the context seen by the checking code is too restricted.
Another disadvantage of this approach is that if you decide to do more checking, you need to re-run the complete test. That may take hours, or may even be impossible if the application database you work on cannot be reset just to start a new test run — which is the rule in data warehouse testing, where restoring a specific database snapshot before each test cycle may simply be far too costly.
So the right way to create test drivers is this: the driver should only trigger the application and should create logs showing how the application was triggered during the test and exactly what reaction each trigger induced. If sufficient context is stored in the resulting log file, a stand-alone checking program can perform many more checks than a script driving the test via the application's GUI ever could.
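To make this concrete, here is a minimal sketch of such a driver in Python. The GUI calls (`gui.enter_field`, `gui.click`, `gui.read_field`) are hypothetical placeholders for whatever interface your automation tool actually offers; the point is only that the driver asserts nothing and records everything:

```python
import json
import time

LOG_FILE = "test_run.log"   # one JSON record per line

def log_event(log, kind, **details):
    """Append one record: what the driver did, or what the application answered."""
    record = {"time": time.time(), "kind": kind, **details}
    log.write(json.dumps(record) + "\n")

def run_test(gui):
    """Drive the application under test; deliberately check NOTHING here."""
    with open(LOG_FILE, "w") as log:
        for customer_id in ("C-100", "C-101", "C-102"):
            gui.enter_field("customer", customer_id)   # trigger the application ...
            gui.click("lookup")
            log_event(log, "stimulus", field="customer", value=customer_id)
            balance = gui.read_field("balance")        # ... and log its reaction
            log_event(log, "reaction", field="balance", value=balance)
```

Because every stimulus and reaction ends up in `test_run.log`, additional checks can later be run against that file without repeating the (possibly hours-long) test run itself, which addresses exactly the re-run problem described above.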
- Think about a way to learn how effective your testing team actually is:
The effectiveness of testers cannot be judged simply by counting how many bugs they detect per man-month, and for some testers it can be extremely low:
Testing teams that stay on the job over a long time span — possibly years — tend to become unimaginative and less and less creative. So, from what I saw, at least for regression testing very large systems you should have two testing teams working in parallel but independently of each other. The testing tasks you give them should be tailored so that you can compare the effectiveness of the two teams. Only then will you be able to tell team ONE or team TWO that it is possible to be better than it actually is.
So, if you pay (just to give an example) 40 testers, never let them all work in the same team. Set up two teams — 20 people each — and let them know that they, as well as their test suites and solution concepts, will be constantly monitored and seen as competing with each other. This will not cost you more than one team of 40 but will be much more effective. Needless to say: these two teams must be led by different managers.
Nice and Not So Nice Test Automation Tools
Test automation tools, at least those meant to drive the application under test via its GUI, work by first recording and then playing back: recording generates a script, which a programmer must then enhance with checking code (most of this code, I recommend, should live in a separate program or script, but that is another issue: see above). So far so good. But regression testing is a kind of test where the system under test is constantly changing in detail (a moving target). This implies that even recently recorded scripts may not work for very long — they need to be adapted to how the application has changed.
Re-recording these scripts on a regular basis
- is far easier if the code checking the application's reaction is maintained in other programs,
- but even then is simply not possible for a really non-trivial test suite.
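This is where the driver/checker split pays off again. The following sketch of a stand-alone checker (assuming the hypothetical log format from the driver sketch above) depends only on the log file, not on the recorded script, so the script can be re-recorded at will without touching any checking code:

```python
import json

def load_log(path="test_run.log"):
    """Read the log written by the driver: one JSON record per line."""
    with open(path) as f:
        return [json.loads(line) for line in f]

def check_every_stimulus_got_a_reaction(events):
    """Every stimulus record must be followed by a reaction record."""
    pending = None
    for event in events:
        if event["kind"] == "stimulus":
            assert pending is None, f"unanswered stimulus: {pending}"
            pending = event
        elif event["kind"] == "reaction":
            pending = None
    assert pending is None, f"run ended with unanswered stimulus: {pending}"

if __name__ == "__main__":
    events = load_log()
    check_every_stimulus_got_a_reaction(events)
    print(f"{len(events)} log records checked")
```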
A third category of test recording tools is what I call too naive: they do not support writing, during test runs, files showing the data fed to or produced by the application under test. Doing non-trivial checks is then very hard or even impossible. Fabasoft app.test Studio and TOSCA seem to be tools in this class.
It is my firm belief that such tools do not allow non-trivial verification of what the application under test is actually doing.
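To illustrate what non-trivial verification means here, consider the sketch below. It needs every reaction of the whole test run as context, something an assertion fired during script playback can never see. The rule it enforces (the same customer queried repeatedly must always get the same answer) is an invented example built on the hypothetical log format used above:

```python
import json
from collections import defaultdict

def check_consistent_answers(path="test_run.log"):
    """A whole-run check: identical queries must yield identical answers."""
    answers = defaultdict(set)          # customer id -> balances reported
    customer = None
    with open(path) as f:
        for line in f:
            event = json.loads(line)
            if event["kind"] == "stimulus" and event["field"] == "customer":
                customer = event["value"]
            elif event["kind"] == "reaction" and event["field"] == "balance":
                answers[customer].add(event["value"])
    for cust, balances in answers.items():
        assert len(balances) == 1, f"{cust}: inconsistent answers {balances}"

if __name__ == "__main__":
    check_consistent_answers()
    print("whole-run consistency check passed")
```

Checks like this are exactly what stays out of reach when a tool neither writes such data files during the test run nor lets you process them afterwards.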