Overly Complex Test Objectives

The reason you don’t want to cram as many functional features as possible into one test is that if one of the functions fails, you end up having to fail all of the functions. If you are testing nine different things simultaneously and there is a major crash, can you definitively say which of the nine caused it? If you test the nine areas independently, you can have some measure of confidence, when a crash occurs, that it is limited to the function you are currently testing. Later, you can do cross-functional testing, but initially, break the functions down into their simplest units and verify each separately. In general, the Performance Qualification (PQ) is where you do the combined functional testing, not here.
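The principle above can be sketched in code: one test per function, each run independently, so a failure points at exactly one area instead of nine. The function names here (`save_record`, `print_report`) are hypothetical stand-ins, not part of any real system.

```python
# A minimal sketch: test each function in isolation so a failure
# implicates exactly one area. The functions are hypothetical stand-ins.

def save_record(db, record):
    db.append(record)              # stand-in for the real save logic
    return record in db

def print_report(db):
    return "\n".join(str(r) for r in db)   # stand-in for the real report logic

def test_save_record():
    db = []
    assert save_record(db, {"id": 1}), "save_record failed"

def test_print_report():
    db = [{"id": 1}]
    assert "{'id': 1}" in print_report(db), "print_report failed"

# Run each test independently; a crash in one does not mask the others.
for test in (test_save_record, test_print_report):
    try:
        test()
        print(test.__name__, "PASS")
    except AssertionError as exc:
        print(test.__name__, "FAIL:", exc)
```

Cross-functional tests that exercise `save_record` and `print_report` together would come later, in the PQ.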

Testing with the Wrong Attitude

Here, as in life, attitude is everything. If you are fearful of finding problems, you won’t identify them if they land in your lap. If you think the system is perfect, you won’t recognize a problem if you have to step over it to get to lunch. Your attitude should always be that there are problems in the system that you must find. If you don’t go into testing with the attitude that there absolutely are bugs and that you won’t stop until you find them, you will certainly miss the more subtle problems.

Sticking to Simple Challenges

Don’t just try the obvious scenarios; try the un-obvious, the impossible, and the ridiculous. A system is not required to handle everything under the sun, but it absolutely must not corrupt or lose mission-critical data – that is the first drop-dead issue that will cause a validation to fail. If system security is inadequate to prevent unauthorized access, that is the second drop-dead issue. The third is the only subjective drop-dead issue: the system should be reliable and available for use when needed.
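One way to exercise "the ridiculous" is to hammer an entry point with absurd inputs and then verify the first drop-dead criterion: existing mission-critical data survives intact. This is a minimal sketch; the `enter_sample_id` routine and its validation rules are hypothetical, not from any real system.

```python
# A minimal sketch of "try the ridiculous": feed absurd inputs to a
# hypothetical entry routine and confirm stored data is never corrupted.

critical_data = {"batch_001": "released"}   # data that must survive intact

def enter_sample_id(store, sample_id):
    """Hypothetical entry routine: accept only short alphanumeric strings."""
    if not isinstance(sample_id, str) or not sample_id.isalnum() or len(sample_id) > 20:
        raise ValueError("rejected")
    store[sample_id] = "pending"

ridiculous_inputs = ["", "x" * 10_000, "DROP TABLE batches", None, -1, "\x00\x00"]
for bad in ridiculous_inputs:
    try:
        enter_sample_id(critical_data, bad)
    except (ValueError, TypeError):
        pass                       # rejecting bad input is acceptable behavior

# The drop-dead criterion: existing records are untouched.
assert critical_data["batch_001"] == "released"
```

The system is allowed to reject the ridiculous input; what it is not allowed to do is damage the records that were already there.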

Not Addressing the High Level of Integration in the System

Many systems today are so highly integrated that you cannot test one module without using another. And since the FDA has mandated that no unvalidated modules shall be used in the validation of other modules, this puts us in a quandary. Most companies simply ignore this and go ahead and test using the unvalidated module anyway. But 483s have resulted from this approach, so it is not advisable. So what do you do? I have long used a concept I developed years ago called the “Mini-Verification.”

In other words, I conduct a subset of testing on the most basic features of the needed module. Once this basic testing has been successfully completed, I can use that basic functionality in the validation of the module I am working on. Later in the validation, the entire functionality of the other module will be addressed, but for testing the module at hand, only the basic features are needed. Now, in keeping with the recommended approach of laying all your cards out on the table, so that an auditor will not become confused or suspicious, I feel strongly that you should couch the Mini-Verification with an explanation.

For example:

“Due to the high level of integration present in this system (the Workstation module cannot be verified without the Developer module, and vice versa), special steps must be taken to ensure that the FDA’s mandate is met. To this end, a mini-verification of the Developer module shall be done prior to starting the functional tests for the Workstation module. The mini-verification of the Developer module will consist of the creation of a simple form containing representative types of fields (e.g., date field, numeric field, text field, etc.). Once the Developer functionality has been verified at this basic level, the Workstation tests may be conducted. Full functional verification of the Developer module will follow the Workstation tests. In this way it can be ensured that, even in this tightly integrated system, no unvalidated modules will be used in the validation of other modules. This concludes the mini-verification of the Developer module. See the Developer section for a comprehensive functional verification of this feature.”
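The gating logic of a mini-verification can be sketched as code: run a small smoke-test subset of the dependency module first, and allow the dependent module’s tests only after that subset passes. The module names and checks below are hypothetical placeholders echoing the Developer/Workstation example, not a real test harness.

```python
# A minimal sketch of the mini-verification gate: Workstation tests run
# only after basic Developer checks pass. All names here are hypothetical.

def developer_smoke_tests():
    """Basic checks only: can a simple form with representative fields be built?"""
    form = {"date": "2024-01-01", "numeric": 42, "text": "hello"}
    return all(k in form for k in ("date", "numeric", "text"))

def run_workstation_tests():
    # The full Workstation suite would run here; it is gated on the smoke tests.
    return "Workstation tests executed"

if developer_smoke_tests():
    print(run_workstation_tests())
else:
    raise RuntimeError("Developer mini-verification failed; Workstation tests blocked")
```

The full Developer suite still runs later in the validation; the gate only establishes the basic functionality that the Workstation tests depend on.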

Insufficient Testing of Query Screens

If you think you’ve done substantive testing of the querying capabilities in your system, consider whether your tests clearly demonstrate the following. Upon querying, does the system return exactly the correct records, or are some records missed or extraneous records included? Is all data displayed as it was entered, or was data lost or corrupted during retrieval? Can records be changed from the query screen? (This is the case more often than you would think. It must be checked because a query screen is typically accessible to all levels of users, and if records can be changed from it, that is a major security violation.)
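The first two checks above can be sketched with a set comparison: the query must return exactly the expected records (nothing missed, nothing extraneous), and every returned field must match the data as entered. The record layout and `query_by_status` function here are hypothetical, using an in-memory list as a stand-in for the real query screen.

```python
# A minimal sketch of the query checks: exact record set, field-for-field
# fidelity. The data and query function are hypothetical stand-ins.

records = [
    {"id": 1, "status": "open",   "note": "  leading spaces kept  "},
    {"id": 2, "status": "closed", "note": "done"},
    {"id": 3, "status": "open",   "note": "café"},   # does non-ASCII survive retrieval?
]

def query_by_status(data, status):
    return [r for r in data if r["status"] == status]

result = query_by_status(records, "open")
got_ids      = {r["id"] for r in result}
expected_ids = {1, 3}

# Exactly the correct records: nothing missed, nothing extraneous.
assert expected_ids - got_ids == set(), "records missed"
assert got_ids - expected_ids == set(), "extraneous records returned"

# Data displayed as entered: field-by-field comparison against the source.
for r in result:
    source = next(s for s in records if s["id"] == r["id"])
    assert r == source, f"record {r['id']} corrupted on retrieval"
```

Asserting both set differences separately is deliberate: a missed record and an extraneous record are different defects and should fail with different messages.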

The only field verification required for query screens is to ensure that each field is read-only. To test that, for each field you should try to delete the field, overwrite the field, and append to the end of the field. You should not be able to do any of these.