Working with clients in the life science industries, I often deal with the issue of validating commercial off-the-shelf software. But because validation is an FDA requirement, companies often treat it as nothing more than a documentation effort--an exercise in producing a large volume of paperwork that the company can point to during an FDA inspection. So, I've been thinking about what companies can do to make validation a "meaningful exercise," one that adds real value to the system being implemented.
One key point is to understand the meaning of the word validation. The point of validation is ultimately to ensure that a system meets its requirements. Most validation professionals understand this. But too often the validation team interprets "requirements" as nothing more than "what the users want the system to do." So they conduct a series of interviews asking users what they want the system to do.
Unfortunately, when users are asked what they want the system to do, most leave out things that are important from a regulatory perspective. A better approach, in my opinion, is to start not with what the users want the system to do but with what the applicable regulations require the users to do. I have found great value in sitting down with users, walking through the regulations line by line, and asking, "Do you want the system to help you do this?"
For example, FDA regulations for medical device manufacturers require users to ensure that raw materials are purchased only from approved suppliers (21 CFR Part 820.50). If the user wants the system to help him do that, then he probably needs the system to maintain an approved supplier list and prevent production materials from being procured from suppliers that are not approved to supply that material. Once I have established such a system requirement, it is a no-brainer to validate the system against that requirement: I simply test whether I can purchase material from a non-approved supplier. A series of tests such as this, which challenge the system to directly address true requirements, is the only way to uncover design flaws and false assumptions of the developer, implementer, and user.
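To make that concrete, here is a minimal sketch in Python of what such a requirements-based test might look like. All the names here (ApprovedSupplierList, create_purchase_order, the supplier and material names) are invented for illustration; they don't correspond to any real procurement system.

```python
# Hypothetical sketch of the requirement derived from 21 CFR 820.50:
# the system maintains an approved supplier list and refuses purchase
# orders for materials from suppliers not approved for that material.

class ApprovedSupplierList:
    """Maps each material to the set of suppliers approved to provide it."""
    def __init__(self):
        self._approved = {}  # material -> set of supplier names

    def approve(self, supplier, material):
        self._approved.setdefault(material, set()).add(supplier)

    def is_approved(self, supplier, material):
        return supplier in self._approved.get(material, set())

def create_purchase_order(asl, supplier, material, quantity):
    """Reject any PO whose supplier is not approved for the material."""
    if not asl.is_approved(supplier, material):
        raise ValueError(f"{supplier} is not approved to supply {material}")
    return {"supplier": supplier, "material": material, "quantity": quantity}

# The validation test: challenge the system with a non-approved supplier.
asl = ApprovedSupplierList()
asl.approve("Acme Polymers", "resin")

po = create_purchase_order(asl, "Acme Polymers", "resin", 100)  # accepted

try:
    create_purchase_order(asl, "Discount Resins", "resin", 100)
    print("FAIL: system allowed purchase from an unapproved supplier")
except ValueError:
    print("PASS: system blocked purchase from an unapproved supplier")
```

The point is not the code itself but the shape of the test: it challenges the requirement directly (can I buy from an unapproved supplier?) rather than stepping through screens in the design specification.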
How much more meaningful this type of validation is, in contrast to what I see too often: the validation team writes a long series of excruciatingly detailed test cases to see whether the user can add, change, and delete a vendor master; whether a user can add, change, and delete a purchase order; whether a user can add, change, and delete a line item; and so on--but never tests whether it is possible to order material from an unapproved supplier. No wonder they seldom discover anything interesting. They are not testing the system against its requirements--they are testing it against its design specification. That might be considered a system test, but I would not consider it a meaningful validation.
The lesson: start with the regulations, use the regulations to derive system requirements, and use system requirements to design test cases.
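One lightweight way to keep that chain visible is a traceability record linking each regulation clause to the requirement derived from it and the test case that challenges it. This sketch is purely illustrative; the clause text is paraphrased and the structure is one I made up, not any standard format.

```python
# Hypothetical traceability record: regulation -> requirement -> test case.
trace = [
    {
        "regulation": "21 CFR 820.50 (purchasing controls)",
        "requirement": "System shall block purchase orders to suppliers "
                       "not approved for the material being purchased.",
        "test_case": "Attempt to create a PO for a material from a "
                     "non-approved supplier; expect the system to reject it.",
    },
]

for row in trace:
    print(f"{row['regulation']}\n  -> {row['requirement']}\n  -> {row['test_case']}")
```

A record like this also gives the inspector something far more persuasive than a binder of screen-by-screen test scripts: a direct line from each regulatory obligation to the evidence that the system enforces it.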
There is much more I could write on this subject, but I'd like some feedback. What is your experience with software validation? How do you ensure that validation is a meaningful exercise?