The Great Screenshot Debate – Vote Now!

Software validation has become an integral part of the overall software lifecycle when a software application is used in a regulated environment. If the application is defined as a GxP system, or as critical to quality, a high degree of testing is required before the application can be used in the manufacturing process.

What should happen is that a comprehensive Vendor Audit is carried out to ensure that the system has been developed to the highest quality standards. This approach is essential so that you have a high degree of assurance that the system is sound, and so that the vendor's work can be leveraged later at the validation stage.

Before a software validation script is drafted in anger, it is essential to perform a robust risk assessment so that you don’t end up testing every single mouse click. In reality, what often happens is that people with little experience of the system get together and come to the conclusion:

“Hey, everything is critical in this system, we have to test anything that moves!”

Does this sound like a familiar situation to you?

Anyway, I am getting slightly off the point. One of my major gripes with software validation testing is the constant call for screenshots to prove that a certain value appears on the screen when a certain calculation or button click is performed.

What I don’t understand is, why is this necessary?

Surely, when highly qualified people are developing and executing these scripts, someone signing off that the acceptance criteria have been met, or recording the actual value, should be sufficient.

Are we saying that validation people cannot be trusted, and you need a screenshot for everything?

Is there a requirement out there that states screenshots are the preferred form of documented evidence?

I think it’s time to open up this debate and see what the general feeling is out on the front line. If we are ever to get to a point where we have a lean validation approach, these are the issues that need to be addressed.

Please add your comments below or click here to leave them in the forum.

  • gokeeffe

    David Stokes • No they’re not. It’s perfectly acceptable to allow a trained tester to enter qualitative or quantitative test results by hand, as per the instructions in the test script.
    Screen shots can be useful for saving time, especially when recording a lot of values.

  • gokeeffe

    Fernando Pedeconi • FDA had established in 1987 that Computer Systems Validation is “Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its pre-determined specifications and quality attributes.”
    It is true that you can demonstrate, through written statements on a validation protocol, whether the computerised system meets its predefined specifications and quality attributes; however, a key element is determining what constitutes verifiable and indisputable evidence.

  • gokeeffe

    David Stokes • It should be possible to recreate any test environment and re-execute testing. Recorded results therefore constitute evidence that can be verified and confirmed (albeit not easily). This should be sufficient taking a risk-based approach to testing.

  • gokeeffe

    Fernando Pedeconi • Agreed, it should be possible. However, I’m not so sure how practical that would be. Say a given software functionality was used in production during a given timeframe, before which it did not exist and after which it was removed. You need to demonstrate to an inspector the behaviour of that functionality during that timeframe and its impact on the process and the products made through it.
    Although it is entirely possible to replicate the exact environment in which that functionality was used, we’d probably agree that in the course of an investigation (either regulatory or internal) you may run short of that luxury resource, time. I would therefore suggest documenting verification and testing activities on critical functionality as thoroughly as possible, according to a risk-based approach.

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • Fernando, not to be picky with your comment but the fact that you are referring to a statement by the FDA from 1987 concerns me. If we continually look to the past there will be no future. I agree with David 100% here, we’ll never get to the place we want to go, if we continue looking to the past.

    The overhead involved with managing paper screenshots is huge for everyone involved in the process. Can we not trust the people executing to do it correctly?

    Let’s focus on the quality of what we do, not the aesthetics of the documents and attachments.

    I don’t know how many times I’ve heard from the Quality Departments, the FDA need to see screenshots!!!

    People are just making up this stuff as they go along!

  • gokeeffe

    Fernando Pedeconi • Graham, one of the requirements of the “evidence” is that it needs to be objective, i.e. verifiable by a third party. You cannot refute a statement made by someone without either repeating the testing yourself (in which case you risk the rather awkward situation of having to have that test system readily available at any point in time) or being able to check the statement against evidence provided by the tester according to a pre-approved protocol.

    By the way, the above is not exclusive to computer systems validation; it works like that in any other scientific discipline you may think of.

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • Thanks for the clarification Fernando. So are we saying then that a tester verifying on the test script is not objective evidence? Does that mean another person will have to watch the screen at the time of execution too, to represent the third party?

    I’m still not clear on this one!

  • gokeeffe

    Joy Gray • Graham, people certainly are making this stuff up as they go along, but in some cases out of necessity!

    FDA regulations can be vague at best, and people are trying to read the minds of FDA inspectors who haven’t even walked through the door yet.

    Working for a QMS software development company and SaaS provider, customer audits are frequent. It’s fascinating to see what each company wants to see, or what their auditor-for-hire wants to see, what they think the FDA would want to see, and what I think is right and reasonable for a non-regulated company that admittedly becomes an extension of the regulated company with the SaaS QMS (EDMS, Training, Audit, CAPA, Complaints).

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • Hi Joy,

    Well, I do believe it is more important than ever for software development companies to have a robust quality system in place, not only from the auditor’s perspective but also to allow the client to leverage their documentation to reduce the validation burden down the line.

    Why do the FDA get away with having vague regulations anyway? This always seems to be the crux of the problem!!

  • gokeeffe

    David Stokes • The GAMP Testing SIG is currently updating the “Testing of GxP Systems” Good Practice Guide and is looking at the question of test evidence. I can’t tell you yet what will be in the new version, but the existing version (which was reviewed by the US FDA) does address this issue and does not require screenshots.

  • gokeeffe

    Fernando Pedeconi • David, I think they would need to look at the risk factor and the implications of why indisputable, objective evidence is required to prove the system meets its predefined specifications, throughout the lifespan of the system (and beyond, actually).

    Graham, objective evidence has to last and be verifiable at a future time.

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • This is again further proof to me that we need clear, understandable ways to do things.

    I assume you are both experts in your fields, but your outlooks seem to differ. We are working in scientific fields; there should be a right and a wrong way to do things, and it really shouldn’t be based on opinion.

    When we talk about risk factors, I’ve never worked on a project where a comprehensive risk assessment was performed on an application before validation commenced, let alone a risk assessment on the viability of not using screenshots.

  • Frank Houston

    There are times when a screenshot is the most efficient way to record the expected result of a test, and not just for the “critical” requirements, for instance, when the test script itself uses a screen shot to describe the expected result as a system attribute. When the expected result is expressed as the unique value of some variable, which can be copied into the result block, then a screen shot may be overkill.

    A pet peeve of mine is the test script where it seems as if the result of every mouse-click and screen change gets recorded on the way to the screen where the attribute or variable under test is actually challenged. That is more of a problem than overuse of screen shots.

  • gokeeffe

    Posted by Peter Glaubenstein
    Where possible I would add screenshots as supplemental evidence that an action did happen as per your acceptance criteria, provided that there is sufficient traceable information on the screenshot itself, i.e. a date and time stamp. This information can then be used for training purposes and training matrices, thus meeting a few key areas of GxP criteria.

  • Bill Stiemsma

    I always believe in the practice that there should be sufficient evidence for the reviewer/approver/auditor to come to the same conclusion as the tester that the specified acceptance criteria have been met. Screenshots of every navigation step are not required (in my opinion), but the final result showing the intended outcome of the test is indisputable.

  • Neeta

    Yes, screenshots are required for critical results. They act as proof of validation for audit purposes. It’s not that we don’t trust validation people.

  • Aaron Leimback

    Another way you could look at the use of screenshots is that they help to prevent dishonesty during test execution. If we lived in a perfect world, screenshots wouldn’t be necessary because we would know that the script is an accurate representation of the test results. Screenshots not only support the documented test results but also provide the objective evidence that the intended use of the system meets the needs of the business. This doesn’t mean every step, as Bill said, only where it adds value.

  • Tim Horgan

    I do not believe there is a requirement to use screenshots, and don’t recommend it. We do a lot of other validation execution on manufacturing equipment that is typically documented by written responses and signatures. I don’t see why we should use a different standard for validation of computer systems. (I’m referring to validation of applications that support the manufacturing environment, not validation of software contained in a medical device. That may present different considerations.)

  • Toni

    You should not have screen prints for every step throughout your test case, but rather only for the key steps that prove your acceptance criteria pass or fail. In lieu of screen prints you can have a witness. It depends on your company’s procedures. I don’t think there are regulatory requirements that specifically state you must provide ‘screen prints’. I have reviewed and approved documents in either case. I have seen cases where screen prints were helpful in resolving deviations, and I have seen an overuse of screen prints, which is not effective. It all depends on your company’s procedures and how well test cases are developed to ensure we have the documented evidence, whether a screen print or an executed test case signed and dated by a trained executor.

  • Chandra

    Can we electronically record actual results including embedded screen shots (say on a test script that is developed on a spreadsheet) and then print, sign & date?

  • Matt Damick

    You need to be clear about what constitutes (product) quality-related requirements. Such requirements should be process-driven primarily (e.g., a unit operation results in a material of such-and-such characteristic), and then reg-driven (e.g., Part 11 requirements, when applicable). I doubt that all of an application’s attributes are traceable to quality-related requirements. Many are (albeit important) business requirements – such as audit trails (the exception being when we’re in Part 11 territory, of course), report formats, animation schemes for HMI graphics, tagging conventions, etc.

    Screen shots work as evidence when you can clearly tie them to non-transient events/attributes (e.g., setpoints, calculation results, equipment status), a clear set of procedures used to arrive at the event to be observed, and well-controlled versions of whatever software and hardware environment used for testing. However, screen shots are cumbersome and require labeling (word processing?) to associate them with test protocols – increasing execution resources.

    Quality-related attributes of HMIs might include indications of suitable SIP or autoclave cycle completion, certain instrument displays, but probably not stuff like the position of a valve or whether a prox switch is activated for a transfer panel swing-arm.

    As long as you have the necessary evidence of personnel qualification, unambiguous validation SOPs and protocol instructions, and good computerized system version control – all of which goes to test repeatability – screenshots are not likely necessary. I do agree with previous posters that screenshots are handy for displaying large amounts of info requiring verification. Spreadsheet cell configuration would be a prime example of when screenshots are useful.

  • Clearly this is a discussion where the answer is “it depends”. Everyone in the thread has good arguments for and against. I agree that a screenshot is not required after every button click; this is cumbersome and unnecessary. My approach is to provide screenshots only on the test steps where a requirement is being verified. It takes a little time, but it removes all doubt of tester error.

  • Martin

    I think (i.e. opinion only) judicious and strategic use of screenshots is warranted. A key part of assessing a system for Part 11 compliance is determining what records are being processed that are required to show compliance with predicate rules. Key points in the processing of those records would make good targets for screenshots. Another would be any risk control measures where operator or patient safety is implicated.

Similar articles:

The Difference Between Qualification and Validation [Video]

There is a general saying within the life sciences:

“We qualify a system and/or equipment and validate a process”

A system and/or equipment must be qualified to operate in a validated process.

For example:

“You qualify an autoclave, whereas you validate a sterilization process”

Manufacturers should identify what validation and qualification work is done. All systems, equipment, processes and procedures should be reviewed, and the manufacturer should decide what qualification and validation work needs to be performed.

Direct, Indirect or No Impact

All facility areas, utilities and process equipment must be assessed and classified as direct impact, indirect impact or no impact following an analysis of their impact on the identity, strength, quality, purity or safety of products manufactured at the facility and also the safety of the operators & environment.

Impact on Quality

Each system or item of equipment having direct or indirect impact on the product quality must be validated. The extent of validation or qualification should be determined by performing the risk assessment of that particular system or equipment.
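The impact-and-risk logic above can be sketched in code. This is a purely illustrative Python sketch: the impact classes come from the article, but the `System` type, the `risk_priority` threshold and the rigor labels are assumptions, not taken from any guideline.

```python
# Illustrative only: impact classes from the article; the risk threshold
# and rigor labels below are assumptions, not from any guideline.
from dataclasses import dataclass

IMPACT_LEVELS = ("direct", "indirect", "none")

@dataclass
class System:
    name: str
    impact: str          # "direct", "indirect" or "none"
    risk_priority: int   # e.g. severity x likelihood from the risk assessment

def validation_extent(system: System) -> str:
    """Map an impact classification and risk priority to a level of rigor."""
    if system.impact not in IMPACT_LEVELS:
        raise ValueError(f"unknown impact class: {system.impact}")
    if system.impact == "none":
        return "no qualification required"
    if system.impact == "direct" or system.risk_priority >= 8:
        return "full qualification (IQ/OQ/PQ)"
    return "reduced qualification (IQ/OQ)"

print(validation_extent(System("autoclave", "direct", 9)))
```

In practice the threshold and categories would come from your own risk assessment SOP, not from code.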

Join the Discussion

Use our community to find out more about validation and qualification.
http://community.learnaboutgmp.com/t/qualification-vs-validation/874

Similar articles:

The Difference Between Prospective, Concurrent and Retrospective Validation

Unless you’re starting a new company, you will need to plan for a variety of approaches.

Prospective validation occurs before the system is used in production, concurrent validation occurs simultaneously with production, and retrospective validation occurs after production use has occurred.

In this article we will discuss all three and also discuss the role the master validation plan (MVP) performs for each one.

1. Prospective Validation

Prospective validation is establishing documented evidence, prior to process implementation, that a system performs as is intended, based on pre-planned protocols.

This is the preferred approach.

Production is not started until all validation activities are completed.

The MVP need not go into much detail about this approach since it’s the standard method; however, prospective validation follows a stepwise process, described below.

The process commences with the development of a Validation Plan and then passes through the DQ, RA, IQ, OQ and PQ phases, after which process, computer, analytical and cleaning validations are performed, followed by a final report.

After this, the instrument or equipment is subject to preventative maintenance and requalification on a routine basis.
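The gating rule above (production is not released until every phase, from the Validation Plan through to the Final Report, is complete) can be expressed as a tiny check. The phase names follow the article; the release function itself is a hypothetical sketch.

```python
# Phase names follow the article; the release check itself is a sketch.
PHASES = [
    "Validation Plan", "DQ", "RA", "IQ", "OQ", "PQ",
    "Process/Computer/Analytical/Cleaning Validation", "Final Report",
]

def cleared_for_production(completed: set) -> bool:
    """Production is released only once every phase has been signed off."""
    return all(phase in completed for phase in PHASES)

print(cleared_for_production({"Validation Plan", "DQ", "RA"}))  # still gated
```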

Periodic Basis

On a periodic basis all instrumentation and equipment should be reviewed. This review is intended to identify any gaps which may have developed between the time it was last qualified and current requirements.

If any gaps are identified a remediation plan will be developed and the process will start again.

The MVP

The MVP may need to describe what is done with product produced during prospective validation. Typically, it is either scrapped or marked not for use or sale.

The product may be suitable for additional engineering testing or demonstrations, but appropriate efforts need to be made to ensure this product does not enter the supply chain.

Ideally, all validation is done prospectively; i.e., the system is validated before use. However, there are cases and conditions which may prevent this.

2. Concurrent Validation

Concurrent validation is used to establish documented evidence that a facility and process will perform as they are intended, based on information generated during actual use of the process.

In exceptional circumstances (for example, in a case of immediate and urgent public health need) validation may need to be conducted in parallel with routine production. The MVP needs to define how product is managed throughout the process.

Typically, the product batches are quarantined until they can be demonstrated (QC analysis) to meet specifications.
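The quarantine rule above can be captured in a few lines. Only the rule itself (batches stay quarantined until QC analysis demonstrates they meet specification) comes from the article; the status names are illustrative assumptions.

```python
# Status names are illustrative; the rule comes from the article:
# batches stay quarantined until QC analysis demonstrates they meet spec.
from typing import Optional

def batch_status(qc_passed: Optional[bool]) -> str:
    """None means QC analysis is still pending; True/False is the QC result."""
    if qc_passed is None:
        return "quarantined"
    return "released" if qc_passed else "rejected"

print(batch_status(None))
```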

The Right Decision?

The decision to perform concurrent validation should not be made in a vacuum. All stakeholders including management, Quality Assurance and the government regulatory agencies should all agree that concurrent validation is an acceptable approach for the system under consideration.

As always, the principal requirement is that patient safety is not compromised. The rationale for conducting concurrent validation should be documented, along with the agreement to do so by all the stakeholders. This can be part of the Validation Plan or documented as a deviation.

The Process

The concurrent validation process is identical to that of prospective validation. The process starts with the development of a Validation Plan, followed by the DQ, RA, IQ, OQ and PQ phases after which process, computer, analytical and cleaning validations are performed, ending with a final report.

Again, routine preventative maintenance, requalification and periodic review are performed.

3. Retrospective Validation

Retrospective validation is validating a system that has been operating for some time. There are various schools of thought on how to approach retrospective validation. Some may feel that a full-blown validation is required to assure the system is functioning properly.

Others may feel that since the system has been in use, presumably without issues, validation is not necessary and a memo to file justifying why validation is not necessary may be issued.

Doing a full validation may not be required, since you already have proof that the system functions as required – at least in the situations in which production was conducted. Doing nothing, though, is a risk.

It’s likely that the controls haven’t been challenged so there may be some hidden flaws that haven’t been identified that could lead to non-conforming product, hazardous operating conditions, extended delays, etc.

Historical Data

Historical data can certainly be used to support validation. For example, if there is detailed and statistically-significant evidence that production runs are well controlled you could rationalize and justify not doing full validation.

During retrospective validation, it’s advisable that existing product be quarantined, and production put on hold until validation is complete.

As an exception, if product is produced as part of the validation exercise, the concurrent validation approach would be followed. This may not be practical since product may already have been distributed, but caution is advised for the reasons outlined.

General Process

The general process for retrospective validation follows the same process as for prospective and concurrent validation except DQ is seldom performed, as the system has already been in use for some time.

Instead a survey and review of available information is performed. This normally occurs before the validation plan is created.

The MVP should also provide guidance on managing inventory during retrospective validation.

One Major Issue

One potential major problem with retrospective validation is determining what action should be taken if an issue is found with the system during the exercise.

As with everything else, a risk-based decision is warranted. This could be anything from product recall, to customer notifications, to just documenting the justification of the decision why nothing was done.

Again, the MVP should provide guidance on dealing with out-of-specification conditions revealed during retrospective validation, which should definitely include involving regulatory support.
