Quality Vs Validation – Protocol Acceptance Criteria

So the Validation Department generates the protocol and the Quality Department reviews it. Having worked on a variety of projects, I have found that the quality review of any protocol is a very important one, as it irons out the documentation mistakes that creep in while the document is being written.

Quality Vs Validation - Good Vs Evil

On the other hand, I often wonder if quality takes this review too far, and if they do, why?

Quality Review

For example, if you have a quality person reviewing an IQ or OQ protocol for a computer application, is it acceptable for them to review it with little knowledge of the application?

Does this lead to the quality person reviewing the protocol so cautiously that the validation engineer is ready to pull their hair out in frustration?

A Simple Scenario

Let's take a very simple scenario. A test protocol usually has the following fields (a minimal code sketch of such a record follows the list):

  • Test Procedure
  • Acceptance criteria
  • Meets Acceptance Criteria Yes/No
  • Performed by /Date
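
As a minimal sketch (the class and field names here are hypothetical, not taken from any particular protocol template), these fields could be captured in a simple record like this:

from dataclasses import dataclass
from typing import Optional


@dataclass
class TestStep:
    """One executed step of an IQ/OQ test protocol (illustrative only)."""
    test_procedure: str                                # the action the tester performs
    acceptance_criteria: str                           # what must be observed for the step to pass
    meets_acceptance_criteria: Optional[bool] = None   # Yes/No, filled in at execution
    performed_by: Optional[str] = None                 # tester's name or initials
    performed_date: Optional[str] = None               # date of execution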

The fields to take note of in this scenario are the Acceptance Criteria and Performed by/Date fields.

Let's discuss the acceptance criteria field first. This field usually contains the criteria that must be met in order to pass a test section. If the test is carried out and the acceptance criteria are not met, this usually results in a deviation or an event being raised.

I don’t think anyone has an issue with this scenario, but in terms of acceptance criteria, what is acceptable?

Let's take a simple example:

Test Procedure: Click the Yes radio button
Acceptance Criteria: Deviation message is displayed
Meets acceptance criteria: Yes
Performed by: Joe Soap 12/12/09
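
Using the hypothetical TestStep record sketched after the field list, this executed step might be captured like so:

step = TestStep(
    test_procedure="Click the Yes radio button",
    acceptance_criteria="Deviation message is displayed",
    meets_acceptance_criteria=True,
    performed_by="Joe Soap",
    performed_date="12/12/09",
)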

Is this acceptable from a quality perspective, though? When you sign the performed by section, do you mean that the test has been completed, or do you mean that the deviation message was displayed and the test was completed?

Screenshots

If the quality department is not satisfied that the signature alone proves the latter, they will require a screenshot as evidence that the deviation message was displayed.

Do you see what I am getting at here? Does this mean that screenshots are required for all acceptance criteria?

If so, your protocol will contain numerous attachments and face a long review time.

Before your protocol is approved, you really need to work closely with the quality department to understand what is acceptable from an acceptance criteria viewpoint.
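
One way to make that agreement explicit, purely as an illustration continuing the hypothetical TestStep sketch from earlier, is to record per step whether quality expects a screenshot, so the executor knows exactly which criteria need attached evidence:

from dataclasses import dataclass
from typing import List, Optional


@dataclass
class EvidencedTestStep(TestStep):
    """Extends the earlier TestStep sketch with an agreed evidence requirement."""
    screenshot_required: bool = False     # agreed with quality at protocol approval
    attachment_ref: Optional[str] = None  # e.g. a hypothetical "Attachment 3", recorded at execution


def missing_evidence(steps: List[EvidencedTestStep]) -> List[EvidencedTestStep]:
    """Return executed steps that required a screenshot but have no attachment recorded."""
    return [s for s in steps if s.screenshot_required and not s.attachment_ref]

Such a flag is only useful if it is agreed with quality before approval, which is exactly the conversation suggested above.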

If you would like further assistance with protocol generation, please feel free to contact Premier Validation.

  • maryacton

    Nice post – Yes we have the same issues with quality, they are important to the overall process but it is important that validation and quality work together and not think they are on different teams.

  • S. J. D’Souza

    Documentation corrections do not equal spelling mistakes. There is more to a quality review than just a spell check. On the surface it seems so much easier to review a document than to create it in the first place. Having to write a protocol for anyone to understand decreases the efficiencies of the risk-based approach or quality by design. IMHO it is incumbent on the quality approver to educate him/herself about the application in question before/while approving. Approval should be an iterative process to deliver the best qualification document.

  • gokeeffe

    Hi S. J. D’Souza,

    Thanks for the comments. In your experience, could you provide a list of other aspects of a quality review?

    Cheers

    Graham

  • snaranjo

    Quality involvement in computer validation is essential to success, but also to concentrating on relevant testing and documentation. We can take tons of screenshots as evidence of standard functionalities (heavily tested by the supplier) and miss the opportunity to test critical or risky configuration. Then we have very nice but useless documentation. In other words, validation effort should be focused on demonstrating that the system (or system configuration) meets the user processes without compromising electronic records. Documentation is a part, not the goal, of validation, and quality people do not always have this view. In my opinion, testing must be done by a well-trained or experienced user who writes “pass/not pass” and signs the execution. A second user (witness or not) reviews the results and takes part in the resolution of failures/non-conformities. We usually involve QA as the final test reviewer (and protocol/report approver).

    Regards

  • Several interesting points should be clarified:

    1) prior to writing a protocol, I recommend that a team be established including the writer, the system owner, and the quality reviewer. This allows the team members to learn their strengths, weaknesses, and personality traits (if the quality reviewer is weak in computer validation the others can diplomatically provide suggestions, learn that the quality reviewer is not open to suggestions, or learn what the quality reviewer expects). Having written over a hundred protocols, I highly recommend this.

    2) prior to writing a protocol, I recommend that the writer generate an outline of what will be tested and identify the template to be used (to achieve team consensus). For example, I recommend that you include columns for “Expected Results” and “Actual Results” so that the reviewer can determine if actual results met expected results.

    3) getting to know the team members before writing the protocol improves the likelihood of it being right-first-time (minimizing protocol approval time and protocol review time after execution).

    4) it is important that a protocol be written such that execution and review are not open to interpretation (the team, including the executor, knows what is expected, as does the reviewer). As an example, state when a screenshot is required (attach printout demonstrating that ….). Screenshots are required to demonstrate tasks have been performed successfully. As a rule of thumb, one or two per test case.

    If you have any questions, you can reach me via email at: jeffreygassman@validationplusinc.com

  • Excellent comments – particularly the detail from Jeff.

    Something to keep in mind is that there are several common rationales or approaches to validation. These can typically vary by company, by division within a company, or even by the project that you are representing. Understanding the validation approach for your particular project is critical if you expect to be successful when it comes time for a quality review. Deliverable types that were acceptable on your last project may no longer apply, even for the same type of system.

    For example, if your project follows a risk-based approach there will be specific criteria that quality uses as a measuring stick. You may have a full suite of system lifecycle documentation, such as User Requirements, Functional Specifications, and/or a Design Specification of some sort. Depending on how the risk-based approach is structured, quality may expect your protocol to include verifications of specific elements from within these lifecycle documents. Your role in a project may be focused on protocol development, however, it’s critical that you understand validation rationales that may already be planned for your project. This is in alignment with comments by Jeff that you should familiarize yourself with quality’s expectations.

    To summarize, while it’s crucial that protocols and test cases be structured in a manner that eliminates ambiguity, understanding the rationales for testing and the appropriate deliverables that feed into the qualification effort are foundational when it comes time for a quality review.

    As a caveat, there are clearly organizations that don’t require in-depth planning or advanced rationales when planning a validation effort. Whereas understanding a risk-based approach to protocol writing is critical if you want to be successful in a risk-based environment, communication of expectations is equally critical in a less rigorous business environment. In either case, the key to successful protocol development is understanding the basis that quality will be using when it’s time to review your work.

  • waynem

    Nice Post! As I work for a large company with many branches, I have had to work closely with the various Responsible Heads of Quality. My experience has shown that one has to understand the Quality “Person” to be able to understand their individual needs. Some are more prone to checking the basic fundamentals such as the font, size and shades of grey of the documentation, while others are extremely stroppy about spelling and grammar and yet have no idea of the various tests that are to be performed. Others ignore this and show such a keen interest in the why, when and what of the script that one is bombarded with so many questions that one starts to question one’s own protocol writing skills!! lol.

    As every person has a different outlook/way of doing something such as protocols, as a Protocol writer/executioner (grin..) at least one’s life will never be boring.

    Keep the posts rolling. Tx

  • R Hess

    Interesting that there seems to be such animosity between validation teams and quality teams. I work for a very small company and the quality and validation duties overlap as far as responsible personnel.
    I have had to do much educating here on what the purpose of validation is in order to be able to work well with all departments. And truthfully, the manufacturing departments are the most resistant to performing validations.
    Maybe this is a strange question, but don’t validation teams want their documents to meet all of the quality standards too? If a question the quality reviewer brings up cannot be answered easily, then maybe the quality reviewer has a point…?
    In our company, the quality department handles most of the external audits and must assist in defending documents to customers, ISO auditors and government auditors. Holding both a quality and a validation position, I definitely don’t want to be caught in a position with an auditor of any kind where I cannot defend or explain any part of a validation document…

Similar articles:

The Difference Between Qualification and Validation [Video]

There is a general saying within the life sciences:

“We qualify a system and/or equipment and validate a process”

A system and/or equipment must be qualified to operate in a validated process.

For example:

“You qualify an autoclave, whereas you validate a sterilization process”

Manufacturers should identify what validation and qualification work is required. All systems, equipment, processes and procedures should be reviewed, and the manufacturer should decide what qualification and validation work needs to be performed.

Direct, Indirect or No Impact

All facility areas, utilities and process equipment must be assessed and classified as direct impact, indirect impact or no impact, following an analysis of their impact on the identity, strength, quality, purity or safety of products manufactured at the facility, and also on the safety of operators and the environment.

Impact on Quality

Each system or item of equipment having a direct or indirect impact on product quality must be qualified or validated. The extent of validation or qualification should be determined by performing a risk assessment of that particular system or equipment.
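
As a rough illustration only (the mapping below is an assumption made for the example, not a regulatory rule; the actual extent must come from the risk assessment), the impact classification might drive the qualification scope along these lines:

# Illustrative mapping from impact classification to qualification activities.
# The scope shown here is assumed for the example; the real extent comes from a risk assessment.
QUALIFICATION_SCOPE = {
    "direct impact":   ["DQ", "IQ", "OQ", "PQ"],  # full qualification
    "indirect impact": ["IQ", "OQ"],              # reduced scope, justified by the risk assessment
    "no impact":       [],                        # good engineering practice only
}


def qualification_activities(classification: str) -> list:
    """Return the qualification activities implied by an impact classification."""
    if classification not in QUALIFICATION_SCOPE:
        raise ValueError(f"Unknown impact classification: {classification!r}")
    return QUALIFICATION_SCOPE[classification]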

Join the Discussion

Use our community to find out more about validation and qualification.
http://community.learnaboutgmp.com/t/qualification-vs-validation/874

Similar articles:

The Difference Between Prospective, Concurrent and Retrospective Validation

Unless you’re starting a new company, you will need to plan for a variety of approaches.

Prospective validation occurs before the system is used in production, concurrent validation occurs simultaneously with production, and retrospective validation occurs after production use has occurred.

In this article we will discuss all three and also discuss the role the master validation plan (MVP) plays for each one.

1. Prospective Validation

Prospective validation is establishing documented evidence, prior to process implementation, that a system performs as intended, based on pre-planned protocols.

This is the preferred approach.

Production is not started until all validation activities are completed.

The MVP need not go into much detail about this approach since it’s the standard method; however, prospective validation follows the stepwise process outlined below.

The process commences with the development of a Validation Plan and then passes through the DQ, RA, IQ, OQ and PQ phases, after which process, computer, analytical and cleaning validations are performed, followed by a final report.

The instrument or equipment is then subject to preventative maintenance and requalification on a routine basis.
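
To make the ordering concrete, here is a minimal sketch of the phase sequence just described (names only; the exact deliverables will vary by project):

# Prospective validation phases in the order described above (illustrative only).
PROSPECTIVE_PHASES = [
    "Validation Plan",
    "DQ",  # Design Qualification
    "RA",  # Risk Assessment
    "IQ",  # Installation Qualification
    "OQ",  # Operational Qualification
    "PQ",  # Performance Qualification
    "Process / Computer / Analytical / Cleaning Validation",
    "Final Report",
]


def next_phase(current: str) -> str:
    """Return the phase that follows the current one, or routine operation after the report."""
    index = PROSPECTIVE_PHASES.index(current)
    if index == len(PROSPECTIVE_PHASES) - 1:
        return "Routine preventative maintenance, requalification and periodic review"
    return PROSPECTIVE_PHASES[index + 1]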

Periodic Basis

On a periodic basis all instrumentation and equipment should be reviewed. This review is intended to identify any gaps which may have developed between the time it was last qualified and current requirements.

If any gaps are identified, a remediation plan will be developed and the process will start again.

The MVP

The MVP may need to describe what is done with product produced during prospective validation. Typically, it is either scrapped or marked not for use or sale.

The product may be suitable for additional engineering testing or demonstrations, but appropriate efforts need to be made to ensure this product does not enter the supply chain.

Ideally, all validation is done prospectively; i.e., the system is validated before use. However, there are cases and conditions which may prevent this.

2. Concurrent Validation

Concurrent validation is used to establish documented evidence that a facility and process will perform as they are intended, based on information generated during actual use of the process.

In exceptional circumstances (for example, in a case of immediate and urgent public health need) validation may need to be conducted in parallel with routine production. The MVP needs to define how product is managed throughout the process.

Typically, the product batches are quarantined until they can be demonstrated (QC analysis) to meet specifications.
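
A minimal sketch of that disposition logic, with assumed status names, might look like this:

def batch_disposition(qc_complete: bool, meets_spec: bool) -> str:
    """Illustrative batch disposition during concurrent validation (status names assumed)."""
    if not qc_complete:
        # Batches are held until QC analysis demonstrates that they meet specification.
        return "QUARANTINED"
    return "RELEASED" if meets_spec else "REJECTED"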

The Right Decision?

The decision to perform concurrent validation should not be made in a vacuum. All stakeholders including management, Quality Assurance and the government regulatory agencies should all agree that concurrent validation is an acceptable approach for the system under consideration.

As always, the principal requirement is that patient safety is not compromised. The rationale for conducting concurrent validation should be documented, along with the agreement to do so by all the stakeholders. This can be part of the Validation Plan or documented as a deviation.

The Process

The concurrent validation process is identical to that of prospective validation. The process starts with the development of a Validation Plan, followed by the DQ, RA, IQ, OQ and PQ phases, after which process, computer, analytical and cleaning validations are performed, ending with a final report.

Again, routine preventative maintenance, requalification and periodic review are performed.

3. Retrospective Validation

Retrospective validation is validating a system that has been operating for some time. There are various schools of thought on how to approach retrospective validation. Some may feel that a full-blown validation is required to assure the system is functioning properly.

Others may feel that since the system has been in use, presumably without issues, validation is not necessary and a memo to file justifying why validation is not necessary may be issued.

Doing a full validation may not be required, since you already have proof that the system functions as required – at least in the situations in which production was conducted. Doing nothing, though, is a risk.

It’s likely that the controls haven’t been challenged so there may be some hidden flaws that haven’t been identified that could lead to non-conforming product, hazardous operating conditions, extended delays, etc.

Historical Data

Historical data can certainly be used to support validation. For example, if there is detailed and statistically-significant evidence that production runs are well controlled you could rationalize and justify not doing full validation.
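
For example, a quick capability check on historical release data, a sketch only with made-up numbers and illustrative specification limits, could support that rationale:

import statistics

# Hypothetical historical assay results (% of label claim) and illustrative specification limits.
results = [99.1, 100.4, 99.8, 100.9, 99.5, 100.2, 99.9, 100.6, 99.7, 100.1]
lsl, usl = 95.0, 105.0

mean = statistics.mean(results)
sd = statistics.stdev(results)

# Process capability index: how comfortably the process sits inside its specification.
cpk = min(usl - mean, mean - lsl) / (3 * sd)
print(f"mean={mean:.2f}  sd={sd:.2f}  Cpk={cpk:.2f}")

A Cpk comfortably above 1.33 is a common rule of thumb for a well controlled process, but the threshold and the amount of historical data required should be agreed with quality.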

During retrospective validation, it’s advisable that existing product be quarantined, and production put on hold until validation is complete.

As an exception, producing product as part of the validation exercise would follow concurrent validation. This may not be practical since product may have already been distributed, but caution is advised for the reasons outlined.

General Process

The general process for retrospective validation follows the same process as for prospective and concurrent validation, except that DQ is seldom performed, as the system has already been in use for some time.

Instead a survey and review of available information is performed. This normally occurs before the validation plan is created.

The MVP should also provide guidance on managing inventory during retrospective validation.

One Major Issue

One potential major problem with retrospective validation is determining what action should be taken if an issue is found with the system during the exercise.

As with everything else, a risk-based decision is warranted. This could be anything from product recall, to customer notifications, to just documenting the justification of the decision why nothing was done.

Again, the MVP should provide guidance on dealing with out-of-specification conditions revealed during retrospective validation, which should definitely include involving regulatory support.
