The Great Screenshot Debate – Vote Now!

Software validation has become an integral part of the overall software lifecycle when an application is used in a regulated environment. If the application is defined as a GxP system, or as critical to quality, a high degree of testing is required before the application can be used in the manufacturing process.

What should happen is that a comprehensive Vendor Audit is carried out to ensure that the system has been developed to high quality standards. This is essential so that you have a high degree of assurance that the system is sound, and so that the vendor's work can be leveraged later, at the validation stage.

Before a software validation script is drafted in anger, it is essential to perform a robust risk assessment so that you don’t end up testing every single mouse click. In reality, what often happens is that people with little experience of the system get together and come to the conclusion:

“Hey, everything is critical in this system, we have to test anything that moves!”

Does this sound like a familiar situation to you?

Anyway, I am getting off the point slightly. One of my major gripes with software validation testing is the constant call for screenshots to prove that a certain value appears on the screen when a calculation is performed or a button is clicked.

What I don’t understand is, why is this necessary?

Surely, when highly qualified people develop and execute these scripts, it should be sufficient for the tester to sign off that the acceptance criteria have been met, or to record the actual value.

Are we saying that validation people cannot be trusted, and you need a screenshot for everything?

Is there a requirement out there that states screenshots are the preferred form of documented evidence?

I think it’s time to open up this debate and see what the general feeling is out on the front line. If we are ever to get to a point where we have a lean validation approach, these are the issues that need to be addressed.

Please add your comments below or click here to leave them in the forum.

  • gokeeffe

    David Stokes • No they’re not. It’s perfectly acceptable to allow a trained tester to enter qualitative or quantitative test results by hand, as per the instructions in the test script.
    Screen shots can be useful for saving time, especially when recording a lot of values.

  • gokeeffe

    Fernando Pedeconi • The FDA established in 1987 that Computer Systems Validation is “Establishing documented evidence which provides a high degree of assurance that a specific process will consistently produce a product meeting its pre-determined specifications and quality attributes.”
    It is true that you can demonstrate, through written statements on a validation protocol, whether the computerised system meets its predefined specifications and quality attributes; however, a key element is to determine what constitutes verifiable and indisputable evidence.

  • gokeeffe

    David Stokes • It should be possible to recreate any test environment and re-execute testing. Recorded results therefore constitute evidence that can be verified and confirmed (albeit not easily). This should be sufficient taking a risk-based approach to testing.

  • gokeeffe

    Fernando Pedeconi • Agreed, it should be possible. However, I’m not so sure how practical that would be. Say, for instance, that a given software functionality was used in production during a given timeframe, before which it did not exist and after which it was removed. You need to demonstrate to an inspector what the behaviour of that functionality was during that timeframe and its impact on the process and the products made through it.
    Although it may be entirely possible to replicate the exact environment in which that functionality was used, we’d probably agree that in the course of an investigation (either a regulatory or an internal one) you may run into the lack of a luxury resource, which is time. I would therefore suggest documenting verification and testing activities on critical functionality as thoroughly as possible, according to a risk-based approach.

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • Fernando, not to be picky with your comment, but the fact that you are referring to a statement by the FDA from 1987 concerns me. If we continually look to the past there will be no future. I agree with David 100% here; we’ll never get to the place we want to go if we continue looking to the past.

    The overhead involved with managing paper screenshots is huge for everyone involved in the process. Can we not trust the people executing the tests to do it correctly?

    Let’s focus on the quality of what we do, not the aesthetics of the documents and attachments.

    I don’t know how many times I’ve heard from Quality Departments that the FDA needs to see screenshots!

    People are just making up this stuff as they go along!

  • gokeeffe

    Fernando Pedeconi • Graham, one of the requirements of the “evidence” is that it needs to be objective, i.e. verifiable by a third party. You cannot refute a statement made by someone without either repeating the testing yourself (in which case you run the risk of putting yourself in the rather awkward situation of having to have that test system readily available at any point in time) or being able to compare the statement against evidence provided by the tester according to a pre-approved protocol.

    By the way, the above is not exclusive to computer systems validation; it works like that in any other scientific discipline you can think of.

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • Thanks for the clarification, Fernando. So are we saying then that a tester verifying results on the test script is not objective evidence? Does that mean another person will have to watch the screen at the time of execution too, to represent the third party?

    I’m still not clear on this one!

  • gokeeffe

    Joy Gray • Graham, people certainly are making this stuff up as they go along, but in some cases out of necessity!

    FDA regulations can be vague at best, and people are trying to read the minds of FDA inspectors who haven’t even walked through the door yet.

    Working for a QMS software development company and SaaS provider, customer audits are frequent. It’s fascinating to see what each company wants to see, or what their auditor-for-hire wants to see, what they think the FDA would want to see, and what I think is right and reasonable for a non-regulated company that admittedly becomes an extension of the regulated company with the SaaS QMS (EDMS, Training, Audit, CAPA, Complaints).

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • Hi Joy,

    Well, I do believe it is more important than ever for software development companies to have a robust quality system in place, not only from the auditor’s perspective but also to allow the client to leverage their documentation to reduce the validation burden down the line.

    Why does the FDA get away with having vague regulations anyway? This always seems to be the crux of the problem!

  • gokeeffe

    David Stokes • The GAMP Testing SIG is currently updating the “Testing of GxP Systems” Good Practice Guide and is looking at the question of test evidence. I can’t tell you yet what will be in the new version, but the existing version (which was reviewed by the US FDA) does address this issue and does not require screen shots.

  • gokeeffe

    Fernando Pedeconi • David, I think they would need to look at the risk factor and the implications of why indisputable, objective evidence is required to prove the system meets its predefined specifications, throughout the lifespan of the system (and beyond, actually).

    Graham, objective evidence has to last and be verifiable at a future time.

  • gokeeffe

    Graham O’Keeffe +353 87 6006529 • This is again further proof to me that we need clear, understandable ways to do things.

    I assume you are both experts in your fields, yet your outlooks seem to differ. We are working in scientific fields; there should be a right and a wrong way to do things, and it really shouldn’t come down to opinion.

    When we talk about risk factors, I’ve never worked on a project where a comprehensive risk assessment was performed on an application before validation commenced, not to mention a risk assessment on the viability of not using screenshots.

  • Frank Houston

    There are times when a screenshot is the most efficient way to record the expected result of a test, and not just for the “critical” requirements, for instance, when the test script itself uses a screen shot to describe the expected result as a system attribute. When the expected result is expressed as the unique value of some variable, which can be copied into the result block, then a screen shot may be overkill.

    A pet peeve of mine is the test script where it seems as if the result of every mouse-click and screen change gets recorded on the way to the screen where the attribute or variable under test is actually challenged. That is more of a problem than overuse of screen shots.

  • gokeeffe

    Posted by Peter Glaubenstein
    Where possible I would add screen shots as supplemental evidence that an action did happen as per your acceptance criteria, provided that there is sufficient traceable information on the screen shot itself, i.e. a date and time stamp. This information can then be used for training purposes and training matrices, thus meeting a few key areas of GxP criteria.

  • Bill Stiemsma

    I always believe in the practice that there should be sufficient evidence for the reviewer/approver/auditor to come to the same conclusion as the tester that the specified acceptance criteria have been met. Step-by-step screenshots of navigation are not required (in my opinion), but a final screenshot showing the intended result of the test is indisputable.

  • Neeta

    Yes, screenshots are required for critical results. They act as proof of validation for audit purposes. It’s not that we don’t trust validation people.

  • Aaron Leimback

    Another way that you could look at the use of screen shots is that it helps to prevent dishonesty during test execution. If we lived in a perfect world, screen shots wouldn’t be necessary because we would know that the script is an accurate representation of the test results. Screen shots not only support the documented test results but also provide the objective evidence that the intended use of the system meets the needs of the business. This doesn’t mean every step; as Bill said, only where it adds value.

  • Tim Horgan

    I do not believe there is a requirement to use screenshots, and don’t recommend it. We do a lot of other validation execution on manufacturing equipment that is typically documented by written responses and signatures. I don’t see why we should use a different standard for validation of computer systems. (I’m referring to validation of applications that support the manufacturing environment, not validation of software contained in a medical device. That may present different considerations.)

  • Toni

    You should not have screen prints for every step throughout your test case, but rather only the key steps that can prove your acceptance criteria pass or fail. In lieu of screen prints you can have a witness. It depends on your company’s procedures. I don’t think there are regulatory requirements that specifically state you must provide ‘screen prints’. I have reviewed and approved documents in either case. I have seen cases where screen prints were helpful in resolving deviations, and I have seen overuse of screen prints, which is not effective. It all depends on your company’s procedures and how well test cases are developed to ensure that we have the documented evidence, whether a screen print or an executed test case signed and dated by a trained executor.

  • Chandra

    Can we electronically record actual results including embedded screen shots (say on a test script that is developed on a spreadsheet) and then print, sign & date?

  • Matt Damick

    You need to be clear about what constitutes (product) quality-related requirements. Such requirements should be process-driven primarily (e.g., a unit operation results in a material of such-and-such characteristic), and then reg-driven (e.g., Part 11 requirements, when applicable). I doubt that all of an application’s attributes are traceable to quality-related requirements. Many are (albeit important) business requirements – such as audit trails (the exception being when we’re in Part 11 territory, of course), report formats, animation schemes for HMI graphics, tagging conventions, etc.

    Screen shots work as evidence when you can clearly tie them to non-transient events/attributes (e.g., setpoints, calculation results, equipment status), a clear set of procedures used to arrive at the event to be observed, and well-controlled versions of whatever software and hardware environment used for testing. However, screen shots are cumbersome and require labeling (word processing?) to associate them with test protocols – increasing execution resources.

    Quality-related attributes of HMIs might include indications of suitable SIP or autoclave cycle completion, certain instrument displays, but probably not stuff like the position of a valve or whether a prox switch is activated for a transfer panel swing-arm.

    As long as you have the necessary evidence of personnel qualification, unambiguous validation SOPs and protocol instructions, and good computerized system version control – all of which goes to test repeatability – screen shots are not likely necessary. I do agree with previous posters that screenshots are handy for displaying large amounts of info requiring verification. Spreadsheet cell configuration would be a prime example of when screenshots are useful.

  • Clearly this is a discussion where the answer is “it depends”. Everyone in the thread has good arguments for and against. I agree with the argument that a screenshot is not required after every button click; that is cumbersome and unnecessary. My approach is to provide screenshots only on the test steps where a requirement is being verified. It does take a little bit of time, but it removes all doubt of tester error.

  • Martin

    I think (i.e. opinion only) judicious and strategic use of screenshots is warranted. A key part of assessing a system for Part 11 compliance is determining what records are being processed that are required to show compliance with predicate rules. Key points in the processing of those records would make good targets for screenshots. Another would be any risk control measures where operator or patient safety is implicated.
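
A few of the comments above (Peter’s point about a date and time stamp, Matt’s point about labelling screenshots so they tie back to the protocol, and Chandra’s question about embedding them in a spreadsheet-based test script) come down to the same mechanics. The sketch below is one possible way of doing that in Python, assuming Pillow and openpyxl are available; the file names, sheet name, cell anchor and step ID are made up for illustration, and any printed, signed and dated copy would still be governed by your own procedures.

    # Minimal sketch: capture a timestamped, labelled screenshot and embed it
    # next to the test step in a spreadsheet-based test script.
    from datetime import datetime

    from PIL import ImageGrab                       # screen capture (Windows/macOS)
    from openpyxl import load_workbook
    from openpyxl.drawing.image import Image as XLImage


    def capture_evidence(step_id: str) -> str:
        """Grab the current screen and save it under a traceable, timestamped name."""
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"{step_id}_{timestamp}.png"
        ImageGrab.grab().save(filename)
        return filename


    def attach_to_test_script(workbook_path: str, sheet: str,
                              anchor_cell: str, step_id: str) -> None:
        """Embed the labelled screenshot beside the test step it supports."""
        evidence_file = capture_evidence(step_id)
        wb = load_workbook(workbook_path)
        ws = wb[sheet]
        ws.add_image(XLImage(evidence_file), anchor_cell)
        wb.save(workbook_path)


    if __name__ == "__main__":
        # Hypothetical workbook layout and step ID, purely for illustration.
        attach_to_test_script("test_script.xlsx", "OQ-Test-07", "E12",
                              step_id="OQ-07-STEP-03")

Whether something like this counts as acceptable evidence still depends on your own procedures and risk assessment; the point is only that the timestamping and labelling the commenters ask for is cheap to automate.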

Similar articles:

An Alternative View of the ICH Q10 Pharmaceutical Quality System (PQS)

The image below is taken from ICH Q10, Annex 2, published by the International Conference on Harmonisation (ICH), and is supposed to depict a PQS, or Pharmaceutical Quality System.

Generally, I really love the ICH. When we have to deal with outdated regulations from different global organizations, it becomes a real nightmare trying to keep track of the nuances, and the ICH has done a pretty good job of bringing several of the key organizations together and aligning them on how best to organize and meet the expected requirements.

That being said, the diagram below, and the depiction in Q10 of what a PQS should look like, is greatly lacking.

Development Phases

In section 1.8 under the Quality Manual the ICH Q10 guidance states, “The description of the PQS should include: …(c) Identification of the pharmaceutical quality system processes, as well as their sequences, linkages and interdependencies.

Process maps and flow charts can be useful tools to facilitate depicting the pharmaceutical quality system processes in a visual manner”.

I completely agree.

The problem is that the graphical depiction they present in Annex 2 is completely worthless.

Basically they listed some of the PQS elements in a bar and then said they all apply to the entire product lifecycle, which simply isn’t true.

When we are in the development phase of our product lifecycle, why would we manage changes under the change management system, or monitor process performance?

 

Controlling Change – No Value Add

There is no point in controlling changes for a product that is purposely being changed, nor does it offer any value to monitor the process performance for a process that has yet to be developed.

This isn’t a graphic depiction of the PQS, but rather a graphic of how they depict the lifecycle management (which also has some issues).

The PQS is the quality system and its subsystems and how they interrelate.

While it’s useful to look at how the PQS and product lifecycle management overlap, and which elements of the PQS are relevant at each lifecycle stage, that is not the point of the PQS, and even if that is the end goal it is not depicted here at all.

This image offers almost no value.

A Better Approach

So, what should this graphic look like?

While this is not a perfect view of a PQS, I would propose that the image below is a much better depiction of how the PQS should be visualized and a good place to start.

At the core of any quality system should be management. This goes back to Deming, who said, “Quality begins with the intent that is fixed by Management”.

Quality has to be rooted in the executive management team.

Define Core Quality Systems

Core quality systems then need to be defined. These are systems that impact all aspects of the business and include a Risk Management Policy, Resource Management, Document Control and CAPA systems.

All of the other subsystems (Deviations, Supplier Management, Equipment Qualification, Validation, Material Management, and so on) should be risk based or involve risk assessment; they all require resources and training; they all require documents (procedures, policies, records); and the CAPA system, of course, drives process improvement regardless of the process.

Subsystems

All subsystems feed back into the main Management module. The subsystems listed are all interconnected, with the exception of the Post Market Systems.

The subsystems are important too, but they are farmed out to different groups and have different levels of importance depending on the stage of the product lifecycle.

Post Market Systems

The one exception is the Post Market Systems. These include complaint management, product reviews, recall processes and other systems that support marketed products.

These generally do not interact with the other subsystems except through the CAPA system or other management functions, but they still utilize all the systems under the management umbrella.
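
For readers who think in data structures rather than diagrams, the relationships described above can be summarised in a short sketch. This is purely illustrative of the alternate view proposed here; the names come from the text above, not from ICH Q10 itself.

    # Illustrative model only: management and the core systems apply to every
    # subsystem, and Post Market Systems interact with the rest mainly via CAPA.
    from dataclasses import dataclass


    @dataclass(frozen=True)
    class Subsystem:
        name: str
        interacts_with_other_subsystems: bool = True  # Post Market Systems is the exception
        feeds_management_review: bool = True          # everything reports back to management


    CORE_SYSTEMS = [  # apply across every subsystem, under the management umbrella
        "Risk Management Policy",
        "Resource Management",
        "Document Control",
        "CAPA",
    ]

    SUBSYSTEMS = [
        Subsystem("Deviations"),
        Subsystem("Supplier Management"),
        Subsystem("Equipment Qualification"),
        Subsystem("Validation"),
        Subsystem("Material Management"),
        Subsystem("Post Market Systems", interacts_with_other_subsystems=False),
    ]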

Alternate View

The PQS presented here isn’t intended to be perfect, but I thought it was worth presenting an alternate view to the one presented by the ICH.

The ICH concept is a good one, and the ideas are fairly well laid out in the ICH, but the graphical representation of the PQS leaves a lot to be desired.

When establishing a PQS, it is better to start with something similar to what we’ve depicted here, and customize it as needed for the organization.

Similar articles:

How 21 CFR Part 11.3(7) Applies to Electronic Batch Records [Video]

When dealing with Part 11, it’s important to understand what an electronic signature actually means.

The definition of electronic signatures or e-sigs can be found in 21 CFR Part 11.3(7).

Electronic Signature

An electronic signature or e-sig means a computer data compilation of any symbol or series of symbols executed, adopted, or authorized by an individual to be the legally binding equivalent of the individual’s handwritten signature.

Handwritten Signatures

We also need to understand what a handwritten signature means in the context of Part 11.
The definition of handwritten signatures can be found in 21 CFR Part 11.3(8).

Handwritten signature means the scripted name or legal mark of an individual handwritten by that individual and executed or adopted with the present intention to authenticate a writing in a permanent form.

The act of signing with a writing or marking instrument such as a pen or stylus is preserved. The scripted name or legal mark, while conventionally applied to paper, may also be applied to other devices that capture the name or mark.

Electronic Batch Records

Eric works in a pharmaceutical company and is responsible for the filling process of the batch being manufactured.

Each time Eric performs the filling process he has to populate a batch record with the appropriate details.

After each step Eric must also fill in his signature and date to verify that he actually performed each task.

Eric is manually handwriting these details, and they are legally binding on him.

21 CFR Part 11.3(8)

This is when 21 CFR Part 11.3(8) applies.

Fast forward 12 months and Eric’s company has implemented a brand new Manufacturing Execution System (MES) where all details around the batch manufacturing process are recorded electronically.

21 CFR Part 11.3(7)

Now when Eric performs the filling process, he populates everything electronically and signs with his username and password combination to verify that he has performed those tasks.

This is when 21 CFR Part 11.3(7) applies.
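
To make the contrast with the handwritten case concrete, here is a minimal sketch of the kind of information an MES might capture when Eric signs a step electronically. It is an illustration only: the field names, step ID and sign_step helper are invented for this example and are not taken from Part 11 or from any particular MES product.

    # Hypothetical sketch of an electronic signature applied to a batch-record step.
    from dataclasses import dataclass
    from datetime import datetime, timezone


    @dataclass(frozen=True)
    class ElectronicSignature:
        user_id: str          # unique user ID, one component of the signature
        signed_at: datetime   # date and time the signature was executed
        meaning: str          # what the signature represents, e.g. "Performed by"
        record_id: str        # the batch-record step the signature is linked to


    def sign_step(user_id: str, password_ok: bool,
                  record_id: str, meaning: str) -> ElectronicSignature:
        """Apply a signature only after the MES has verified the user's password."""
        if not password_ok:
            raise PermissionError("Authentication failed; no signature applied.")
        return ElectronicSignature(user_id, datetime.now(timezone.utc),
                                   meaning, record_id)


    # Eric signs the filling step with his username/password combination
    # (the record ID below is a made-up example).
    signature = sign_step("eric", password_ok=True,
                          record_id="BATCH-1234/FILL-STEP-01",
                          meaning="Performed by")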
