When a device manufacturing deviation occurs, the areas that most often prove problematic during the root cause investigation include the scope of the investigation, the use of appropriate statistical methodology, and the inclusion of all shared processes, equipment, and procedures, according to FDA investigators.
Also challenging is putting plans in place to prevent recurrence of the deviation using the firm’s Corrective and Preventive Action (CAPA) system and formulating and following up with the appropriate effectiveness checks.
[Related: Looking for more insightful content from author Jerry Chapman? Download a FREE compilation report containing four GMP case studies of his that include FDA analysis.]
At the FDA/Xavier MedCon conference, held virtually in April 2021, Liza Garcia, a Medical Device Specialist in FDA’s Office of Medical Device and Radiological Health Operations based at the agency’s Puerto Rico duty station, shared insights into what Office of Regulatory Affairs (ORA) field investigators review during manufacturing site inspections as they appraise whether a firm is adequately executing and documenting effectiveness checks under its CAPA program.
CAPA Effectiveness Checks
She asked, “What are some of the conditions that we typically observe that are not necessarily indicative of adequacy” when companies implement and comply with effectiveness check requirements?
In her presentation, Garcia:
- Reviewed the 21 CFR regulatory requirements for CAPA and effectiveness check-related activities, including how the checks are adjudicated and documented, and
- Detailed what investigators look for when trying to evaluate how adequately firms are implementing the agency’s requirements, including “the conditions that we observed, which do not necessarily represent adequacy when conducting such activities.”
She emphasized that “for the purposes of this presentation the scope is only how effectiveness checks are conducted and how they are documented. The rationales that are exercised by firms deciding whether to validate and/or verify corrective actions are outside of the scope of this presentation.”
Regulatory requirements stem from 21 CFR 820.100(a)(1), Garcia explained, which “points to the variety of quality data source inputs to CAPA and whatever mechanism firms have established in the form of procedures that escalate analysis of the quality data sources in corrective and preventive actions.”
While this requirement does not necessarily or directly reference effectiveness checks, it is relevant because the same quality data sources are usually monitored to measure effectiveness.
The explicit requirement that makes direct reference to effectiveness is contained in 820.100(a)(4). That subsection of the regulation states that corrective actions must be verified and/or validated after they have been implemented.
“The effectiveness of these corrective actions is to be adjudicated by virtue of the corrective actions not having an adverse impact on the finished devices,” Garcia pointed out.
All CAPA activities, including effectiveness checks, need to be documented as prescribed by 820.100(b). The precedent for the regulatory requirement, as it explicitly relates to effectiveness checks, is found in the regulation preamble comment 163:
“FDA has revised Sec. 820.100(a)(4) to reflect that preventive, as well as corrective, action must be verified or validated. The section is now consistent with ISO 9001:1994, sections 4.14.2(d) and 4.14.3(c)….”
Garcia noted that “there are typically some recurrent actions or lack thereof, that we observe during inspection, which are indicative of some type of inadequacy when adjudicating effectiveness.”
For example, a risk-based approach “is not necessarily exercised” when conducting investigations and implementing actions. Ultimately, the goal or the purpose of implementing a corrective action is that there is no adverse impact on the finished device. The expectation is that the actions use a risk-based approach as related to devices, safety, and performance.
Also noted on inspection is that the scope of affected products that is considered when determining whether an action is or will be effective is often inadequate. This can manifest itself by a lack of consideration of common denominators of reported problems—for example, products that share the same systems but are not considered as part of the corrective actions that have been implemented and/or the effectiveness checks.
Other common denominators could include anything related to:
- Shared processes
- Shared equipment
- Shared procedures
“We also often see the misuse of statistics when measuring for effectiveness or the inadequate selection of statistical methodology,” Garcia said. “An example would include when firms incorrectly interpret the methodology that is supplied by a given standard, for example, or simply use a methodology that is not able to capture what is needed to be measured.”
To adequately measure effectiveness, “it is imperative that the detectability of recurrence be measured or determined,” she stressed. “We expect that the manner or the means that firms have implemented to detect recurrence or lack thereof is adequate.”
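Garcia’s point about selecting a methodology that can actually capture what needs to be measured can be illustrated with a simple worked example. The sketch below is not from the presentation, and the lot sizes and defect counts are hypothetical; it uses a one-sided two-proportion z-test to check whether a nonconformance rate genuinely dropped after a corrective action, rather than relying on the raw counts alone:

```python
from math import sqrt, erfc

def two_proportion_z(defects_before, n_before, defects_after, n_after):
    """One-sided two-proportion z-test: did the defect rate decrease?"""
    p1 = defects_before / n_before
    p2 = defects_after / n_after
    pooled = (defects_before + defects_after) / (n_before + n_after)
    se = sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    z = (p1 - p2) / se
    # One-sided p-value: probability of seeing this large a drop by chance
    p_value = 0.5 * erfc(z / sqrt(2))
    return z, p_value

# Hypothetical lots: 30 defects in 1,000 units before the corrective
# action, 12 defects in 1,000 units after.
z, p = two_proportion_z(30, 1000, 12, 1000)
print(f"z = {z:.2f}, one-sided p = {p:.4f}")
```

A small p-value here supports (but does not prove) effectiveness; the same arithmetic also shows why a handful of post-change units is rarely enough to draw any conclusion.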
The timeframes established for monitoring can also be problematic: many are not long enough for firms to observe any recurrence that may manifest, since some defects are time-dependent. “We also see a lack of any action or measures of previous non-conformities as they relate to the time that the corrective actions were implemented and how that is analyzed respectively.”
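The timeframe concern has a simple quantitative side: if the residual defect rate is low, a short monitoring window may have little chance of ever observing a recurrence. The sketch below is illustrative only; the 0.1% residual rate is a hypothetical assumption, not a figure from the presentation. It computes the probability of seeing at least one recurrence in a given window, and the number of units that would need to be monitored for 95% confidence:

```python
from math import log, ceil

def p_detect(defect_rate, units_monitored):
    """Probability of observing at least one recurrence in the window."""
    return 1 - (1 - defect_rate) ** units_monitored

def units_for_confidence(defect_rate, confidence=0.95):
    """Units to monitor to see >= 1 recurrence with the given confidence."""
    return ceil(log(1 - confidence) / log(1 - defect_rate))

# Hypothetical residual defect rate of 0.1% after the corrective action.
rate = 0.001
print(f"P(detect in 500 units)   = {p_detect(rate, 500):.2f}")
print(f"Units for 95% confidence = {units_for_confidence(rate)}")
```

Under these assumed numbers, a 500-unit window detects a recurrence well under half the time, which is exactly the kind of mismatch between monitoring window and defect behavior that investigators flag.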
Also noted by agency investigators are issues with how firms document recurrence or how firms do or do not consider recurrence when evaluating the effectiveness of corrective and preventive actions that have been implemented.
While many firms have clearly established a mechanism to analyze all the appropriate data sources, those sources may be omitted from the related assessments when it is time to document effectiveness, Garcia said.
“Sometimes the firm has a way to detect recurrence but does not detect it before determining effectiveness, and there is a lack of continued monitoring after effectiveness has been adjudicated. That is something that is common to see.”
She also noted that it is common to see a CAPA closed as effective, but “later we see the problem being manifested or we do not see that there is a continuous analysis of those quality data sources to see whether the problem repeats itself.”
Addressing Recurrent Problems
Regarding prevention of recurrence, investigators assess whether a firm has selected a verification method that is able to capture the failure repeating itself over time. That can take a variety of forms and/or mechanisms, including acceptance activities.
“Oftentimes, one of the first things we need to determine when a firm has implemented corrective actions for those that relate to specifications of finished products or any type of product specification is whether acceptance activities have been re-evaluated, changed, or impacted in some way,” Garcia said.
If there has been a change in acceptance activities, investigators need to determine that whatever was implemented was adequate, including the testing methods. The firm should ask how suitable its test methods are at detecting nonconformance with the related product specifications.
If the effectiveness check that is done is dependent on something that manifests in the product after distribution—for example, the frequency of product complaints—it is important to determine whether the firm can detect that failure in the field.
Oftentimes agency investigators observe that the company does not have the ability to detect or measure the effectiveness criteria it sets.
Garcia said that FDA “likes to see that a firm has continuously and permanently implemented mechanisms in order to prevent and to detect any shifts in processes and/or product specifications that were not necessarily raised to the level of a non-conformity but can alert the firm to proactively and preventively address those potential failures.”
As it relates to documented CAPA activities, “we know that the requirement under 820.100(b) also includes verification activities. We often find that these tests for effectiveness are either not documented according to the CAPA procedure, or if they are, they are not documented adequately because the related data to substantiate the effectiveness and whatever records are to be the evidence that supports that are not included or are not generated at all, or the activity just does not occur.”