The role of the Qualified Person is evolving faster than most pharma supply chains are ready for, and AI is at the center of that transformation. In a session titled "AI and the Future QP: From Gatekeeper to Navigator" at Making Pharmaceuticals 2026, Mukesh Patel of CommQP delivered a practical assessment of where the QP role stands today, where AI fits in, and what the road ahead looks like for quality professionals and the organizations that rely on them.
The QP Today: A Model Built on Retrospection
To understand where AI adds value, it helps to understand the limitations of the current model. As Mukesh outlined, today’s QP operates as a classic gatekeeper: manufacturing happens, testing follows, documentation is reviewed, and the QP certifies the batch. The process is retrospective, document-based, driven by human judgment, and conducted batch-by-batch.
A QP certifying a batch today may have no immediate visibility into a similar batch certified three months ago. There is no inherent pattern recognition, no cross-batch signal detection, just a human working through a stack of records and applying experience-based judgment.
It is a model that has served patient safety well so far, but Mukesh argued that it should now evolve as AI enters quality operations.
Key Insight 1: AI Can Be Mapped to All 21 QP Responsibilities
Mukesh presented a detailed slide mapping potential AI use cases against each of the 21 QP responsibilities defined under Annex 16. The full scope of that mapping signals that AI integration in quality operations is no longer a future aspiration but a present-day question; a short sketch after the table shows what one row could look like in practice.
| Clause | QP Responsibility | AI Use Case |
|---|---|---|
| 1.7.1 | GMP compliance | Batch record review, anomaly detection across GMP data |
| 1.7.2 | Supply chain traceability | Automated supply chain mapping from systems |
| 1.7.3 | Audits performed | Audit report summarization, trend analysis |
| 1.7.4 | MA site compliance | Cross-check site list vs MA dossier |
| 1.7.5 | Activities vs MA | Compare batch records vs dossier requirements |
| 1.7.6 | Materials compliance | CoA/spec comparison, supplier monitoring |
| 1.7.7 | API GMP/GDP | Evidence aggregation (audits, declarations) |
| 1.7.8 | API import compliance | Document completeness check |
| 1.7.9 | Excipient GMP | Risk-based excipient classification |
| 1.7.10 | TSE compliance | Track declarations, expiry alerts |
| 1.7.11 | Documentation complete | Missing data detection, workflow checks |
| 1.7.12 | Validation & training | CPV trending, training matrix checks |
| 1.7.13 | QC results compliance | Trend analysis, RTRT analytics |
| 1.7.14 | Commitments & stability | Commitment tracking, stability trending |
| 1.7.15 | Change impact | Predictive impact assessment from history |
| 1.7.16 | Investigations complete | Deviation/OOS pattern analysis |
| 1.7.17 | Complaints/recalls impact | Signal detection across complaints/recalls |
| 1.7.18 | Technical agreements | Clause extraction, gap analysis |
| 1.7.19 | Self-inspection | Risk-based inspection planning |
| 1.7.20 | Distribution arrangements | Lane risk analysis, excursion monitoring |
| 1.7.21 | Safety features | Vision systems for serialization/tamper checks |
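To make one row concrete, clause 1.7.13 pairs QC results compliance with trend analysis. Below is a minimal sketch of that idea; the batch data, field names, and the 3-sigma rule are all illustrative assumptions, not anything shown in the session.

```python
from statistics import mean, stdev

# Hypothetical batch assay results (% of label claim), oldest first.
history = [99.8, 100.1, 99.6, 100.3, 99.9, 100.0, 99.7, 100.2]
new_batch = {"batch_id": "B-1042", "assay": 97.9}

# Illustrative trend rule: flag any result more than 3 standard
# deviations from the historical mean for QP attention.
mu, sigma = mean(history), stdev(history)
z = (new_batch["assay"] - mu) / sigma

if abs(z) > 3:
    print(f"{new_batch['batch_id']}: assay {new_batch['assay']} is "
          f"{z:+.1f} sigma off trend; flag for QP review")
else:
    print(f"{new_batch['batch_id']}: within trend, no flag raised")
```

Note that the output is a flag routed to the QP, not a disposition decision; that boundary is the subject of the next insight.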
Key Insight 2: There Are Clear Boundaries Between Good and Bad AI Use
Mukesh drew a distinction between where AI genuinely helps and where it introduces unacceptable risk, a framework directly relevant to evaluating CDMO quality systems.
Good uses of AI in QP responsibilities:
- Evidence aggregation and document comparison
- Anomaly detection and trend analysis
- Risk ranking and workflow gating
- Computer vision checks for sterile and serialization inspection
- Draft assessments for QP review
Uses that should raise a red flag:
- Autonomous final QP certification decisions
- Unsupervised closure of deviations or out-of-specification results
- Replacing human audit judgment
- Accepting AI-generated summaries without verifying the source
- Deploying opaque models where explainability, validation, or traceability are weak
Mukesh flagged the risk of AI hallucination: when a system pulls summary information from external sources, that information may be inaccurate, and the QP reviewing it may have no way of knowing. Source traceability is a GMP requirement.
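One way to operationalize that point is a hard gate in the review workflow: an AI-generated summary is admissible only if every claim in it resolves to a controlled source document. A minimal sketch, with a summary structure and document register invented for illustration:

```python
# Hypothetical structure: an AI-generated summary broken into claims,
# each expected to cite a document in a controlled register.
register = {"AUD-2025-014", "COA-7781", "DEV-0093"}
summary = [
    {"claim": "All three supplier audits closed with no critical findings",
     "source": "AUD-2025-014"},
    {"claim": "Supplier CoAs matched specification for all lots",
     "source": None},  # the hallucination case: nothing to trace
]

def untraceable(claims):
    """Return claims whose source is missing or not in the register."""
    return [c for c in claims if c["source"] not in register]

for c in untraceable(summary):
    print(f"BLOCKED from QP review (no traceable source): {c['claim']!r}")
```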
Key Insight 3: Keep a Human in the Loop
Using Clause 1.7.16 — investigations — as a worked example, Mukesh illustrated where AI augments human judgment and where it cannot substitute for it.
AI can detect patterns across deviations, link recurring issues across batches and sites, support root cause hypotheses, and flag gaps in investigation completeness. What it cannot do is confirm a true root cause, evaluate scientific adequacy, or make the judgment that an investigation has been completed to a sufficient level — the exact language of the regulatory requirement, and explicitly a QP judgment call.
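That boundary can be made structural rather than merely procedural. In the hypothetical sketch below (the class, states, and names are mine, not from the session), the AI side of the workflow can only accumulate draft findings; closure is unreachable without a recorded human decision:

```python
from dataclasses import dataclass, field

@dataclass
class Investigation:
    """A deviation investigation: AI may draft, only a human may close."""
    inv_id: str
    ai_findings: list[str] = field(default_factory=list)
    closed_by: str | None = None  # only ever set to a named person

    def add_ai_finding(self, finding: str) -> None:
        # The AI side can accumulate draft observations, nothing more.
        self.ai_findings.append(f"[AI draft] {finding}")

    def close(self, qp_name: str, rationale: str) -> None:
        # Closure is reachable only through a recorded human decision.
        if not qp_name or not rationale:
            raise ValueError("closure requires a named QP and a rationale")
        self.closed_by = qp_name

inv = Investigation("DEV-0093")
inv.add_ai_finding("Similar stopper defect pattern in 3 prior batches")
inv.close("J. Smith (QP)", "Root cause confirmed; CAPA judged adequate")
```

The design point is that the human-in-the-loop requirement lives in the data model itself, not in an SOP a rushed user can skip.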
Key Insight 4: Regulatory Expectations
Annex 22, the emerging European guidance on AI in GMP contexts, sets expectations around model validation, transparency, explainability, lifecycle monitoring, and human oversight. Mukesh highlighted a distinction between traditional deterministic validation, where outputs are predictable and binary, and the probabilistic nature of AI systems, where the same input may not always produce the same output.
His bottom line: AI systems should be validated with GMP-appropriate rigor, intended use and decision boundaries should be defined in advance, and bias, drift, and false negatives should be actively monitored across the model's lifecycle.
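What "actively monitored" could look like in practice: periodically re-score a fully human-reviewed sample and compare the model's false-negative rate against the rate accepted at validation. A minimal sketch with invented numbers and an illustrative limit:

```python
# Hypothetical monitoring sample: (model_flagged, human_flagged) pairs
# from a periodically drawn, fully human-reviewed set of records.
sample = [(True, True), (False, False), (False, True),
          (True, True), (False, True), (False, False)]

# False negatives: records the model cleared but a reviewer flagged.
misses = sum(1 for model, human in sample if human and not model)
flagged_by_human = sum(1 for _, human in sample if human)
fn_rate = misses / flagged_by_human

VALIDATED_FN_LIMIT = 0.10  # illustrative limit accepted at validation

if fn_rate > VALIDATED_FN_LIMIT:
    print(f"False-negative rate {fn_rate:.0%} exceeds the validated "
          f"limit of {VALIDATED_FN_LIMIT:.0%}: escalate and revalidate")
```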
Key Insight 5: Industry Adoption
Despite the potential, Mukesh noted that actual uptake remains limited. In his conversations with pharmaceutical companies about live AI applications in QP-related processes, the consistent finding is that very little is actually deployed. Areas with the most traction include visual inspection in sterile manufacturing, supply chain monitoring, and process deviation detection.
Questions for CDMO Buyers:
- Does your quality team have a defined AI strategy under your quality management system?
- How are AI tools validated for GMP use, and how is that validation documented?
- Where is human QP review explicitly retained in AI-augmented processes?
- How do you manage model drift, bias, and false negatives over time?
- Can you demonstrate source traceability when AI is used to generate summaries or assessments?