The question posed in this article’s title is not an empty provocation. It reflects a real, growing, and profoundly transformative tension within the field of clinical research. Historically, the principal investigator has been a human figure—medical, scientific, and ethical. But today, with the accelerated advancement of agentic artificial intelligence—systems capable of perceiving, deciding, and acting autonomously within a predefined framework—we face an unprecedented possibility: sharing operational agency in a clinical trial with non-human systems.
This possibility is not limited to speculative imagination. Platforms already exist that recruit patients automatically, adapt protocols in real time, and detect adverse events before participants report them. Each of these functions, executed by an algorithmic agent, partially shifts the traditional center of decision-making. The investigator does not disappear; rather, the role evolves from direct executor to strategic supervisor of digital agents with operational capabilities.
Asking whether a digital agent could assume the role of principal investigator forces us to rethink our definitions of clinical judgment, legal responsibility, scientific validity, and applied ethics. It is not a question seeking to eliminate the human role, but rather one that challenges it: are we ready to lead alongside systems that act, learn, and decide?
This article does not offer a definitive answer, but it is grounded in a strong conviction: agentic artificial intelligence is already among us. Its impact will not be defined solely by its technical capacity, but by our willingness—as a scientific community—to integrate it with discernment, governance, and human purpose.
1. From Agenic AI to Agentic AI: The Threshold in the Pharmaceutical Industry
In recent years, the pharmaceutical industry has widely adopted artificial intelligence tools. However, most of these applications fall under what is known as agenic AI: systems that process data, make predictions, or offer recommendations, but do not make decisions on their own. They are sophisticated observers, not autonomous actors.
For instance, a predictive model may identify patients at high risk of dropping out of a clinical trial, a system might help detect new biomarkers from biomedical literature, or an AI tool may assist in drafting regulatory reports. What they all have in common is that they depend on human input to function. They are powerful, but passive.
As we progress, a new category is emerging: agentic AI. This class of artificial intelligence possesses functional agency, meaning it can act independently within a defined framework. It can perceive its environment, prioritize objectives, and execute actions without constant human intervention. The distinction is no longer about how "intelligent" a system appears, but whether it has sufficient autonomy to make and carry out decisions.
In the pharmaceutical setting, this means that functions historically reserved for human investigators can now be shared—or even directly executed—by digital agents. This includes patient selection and monitoring, dynamic protocol adaptation, and clinical event management through automated responses. Agentic AI does not replace the investigator but profoundly transforms their role, challenging traditional notions of agency, responsibility, and control.
This evolution presents a deep dilemma, but also a strategic opportunity: can we lead this transformation without losing clinical judgment and scientific ethics? This article explores the phenomenon holistically—from emerging applications to regulatory challenges—while redefining what clinical leadership looks like in the algorithmic era.
2. Emerging Applications of Agentic AI in Clinical Research
The progressive integration of agentic artificial intelligence in clinical trials is redefining the role of technology in experimental medicine. Unlike agenic AI, which is limited to analyzing data and offering recommendations, agentic AI acts: it executes decisions within predefined parameters and responds dynamically to environmental information. This section explores its most relevant applications throughout the clinical trial lifecycle.
One of the most notable advances is the automated recruitment of patients. Platforms such as Deep 6 AI use natural language processing (NLP) and supervised machine learning to analyze millions of electronic health records (EHRs) and identify matches between real patients and inclusion/exclusion criteria for trials. The digital agent doesn’t just filter candidates; it prioritizes them, initiates contact, and autonomously monitors response rates (Deep 6 AI, 2023). Institutions like Mayo Clinic have managed to reduce recruitment time by over 75% using this approach, which is significant given that recruitment delays are among the top causes of trial failure (Collins & Stoffels, 2020).
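In highly simplified form, the matching-and-prioritization step described above can be sketched as rule-based screening over already-structured record fields. Real platforms such as Deep 6 AI apply NLP to free-text EHRs; every field name, criterion, and priority heuristic below is hypothetical:

```python
# Hypothetical, highly simplified sketch of trial eligibility screening.
# Real systems parse free-text EHRs with NLP; here records arrive structured.

def is_eligible(patient, criteria):
    """Check one patient record against inclusion/exclusion predicates."""
    meets_inclusion = all(rule(patient) for rule in criteria["include"])
    hits_exclusion = any(rule(patient) for rule in criteria["exclude"])
    return meets_inclusion and not hits_exclusion

def rank_candidates(patients, criteria):
    """Return eligible patients, highest priority first (toy heuristic:
    prefer patients with the most recent laboratory data)."""
    eligible = [p for p in patients if is_eligible(p, criteria)]
    return sorted(eligible, key=lambda p: p["days_since_last_lab"])

criteria = {
    "include": [
        lambda p: 18 <= p["age"] <= 75,
        lambda p: p["hba1c"] >= 7.0,   # illustrative: poorly controlled diabetes
    ],
    "exclude": [
        lambda p: p["egfr"] < 30,      # illustrative: severe renal impairment
    ],
}

patients = [
    {"id": "P1", "age": 54, "hba1c": 8.2, "egfr": 80, "days_since_last_lab": 10},
    {"id": "P2", "age": 61, "hba1c": 6.1, "egfr": 90, "days_since_last_lab": 3},
    {"id": "P3", "age": 47, "hba1c": 9.0, "egfr": 25, "days_since_last_lab": 5},
]

shortlist = rank_candidates(patients, criteria)
print([p["id"] for p in shortlist])  # only P1 is fully eligible
```

An agent built on this pattern would then initiate contact and track responses for the shortlist, the steps the paragraph attributes to the platform.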
Another emerging use case is adaptive informed consent. Traditionally, this process has relied on direct clinician–patient interaction. However, recent studies show that AI-powered digital assistants equipped with conversational capabilities can improve comprehension—especially among populations with limited literacy or cognitive conditions (Heng et al., 2020). These agents can detect linguistic signs of confusion, adapt explanations in real time, and use biometric data to confirm understanding before prompting for an electronic signature. This dynamic interaction improves both the ethical quality and operational efficiency of consent.
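The adapt-and-confirm loop described above can be illustrated with a deliberately toy sketch: after each tier of explanation, a comprehension score decides whether to simplify further or proceed to signature. The tiers, scores, and threshold are invented for illustration; a real system would derive comprehension from dialogue and, as noted, biometric signals:

```python
# Toy adaptive-consent loop. Explanation tiers go from technical to plain
# language; the comprehension threshold (0.8) is an invented value.

EXPLANATIONS = [
    "technical: randomized, double-blind, placebo-controlled design ...",
    "plain: you may receive the study drug or an identical-looking placebo ...",
    "simplest: a coin flip decides your group, and no one will tell you which ...",
]

def consent_flow(comprehension_scores, threshold=0.8):
    """Walk down explanation tiers until a comprehension check passes.
    If no tier works, escalate to a human clinician instead of signing."""
    for tier, score in zip(range(len(EXPLANATIONS)), comprehension_scores):
        if score >= threshold:
            return {"tier_used": tier, "proceed_to_signature": True}
    return {"proceed_to_signature": False, "action": "escalate_to_clinician"}

print(consent_flow([0.5, 0.9]))        # plainer second tier passed the check
print(consent_flow([0.1, 0.2, 0.3]))   # no tier passed: escalate to a human
```

Note the design choice: the fallback is escalation to a clinician, never an automated signature, which keeps the human in the loop for exactly the cases the paragraph flags as ethically sensitive.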
Agentic AI is also transforming adaptive clinical trials, in which study parameters change as data accumulates. Unlearn.AI, for example, has developed "Digital Twins" that virtually simulate patients’ clinical evolution, allowing for control comparisons without traditional placebo groups (Unlearn.AI, 2023). These simulations not only optimize sample sizes but also allow agents to determine in real time when a study arm should be closed due to inefficacy or when dosage adjustments are needed. All of this can occur autonomously, with periodic human review.
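The kind of autonomous arm-closure decision mentioned above can be illustrated with a minimal interim futility rule. Real adaptive designs rely on pre-specified statistical boundaries (group sequential or Bayesian); the thresholds here are invented for the sketch, and the output is a recommendation subject to the periodic human review the paragraph describes:

```python
# Illustrative interim-analysis rule for closing a study arm on futility.
# Real adaptive trials use pre-specified statistical stopping boundaries;
# min_n and futility_rate below are invented for this sketch.

def interim_decision(responders, enrolled, min_n=20, futility_rate=0.10):
    """Recommend an action for one arm; a human confirms before it takes effect."""
    if enrolled < min_n:
        return "continue"            # too little data for any decision
    observed_rate = responders / enrolled
    if observed_rate < futility_rate:
        return "close_for_futility"  # flag the arm for human-confirmed closure
    return "continue"

print(interim_decision(responders=1, enrolled=30))   # close_for_futility
print(interim_decision(responders=12, enrolled=30))  # continue
print(interim_decision(responders=0, enrolled=10))   # continue (too few enrolled)
```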
Lastly, proactive monitoring of adverse events is being revolutionized by systems like AiCure, which employ computer vision, voice analysis, and behavioral modeling to detect clinical deviations in patient behavior (Brennan et al., 2019). By identifying early signals of non-adherence or deterioration, these agents trigger automated alerts, preventive interventions, or schedule adjustments. Unlike passive models that wait for patients to self-report, these agents anticipate, contextualize, and act.
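A toy version of this anticipate-and-act pattern: a stream of daily adherence observations triggers an alert before the patient reports anything. Real systems such as AiCure infer these signals from video, voice, and behavior; the binary dose log and three-day window below are assumptions of the sketch:

```python
# Toy sketch of proactive monitoring: a daily adherence log (1 = dose
# confirmed, 0 = missed) fires an alert once a streak of misses appears,
# without waiting for the patient to self-report.

def check_adherence(daily_doses, window=3):
    """Raise an alert once `window` consecutive doses are missed."""
    streak = 0
    for day, taken in enumerate(daily_doses, start=1):
        streak = 0 if taken else streak + 1
        if streak >= window:
            return {"alert": True, "day": day, "action": "notify_site_staff"}
    return {"alert": False}

print(check_adherence([1, 1, 0, 0, 0, 1]))  # alert fires on day 5
print(check_adherence([1, 0, 1, 0, 1]))     # isolated misses: no alert
```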
Together, these applications form a new ecosystem in which AI no longer merely supports clinical decision-making but actively participates in it. This shift brings governance and responsibility challenges but marks a turning point toward operational precision medicine and distributed ethics.
3. Strategic Advantages and Emerging Risks: Progress or a Minefield?
Implementing agentic AI in clinical trials promises to revolutionize operational efficiency, therapeutic personalization, and the quality of generated evidence. One of the most visible benefits is the acceleration of patient recruitment. By analyzing millions of clinical data points in seconds, agents can identify eligible patients and establish priority lists without manual intervention by research teams. This represents a radical shift in one of the most costly and time-consuming stages of the clinical process (Deep 6 AI, 2023).
Another strategic advantage lies in reducing human error. Fatigue, cognitive biases, and variability among investigators often affect the consistency of clinical studies. Agentic systems operate with consistent rules and learn from each new interaction, ensuring homogeneous operational quality and reinforcing reproducibility (Topol, 2019).
These systems’ adaptive nature also enables real-time modifications to the experimental design, continuously optimizing the balance between efficacy and safety. Traceability improves significantly, as each algorithmic decision is recorded with detailed metadata, facilitating regulatory audits, causal analysis, and ethical reviews (Gerke et al., 2020).
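The traceability property described here can be sketched as an append-only decision log in which each entry carries metadata and is hash-chained to its predecessor, so tampering is detectable during an audit. The agent IDs, actions, and rationale strings are hypothetical:

```python
# Minimal sketch of decision traceability: every agent action is appended to
# a log with metadata, and each entry's hash covers the previous entry's
# hash, so retroactive edits break the chain and become detectable.
import hashlib
import json
from datetime import datetime, timezone

audit_log = []

def record_decision(agent_id, action, inputs, rationale):
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "inputs": inputs,
        "rationale": rationale,
    }
    prev = audit_log[-1]["hash"] if audit_log else ""
    payload = prev + json.dumps(entry, sort_keys=True)
    entry["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    audit_log.append(entry)
    return entry

e = record_decision("recruiter-01", "shortlist_patient",
                    {"patient_id": "P1"}, "met all inclusion criteria")
print(e["hash"][:12])  # chain head; auditors can verify the full log later
```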
However, these benefits are not without profound challenges. One of the main concerns is the dilution of clinical and legal responsibility. If a digital agent recruits an ineligible patient or modifies a protocol incorrectly, who is accountable? The developer? The ethics committee? The principal investigator? This ambiguity is particularly troubling when AI models function as black boxes, incapable of explaining their decisions (London, 2019).
The risk of algorithmic bias is also real. If training data reflects historical inequalities—based on gender, ethnicity, or socioeconomic status—agents may unintentionally reinforce these disparities, excluding vulnerable populations or producing inequitable outcomes. A well-known study by Obermeyer et al. (2019) showed how a health management algorithm prioritized white patients over Black patients, not through explicit racism, but due to biased proxy variables.
The informed consent process also raises concerns. If an agent adjusts dosages, modifies follow-up plans, or initiates referrals in real time, should this behavior be disclosed to the patient? Can individuals refuse automated interventions? These questions challenge traditional principles of autonomy and beneficence, demanding updated communication strategies and possibly new regulatory frameworks.
Lastly, there is a risk of excessive dependence. If the clinical investigator stops critically validating the system’s decisions and becomes a mere observer, the study’s scientific integrity is compromised. Human oversight must remain an active exercise of interpretation, correction, and shared responsibility.
In summary, agentic AI opens extraordinary possibilities, but also requires a redesign of clinical governance structures, bioethical principles, and our very conception of medical judgment. Technology is not neutral—and the greater its decisional power, the more it must be supervised through ethics, science, and humanity.
4. The New Clinical Investigator: From Executor to System Architect
The emergence of agentic AI in clinical environments redefines the competencies needed to lead a trial. Biomedical or statistical knowledge is no longer sufficient. Investigators must be able to interact with, supervise, and co-design with autonomous algorithmic systems. The clinical investigator of the future will act more like a systems architect—someone capable of understanding an agent’s decision logic, validating its actions, and ensuring its autonomy aligns with scientific and ethical standards.
One fundamental skill will be algorithmic literacy: understanding how models are trained, what data feeds them, what variables they prioritize, how they interpret results, and what their limitations are. Without this foundation, professionals risk validating decisions they do not understand, delegating inappropriate functions, or ignoring critical system signals (Chen & Decary, 2020).
They will also need to develop skills in designing supervised decision flows. This involves clearly defining which decisions an agent can execute independently, which require human review, and what safeguards must be in place to interrupt harmful actions. Algorithmic governance will no longer be a technical matter alone; it will be integral to clinical leadership.
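A minimal sketch of such a supervised decision flow, assuming three invented action types: each action is classified in advance as autonomous, human-review, or forbidden, and a kill switch can halt the agent entirely. Policy labels, actions, and defaults are all assumptions of the sketch:

```python
# Sketch of a supervised decision flow. The policy table is defined by the
# investigator up front; unknown actions default to the safe path.

POLICY = {
    "send_visit_reminder": "autonomous",    # low risk: agent may act alone
    "adjust_dose": "human_review",          # requires investigator approval
    "unblind_treatment": "forbidden",       # never delegated to the agent
}

kill_switch_engaged = False  # safeguard to interrupt all agent activity

def route_action(action):
    if kill_switch_engaged:
        return "halted"
    level = POLICY.get(action, "human_review")  # safe default for unknowns
    if level == "forbidden":
        return "rejected"
    if level == "human_review":
        return "queued_for_investigator"
    return "executed"

print(route_action("send_visit_reminder"))  # executed
print(route_action("adjust_dose"))          # queued_for_investigator
print(route_action("unblind_treatment"))    # rejected
```

The essential design choice is that the safe path, human review, is the default: the agent must be explicitly granted autonomy per action type, never the reverse.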
Rethinking informed consent will also be essential. It will no longer suffice to explain the clinical protocol; patients must also be informed that certain decisions might be made or suggested by intelligent systems. Transparency must be accompanied by new ways of confirming comprehension, based on adaptive models, explanatory interfaces, and traceable interactions.
Within this new ecosystem, a key professional figure is emerging: the Clinical Algorithm Curator. Still evolving, this role blends medical knowledge, data analysis skills, ethical reasoning, and technological validation. Their job is not to build the model, but to adapt its logic to the clinical context, evaluate its impact on practice, and ensure every automated decision has a solid ethical and scientific foundation. Companies like Unlearn.AI and PathAI are already hiring hybrid professionals for this role.
Ultimately, clinical leadership in the 21st century will depend less on how many individual decisions a professional makes, and more on their ability to create environments where the best decisions—human or algorithmic—can emerge, be audited, and endure.
5. Conclusion | Clinical Trials with Agentic AI: Are We Ready to Share Agency?
The advent of agentic AI in clinical research is not merely a technical innovation. It is a paradigm shift that forces us to redefine how we approach scientific practice, ethical responsibility, and clinical judgment. For the first time, non-human systems can perceive, decide, and act within a regulated medical setting. And while they lack consciousness, their functional agency makes them real actors in evidence generation.
This opens fascinating possibilities: greater efficiency, continuous adaptability, therapeutic personalization at scale, proactive monitoring, and optimized resource allocation. But it also presents unavoidable challenges: traceability, bias, governance, patient autonomy, and shared legal responsibility.
The clinical investigator of the future will not be replaced—but they will be deeply transformed. From executor to supervisor; from operator to strategist; from observer to designer of ethically viable algorithmic ecosystems. Agentic AI does not replace human intelligence—it compels it to rise.
In this context, the most urgent challenge is not technical, but cultural and educational. We need to train professionals who can think both with and against the machine, who can collaborate without relinquishing judgment, and who can lead without clinging to total control. The future of medicine will not be defined solely in laboratories or ethics committees, but in our ability to integrate the best of computational science with human sensitivity.
Upcoming clinical trials will not just be smarter. They will be more autonomous, more adaptive, and more ethically demanding. If we want technology to serve health—and not the other way around—we must prepare now, with strategic vision, critical thinking, and clinical-technological leadership.
We still have time. But not much.
References (APA)
- Brennan, P. F., McGraw, D., Mandl, K. D., & Choudhry, N. K. (2019). Using digital technology to improve adherence and outcomes in patients with schizophrenia. Journal of Medical Internet Research, 21(3), e12294. https://doi.org/10.2196/12294
- Chen, M., & Decary, M. (2020). Artificial intelligence in healthcare: An essential guide for clinical leaders. BMJ Leader, 4(2), 120–124. https://doi.org/10.1136/leader-2019-000164
- Collins, F. S., & Stoffels, P. (2020). Accelerating COVID-19 therapeutic interventions and vaccines (ACTIV): A public-private partnership for a coordinated research response. JAMA, 323(24), 2455–2456. https://doi.org/10.1001/jama.2020.8920
- Deep 6 AI. (2023). Accelerating clinical trial recruitment through agent-based patient matching. https://www.deep6.ai
- Gerke, S., Minssen, T., & Cohen, G. (2020). Ethical and legal challenges of artificial intelligence–driven healthcare. In I. Glenn Cohen et al. (Eds.), Big Data, Health Law, and Bioethics (pp. 295–317). Cambridge University Press. https://doi.org/10.1017/9781108778683.019
- Heng, H., Zhang, Y., Tan, J. Y., & Holroyd, E. (2020). Chatbots to support informed consent: A scoping review. JMIR Medical Informatics, 8(8), e17995. https://doi.org/10.2196/17995
- London, A. J. (2019). Artificial intelligence and black-box medical decisions: Accuracy versus explainability. Hastings Center Report, 49(1), 15–21. https://doi.org/10.1002/hast.973
- Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
- Topol, E. (2019). Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books.
- Unlearn.AI. (2023). Digital Twins for Clinical Trials. https://www.unlearn.ai