While AI technologies, especially large language models (LLMs), are revolutionizing the drafting of clinical trial documents, experts agree that human medical writers remain essential, at least for now. LLMs like OpenAI's GPT-4 can produce lengthy, complex medical documents far faster than humans, but they still face limitations in clinical reasoning, logical coherence, and data security.
Speed and Efficiency vs. Clinical Expertise
LLMs are increasingly used to generate first drafts of study protocols, consent forms, and clinical reports. Wing Lon Ng, director of AI engineering at IQVIA, says the real strength of LLMs lies in speeding up initial drafting. GPT-4, for example, scored over 99% on terminology accuracy and 82% on content relevance. When tested on clinical logic, however, such as aligning endpoints and eligibility criteria with official guidelines, the model scored only 41.1%.
To improve this, experts have explored retrieval-augmented generation (RAG), which feeds LLMs with updated, domain-specific data from sources like ClinicalTrials.gov. This approach significantly enhanced GPT-4’s logical reasoning, pushing its accuracy up to nearly 80%.
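The RAG approach described above can be sketched in a few lines. The corpus, keyword-overlap scoring, and prompt format below are illustrative assumptions, not the pipeline IQVIA uses; a production system would query live sources such as ClinicalTrials.gov and typically rank with embeddings rather than word overlap.

```python
# Minimal RAG sketch: retrieve domain snippets, then ground the prompt in them.
# Corpus contents and scoring are invented for illustration.

CORPUS = [
    "Primary endpoints must be measurable and pre-specified in the protocol.",
    "Eligibility criteria should align with the indication's approved label.",
    "Informed consent forms must describe foreseeable risks in plain language.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda s: -len(q_words & set(s.lower().split())))
    return scored[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend the retrieved context so the model drafts against current guidance."""
    context = "\n".join(f"- {s}" for s in retrieve(query, corpus))
    return f"Context:\n{context}\n\nTask: {query}"

print(build_prompt("Draft eligibility criteria for the protocol.", CORPUS))
```

The gain reported in the study comes from exactly this grounding step: the model reasons over retrieved, up-to-date text instead of relying solely on what it memorized in training.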
Inherent Limitations and the Human Role
Despite advances, LLMs are still fundamentally limited. “These are language models, not knowledge models,” says David Llorente, CEO of Narrativa. To bridge the gap, Narrativa integrates LLMs with knowledge graphs and statistical models to reduce factual inaccuracies, often referred to as “hallucinations.”
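Narrativa's actual integration is not public, but the idea of checking a draft against a knowledge graph can be illustrated with a toy fact store: any claim the graph cannot confirm is flagged for human review rather than published. The drug name, relations, and values below are hypothetical.

```python
# Hedged illustration of grounding LLM output in a structured fact store.
# Triples are invented; a real knowledge graph would be far larger.

FACTS = {
    ("drugX", "max_daily_dose_mg"): "40",
    ("drugX", "route"): "oral",
}

def check_claim(subject: str, relation: str, value: str) -> bool:
    """Return True only when the claim matches the knowledge graph."""
    return FACTS.get((subject, relation)) == value

# A draft asserting an 80 mg daily dose would fail the check and be
# routed to a human reviewer instead of passing through unverified.
print(check_claim("drugX", "max_daily_dose_mg", "80"))
```

The design point is that the statistical model proposes text while a deterministic layer verifies it, which is one practical way to cut down hallucinations.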
Moreover, AI struggles to keep up with evolving clinical regulations. Regular feedback and quality audits are essential to ensure that generated content remains accurate, unbiased, and compliant with ethical standards.
Security and Compliance Challenges
Another significant hurdle is the secure handling of sensitive patient data. LLMs pose real risks if data governance is not rigorously enforced. Ng stresses the importance of anonymisation, access control, encryption, and ongoing audits. Narrativa mitigates these risks by hosting its tools in private client cloud environments, although vulnerabilities such as the recent DeepSeek data leak show the threat has not gone away.
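One concrete piece of the anonymisation step Ng mentions is pseudonymising direct identifiers before any text reaches an LLM. The sketch below, with invented field names and a placeholder salt, replaces a patient ID with a salted hash; real pipelines follow a documented de-identification standard and keep any re-identification key in a controlled environment.

```python
import hashlib

def pseudonymise(record: dict, salt: str = "site-secret") -> dict:
    """Replace the direct identifier with a salted hash before an LLM call.

    Field names and the salting scheme are illustrative only.
    """
    token = hashlib.sha256((salt + record["patient_id"]).encode()).hexdigest()[:12]
    safe = dict(record)
    safe["patient_id"] = f"PSN-{token}"
    return safe

record = {"patient_id": "NHS-1234567", "age": 54, "event": "Grade 2 nausea"}
print(pseudonymise(record))
```

Because the hash is deterministic for a given salt, the same patient maps to the same pseudonym across documents, which preserves auditability without exposing the identifier.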
A Collaborative Future for AI and Medical Writers
The consensus among industry experts is that AI should enhance — not replace — human medical writers. Ng envisions a hybrid model where LLMs handle routine drafting tasks, while professionals focus on oversight, quality control, and nuanced decision-making. Jennifer Bittinger, president of Narrativa, notes that the industry’s perspective has shifted from resistance to acceptance, viewing AI as a supportive tool.
At the 2024 Jefferies London Healthcare Conference, OpenAI showcased its o1 model, capable of generating consistent trial documentation with minimal human input. However, even with advanced tools, AI adoption depends on collaboration and legislative alignment.
Regulation May Be the Bottleneck
While the pharmaceutical sector is pushing for widespread AI integration, the legislative landscape remains fragmented. Varying regulations across regions could complicate deployment. Bittinger notes that despite support from figures like Elon Musk in the U.S. government, some lawmakers may resist AI’s broader use in healthcare.
Until standardized regulations are in place, the complete replacement of human writers by AI remains unlikely. For now, AI will continue to act as a powerful assistant rather than a full substitute.
Posted April 2025.