The Future of HTA Submissions: Automated Evidence Generation

Explore how automated evidence generation is transforming Health Technology Assessment submissions. Learn about AI-powered tools revolutionizing pharmaceutical market access.
February 15, 2025 · 4 min read · By Ghayath Janoudi
Tags: AI, HTA, Market Access, Clinical Evidence, Systematic Reviews, HTA Submissions, Regulatory, Real-World Evidence

HTA Submissions as the Gateway to Patient Access

Health Technology Assessment (HTA) submissions determine whether pharmaceutical innovations reach patients through reimbursement approval. These evaluations, conducted by agencies such as NICE in the UK, Canada's Drug Agency, and IQWiG in Germany, assess clinical effectiveness, cost-effectiveness, and budget impact to inform coverage decisions. The process directly affects patient access to new therapies and pharmaceutical revenue streams.

Current HTA submission processes rely heavily on manual evidence synthesis, with teams of health economists, biostatisticians, and regulatory specialists spending 12-24 months preparing comprehensive dossiers. Recent developments in artificial intelligence and natural language processing are beginning to automate portions of this workflow, with early implementations showing reductions in evidence generation timelines of 60-75%. This shift represents a significant operational change for pharmaceutical market access teams.

The Current State of HTA Submissions

Submission Requirements and Resource Allocation

HTA submissions require comprehensive evidence packages that include systematic literature reviews, network meta-analyses, economic models, and budget impact assessments. A typical submission for a new oncology therapy involves reviewing 3,000-5,000 scientific publications, extracting data from 100-200 relevant studies, and constructing complex Markov models with hundreds of parameters. This process currently requires teams of 10-15 specialists working for 12-24 months per submission.

Evidence Synthesis Timelines and Bottlenecks

Systematic literature reviews form the foundation of HTA evidence packages. Current manual processes require two independent reviewers to screen each abstract and full-text article, with a third reviewer resolving conflicts. For a typical oncology submission, this translates to over 10,000 screening decisions and 3,000 person-hours of work. Data extraction adds another 1,500-2,000 person-hours, while quality assurance processes require an additional 500-1,000 hours. These timelines create significant delays between regulatory approval and patient access.
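The workload figures above can be checked with back-of-envelope arithmetic. The per-record review time below is an illustrative assumption, not a figure from the article:

```python
# Back-of-envelope check of the screening workload quoted above,
# assuming dual independent review of every record.
records = 5_000                # abstracts for a typical oncology submission
decisions = records * 2        # two independent reviewers per record
print(decisions)               # 10000 screening decisions

# Assumed average time per decision (including full-text review
# where needed) -- an illustrative value, not from the article.
minutes_per_decision = 18
person_hours = decisions * minutes_per_decision / 60
print(person_hours)            # 3000.0 person-hours
```

At roughly 18 minutes per decision, 10,000 decisions yield the ~3,000 person-hours of screening effort cited for a typical oncology submission.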

"The evidence requirements for HTA submissions have expanded significantly over the past decade. Agencies now expect comprehensive indirect treatment comparisons, real-world evidence integration, and sophisticated economic modeling. Manual processes have not scaled to meet these demands." - Dr. Ghayath Janoudi, CEO, Loon

Automation Technologies in Evidence Generation

Natural Language Processing for Literature Screening

Machine learning models trained on historical screening decisions can now process literature databases with documented, auditable accuracy. Current implementations use transformer-based architectures to identify relevant studies based on population, intervention, comparator, and outcome (PICO) criteria. Loon's published validation studies report sensitivity rates of 99% for title/abstract and full-text screening and 96% accuracy when compared to dual-screen adjudicated datasets. These systems process thousands of abstracts independently, reliably, and consistently.
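The decision structure of PICO-based screening can be sketched in a few lines. This is a toy keyword heuristic standing in for a trained transformer classifier; the PICO terms and abstract are illustrative, and nothing here represents Loon's actual implementation:

```python
# Illustrative sketch of PICO-based title/abstract screening.
# Production systems use transformer classifiers trained on
# historical screening decisions; this keyword heuristic is a
# hypothetical stand-in showing only the decision structure.

def screen_abstract(abstract: str, pico: dict) -> bool:
    """Include a record only if the abstract mentions at least one
    term for every PICO element (population, intervention,
    comparator, outcome)."""
    text = abstract.lower()
    return all(
        any(term.lower() in text for term in terms)
        for terms in pico.values()
    )

# Hypothetical PICO criteria for an oncology review.
pico = {
    "population":   ["non-small cell lung cancer", "nsclc"],
    "intervention": ["pembrolizumab"],
    "comparator":   ["chemotherapy", "docetaxel"],
    "outcome":      ["overall survival", "os"],
}

abstract = ("Pembrolizumab versus docetaxel for previously treated "
            "NSCLC: overall survival results from a phase 3 trial.")
print(screen_abstract(abstract, pico))  # True: all four PICO elements matched
```

A real classifier replaces the keyword test with a learned relevance score and a calibrated threshold, which is what makes the reported sensitivity figures possible.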

Automated Data Extraction and Synthesis

Advanced natural language processing systems can extract structured data from clinical trial publications, including patient characteristics, efficacy outcomes, and safety profiles. These systems identify tables, figures, and text passages containing relevant information, then map extracted data to standardized schemas. Loon's implementations achieve over 98% accuracy for data extraction when measured against data extracted by Canada's Drug Agency.
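Mapping extracted values onto a standardized schema is the step that makes downstream synthesis possible. The schema and field names below are hypothetical illustrations, not an HTA agency or Loon specification; the example values are for demonstration only:

```python
from dataclasses import dataclass

# Hypothetical standardized schema for one extracted trial outcome;
# field names are illustrative, not an agency-defined format.
@dataclass
class ExtractedOutcome:
    study_id: str
    arm: str
    n_patients: int
    outcome: str
    estimate: float
    ci_lower: float
    ci_upper: float

def map_to_schema(raw: dict) -> ExtractedOutcome:
    """Map loosely structured extraction output (e.g. values parsed
    from a results table) onto the standardized schema, coercing
    string values to their proper types."""
    return ExtractedOutcome(
        study_id=raw["trial"],
        arm=raw["treatment_arm"],
        n_patients=int(raw["N"]),
        outcome=raw["endpoint"],
        estimate=float(raw["HR"]),
        ci_lower=float(raw["95% CI"][0]),
        ci_upper=float(raw["95% CI"][1]),
    )

# Example row as it might be parsed from a publication's results table.
raw_row = {"trial": "KEYNOTE-010", "treatment_arm": "pembrolizumab 2 mg/kg",
           "N": "345", "endpoint": "overall survival",
           "HR": "0.71", "95% CI": ("0.58", "0.88")}
record = map_to_schema(raw_row)
print(record.n_patients, record.estimate)  # 345 0.71
```

Once every study's results sit in one typed schema, quality checks and network meta-analysis inputs can be generated programmatically rather than assembled by hand.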

Regulatory Perspectives and Guidelines

HTA Agency Positions on Automated Evidence

Major HTA agencies have begun issuing guidance on automated evidence generation. NICE published a position statement in 2024 acknowledging the role of machine learning in systematic reviews, provided that methods are transparent and validated. Canada's Drug Agency (CDA-AMC) accepts AI-assisted evidence synthesis when accompanied by detailed methodology reports and validation metrics. IQWiG in Germany requires disclosure of automation tools but does not prohibit their use. These positions reflect growing acceptance of technological assistance in evidence generation.

Validation Standards and Audit Requirements

HTA agencies require comprehensive documentation of automated processes, including performance metrics and transparent reporting of how the automation was conducted. Submissions must include audit trails showing which evidence was processed automatically versus manually reviewed. Validation typically involves comparing automated outputs against human-generated results on benchmark datasets. Agencies expect sensitivity analyses demonstrating that automation does not systematically exclude relevant evidence or introduce bias into economic models.

Operational Impact on Market Access Teams

Timeline Compression and Resource Efficiency

Pharmaceutical companies implementing automated evidence generation report substantial timeline compression, with early implementations cutting evidence generation timelines by 60-75%. Resource allocation shifts from manual screening to strategic analysis, with teams focusing on evidence interpretation and stakeholder engagement rather than data processing.

Evidence Comprehensiveness and Consistency

Automated systems process larger evidence bases than manual approaches, typically screening 10,000-15,000 publications compared to 3,000-5,000 manually. This expanded scope identifies additional comparator studies and real-world evidence sources. Consistency improves through standardized extraction protocols and elimination of inter-reviewer variability.

Implementation Approach: Human-AI Collaboration

Successful implementations maintain human oversight at critical decision points. AI systems handle high-volume screening and data extraction, while human experts validate key information, interpret clinical relevance, and ensure regulatory compliance. This approach balances efficiency gains with quality assurance requirements mandated by HTA agencies.

Implementation Considerations for Market Access Organizations

Technology Selection and Integration

Market access teams evaluating automation platforms should assess validation evidence, regulatory acceptance, and integration capabilities with existing systems. Key selection criteria include: published performance metrics on relevant therapeutic areas, compatibility with standard evidence synthesis software, audit trail functionality meeting HTA requirements, and vendor support for regulatory submissions.

Organizational Readiness and Capability Building

Successful automation adoption requires investment in team capabilities and process redesign. Market access professionals need training on AI validation, output interpretation, and quality assurance protocols. Organizations report 2-4 week transition periods as teams adapt workflows and develop confidence in automated outputs. Clear governance structures defining human review requirements and escalation procedures maintain quality standards throughout the transition.

Emerging Developments and Future Directions

Advanced Analytics and Predictive Modeling

Next-generation platforms incorporate predictive analytics to forecast HTA outcomes based on evidence packages. Loon's agentic AI tools can identify evidence gaps and recommend additional analyses to strengthen value propositions. Real-world evidence integration capabilities enable automated updates to economic models as new data becomes available post-launch. These developments suggest movement toward dynamic, continuously updated evidence packages rather than static submissions.

Cross-Jurisdictional Harmonization Potential

Loon Waters™ reuses evidence data across multiple HTA jurisdictions by mapping shared requirements and adapting outputs to country-specific formats. Standardized data structuring enables rapid customization of economic models for different healthcare systems. These efficiencies particularly benefit smaller markets, where manual adaptation costs often exceed revenue potential.

Critical Success Factors for Implementation

  • Documented validation against HTA agency standards with published performance metrics

  • Hybrid workflows maintaining human expertise for clinical interpretation and strategic decisions

  • Comprehensive audit trails meeting regulatory documentation requirements

  • Structured change management programs addressing team capability development

  • Early engagement with HTA agencies to establish acceptance of automated approaches

  • Transparent methodology reporting enabling reviewer understanding and confidence

Navigate the Complexities of Market Access with Expert Insights

Learn how Loon's evidence-based solutions can help accelerate your HTA submissions and market access strategies.

Schedule a Consultation


Start Transforming Your HTA and Market Access Strategy Today

Join pharmaceutical companies that are accelerating their market access with evidence-based AI solutions.

Schedule Your Consultation