Real-world data still has a long way to go before it can replicate randomized clinical trials, and a recent study reveals the considerable limitations that remain. Researchers from the Mayo Clinic, Yale University, UCSF, and Columbia University scoured PubMed to compile a sample of 220 US-based trials (both randomized and not) published across several top medical journals in 2017. Only 15% of the studies could be replicated using observational data. Why? For the most part, fundamental elements of randomized clinical trials, from inclusion and exclusion criteria to indication and intervention to primary endpoints, could not be routinely ascertained from real-world data sources.
Of the 220 US-based trials, 33 (15%) could be replicated using observational data because their intervention, indication, inclusion and exclusion criteria, and primary end points could all be routinely ascertained from insurance claims and/or EHR data. The criteria were applied sequentially: 39.1% of the 220 trials (86 studies) had an intervention that could be ascertained from insurance claims and/or EHR data. Of those 86, 72.1% (62 studies) had an indication that could be ascertained. Of the 62 remaining trials, 72.6% (45 studies) had at least 80% of inclusion and exclusion criteria that could be ascertained. Of these 45 studies, 73.3% (33 trials) had at least one primary end point that could be ascertained.
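The sequential filter described above can be sketched as a quick calculation. The percentages come from the study; the intermediate trial counts are rounded at each step, which reproduces the figures cited in the article:

```python
# Replication-feasibility funnel: each criterion must be ascertainable
# from claims/EHR data, applied sequentially to the 220 US-based trials.
total = 220
steps = [
    ("intervention ascertainable", 0.391),
    ("indication ascertainable", 0.721),
    (">=80% of inclusion/exclusion criteria ascertainable", 0.726),
    (">=1 primary end point ascertainable", 0.733),
]

remaining = total
for label, rate in steps:
    remaining = round(remaining * rate)  # 86 -> 62 -> 45 -> 33
    print(f"{label}: {remaining} trials")

print(f"feasibly replicable: {remaining}/{total} = {remaining / total:.0%}")
# prints "feasibly replicable: 33/220 = 15%"
```

Applying each rate to the surviving trials (rather than to the full 220) is what drives the final figure down to 15%, even though each individual rate is 39% or higher.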
The final takeaway was that only 15% of the US-based clinical trials published in high-impact journals in 2017 could be feasibly replicated through analysis of administrative claims or EHR data, as reported in JAMA Network. The authors conclude that real-world evidence nonetheless has the potential to complement clinical trials, both by examining the concordance between randomized experiments and observational studies and by comparing the generalizability of the trial population with the real-world population of interest.
Call to Action: Interested in more Real-World Evidence study research to compare against randomized clinical trials? TrialSite News can help.

Source: JAMA Network