All eyes can be on study start-up.  This author was involved in major patient recruitment activities nearly two decades ago and has observed study start-up for over two decades, primarily from the vantage point of either a technology vendor or a specialized services vendor. What follows is a tour of study start-up that will hopefully offer a fresh and somewhat novel approach to the discussion.  It matters: new medicines count on it.  Life is precious, and the clinical trials imperative is incredibly important, first and foremost for health, then for the advancement of knowledge and the contribution to the economy and GDP.

Back to the Wall: The Troubled Trial

In one memorable situation, a very large industry sponsor was conducting a gargantuan Phase III type 2 diabetes study.  Their backs were to the wall.  Many of the sites were not producing the patients they had signed up to yield. We were called in as specialists to find patients rapidly; think of us as the special forces of patient recruitment at the time.

This was during the blockbuster era, when patient recruitment services were still nascent. Big sponsors were just starting to figure out some of the tricks the vendors had up their sleeves, and to learn from them, incorporate them, and bring those capabilities in-house.

This was a real, high-pressure crisis, with many millions being burned by the hour. An unbelievably high bounty was therefore placed on each subject enrolled, and the engagement eventually morphed into an eight-figure deal.  Meeting with the client, the intensity of the place was apparent; nearly everyone with any responsibility for, or awareness of, the situation looked anxious.  When we explained that there would be some ramp-up time (this was before social networks and smartphones took off), they grew even more uneasy. The results couldn't wait; they needed action. We got to work.

The tension was palpable and thick; one could almost cut the air with a knife. The sponsor, a well-known brand and a fine company, is hardly unique; nearly every sponsor has gone through something similar. But why? With plenty of capital, smart, ambitious, hard-working talent, and what could be considered tremendous resources at their disposal, including clinical research associates, how could something like this occur?

What’s the Problem?

There are many points of view. Clinical trial industry professionals understand that time is money. Intellectual property is patented, and generally speaking a patent lasts only 20 years, while it may take 10 or more of those years to move a drug through the clinical trials process. If the drug must be studied and evaluated, per the protocol, in large numbers of patients around the world, the task becomes harder still. A key dependency in study start-up at this point is the investigator site.

In some cases, investigators (or their institutions) bring tremendous enthusiasm and commitment to the site qualification, budgeting, and contracting stages; but are these sites really qualified? Returning to the real-world example, the fascinating observation was the bias in the sponsor's clinical operations planners.  In the excitement and frenzy (and under C-suite expectations), they fed off the positive signals of engagement and readiness to commence coming from many investigational sites.

It became apparent that, at least in some cases, the sites promising a steady stream of patients really didn't have access to them. Any number of hurdles existed, from the way their practice management software worked to the lack of appropriate staff to execute the patient recruitment program. In one case, a site simply assumed an office manager could double as patient recruiter. Multiply that many times around the world, and the angst on the sponsor side is magnified from the top down.

As noted in academic papers on this topic, this investigator misjudgment is referred to as "Lasagna's Law" or Muench's Third Law.  The net: investigators, or at least some of them, overestimate the pool of available patients who meet the inclusion criteria.

Society (and Business) Continues to Advance

Back then, unless the clinical investigational sites were seasoned professionals, it became apparent that they would never perform to expectations.  Sponsors had to be very precise with this social and business endeavor. Many of these sites, as enthusiastic as they were going into the contract, just as quickly lost interest because there was no way for them to be successful. The "one and done" clinical investigator phenomenon is well known in industry circles, but things have changed a lot in nearly two decades. The study start-up problem should be solved by now, right?  We are in a new age, and study start-up should be better.  In the past 20 years, the following dynamics (and more) have unleashed tremendous opportunity:

  • Leadership's lessons learned get documented and infused back into the company culture to drive improvement (plan, do, check, act)
  • Intense infusion of process improvement methods such as Six Sigma into clinical trials processes
  • Continuously improved and refined division of labor (sponsors now have patient recruitment specialists that they were only starting to think about in the era mentioned above)
  • Massive outsourcing to CROs intended to offer the industry better study start-up (SSU) KPIs, among other things
  • Clinical trial software applications (of all types) and integrated systems for a whole-picture view
  • Advent of the cloud, Big Data, and access to powerful computing capability at a fraction of the price of a decade earlier
  • Advanced analytics of all sorts

Given the confluence of these positive, pivotal streams, one would think that study start-up has become a whole lot better than it was 20 years ago. Has study start-up performance actually improved?  Ostensibly, the inclination would be to say yes.  The industry's understanding, awareness, and experience are far deeper and richer, and grow more so, in many ways, every year.

So Many Options (Outsource, Business Process, Technology)

Outsourcers have taken, according to some estimates, 40% to 50% of clinical trials work, and they are, after all, designed and optimized to do just this kind of work efficiently and effectively. It is true that CROs struggle with talent identification, acquisition, and retention, but they are paid to manage that problem. One study revealed that 83% of top pharma data management is outsourced.

The options for technology are frankly amazing.  This author remembers auditors declaring a decade or so ago that "nothing under my watch will ever go into the cloud."  Now everything appears to be headed there (although there are probably a few CQA hold-outs who will still demand not only to look at an actual server but to touch it and stare at its serial number, to ensure the numbers are recorded, because that is part of the boilerplate checklist).

People get smarter by the decade. Legions of knowledge workers (human capital) have added new and complementary skills and capabilities. For example, Six Sigma masters (and practitioners of similar disciplines) establish beachhead after beachhead in clinical operations. Trained to rapidly study and identify inefficient processes (and people), they quickly map the targeted "as is" study start-up processes and plan to move them, ultimately, to the "to be" state.

Getting there is the hard part, but they have been hired, paid, and purportedly "empowered" to change processes so that study start-up metrics and KPIs improve toward established targets.

With this apparent mandate, they go about carefully mapping existing study start-up processes (and the people behind them) with a fervent, energetic, almost cult-like devotion to efficiency, speed, and continuous improvement; they can see the "to be" state within reach.  It turns out, however, that process embodies power relations, dynamics, and hierarchy. Many a black belt has been chopped down by a scientist.

Finally, there are the data-driven realities that are so feasible today. With the advent of the cloud and the wave of advancement over the past decade, incredible computing capability is available to even the smallest biotech or research site, and there are many fine applications to choose from.  The cloud makes Information Technology/Information Systems employees less relevant in their traditional roles: rather than controlling clinical operations on every server, they become cloud application portfolio managers.  They, too, can still add value by supporting the business in its mission to cut down study start-up time.

Are Study Start-Up Metrics Improving?

So TrialSite News asks readers for honest, candid, and straightforward feedback on this article (there is a comment feature). Have study start-up metrics significantly improved?  Despite all of the progress, it is suspected that study start-up is still a labor- and process-intensive affair, despite outsourcing to CROs and despite a plethora of digital tracking tools, not to mention data capture, data management, and other apps. Study start-up combines cross-functional, often inter-company activities including:

  • Site identification and feasibility
  • Site CDA execution and qualification
  • Contract negotiations
  • Patient recruitment planning
  • Managing and tracking the flow of essential documents
  • Drug accountability

The biopharma industry has spent over a decade seeking to improve the process.  Much of the clinical trial business has been outsourced to CROs, which complicates matters further: because the sponsor remains accountable, another layer of oversight is required.  The sponsor oversees the CRO, and ultimately the sponsor must ensure sites perform with quality, safety, and productivity in mind.

Now for some study start-up metrics. There are four primary categories to track:

  • Timeliness
  • Utilization
  • Quality
  • Cost/Performance

Metrics are not static, nor are they any good if they do not reveal what is actually going on.  The old adage "garbage in, garbage out" is as true now as ever.  Large industry sponsors have invested many millions of dollars in sophisticated software systems; major academic and research alliances, too, have invested in many systems and processes over the past decade.  Is the technology working? Is our study start-up benchmarking, monitoring, and management getting better?  Have we taken a hard look at our processes to determine how we can reshape and reorder them to capitalize on dynamic, unfolding changes in the marketplace? Or is there too much bureaucracy and political red tape?

The Apple Interlude

Before we go on to consider metrics, a brief interlude to dwell on an important observation. This author has a friend and colleague who is high up at Apple. As a program and portfolio manager, he has engaged with many different initiatives, in some cases worth billions of dollars. This person also happens to have worked in the pharmaceutical industry, supporting clinical and regulatory functions.

On a tour of Apple's headquarters with him some years ago, he said something very interesting when asked a simple question: what is the difference between Apple and big pharma? He noted that, as a program manager, he works on very large projects.  In pharma, there is a certain scientific and intellectual ego present in management. It was conservative; it was waterfall; it was collegial in a tense kind of way.  Pharma will often have an idea (perhaps coming from the top, or somewhere high up) that has a profound impact on those below, and that idea can turn into a major initiative.

A program is set up; resources and funding are allocated and workstreams launched; tools are instituted; and there are lots of meetings and reports.  Sometimes these initiatives don't really go anywhere. They don't produce results that would justify the continuous outlay of money, human capital, and other resources, yet they keep going year after year.  An initiative will finally die or morph, but not until all the bases are covered. According to this insider, a very different dynamic unfolds at Apple.

Apple, too, has lots of money (actually more than any other company), and it has its egos as well. When an initiative starts, they do all the same things as above: the money, the talent, the tools for the program, and so on.  However, the minute individuals start to sense even a scent of failure, they begin talking about pulling the plug on the program, and they have been known to pull it rapidly. That is the fundamental difference this insider, with experience in both sectors, swore by: a company such as Apple has it in its DNA to fail faster, and when failure does happen it is not looked upon badly; it is treated as a learning experience. Perhaps failure is more acceptable in Silicon Valley than in some of the life science clusters.

What follows is a list of some of the key study start-up metrics by study stage. Hopefully the prose above provides some perspective to help the reader look at study start-up through a different and more holistic lens.  The operating premise here is that by 2019, study start-up metrics (and real results) should be far improved across the clinical trials industry, given the dynamics mentioned above; but are they really better?

Some key SSU metrics include (a minimal computation sketch, using hypothetical data, follows the list):

  • Investigator recruitment and progress
  • Protocol ready on time
  • Vendor selection (labs, etc.)
  • Cycle time to site qualification
  • Cycle time to first site activated
  • Cycle time to site activated
  • Regulatory pack approval
  • Percent planned sites activated
  • Clinical supply readiness
  • First patient first visit (FPFV) on time
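
To make a few of the timeliness items above concrete, here is a minimal, hypothetical sketch in Python of how cycle time to site activation, cycle time to first site activated, and percent of planned sites activated might be computed from site milestone dates. The field names, dates, and the use of a final-protocol date as the start-up anchor are illustrative assumptions, not a standard industry implementation.

```python
# Hypothetical sketch: a few timeliness-style study start-up metrics computed
# from site milestone dates. All names and dates are illustrative assumptions.
from datetime import date
from statistics import median

protocol_final = date(2019, 1, 2)   # assumed study-level anchor milestone
planned_sites = 5                   # total sites planned for the study

sites = [
    {"site": "US-001", "qualified": date(2019, 1, 10), "activated": date(2019, 3, 1)},
    {"site": "DE-004", "qualified": date(2019, 1, 22), "activated": date(2019, 4, 15)},
    {"site": "JP-002", "qualified": date(2019, 2, 5),  "activated": None},  # not yet active
]

activated = [s for s in sites if s["activated"] is not None]

# Cycle time (days) from site qualification to site activation, per activated site
qual_to_active = [(s["activated"] - s["qualified"]).days for s in activated]

# Cycle time from the final protocol to the first activated site
first_activation = min(s["activated"] for s in activated)

print("Median qualification-to-activation cycle time (days):", median(qual_to_active))
print("Cycle time to first site activated (days):", (first_activation - protocol_final).days)
print(f"Percent of planned sites activated: {100 * len(activated) / planned_sites:.0f}%")
```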

As an extension of start-up, study maintenance often begins while sites are still coming online to the study.

Study Maintenance Metrics

  • Execution per protocol
    • Meet eligibility criteria
    • Protocol procedures/evaluations
    • Compliance with medication restrictions
    • Integrity of randomization and of blind
    • Integrity of investigational product and administration
  • Timelines for report delivery
  • Site quality
  • % planned sites activated
  • % open sites not recruiting/closed
  • Integrity of data (a small computation sketch follows this list)
    • Data completion
    • Missing data
    • Open queries
    • Trial data errors
      • Discrepancy rates
      • Discrepancy resolution rates
    • AE underreporting
  • Protection of Subject Rights/Welfare
    • All subjects consent appropriately
    • Notification of new safety information to the subject, investigator, and IRB/EC
    • Privacy
    • Subjects removed from the study when appropriate
    • Necessary emergency un-blinding
  • Compliance of Documents
    • TMF/eTMF accurate and complete
      • Remember that all sites must have their own investigator site file (ISF), part of their regulatory binder; these are increasingly digital
      • Authorities in Europe, for example, will treat the TMF as a single system: there may be multiple systems that operate independently, but they will be inspected as a whole
    • Regulatory submission on time
    • Safety reporting completed
    • Regulatory response
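
As a minimal, hypothetical illustration of the data-integrity items in the list above (open queries, discrepancy resolution rates, missing data), the Python sketch below derives such rates from simple counts. The function names and the counts themselves are invented for illustration only.

```python
# Hypothetical sketch: simple data-integrity rates for study maintenance tracking.
# The counts and field names below are illustrative assumptions, not real data.

def discrepancy_resolution_rate(opened: int, resolved: int) -> float:
    """Share of raised data discrepancies (queries) that have been resolved."""
    return resolved / opened if opened else 1.0

def missing_data_rate(expected_fields: int, completed_fields: int) -> float:
    """Share of expected data points that have not yet been entered."""
    return (expected_fields - completed_fields) / expected_fields if expected_fields else 0.0

site_counts = {
    "opened_queries": 120,
    "resolved_queries": 96,
    "expected_fields": 10_000,
    "completed_fields": 9_640,
}

resolution = discrepancy_resolution_rate(site_counts["opened_queries"], site_counts["resolved_queries"])
missing = missing_data_rate(site_counts["expected_fields"], site_counts["completed_fields"])

print(f"Discrepancy resolution rate: {resolution:.0%}")
print(f"Missing data rate: {missing:.1%}")
```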

Close-Out Metrics

  • Last patient last visit (LPLV) on time
  • Time from LPLV to database lock
  • Database lock to final analysis
  • Analysis plan answered the protocol questions
  • Analysis free of errors that matter
  • Time to last site closure

Financial Metrics

  • Recruitment duration projected vs. actual
  • Study and/or project budget projected vs. actual
  • Number of change control transactions
  • Variance to budget (a small worked sketch follows this list)
    • Per site
    • Per country
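
As a small, hypothetical illustration of variance to budget rolled up per site and per country, the Python sketch below uses invented budget lines; the site names, amounts, and sign convention (positive means over budget) are assumptions for illustration only.

```python
# Hypothetical sketch: variance to budget per site and per country.
# Budget lines, site IDs, and amounts are invented for illustration.
from collections import defaultdict

budget_lines = [
    {"country": "US", "site": "US-001", "budgeted": 250_000, "actual": 310_000},
    {"country": "US", "site": "US-002", "budgeted": 180_000, "actual": 165_000},
    {"country": "DE", "site": "DE-004", "budgeted": 200_000, "actual": 240_000},
]

# Positive variance = over budget, negative = under budget
per_site = {line["site"]: line["actual"] - line["budgeted"] for line in budget_lines}

per_country = defaultdict(int)
for line in budget_lines:
    per_country[line["country"]] += line["actual"] - line["budgeted"]

print("Variance per site:", per_site)
print("Variance per country:", dict(per_country))
```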

CRO Performance

  • Monitor the conduct of a study
    • As a surrogate for vendor performance
    • Manage the relationship
  • Resource allocation
  • Patients per monitor
  • Staff/contractor retention
  • Quality/Risk-based monitoring

Conclusion

Clinical trial sponsors operate in an incredibly important, risky business. They must become more adept at looking in the mirror in fundamental ways.  Of course, the targets include being more efficient, effective, and productive, but this doesn't necessarily mean going faster. At times, an organization may need to slow down to truly understand itself, for example through self-reflection and an honest assessment of what its core strengths and weaknesses are. Put another way, by speeding up study start-up without that understanding, an organization can actually end up lengthening the overall trial.  Not a desired result.

Just outsourcing the problem away doesn't necessarily work either.  Anecdotally and via industry chatter, there are talent shortages in key resource areas such as CRAs, and that is a problem. They are incredibly important, surely as much as any risk algorithm.  A good CRA almost possesses a sixth "site" sense, and its value cannot be overstated. But there is an 80/20 rule: 20% of the CRAs produce 80% of the value. How do we produce more of those 20 percenters?

Process change doesn't make much difference if it is done in a vacuum or in a specially designated Six Sigma lane.  To redesign study start-up processes is, in many instances, to speak truth to corporate power. Are companies really ready for that culturally?

Finally, technology options, from cloud infrastructure to compelling data warehouses and advanced BI and AI-driven purpose-built apps, are multiplying fast in the market. Just in the past few years, a plethora of new apps supporting and augmenting clinical operations, from sponsor to CRO to site, have arrived on the market. There are truly exceptional choices, but unless the other factors and forces come together for that critical self-reflection, it isn't clear how much more effective a study start-up item tracker will be than a Microsoft Excel spreadsheet, or, for that matter, a patient/trial matching tool that doesn't fit into an organization's workflow. TrialSite News would like to hear your perspective on study start-up. Feel free to send us an email.

 

Originally published March 28, 2019. 

