A recent perspective, published in npj Digital Medicine, explores digital health solutions and the current challenges in undertaking digital health evaluations.
Over the past two decades, since Seth Frank first introduced the concept, digital health has evolved rapidly. The FDA considers digital health to span a broad spectrum of categories, including mobile health, health information technology, wearable devices, telehealth and telemedicine, and personalised medicine. The digital health market is thriving: figures show that more than 300,000 health applications exist, with innovators adding more than 200 each day.
Digital solutions can be grouped as follows (based on potential risk to patients):
- Solutions that improve system efficiency but with no measurable patient outcome benefit
- Mobile digital health that informs or delivers basic monitoring and encourages behaviour change and self-management
- Clinical decision support (CDS) and prediction models, which guide treatment, deliver active monitoring, calculate and/or diagnose
A huge challenge for end users is determining a new solution’s credibility. Ageing adults (whom society considers the most digitally divided demographic group) present unique challenges, and researchers are endeavouring to develop strategies for implementation.
The evolution of the guidance and regulatory landscape
Over the past ten years, experts have developed a plethora of guidance for digital health innovators. In this article, the team observe a pattern in the development of such documents: initial development by industry, optimisation by non-governmental organisations and, finally, refinement by government agencies.
The speed of development, diversity of interventions and potential risks have prompted policymakers to produce more targeted guidance on solution classification and evidence requirements. However, the authors argue that current guidance does not go far enough: it does not tell innovators and end users which evidence generation approaches are appropriate and practical for each class of digital health solution.
Randomised controlled trials (RCTs) are the most widely recognised form of evidence for healthcare interventions. However, the team identify only a handful of products that have been tested in this way. They believe this indicates that such methods are no longer practicable, most likely because of the speed of digital product development and iterative upgrading.
Surveys and interviews
In the early stages of development, innovators establish product usability, feasibility and efficacy, typically through surveys and interviews, which are cost-effective, efficient and scalable. Although these methods are common, researchers rarely turn the results into peer-reviewed publications. A key approach in digital solution development is usability testing, which examines whether users achieve the intended use and identifies any problems they encounter. Controversy exists around the appropriate number of participants; commonly, formative testing uses around five participants and summative testing around 20.
Prospective RCTs are the most accepted method for evaluating healthcare interventions. The unit of randomisation can be individuals, groups or even specific solution components. For digital solutions targeting an individual user, individual-randomisation trials (IRTs) are well suited, and they remain the typical experimental design in healthcare research. Cluster-randomisation trials (CRTs), in contrast, are better for digital solutions supporting group efforts; public health researchers are increasingly adopting this approach, often in circumstances where contamination between participants may occur. CRTs allow evaluation of both the direct and indirect effects of an intervention. When researchers want to determine empirically the efficacy of a specific component, micro-randomisation trials are helpful. These involve randomly assigning an intervention option at specific time points, generating longitudinal data with repeated measures, and they can be very powerful during the early stages of product development.
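As an illustration, the per-time-point assignment logic behind a micro-randomisation trial can be sketched in a few lines of Python. The participant IDs, option names and parameters below are purely hypothetical, not taken from the article.

```python
import random

def micro_randomise(participants, decision_points, options, seed=0):
    """Assign one intervention option per participant per decision point.

    Each assignment is an independent randomisation, so the resulting
    dataset is longitudinal, with repeated measures per participant.
    """
    rng = random.Random(seed)
    records = []
    for pid in participants:
        for t in range(decision_points):
            records.append({
                "participant": pid,
                "time": t,
                "option": rng.choice(options),
            })
    return records

# Hypothetical example: two users, three decision points, two options
data = micro_randomise(["p1", "p2"], decision_points=3,
                       options=["prompt", "no_prompt"])
```

Because every participant is re-randomised at every decision point, each person contributes observations under several intervention options, which is what makes the design informative about individual components.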
The most commonly used method for evaluating digital health solutions is the pre-post design. This involves a pre-phase, which provides control data, allows familiarisation and limits bias related to implementation, and a post-phase, which collects data on solution effectiveness. However, this design requires a longer study duration, making it difficult to evaluate the iterative upgrades often seen with digital health products.
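The core analysis of a pre-post design, comparing paired measurements from the same subjects before and after deployment, might look like the following minimal sketch; the scores are toy data, not results from the article.

```python
from statistics import mean, stdev

def pre_post_effect(pre, post):
    """Mean within-subject change and its standard deviation for a
    pre-post design (the same subjects measured in both phases)."""
    if len(pre) != len(post):
        raise ValueError("pre and post must be paired per subject")
    diffs = [b - a for a, b in zip(pre, post)]
    return mean(diffs), stdev(diffs)

# Toy symptom scores before and after roll-out (lower is better)
change, spread = pre_post_effect([7, 6, 8, 5], [5, 4, 6, 5])
```

Because each subject acts as their own control, the pre-phase data absorb some between-subject variability, but the design still cannot separate the solution's effect from anything else that changed between the two phases.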
Researchers can employ retrospective studies to evaluate pre-existing data. These include case series and cohort or case-control studies. They are generally quicker, cheaper and easier than prospective studies, but they are subject to biases and confounding factors. To date, relatively few publications have assessed digital solutions with retrospective data, likely because of the limited use of digital solutions in clinical practice and challenges in data access.
Systematic reviews are important in evidence-based medicine and development of clinical guidelines. Reviews on a specific solution can provide stronger evidence for its impact. However, this would require a sufficient number of individual evaluation studies.
For end users to justify adopting a solution, its economic benefits must be demonstrated. It is also important for other key players, such as government agencies, to endorse the need for change. The team argue that this requires tracking the usage and performance data of users compared with non-users.
Generally, the evidence generation approaches used at early stages of product development deliver weaker evidence, whereas later adopters require more robust, traditional evidence-based approaches. The authors believe there is a gap between the quick, lower-cost approaches applied at early stages and the higher-cost approaches needed to convince the majority of stakeholders.
The team believe that traditional approaches present fundamental limitations for generating evidence on digital health solutions. Experts have cited the evaluation of digital health solutions as a major obstacle to wider adoption, and it requires collective effort.
Small and medium-sized enterprises typically allocate their research and development budget to product development and often have limited resources for clinical studies. Multinational corporations, on the other hand, have more resources to develop evidence, but they are equally time-restricted: as a study usually takes two to three years, evidence published today may not reflect a product that has since been updated. For many companies, it is better to invest in sales and manufacturing, which offer a more predictable return on investment than clinical studies. Academic institutions tend to favour traditional methodologies because of the increased likelihood of high-impact publication; in addition, obtaining sufficient research funding can be challenging.
Pragmatic evidence generation
The team believe that large differences exist between the evidence required for initial adopters and that required for the majority. They suggest that researchers could adopt pragmatic approaches to control cost at early stages, then use RCTs for later-stage final assessment.
They highlight that researchers could apply various simulation approaches to evaluate digital solutions, including computational, system and clinical simulations. Computational simulation for software evaluation involves two steps: verification and validation. System simulation models the effect of an intervention on a healthcare system without disrupting the real setting. Clinical simulation, developed as an approach to test systems and digital solutions with representative users doing representative tasks in representative settings, may also help facilitate patient engagement and involvement. Researchers are increasingly using clinical simulation to evaluate digital health solutions, and several academic centres have established clinical simulation test environments. However, the approach has limitations, including the need for high fidelity as a prerequisite for generating valid and effective evidence. The team believe that researchers could employ it in combination with traditional study designs.
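As a rough illustration of the system-simulation idea, the following toy Monte Carlo sketch estimates clinician workload with and without a hypothetical digital triage tool; the workload model and every parameter are invented for illustration, not drawn from the article.

```python
import random

def simulate_clinic(n_patients, service_min, triage_cut, seed=0):
    """Toy system simulation: total clinician minutes for one clinic day.

    A hypothetical digital triage tool diverts the simplest cases
    (complexity below `triage_cut`) to self-management, so only the
    remaining patients reach a clinician.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_patients):
        complexity = rng.random()        # 0 = trivial, 1 = very complex
        if complexity >= triage_cut:     # these patients see a clinician
            total += service_min * (0.5 + complexity)
    return total

# Same simulated patient population, with and without the triage tool
baseline = simulate_clinic(200, service_min=10, triage_cut=0.0)
with_tool = simulate_clinic(200, service_min=10, triage_cut=0.2)
```

The appeal, as the authors note for system simulation generally, is that such what-if comparisons can be run without disrupting a real clinic; the corresponding limitation is that the conclusions are only as good as the fidelity of the model.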
Innovators face significant challenges in overcoming the paradox in digital health – “no evidence, no implementation—no implementation, no evidence”. The team believe that innovative approaches, such as simulation-based research, will help generate higher-quality, lower-cost and more timely evidence.