CeHRes Roadmap
This roadmap serves as a practical guideline to help plan, coordinate, and execute the participatory development process of eHealth technologies. The framework is intended for developers, researchers, and policy makers, as well as for educational purposes. It also serves as an analytical instrument for decision making about the use of eHealth technologies.
L. Alpay et al., Health Informatics Journal (2018), doi:10.1177/1460458218796642
D.C. Mohr et al., Am. J. Prev. Med. (2013), doi:10.1016/j.amepre.2013.06.006
Continuous evaluation of evolving behavioral intervention technologies (CEEBIT)
A methodologic framework that can support the evaluation of multiple behavioral intervention technologies (BITs), or evolving versions of a BIT, eliminating those that demonstrate poorer outcomes while allowing new BITs to be entered at any time. CEEBIT could be used to ensure the effectiveness of BITs provided through deployment platforms in clinical care organizations.
Mohr, D. C., et al., Am. J. Prev. Med. (2013). doi:10.1016/j.amepre.2013.06.006
Nicholas, J. et al., Evid. Based Ment. Health (2016). doi:10.1136/eb-2015-102278
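The elimination step can be sketched as a confidence-interval screen that drops versions whose interval lies wholly below that of the best performer. The criterion and outcome data below are illustrative assumptions for a minimal sketch, not the published CEEBIT decision rule.

```python
from math import sqrt

def eliminate_inferior(versions, z=1.96):
    """Flag BIT versions for elimination when their 95% CI upper bound
    falls below the lower CI bound of the best-performing version
    (higher outcome = better). Simplified stand-in for CEEBIT's criterion."""
    def ci(samples):
        n = len(samples)
        mean = sum(samples) / n
        sd = sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
        half = z * sd / sqrt(n)
        return mean - half, mean + half

    bounds = {name: ci(s) for name, s in versions.items()}
    best_lower = max(lo for lo, _ in bounds.values())
    return [name for name, (_, hi) in bounds.items() if hi < best_lower]

# Hypothetical weekly outcome scores for two deployed BIT versions
arms = {
    "BIT-A": [5, 6, 5, 6, 5, 6, 5, 6],
    "BIT-B": [2, 3, 2, 3, 2, 3, 2, 3],
}
dropped = eliminate_inferior(arms)
```

New versions can be added to the dictionary at any time and re-screened, mirroring the framework's rolling-entry idea.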
Five-stage model for comprehensive research on telehealth
A five-stage model as a framework for planning a comprehensive telehealth research program for a new intervention or service system. The stages are: (1) Concept development, (2) Service design, (3) Pre-implementation, (4) Implementation, (5) Post-implementation.
Fatehi, F. et al., J Telemed Telecare (2017)
Life-cycle–based approach to evaluation
The overall aim of this model is to maximise the benefits while minimising any risks associated with the eHealth intervention. This balance is achieved by iterative formative evaluations at four key stages of the eHealth intervention’s lifecycle: (I) inception, (II) requirements and analysis, (III) design, develop and test, and (IV) implement and deploy. This model has the additional advantage of providing a means to understand the implementation process.
Catwell, L. et al., PLoS Med. (2009). doi:10.1371/journal.pmed.1000126
mHealth Agile and User-Centered Research and Development
This mHealth research model mirrors traditional clinical research methods in its attention to safety and efficacy, while also accommodating the rapid and iterative development and evaluation required to produce effective, evidence-based, and sustainable digital products. It consists of a project identification stage followed by four phases of clinical evaluation: Phase 1: User Experience Design, Development, & Alpha Testing; Phase 2: Beta testing; Phase 3: Clinical Trial Evaluation; and Phase 4: Post-Market Surveillance. These phases include sample gating questions and are adapted to accommodate the unique nature of digital product development.
Wilson, K. et al., npj Digit. Med. (2018).
mHealth Development and Evaluation Framework
The mHealth Framework includes six stages, some of which may be implemented concurrently: first, conceptualization, to determine the theoretical basis and empirical foundation of a new intervention; second, formative research, to gauge target audience response and refine the concept; third, pre-testing, to determine the intervention’s acceptability, and further refine the intervention; fourth, pilot testing, involving a small non-randomized study to test feasibility of the intervention and study processes (e.g., recruitment and data collection); fifth, randomized controlled trial, to test the effect of the intervention in comparison with a control group(s); and sixth, qualitative research, for further refinement before moving to a more scaled-up intervention.
Jacobs, M. A. et al., Curr. Opin. Psychol. (2016)
Model for Assessment of Telemedicine applications (MAST)
The Model for Assessment of Telemedicine (MAST) focuses on the measurement of effectiveness and quality of care. MAST represents a multidisciplinary process, evaluating the medical, social, economic, and ethical aspects of telemedicine in a systematic, unbiased, robust manner. The use of MAST includes three steps: a preceding assessment (Step 1), in which the maturity of the telemedicine technology and of the organization using the service is assessed before the assessment of effectiveness is carried out; a multidisciplinary assessment (Step 2) of the effectiveness of the technology, encompassing seven domains; and an assessment of the transferability of the results (Step 3).
Kidholm, K. et al., J Telemed Telecare 23, 803–813 (2017). Kidholm, K. et al., J Telemed Telecare 24, 118–125 (2018).
Multiphase Optimization Strategy (MOST)
MOST is an alternative way of building, optimizing, and evaluating eHealth interventions. It incorporates the standard RCT, but before the RCT is undertaken MOST also uses a principled method for identifying which components are active in an intervention and which levels of each component lead to the best outcomes. The principles underlying MOST are drawn from engineering, and emphasize efficiency. The MOST method consists of three phases, each of which addresses a different set of questions about the intervention by means of randomized experimentation.
Collins, L. M. et al., Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.022
Jacobs, M. A. et al., Curr. Opin. Psychol. 9, 33–37 (2016).
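The component-screening phase can be illustrated with a full factorial experiment, a design often used in MOST to estimate which components are active. The three candidate components and cell outcomes below are hypothetical.

```python
from itertools import product

def main_effects(outcome, k):
    """Estimate component main effects from a full 2^k factorial experiment.

    outcome: dict mapping on/off tuples, e.g. (1, 0, 1), to the mean
    outcome observed in that experimental cell.
    Returns, per component, mean(on cells) - mean(off cells)."""
    effects = []
    for comp in range(k):
        on = [y for cell, y in outcome.items() if cell[comp] == 1]
        off = [y for cell, y in outcome.items() if cell[comp] == 0]
        effects.append(sum(on) / len(on) - sum(off) / len(off))
    return effects

# Hypothetical mean outcomes for all 8 cells of a 2^3 design
cells = list(product([0, 1], repeat=3))
scores = dict(zip(cells, [10, 12, 11, 13, 14, 16, 15, 18]))
effects = main_effects(scores, 3)
```

Components with large positive effects would be retained for the subsequent refining and confirming phases; inactive ones could be dropped before the standard RCT.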
Proposed Framework for Evaluating mHealth Services
The proposed framework includes three main stages: Service Requirement Analysis, Service Development, and Service Delivery. The iterative nature of the framework is intended to support continuous improvement of mHealth services. Moreover, important evaluation dimensions, including technical, organizational, social and legal, strategic, and usability, as well as the effects of key stakeholders of the mHealth service on these dimensions, are considered in the framework.
Sadegh, S. S. et al., Int J Med Inf. 112, 123–130 (2018).
RE-AIM framework
The RE-AIM model has been widely used to plan, evaluate and review health promotion and disease management interventions. RE-AIM is a conceptual model designed to enhance the quality, speed, and public health impact of efforts to move from research into long-term effectiveness in real-world settings. It may be particularly useful for increasing the potential of eHealth interventions intended to be translated into practice. RE-AIM consists of five evaluative dimensions related to both internal and external validity: Reach, Efficacy/Effectiveness, Adoption, Implementation, and Maintenance and is intended for use at all stages of research, from planning to evaluation.
Glasgow, R. E., Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.023
Glasgow, R. E., et al., Int. J. Med. Inform. (2014). doi:10.1016/j.ijmedinf.2013.07.002
Glasgow, R. et al., Am. J. Public Health 89, 1322–1327 (1999).
Stage Model of Behavioral Therapies Research
The Stage Model of Behavioral Therapies Research articulates three progressive stages of development and evaluation of behavioral interventions. This model is especially relevant to Web-based intervention research given its goals of encouraging innovation and facilitating widespread use of empirically validated behavioral programs. Stage I consists of pilot/feasibility testing, manual writing, training program development, and adherence/competence measure development for new and untested treatments. Stage II initially consists of randomized clinical trials (RCTs) to evaluate the efficacy of manualized and pilot-tested treatments that have shown promise or efficacy in earlier studies. Stage III consists of studies to evaluate the transportability of treatments for which efficacy has been demonstrated in at least two RCTs. Key Stage III research issues revolve around generalizability, implementation, cost-effectiveness, and consumer/marketing issues.
Danaher, B. G. et al., Annals of Behavioral Medicine (2009). doi:10.1007/s12160-009-9129-0
Onken, L. S. et al., in Innovative Approaches for Difficult-to-Treat Populations (1997).
Stead et al.’s evaluation framework
The premises of the Stead et al. (1994) framework are that evaluation is essential to each of the five stages of system development and that the level of evaluation should be well matched to the development stage. The appropriate type of evaluation will vary according to the stage of work, but all evaluations must be rigorous and systematic. The stages of development correspond to a standard software design life cycle that begins with system specification and concludes with routine use of a product. The levels of evaluation present a range of methods to apply at each stage of development. For example, formative methods (e.g., needs requirement) are used in the earlier stages, and a more summative approach to evaluate the validity and efficacy of a system (e.g., a controlled clinical trial) is used in the later stages.
Kaufman, D. et al., Nurs. Res. (2006). doi:10.1097/00006199-200603001-00007
Stead, W. W. et al., J. Am. Med. Informatics Assoc. (1994). doi:10.1136/jamia.1994.95236134
Concept mapping study
Concept mapping methodology overcomes the drawbacks of qualitative study designs by integrating results from qualitative group sessions with multivariate statistical analysis to represent the ideas of diverse stakeholders visually on maps. The method is purposefully designed to integrate input from larger groups of participants with differing content expertise or interest in a domain efficiently and within a short time frame.
Van Engen-Verheul, M. et al., Studies in Health Technology and Informatics (2015). doi:10.3233/978-1-61499-512-8-110
eHealth Needs Assessment Questionnaire (ENAQ)
The eHealth Needs Assessment Questionnaire (ENAQ) is useful for mapping the general needs of older adults with low health literacy regarding eHealth.
REF
Focus group
A focus group is a group discussion on a particular topic organised for research purposes. This discussion is guided, monitored and recorded by a researcher (sometimes called a moderator or facilitator). Focus groups are used for generating information on collective views, and the meanings that lie behind those views.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Peeters, J. M. et al., JMIR Med. Informatics (2016). doi:10.2196/medinform.4515
Interview
There are three fundamental types of research interviews: structured, semi-structured and unstructured. Structured interviews are, essentially, verbally administered questionnaires, in which a list of predetermined questions is asked, with little or no variation and with no scope for follow-up questions to responses that warrant further elaboration. Conversely, unstructured interviews do not reflect any preconceived theories or ideas and are performed with little or no organization. Semi-structured interviews consist of several key questions that help to define the areas to be explored, but also allow the interviewer or interviewee to diverge in order to pursue an idea or response in more detail.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Living lab
A Living Lab is a user-centered, open innovation ecosystem based on a systematic user co-creation approach, integrating research and innovation processes in real-life communities and settings.
Swinkels, I. C. S. et al., Journal of Medical Internet Research (2018). doi:10.2196/jmir.9110
Method for technology-delivered Healthcare Measures
The Method for Technology-delivered Healthcare Measures is designed to systematically guide the development and evaluation of technology-delivered measures. The five-step Method for Technology-delivered Healthcare Measures includes establishment of content, e-Health literacy, technology delivery, expert usability, and participant usability.
Kramer-Jackman, K. L. et al., CIN - Comput. Informatics Nurs. (2011). doi:10.1097/NCN.0b013e318224b581
Model of Oinas-Kukkonen
The model of Oinas-Kukkonen includes principles for persuasive design and describes the key issues behind them. The model makes it possible to define the persuasive context and to describe the targeted users, their goals, intentions, and technology use.
Alpay, L. et al., Heal. Informatics J (2018). doi:10.1177/1460458218796642
Rapid review
The term ‘rapid review’ (RR) does not appear to have one single definition but is framed in the literature as utilizing various stipulated time frames between 1 and 6 months. The word ‘rapid’ indicates that the review will be carried out quickly, although this labelling does not specify exactly which part of the review is carried out at a faster pace than a full systematic review (SR). In practice it can imply streamlining agreed SR processes: quicker searching and searching fewer databases, faster inclusion screening and/or a narrower remit for inclusion of studies, limited data extraction, or analysing the data using only selected methods of quantitative or qualitative analysis. Any or all of these shortcuts may be applied in an RR in order to draw fast conclusions about a specific health intervention or research question.
Harker, J. et al., J. Evid. Based. Healthc. 10, 397–410 (2012).
Survey methods
Surveys are commonly used in telehealth research to assess patient satisfaction, patient experiences, patient preferences and attitudes, and the technical quality of a teleconsultation. The popularity of the survey as a method of measurement can be understood through three major strengths of this technique. First, confidential survey questions are well suited to capture individuals’ experiences, perceptions and attitudes. Second, pre-existing scales can be used across studies, enabling the comparison and replication of results. Third, the validity and reliability of survey instruments can be assessed through rigorous, transparent and well-accepted validation methods, providing the researcher with confidence that the measures tap the intended constructs, and provide an accurate measurement.
Langbecker, D. et al., J Telemed Telecare 23, 770–779 (2017).
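One standard reliability check for survey instruments is internal consistency, commonly quantified with Cronbach's alpha. A minimal sketch, using made-up item scores rather than any real instrument:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal-consistency reliability.

    items: list of columns, one per survey item; each column lists one
    score per respondent. Uses sample variance (n - 1 denominator)."""
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # total score per respondent
    item_var = sum(var(col) for col in items)         # sum of per-item variances
    return k / (k - 1) * (1 - item_var / var(totals))

# Hypothetical 3-item Likert scale answered by 4 respondents
alpha = cronbach_alpha([[3, 4, 5, 2], [2, 4, 4, 1], [3, 5, 5, 2]])
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold depends on the instrument's purpose.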
Systematic review
A systematic review summarises the results of available, carefully designed healthcare studies (controlled trials) and provides a high level of evidence on the effectiveness of healthcare interventions. Judgments may be made about the evidence to inform recommendations for healthcare. These reviews are complicated and depend largely on what clinical trials are available, how they were carried out (the quality of the trials) and the health outcomes that were measured. Review authors pool numerical data about the effects of a treatment through a process called meta-analysis. The authors then assess the evidence for any benefits or harms of those treatments. In this way, systematic reviews are able to summarise the existing clinical research on a topic.
REF Systematic review - cochrane.
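The pooling step of a meta-analysis can be sketched with inverse-variance (fixed-effect) weighting; the per-study effect sizes and standard errors below are hypothetical.

```python
from math import sqrt

def fixed_effect_pool(effects, ses):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    effects: per-study effect estimates (e.g. mean differences)
    ses:     their standard errors
    Returns (pooled_effect, pooled_se)."""
    weights = [1 / se ** 2 for se in ses]  # precision = inverse variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, sqrt(1 / sum(weights))

# Hypothetical effects from three trials of the same intervention
effect, se = fixed_effect_pool([0.30, 0.50, 0.20], [0.10, 0.20, 0.15])
```

More precise studies (smaller standard errors) dominate the pooled estimate, which is why the result sits closest to the first study here; random-effects models would additionally account for between-study heterogeneity.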
Think aloud method
The think aloud method can be of high value in evaluating a system’s design for usability flaws and is therefore frequently used to gather information about a system’s usability when testing computer systems with potential end users. During recorded usability sessions, users interact with a (prototype) system or interface according to a predetermined set of scenarios while verbalizing their thoughts. Analyses of these verbal reports provide detailed insight into the usability problems actually encountered by end users, but also into the causes underlying these problems.
Jaspers, M. W. M. et al., Int. J. Med. Inform. (2009). doi:10.1016/j.ijmedinf.2008.10.002
A/B testing
A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment in which two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.
Gabarron et al., BMC Med. Inform. Decis. Mak.(2015), doi:10.1186/s12911-015-0143-9
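The statistical comparison at the heart of an A/B test is often a two-proportion z-test on conversion rates; a minimal sketch with hypothetical conversion counts:

```python
from math import sqrt, erf

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test comparing conversion rates of variants A and B.

    Returns (z, p): the z statistic and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical data: variant B converts 120/1000 users vs A's 100/1000
z, p = ab_test(100, 1000, 120, 1000)
```

With these numbers the difference does not reach the conventional 0.05 significance level, illustrating why A/B tests typically need large samples to detect small conversion lifts.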
Cognitive task analysis (CTA)
CTA has been applied in the design of systems to create a better understanding of human information needs during system development. It categorizes tasks and observes patients (or other test persons) while they perform these tasks (e.g. using an eHealth application).
Kushniruk, A. W. et al., Journal of Biomedical Informatics (2004). doi:10.1016/j.jbi.2004.01.003
Cognitive walkthrough
The cognitive walkthrough method is a usability evaluation technique that focuses on evaluating an (early) system design for learnability by exploration. In a cognitive walkthrough, an evaluator, preferably a usability expert, evaluates a user interface by analysing the cognitive processes required to accomplish the tasks that users would typically carry out with the support of the computer.
Jaspers, M. W. M. Int. J. Med. Inform. (2009). doi:10.1016/j.ijmedinf.2008.10.002
Khajouei, R. et al., J Am Med Inf. Assoc (2017)
Concept mapping study
Concept mapping methodology overcomes the drawbacks of qualitative study designs by integrating results from qualitative group sessions with multivariate statistical analysis to represent the ideas of diverse stakeholders visually on maps. The method is purposefully designed to integrate input from larger groups of participants with differing content expertise or interest in a domain efficiently and within a short time frame.
Van Engen-Verheul, M. et al., Studies in Health Technology and Informatics (2015). doi:10.3233/978-1-61499-512-8-110
Critical incident technique
First described by John C. Flanagan in 1954, the critical incident technique (CIT) is a well-established qualitative research tool used in many areas of the health sciences. Flanagan describes the technique as consisting of “a set of procedures for collecting direct observations of human behavior in such a way as to facilitate their potential usefulness in solving practical problems.” The CIT began its life as an offshoot of the Aviation Psychology Program of the United States Army Air Forces in World War II.
FitzGerald, K. et al., Journal of Dental Education (2008).
eHealth Analysis and Steering Instrument (eASI)
The eASI surveys how eHealth services score on 3 dimensions (ie, utility, usability, and content) and 12 underlying categories (ie, insight in health condition, self-management decision making, performance of self-management, involving the social environment, interaction, personalization, persuasion, description of health issue, factors of influence, goal of eHealth service, implementation, and evidence).
Henkemans, O. A. B. et al., J. Med. Internet Res. (2013). doi:10.2196/med20.2571
eHealth Needs Assessment Questionnaire (ENAQ)
The eHealth Needs Assessment Questionnaire (ENAQ) is useful for mapping the general needs of older adults with low health literacy regarding eHealth.
REF
Focus group
A focus group is a group discussion on a particular topic organised for research purposes. This discussion is guided, monitored and recorded by a researcher (sometimes called a moderator or facilitator). Focus groups are used for generating information on collective views, and the meanings that lie behind those views.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Peeters, J. M. et al., JMIR Med. Informatics (2016). doi:10.2196/medinform.4515
Interview
There are three fundamental types of research interviews: structured, semi-structured and unstructured. Structured interviews are, essentially, verbally administered questionnaires, in which a list of predetermined questions is asked, with little or no variation and with no scope for follow-up questions to responses that warrant further elaboration. Conversely, unstructured interviews do not reflect any preconceived theories or ideas and are performed with little or no organization. Semi-structured interviews consist of several key questions that help to define the areas to be explored, but also allow the interviewer or interviewee to diverge in order to pursue an idea or response in more detail.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Living lab
A Living Lab is a user-centered, open innovation ecosystem based on a systematic user co-creation approach, integrating research and innovation processes in real-life communities and settings.
Swinkels, I. C. S. et al., Journal of Medical Internet Research (2018). doi:10.2196/jmir.9110
Method for technology-delivered Healthcare Measures
The Method for Technology-delivered Healthcare Measures is designed to systematically guide the development and evaluation of technology-delivered measures. The five-step Method for Technology-delivered Healthcare Measures includes establishment of content, e-Health literacy, technology delivery, expert usability, and participant usability.
Kramer-Jackman, K. L. et al., CIN - Comput. Informatics Nurs. (2011). doi:10.1097/NCN.0b013e318224b581
Mixed methods
Mixed methods research (MMR) is an emerging and evolving research methodology that requires both qualitative and quantitative approaches within the same study. It is an approach to research in the social, behavioural and health sciences in which the investigator gathers both quantitative and qualitative data, integrates the two, and then draws interpretations based on the combined strengths of both sets of data to understand research problems. MMR is important for telehealth research because questions that profit most from a mixed methods design tend to be broad, complex and multifaceted.
Caffery, L. J. et al., J. Telemed. Telecare (2017). doi:10.1177/1357633X16665684
Lee, S. et al., CIN - Comput. Informatics Nurs. (2012). doi:10.1097/NXN.0b013e31824b1f96
Model of Fogg
The model is useful for understanding human behavior and for operationalizing the factors related to it. It is applicable when designing persuasive technologies. The model of Fogg is relevant when developing eHealth self-management systems, since behavioral change resides at the core of such systems.
Alpay, L. et al., Heal. Informatics J (2018). doi:10.1177/1460458218796642
Model of Oinas-Kukkonen
The model of Oinas-Kukkonen includes principles for persuasive design and describes the key issues behind them. The model makes it possible to define the persuasive context and to describe the targeted users, their goals, intentions, and technology use.
Alpay, L. et al., Heal. Informatics J (2018). doi:10.1177/1460458218796642
Participatory study
Participatory Design (PD) is one way of involving users and other stakeholders during the design phase. Three issues dominate PD: 1) the philosophy and politics behind the design concept, 2) the tools and techniques, and 3) the ability of the approach to provide a realm for understanding the socio-technical context and business strategic aims where the design solutions are to be applied. A core principle of PD is that users and other stakeholders actively participate in design activities on equal terms, with the power to influence the design solutions.
Borycki, E. et al., Yearb Med Inf. 30–40 (2016). doi:10.15265/iy-2016-029
Clemensen, J. et al., J Telemed Telecare 23, 780–785 (2017).
Survey methods
Surveys are commonly used in telehealth research to assess patient satisfaction, patient experiences, patient preferences and attitudes, and the technical quality of a teleconsultation. The popularity of the survey as a method of measurement can be understood through three major strengths of this technique. First, confidential survey questions are well suited to capture individuals’ experiences, perceptions and attitudes. Second, pre-existing scales can be used across studies, enabling the comparison and replication of results. Third, the validity and reliability of survey instruments can be assessed through rigorous, transparent and well-accepted validation methods, providing the researcher with confidence that the measures tap the intended constructs, and provide an accurate measurement.
Langbecker, D. et al., J Telemed Telecare 23, 770–779 (2017).
Systematic review
A systematic review summarises the results of available, carefully designed healthcare studies (controlled trials) and provides a high level of evidence on the effectiveness of healthcare interventions. Judgments may be made about the evidence to inform recommendations for healthcare. These reviews are complicated and depend largely on what clinical trials are available, how they were carried out (the quality of the trials) and the health outcomes that were measured. Review authors pool numerical data about the effects of a treatment through a process called meta-analysis. The authors then assess the evidence for any benefits or harms of those treatments. In this way, systematic reviews are able to summarise the existing clinical research on a topic.
REF Systematic review - cochrane.
Technology Acceptance Model (TAM)
The TAM is an information technology framework for understanding users’ adoption and use of emerging technologies, particularly in the workplace environment. The theory posits that a person’s intent to use (acceptance of technology) and usage behavior (actual use) of a technology are predicted by the person’s perceptions of the specific technology’s usefulness (benefit from using the technology) and ease of use.
Portz, J. D. et al., J. Med. Internet Res. (2019). doi:10.2196/11604
Bastien, J. M. C., Int. J. Med. Inform. (2010). doi:10.1016/j.ijmedinf.2008.12.004
Think aloud method
The think aloud method can be of high value in evaluating a system’s design for usability flaws and is therefore frequently used to gather information about a system’s usability when testing computer systems with potential end users. During recorded usability sessions, users interact with a (prototype) system or interface according to a predetermined set of scenarios while verbalizing their thoughts. Analyses of these verbal reports provide detailed insight into the usability problems actually encountered by end users, but also into the causes underlying these problems.
Jaspers, M. W. M. et al., Int. J. Med. Inform. (2009). doi:10.1016/j.ijmedinf.2008.10.002
User-based evaluation
User-based evaluations are usability evaluation methods in which users directly participate. Users are invited to do typical tasks with a product, or simply asked to explore it freely, while their behaviors are observed and recorded in order to identify design flaws that cause user errors or difficulties.
Bastien, J. M. C., Int. J. Med. Inform. (2010). doi:10.1016/j.ijmedinf.2008.12.004
User-centered design (UCD) methods
User-centered design is an approach to the design of information systems characterized as follows: (1) an early and continual focus on end users, (2) the empirical evaluation of systems, and (3) application of iterative design processes. As part of user-centered design, usability testing of systems has become a key method for carrying out empirical evaluation of designs from the end user’s perspective. Results from iterative and continued usability testing of early system designs, prototypes, and near completed systems can reveal a range of usability problems and areas where systems can be optimized and improved during the design process and before finalization of the system.
Kushniruk, A. W. et al., Journal of Biomedical Informatics (2004). doi:10.1016/j.jbi.2004.01.003
Borycki, E. et al., Yearb Med Inf. 30–40 (2016). doi:10.15265/iy-2016-029
Vignette study
A quantitative vignette study consists of two components: (a) a vignette experiment as the core element, and (b) a traditional survey for the parallel and supplementary measurement of additional respondent-specific characteristics, which are used as covariates in the analysis of vignette data. A vignette is a short, carefully constructed description of a person, object, or situation, representing a systematic combination of characteristics. Within vignette studies, respondents are typically confronted not only with one single vignette but with a whole population of vignettes in order to elicit their beliefs, attitudes, judgments, knowledge, or intended behavior with respect to the presented vignette scenarios. Finally, the aim of a vignette study is to identify and assess the importance of those vignette factors which causally affect individual responses to the contextualized but hypothetical vignette settings.
Atzmüller, C. et al., Methodology (2010). doi:10.1027/1614-2241/a000014
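The "systematic combination of characteristics" that defines a vignette population can be generated as the Cartesian product of factor levels; the factors and levels below are hypothetical examples for an eHealth adoption study.

```python
from itertools import product

# Hypothetical vignette factors: each key is a characteristic, each list its levels
factors = {
    "age": ["45 years old", "75 years old"],
    "condition": ["diabetes", "heart failure"],
    "support": ["lives alone", "lives with a caregiver"],
}

def vignette_population(factors):
    """Return every systematic combination of factor levels
    (the full vignette universe for the experiment)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

vignettes = vignette_population(factors)  # 2 x 2 x 2 = 8 vignettes
```

In a full factorial vignette study each respondent would rate a sample of these combinations, allowing the causal contribution of each factor to be estimated.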
A/B testing
A/B testing (also known as split testing or bucket testing) is a method of comparing two versions of a webpage or app against each other to determine which one performs better. A/B testing is essentially an experiment in which two or more variants of a page are shown to users at random, and statistical analysis is used to determine which variation performs better for a given conversion goal.
Gabarron et al., BMC Med. Inform. Decis. Mak.(2015), doi:10.1186/s12911-015-0143-9
Cognitive task analysis (CTA)
CTA has been applied in the design of systems to create a better understanding of human information needs during system development. It categorizes tasks and observes patients (or other test persons) while they perform these tasks (e.g. using an eHealth application).
Kushniruk, A. W. et al., Journal of Biomedical Informatics (2004). doi:10.1016/j.jbi.2004.01.003
Cognitive walkthrough
The cognitive walkthrough method is a usability evaluation technique that focuses on evaluating an (early) system design for learnability by exploration. In a cognitive walkthrough, an evaluator, preferably a usability expert, evaluates a user interface by analysing the cognitive processes required to accomplish the tasks that users would typically carry out with the support of the computer.
Jaspers, M. W. M. Int. J. Med. Inform. (2009). doi:10.1016/j.ijmedinf.2008.10.002
Khajouei, R. et al., J Am Med Inf. Assoc (2017)
Concept mapping study
Concept mapping methodology overcomes the drawbacks of qualitative study designs by integrating results from qualitative group sessions with multivariate statistical analysis to represent the ideas of diverse stakeholders visually on maps. The method is purposefully designed to integrate input from larger groups of participants with differing content expertise or interest in a domain efficiently and within a short time frame.
Van Engen-Verheul, M. et al., Studies in Health Technology and Informatics (2015). doi:10.3233/978-1-61499-512-8-110
Critical incident technique
First described by John C. Flanagan in 1954, the critical incident technique (CIT) is a well-established qualitative research tool used in many areas of the health sciences. Flanagan describes the technique as consisting of “a set of procedures for collecting direct observations of human behavior in such a way as to facilitate their potential usefulness in solving practical problems.” The CIT began its life as an offshoot of the Aviation Psychology Program of the United States Army Air Forces in World War II.
FitzGerald, K. et al., Journal of Dental Education (2008).
eHealth Analysis and Steering Instrument (eASI)
The eASI surveys how eHealth services score on 3 dimensions (ie, utility, usability, and content) and 12 underlying categories (ie, insight in health condition, self-management decision making, performance of self-management, involving the social environment, interaction, personalization, persuasion, description of health issue, factors of influence, goal of eHealth service, implementation, and evidence).
Henkemans, O. A. B. et al., J. Med. Internet Res. (2013). doi:10.2196/med20.2571
eHealth Needs Assessment Questionnaire (ENAQ)
The eHealth Needs Assessment Questionnaire (ENAQ) is useful for mapping the general needs of older adults with low health literacy regarding eHealth.
REF
Focus group
A focus group is a group discussion on a particular topic organised for research purposes. This discussion is guided, monitored and recorded by a researcher (sometimes called a moderator or facilitator). Focus groups are used for generating information on collective views, and the meanings that lie behind those views.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192 Peeters, J. M. et al., JMIR Med. Informatics (2016). doi:10.2196/medinform.4515
Interview
There are three fundamental types of research interviews: structured, semi-structured and unstructured. Structured interviews are, essentially, verbally administered questionnaires, in which a list of predetermined questions is asked, with little or no variation and with no scope for follow-up questions to responses that warrant further elaboration. Conversely, unstructured interviews do not reflect any preconceived theories or ideas and are performed with little or no organization. Semi-structured interviews consist of several key questions that help to define the areas to be explored, but also allow the interviewer or interviewee to diverge in order to pursue an idea or response in more detail.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Living lab
A Living Lab is a user-centered, open innovation ecosystem based on a systematic user co-creation approach, integrating research and innovation processes in real-life communities and settings.
Swinkels, I. C. S. et al., Journal of Medical Internet Research (2018). doi:10.2196/jmir.9110
Method for technology-delivered Healthcare Measures
The Method for Technology-delivered Healthcare Measures is designed to systematically guide the development and evaluation of technology-delivered measures. The five-step Method for Technology-delivered Healthcare Measures includes establishment of content, e-Health literacy, technology delivery, expert usability, and participant usability.
Kramer-Jackman, K. L. et al., CIN - Comput. Informatics Nurs. (2011). doi:10.1097/NCN.0b013e318224b581
Mixed methods
Mixed methods research (MMR) is an emerging and evolving research methodology that combines qualitative and quantitative approaches within the same study. It is an approach to research in the social, behavioural and health sciences in which the investigator gathers both quantitative and qualitative data, integrates the two, and then draws interpretations based on the combined strengths of both sets of data to understand research problems. MMR is important for telehealth research because questions that profit most from a mixed methods design tend to be broad, complex and multifaceted.
Caffery, L. J. et al., J. Telemed. Telecare (2017). doi:10.1177/1357633X16665684
Lee, S. et al., CIN - Comput. Informatics Nurs. (2012). doi:10.1097/NXN.0b013e31824b1f96
Model of Fogg
The model is useful for understanding human behavior and operationalizing the factors related to it. It is applicable when designing persuasive technologies. The model of Fogg is relevant when developing eHealth self-management systems, since behavioral change resides at the core of such systems.
Alpay, L. et al., Heal. Informatics J (2018). doi:10.1177/1460458218796642
Model of Oinas-Kukkonen
The model of Oinas-Kukkonen includes principles for persuasive design and describes the key issues behind them. The model allows researchers to define the persuasive context and to describe the targeted users, their goals, intentions and technology use.
Alpay, L. et al., Heal. Informatics J (2018). doi:10.1177/1460458218796642
Participatory study
Participatory Design (PD) is one way of involving users and other stakeholders during the design phase. Three issues dominate PD: 1) the philosophy and politics behind the design concept, 2) the tools and techniques, and 3) the ability of the approach to provide a realm for understanding the socio-technical context and business strategic aims where the design solutions are to be applied. A core principle of PD is that users and other stakeholders are actively participating in design activities, where they have the power to influence the design solutions, and that they participate on equal terms.
Borycki, E. et al., Yearb Med Inf. 30–40 (2016). doi:10.15265/iy-2016-029 Clemensen, J. et al., J Telemed Telecare 23, 780–785 (2017).
Survey methods
Surveys are commonly used in telehealth research to assess patient satisfaction, patient experiences, patient preferences and attitudes, and the technical quality of a teleconsultation. The popularity of the survey as a method of measurement can be understood through three major strengths of this technique. First, confidential survey questions are well suited to capture individuals’ experiences, perceptions and attitudes. Second, pre-existing scales can be used across studies, enabling the comparison and replication of results. Third, the validity and reliability of survey instruments can be assessed through rigorous, transparent and well-accepted validation methods, providing the researcher with confidence that the measures tap the intended constructs, and provide an accurate measurement.
Langbecker, D. et al., J Telemed Telecare 23, 770–779 (2017).
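The reliability assessment mentioned above can be sketched with Cronbach's alpha, a common internal-consistency statistic for multi-item scales; the respondent data below are hypothetical.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal-consistency reliability of a multi-item scale.

    `items` is a (respondents x items) array of scores; values close to 1
    suggest the items measure the same underlying construct.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of scale totals
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-point satisfaction scale (3 items) answered by six respondents.
scores = [[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 2, 3], [4, 4, 4], [3, 2, 3]]
print(round(cronbach_alpha(scores), 2))
```

Alpha is only one facet of validation; content and construct validity still require the broader methods the cited papers describe.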
Systematic review
A systematic review summarises the results of available, carefully designed healthcare studies (controlled trials) and provides a high level of evidence on the effectiveness of healthcare interventions. Judgments made about the evidence can inform recommendations for healthcare. These reviews are complicated and depend largely on what clinical trials are available, how they were carried out (the quality of the trials) and the health outcomes that were measured. Review authors pool numerical data about the effects of treatment through a process called meta-analysis, and then assess the evidence for any benefits or harms from those treatments. In this way, systematic reviews are able to summarise the existing clinical research on a topic.
Cochrane Library, “About Cochrane Reviews” (cochrane.org).
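The pooling step of a meta-analysis can be sketched with a fixed-effect inverse-variance model, in which each trial's effect estimate is weighted by the inverse of its variance; the trial effect sizes below are hypothetical.

```python
# Hypothetical effect sizes (e.g. log odds ratios) and standard errors
# from three trials of the same intervention.
effects = [0.40, 0.25, 0.55]
ses = [0.20, 0.15, 0.30]

weights = [1 / se ** 2 for se in ses]   # inverse-variance weights
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5   # standard error of the pooled effect

print(round(pooled, 3), round(pooled_se, 3))
```

More precise trials get more weight, and the pooled estimate has a smaller standard error than any single trial; real reviews would also examine heterogeneity and possibly use a random-effects model.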
Technology Acceptance Model (TAM)
The TAM is an information technology framework for understanding users’ adoption and use of emerging technologies, particularly in the workplace environment. The theory posits that a person’s intent to use (acceptance of technology) and usage behavior (actual use) of a technology are predicted by the person’s perceptions of the specific technology’s usefulness (benefit from using the technology) and its ease of use.
Portz, J. D. et al., J. Med. Internet Res. (2019). doi:10.2196/11604 Bastien, J. M. C., Int. J. Med. Inform. (2010). doi:10.1016/j.ijmedinf.2008.12.004
Think aloud method
The think aloud method can be of high value in evaluating a system’s design for usability flaws and is therefore frequently used to gather information about a system’s usability when testing computer systems with potential end users. During recorded usability sessions, users interact with a (prototype) system or interface according to a predetermined set of scenarios while verbalizing their thoughts. Analyses of these verbal reports provide detailed insight not only into the usability problems actually encountered by end users but also into the causes underlying these problems.
Jaspers, M. W. M. et al., Int. J. Med. Inform. (2009). doi:10.1016/j.ijmedinf.2008.10.002
User-based evaluation
User-based evaluations are usability evaluation methods in which users directly participate. Users are invited to do typical tasks with a product, or simply asked to explore it freely, while their behaviors are observed and recorded in order to identify design flaws that cause user errors or difficulties.
Bastien, J. M. C., Int. J. Med. Inform. (2010). doi:10.1016/j.ijmedinf.2008.12.004
User-centered design (UCD) methods
User-centered design is an approach to the design of information systems characterized as follows: (1) an early and continual focus on end users, (2) the empirical evaluation of systems, and (3) the application of iterative design processes. As part of user-centered design, usability testing of systems has become a key method for carrying out empirical evaluation of designs from the end user’s perspective. Results from iterative and continued usability testing of early system designs, prototypes, and near-complete systems can reveal a range of usability problems and areas where systems can be optimized and improved during the design process and before finalization of the system.
Kushniruk, A. W. et al., Journal of Biomedical Informatics (2004). doi:10.1016/j.jbi.2004.01.003 Borycki, E. et al., Yearb Med Inf. 30–40 (2016). doi:10.15265/iy-2016-029
Vignette study
A quantitative vignette study consists of two components: (a) a vignette experiment as the core element, and (b) a traditional survey for the parallel and supplementary measurement of additional respondent-specific characteristics, which are used as covariates in the analysis of vignette data. A vignette is a short, carefully constructed description of a person, object, or situation, representing a systematic combination of characteristics. Within vignette studies, respondents are typically confronted not only with one single vignette but with a whole population of vignettes in order to elicit their beliefs, attitudes, judgments, knowledge, or intended behavior with respect to the presented vignette scenarios. Finally, the aim of a vignette study is to identify and assess the importance of those vignette factors which causally affect individual responses to the contextualized but hypothetical vignette settings.
Atzmüller, C. et al., Methodology (2010). doi:10.1027/1614-2241/a000014
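The “population of vignettes” arises from systematically crossing all factor levels. A minimal sketch with hypothetical factors for a telehealth acceptance study:

```python
from itertools import product

# Hypothetical vignette factors; every systematic combination of levels
# yields one vignette in the population presented to respondents.
factors = {
    "patient_age": ["45", "70"],
    "condition": ["diabetes", "heart failure"],
    "modality": ["video consult", "app-based monitoring"],
}

vignettes = [
    dict(zip(factors, combo)) for combo in product(*factors.values())
]

print(len(vignettes))  # 2 x 2 x 2 = 8 vignettes
for v in vignettes[:2]:
    print(v)
```

Because the factor levels vary orthogonally by construction, respondents’ ratings of the vignettes can be regressed on the factors to estimate each factor’s causal weight.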
Cognitive walkthrough
The cognitive walkthrough method is a type of usability evaluation technique that focuses on evaluating an (early) system design for learnability by exploration. In a cognitive walkthrough, an evaluator, preferably a usability expert, evaluates a user interface by analysing the cognitive processes required to accomplish the tasks that users would typically carry out with the support of the computer.
Jaspers, M. W. M. Int. J. Med. Inform. (2009). doi:10.1016/j.ijmedinf.2008.10.002 Khajouei, R. et al., J Am Med Inf. Assoc (2017)
Heuristic evaluation
Among the usability inspection methods, heuristic evaluation is the most common and most popular. In a heuristic evaluation, a small set of evaluators inspects a system and evaluates its interface against a list of recognized usability principles—the heuristics. Typically, these heuristics are general principles, which refer to common properties of usable systems. Heuristic evaluation is in its most common form based on the following set of usability principles: (1) use simple and natural dialogue, (2) speak the user’s language, (3) minimize memory load, (4) be consistent, (5) provide feedback, (6) provide clearly marked exits, (7) provide shortcuts, (8) provide good error messages, (9) prevent errors, and (10) provide help and documentation.
Jaspers, M. W. M. Int. J. Med. Inform. (2009). doi:10.1016/j.ijmedinf.2008.10.002 Khajouei, R. et al., J Am Med Inf. Assoc (2017)
Feasibility study
Feasibility studies are pieces of research done before a main study. They are used to estimate important parameters that are needed to design the main study, for instance: the standard deviation of the outcome measure, which is needed in some cases to estimate sample size; the willingness of participants to be randomised; the willingness of clinicians to recruit participants; and the number of eligible patients. Crucially, feasibility studies do not evaluate the outcome of interest; that is left to the main study.
Arain, M. et al., BMC Med. Res. Methodol. (2010). doi:10.1186/1471-2288-10-67 Seto, E. et al., J. Med. Internet Res. (2019). doi:10.2196/11722
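How a feasibility estimate of the outcome’s standard deviation feeds into the main study’s sample size can be sketched with the standard two-arm comparison-of-means formula; the numbers below are illustrative, with z-quantiles for a 5% two-sided alpha and 80% power.

```python
from math import ceil

def n_per_group(sd, delta, z_alpha=1.96, z_beta=0.84):
    """Approximate sample size per arm for a two-arm trial comparing means.

    sd    - outcome standard deviation, e.g. estimated in a feasibility study
    delta - minimal clinically important difference between arms
    Defaults correspond to 5% two-sided alpha and 80% power.
    """
    return ceil(2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2)

# If the feasibility study suggests sd = 10 and the main trial should
# detect a 5-point difference between arms:
print(n_per_group(sd=10, delta=5))  # 63 participants per arm
```

Because the required n grows with the square of sd, even a modest underestimate of the standard deviation in the feasibility stage can leave the main trial underpowered.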
Normalization process model
Normalization is defined as the embedding of a technique, technology or organizational change as a routine and taken-for-granted element of clinical practice. The normalization process model offers a means of conceptualizing complex interventions in practice. It focuses on interactions within and between processes of practice (characterized as endogenous and exogenous) and is thus not intended to compete with wider conceptual models of innovation diffusion or of network behavior in organizations. The model takes as its starting point the points of contact between four domains: (i) the interactional work that professionals and patients do within the clinical encounter and its temporal order (interactional workability); (ii) the embeddedness of trust in professional knowledge and practice (relational integration); (iii) the organizational distribution of work, knowledge and practice across divisions of labor (skill set workability); and (iv) its contexts of institutional location and organizational capacity (contextual integration).
May, C., BMC Health Serv. Res. (2006). doi:10.1186/1472-6963-6-86
Patient reported outcome measures (PROMs)
PROMs seek to ascertain patients’ views of their symptoms, their functional status, and their health-related quality of life. PROMs are often wrongly referred to as “outcome measures,” though they actually measure health: by comparing a patient’s health at different times, the outcome of the care received can be determined. It is important to distinguish PROMs from patient-reported experience measures (PREMs), which focus on aspects of the humanity of care, such as being treated with dignity or being kept waiting.
Black, N., BMJ (2013). doi:10.1136/bmj.f167
Cohort study (retro- and prospective)
An observational design in which groups of patients are followed over time. Usually, multiple exposures and outcomes can be defined in a cohort. “Retrospective” and “prospective” mostly refer to the timing of data acquisition (before or after designing the study). Patients are sampled on the basis of exposure. Information about baseline characteristics is obtained, and the occurrence of outcomes is assessed during a specified follow-up period. At baseline, all exposed or unexposed persons or both may be included.
Vandenbroucke, J. P. British Medical Journal (1991). doi:10.1136/bmj.302.6775.528-d
Cross-sectional study
An observational study design that samples the exposure and outcome at one moment in time. It is useful for gaining quick insight into possible associations. A drawback is the lack of follow-up time to study relations between exposure and outcome over time.
Hansen, A. H. et al., J. Med. Internet Res. (2018). doi:10.2196/11322
Preference clinical trial (PCT)
In a preference clinical trial (PCT), two or more health-care interventions are compared among several groups of patients, at least some of whom have purposefully chosen the intervention to be administered to them. This stands in contrast to the randomized, controlled clinical trial (RCT), where patients are randomly assigned to receive one of the available test interventions.
Kowalski, C. J. et al., Perspect. Biol. Med. (2013). doi:10.1353/pbm.2013.0004
Simulation study
A simulation or a simulator may be defined as a device ‘that attempts to re-create characteristics of the real world’. Study results show that full scale simulation studies are a useful method for testing the feasibility of information systems, especially when taking into account the resources spent. Clinical simulation covers only part of the range of tests which should be conducted, and it should not be a substitute for a pilot implementation test in real settings. However, it is possible to use clinical simulations to gain important knowledge concerning work practices, usability and human factors prior to widespread system release, and they can thereby contribute greatly to ensuring patient safety.
Ammenwerth, E. et al., Heal. Inf. Manag. J. (2012). doi:10.1177/183335831204100202 Jensen, S. et al., J. Biomed. Inform. (2015). doi:10.1016/j.jbi.2015.02.002
Single-case experiment (N=1 trial)
Single-case designs include a family of methods in which each participant serves as his or her own control. In a typical study, some behavior or self-reported symptom is measured repeatedly during all conditions for all participants. The experimenter systematically introduces and withdraws control and intervention conditions and then assesses effects of the intervention on behavior across replications of these conditions within and across participants. Thus, the telltale traits of these studies include repeated and frequent assessment of behavior, experimental manipulation of the independent variable, and replication of effects within and across participants.
Dallery, J. et al., J. Med. Internet Res. (2013). doi:10.2196/jmir.2227 Dempsey, W. et al., Significance (2015). doi:10.1111/j.1740-9713.2015.00863.x
Klasnja, P. et al., Heal. Psychol. (2015). doi:10.1037/hea0000305
Nicholas, J. et al., Evid. Based. Ment. Health (2016). doi:10.1136/eb-2015-102278
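The within-participant replication logic of these designs can be sketched on hypothetical data from an ABAB (withdrawal) design:

```python
# Hypothetical daily symptom scores from one participant in an ABAB
# (baseline-intervention-baseline-intervention) single-case design.
phases = {
    "A1": [7, 8, 7, 8],   # baseline
    "B1": [5, 4, 4, 3],   # intervention introduced
    "A2": [6, 7, 7, 6],   # intervention withdrawn
    "B2": [4, 3, 3, 2],   # intervention reinstated
}

means = {phase: sum(v) / len(v) for phase, v in phases.items()}
print(means)

# The effect replicates within the participant if symptom scores drop in
# both intervention phases relative to the preceding baselines.
replicated = means["B1"] < means["A1"] and means["B2"] < means["A2"]
print(replicated)
```

Comparing phase means is only the simplest visual-analysis proxy; published single-case studies typically also examine trend, variability, and overlap between phases.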
Critical incident technique
First described by John C. Flanagan in 1954, the critical incident technique (CIT) is a well-established qualitative research tool used in many areas of the health sciences. Flanagan describes the technique as consisting of “a set of procedures for collecting direct observations of human behavior in such a way as to facilitate their potential usefulness in solving practical problems.” The CIT began its life as an offshoot of the Aviation Psychology Program of the United States Army Air Forces in World War II. FitzGerald, K. et al., Journal of Dental Education (2008).
eHealth Analysis and Steering Instrument (eASI)
The eASI surveys how eHealth services score on 3 dimensions (ie, utility, usability, and content) and 12 underlying categories (ie, insight in health condition, self-management decision making, performance of self-management, involving the social environment, interaction, personalization, persuasion, description of health issue, factors of influence, goal of eHealth service, implementation, and evidence).
Henkemans, O. A. B. et al., J. Med. Internet Res. (2013). doi:10.2196/med20.2571
Survey methods
Surveys are commonly used in telehealth research to assess patient satisfaction, patient experiences, patient preferences and attitudes, and the technical quality of a teleconsultation. The popularity of the survey as a method of measurement can be understood through three major strengths of this technique. First, confidential survey questions are well suited to capture individuals’ experiences, perceptions and attitudes. Second, pre-existing scales can be used across studies, enabling the comparison and replication of results. Third, the validity and reliability of survey instruments can be assessed through rigorous, transparent and well-accepted validation methods, providing the researcher with confidence that the measures tap the intended constructs, and provide an accurate measurement.
Langbecker, D. et al., J Telemed Telecare 23, 770–779 (2017).
Cohort study (retro- and prospective)
Observational design, in which groups of patients are followed over time. Usually, multiple exposures and outcomes can be defined in a cohort. Retro-and prospective mostly refers to the timing of data acquisition (before or after designing the study). Patients are sampled on the basis of exposure. Information about baseline characteristics is obtained, and the occurrence of outcomes is assessed during a specified follow-up period. At baseline, all exposed or unexposed persons or both may be included.
Vandenbroucke, J. P. British Medical Journal (1991). doi:10.1136/bmj.302.6775.528-d
Cross-sectional study
Observational study design, which samples the exposure and outcome at one moment in time. Useful to get quick insight in possible associations. Drawback is the lack of follow-up time to study relations between exposure and outcome over time.
Hansen, A. H. et al., J. Med. Internet Res. (2018). doi:10.2196/11322
Feasibility study
Feasibility Studies are pieces of research done before a main study. They are used to estimate important parameters that are needed to design the main study. For instance: standard deviation of the outcome measure, which is needed in some cases to estimate sample size; willingness of participants to be randomised, willingness of clinicians to recruit participants, number of eligible patients. Crucially, feasibility studies do not evaluate the outcome of interest; that is left to the main study.
Arain, M. et al., BMC Med. Res. Methodol. (2010). doi:10.1186/1471-2288-10-67 Seto, E. et al., J. Med. Internet Res. (2019). doi:10.2196/11722
Preference clinical trial (PCT)
In a preference clinical trial (PCT), two or more health-care interventions are compared among several groups of patients, at least some of whom have purposefully chosen the intervention to be administered to them. This stands in contrast to the randomized, controlled clinical trial (RCT), where patients are randomly assigned to receive one of the available test interventions.
Kowalski, C. J. et al., Perspect. Biol. Med. (2013). doi:10.1353/pbm.2013.0004
Simulation study
A simulation or a simulator may be defined as a device ‘that attempts to re-create characteristics of the real world’. Study results show that full scale simulation studies are a useful method for testing the feasibility of information systems especially when taking into account the resources spent. Clinical simulation covers only part of the range of tests which should be conducted, and it should not be a substitute for a pilot implementation test in real settings. However it is possible to use clinical simulations to gain important knowledge concerning work practices, usability and human factors prior to widespread system release, and they can thereby contribute greatly to ensuring patient safety.
Ammenwerth, E. et al., Heal. Inf. Manag. J. (2012). doi:10.1177/183335831204100202 Jensen, S. et al., J. Biomed. Inform. (2015). doi:10.1016/j.jbi.2015.02.002
Single-case experiment (N=1 trial)
Single-case designs include a family of methods in which each participant serves as his or her own control. In a typical study, some behavior or self-reported symptom is measured repeatedly during all conditions for all participants. The experimenter systematically introduces and withdraws control and intervention conditions and then assesses effects of the intervention on behavior across replications of these conditions within and across participants. Thus, the telltale traits of these studies include repeated and frequent assessment of behavior, experimental manipulation of the independent variable, and replication of effects within and across participants.
Dallery, J. et al., J. Med. Internet Res. (2013). doi:10.2196/jmir.2227
Dempsey, W. et al., Significance (2015). doi:10.1111/j.1740-9713.2015.00863.x
Klasnja, P. et al., Heal. Psychol. (2015). doi:10.1037/hea0000305
Nicholas, J. et al., Evid. Based. Ment. Health (2016). doi:10.1136/eb-2015-102278
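The ABAB (withdrawal) logic described above can be sketched with simulated data. All numbers below are illustrative assumptions, not values from the cited studies: a hypothetical daily symptom score, ten measurements per phase, and an intervention effect that appears whenever the B condition is in place.

```python
import random
from statistics import mean

random.seed(1)

# Withdrawal (ABAB) design: the intervention (B) is introduced and
# withdrawn twice while one participant is measured repeatedly.
phases = ["A", "B", "A", "B"]        # baseline, intervention, withdrawal, reintroduction
days_per_phase = 10
baseline_level, effect = 8.0, -3.0   # hypothetical symptom level and intervention effect

series = []
for phase in phases:
    for _ in range(days_per_phase):
        level = baseline_level + (effect if phase == "B" else 0.0)
        series.append((phase, level + random.gauss(0, 1)))

# Replication of the effect within the participant: symptom scores drop
# in both B phases and return toward baseline in the second A phase.
mean_a = mean(y for p, y in series if p == "A")
mean_b = mean(y for p, y in series if p == "B")
print(f"A phases: {mean_a:.1f}, B phases: {mean_b:.1f}")
```

The repeated introduction and withdrawal is what distinguishes this design from a simple before-after comparison: the effect must replicate within the participant.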
Critical incident technique
First described by John C. Flanagan in 1954, the critical incident technique (CIT) is a well-established qualitative research tool used in many areas of the health sciences. Flanagan describes the technique as consisting of “a set of procedures for collecting direct observations of human behavior in such a way as to facilitate their potential usefulness in solving practical problems.” The CIT began its life as an offshoot of the Aviation Psychology Program of the United States Army Air Forces in World War II.
FitzGerald, K. et al., Journal of Dental Education (2008).
eHealth Analysis and Steering Instrument (eASI)
The eASI surveys how eHealth services score on 3 dimensions (ie, utility, usability, and content) and 12 underlying categories (ie, insight in health condition, self-management decision making, performance of self-management, involving the social environment, interaction, personalization, persuasion, description of health issue, factors of influence, goal of eHealth service, implementation, and evidence).
Henkemans, O. A. B. et al., J. Med. Internet Res. (2013). doi:10.2196/med20.2571
Survey methods
Surveys are commonly used in telehealth research to assess patient satisfaction, patient experiences, patient preferences and attitudes, and the technical quality of a teleconsultation. The popularity of the survey as a method of measurement can be understood through three major strengths of this technique. First, confidential survey questions are well suited to capture individuals’ experiences, perceptions and attitudes. Second, pre-existing scales can be used across studies, enabling the comparison and replication of results. Third, the validity and reliability of survey instruments can be assessed through rigorous, transparent and well-accepted validation methods, providing the researcher with confidence that the measures tap the intended constructs, and provide an accurate measurement.
Langbecker, D. et al., J Telemed Telecare 23, 770–779 (2017).
Focus group
A focus group is a group discussion on a particular topic organised for research purposes. This discussion is guided, monitored and recorded by a researcher (sometimes called a moderator or facilitator). Focus groups are used for generating information on collective views, and the meanings that lie behind those views.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Peeters, J. M. et al., JMIR Med. Informatics (2016). doi:10.2196/medinform.4515
Interview
There are three fundamental types of research interviews: structured, semi-structured and unstructured. Structured interviews are, essentially, verbally administered questionnaires, in which a list of predetermined questions is asked, with little or no variation and with no scope for follow-up questions to responses that warrant further elaboration. Conversely, unstructured interviews do not reflect any preconceived theories or ideas and are performed with little or no organisation. Semi-structured interviews consist of several key questions that help to define the areas to be explored, but also allow the interviewer or interviewee to diverge in order to pursue an idea or response in more detail.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Living lab
A Living Lab is a user-centered, open innovation ecosystem based on a systematic user co-creation approach, integrating research and innovation processes in real-life communities and settings.
Swinkels, I. C. S. et al., Journal of Medical Internet Research (2018). doi:10.2196/jmir.9110
Patient reported outcome measures (PROMs)
PROMs seek to ascertain patients’ views of their symptoms, their functional status, and their health related quality of life. PROMs are often wrongly referred to as so called “outcome measures,” though they actually measure health—by comparing a patient’s health at different times, the outcome of the care received can be determined. It’s important to distinguish PROMs from patient reported experience measures (PREMs), which focus on aspects of the humanity of care, such as being treated with dignity or being kept waiting.
Black, N., BMJ (2013). doi:10.1136/bmj.f167
(Fractional-)factorial (ANOVA) design
Evaluation of eHealth treatments often occurs via randomized clinical trials. While there is a vital role for such trials, they often do not provide as much information as alternative experimental strategies. For instance, engineering researchers typically use highly efficient factorial and fractional-factorial designs that allow for the testing of multiple hypotheses or interventions with no loss of power even as the number of tested interventions increases.
Baker, T. B. et al., Journal of Medical Internet Research (2014). doi:10.2196/jmir.2925
Collins, L. M. et al., Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.022
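The efficiency gain can be made concrete with a minimal sketch, assuming three hypothetical intervention components with levels coded -1 (off) and +1 (on): a full 2^3 factorial enumerates all eight conditions, while a 2^(3-1) half fraction runs only four by setting the third factor equal to the interaction of the first two (defining relation I = ABC).

```python
from itertools import product

# Full 2^3 factorial for three hypothetical intervention components
# (levels coded -1 = off, +1 = on): 8 experimental conditions.
full = list(product([-1, 1], repeat=3))

# 2^(3-1) half fraction: choose levels of A and B freely and set C = A*B.
# Only 4 conditions are run; main effects remain estimable but are
# aliased with two-way interactions.
fractional = [(a, b, a * b) for a, b in product([-1, 1], repeat=2)]

print(len(full), len(fractional))   # 8 4
```

Each added factor doubles the full design but the fractional design can stay small, which is the "no loss of power" property the entry refers to: every participant contributes to estimating every main effect.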
Cluster randomised controlled trial
A randomized controlled trial in which ‘clusters’ (typically health care centers or primary care practices), rather than individual patients, are randomized.
Oliveira-Ciabati, L. et al., Reprod. Health (2017).
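Because outcomes within a cluster are correlated, cluster randomization inflates the required sample size by the design effect, 1 + (m - 1) * ICC, where m is the cluster size and ICC the intra-cluster correlation. A small illustration with made-up values:

```python
# Design effect (DEFF) for a cluster randomised trial: with average
# cluster size m and intra-cluster correlation rho, the required
# sample size is DEFF times that of an individually randomised trial.
def design_effect(m, rho):
    return 1 + (m - 1) * rho

# Illustration: 20 patients per practice and an ICC of 0.05
# (hypothetical numbers, chosen only for the example).
deff = design_effect(20, 0.05)
print(round(deff, 2))   # 1.95, i.e. roughly 95% more participants needed
```

Even a small ICC matters when clusters are large, which is why cluster trials are usually planned with this correction.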
Cohort study (retro- and prospective)
Observational design, in which groups of patients are followed over time. Usually, multiple exposures and outcomes can be defined in a cohort. Retro- and prospective mostly refers to the timing of data acquisition (before or after designing the study). Patients are sampled on the basis of exposure. Information about baseline characteristics is obtained, and the occurrence of outcomes is assessed during a specified follow-up period. At baseline, all exposed or unexposed persons or both may be included.
Vandenbroucke, J. P. British Medical Journal (1991). doi:10.1136/bmj.302.6775.528-d
Controlled before-after study / non-randomized controlled trial (CBA / NRCT)
A study in which observations are made before and after the implementation of an intervention, both in a group that receives the intervention and in a control group that does not.
NB: reference still to be added.
Controlled clinical trial (CCT)
A clinical study that includes a comparison (control) group. The comparison group receives a placebo, another treatment, or no treatment at all.
van der Meij, E. et al., Lancet (2018). doi:10.1016/S0140-6736(18)31113-9
Cross-sectional study
Observational study design, which samples the exposure and outcome at one moment in time. Useful to get quick insight in possible associations. Drawback is the lack of follow-up time to study relations between exposure and outcome over time.
Hansen, A. H. et al., J. Med. Internet Res. (2018). doi:10.2196/11322
Crossover study
Randomized, parallel group clinical trials often require large groups of patients; this is expensive and takes time. A randomized cross-over trial can be an efficient and more affordable alternative. A cross-over design can be used to study chronic disorders in which treatments have temporary effects. Participants receive all treatments in consecutive periods and outcomes are measured after every period. In general, only a quarter of the total group size is needed for cross-over studies compared with parallel group studies.
Bonten, T. N. et al., Ned. Tijdschr. Geneeskd. (2013).
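The "quarter of the total group size" claim follows from standard sample-size arithmetic: a within-subject comparison with within-participant correlation rho needs (1 - rho)/2 of the parallel-group total, which is one quarter when rho = 0.5. A sketch under those textbook assumptions (normally distributed outcome, two-sided alpha of 0.05, 80% power; all parameter values below are illustrative):

```python
from math import ceil
from statistics import NormalDist

Z = NormalDist()

def parallel_total_n(delta, sigma, alpha=0.05, power=0.80):
    # Two-arm parallel design: per-arm n = 2 * (sigma * (z_a + z_b) / delta)^2
    za, zb = Z.inv_cdf(1 - alpha / 2), Z.inv_cdf(power)
    per_arm = 2 * (sigma * (za + zb) / delta) ** 2
    return 2 * ceil(per_arm)

def crossover_total_n(delta, sigma, rho=0.5, alpha=0.05, power=0.80):
    # Within-subject comparison: the total scales by (1 - rho) / 2,
    # i.e. one quarter of the parallel total when rho = 0.5.
    return ceil(parallel_total_n(delta, sigma, alpha, power) * (1 - rho) / 2)

# Detect a half standard deviation difference (delta = 0.5, sigma = 1):
print(parallel_total_n(0.5, 1.0), crossover_total_n(0.5, 1.0))   # 126 32
```

The saving grows with the within-participant correlation, which is why crossover designs suit stable chronic conditions with temporary treatment effects.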
Non-inferiority trial
Demonstrating superiority of the new solution in terms of quality or efficacy of treatment is not always necessary, as the telemedicine/e-health solution may have other types of advantages, including saved travel time or saved costs. Testing that the new solution is not inferior to a traditional counterpart may therefore be sufficient in many cases.
Kummervold, P. E. et al., Journal of Medical Internet Research (2012). doi:10.2196/jmir.2169
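Non-inferiority is typically assessed with a confidence-interval test: the new solution is declared non-inferior if the lower confidence bound of the difference (new minus standard) lies above a pre-specified negative margin. A sketch assuming a binary success outcome and a normal approximation; the success rates, sample sizes, and margin are made up for illustration:

```python
from statistics import NormalDist

def non_inferior(p_new, p_std, n_new, n_std, margin, alpha=0.025):
    # One-sided test via the lower bound of a Wald-type confidence
    # interval for the difference in success proportions.
    diff = p_new - p_std
    se = (p_new * (1 - p_new) / n_new + p_std * (1 - p_std) / n_std) ** 0.5
    lower = diff - NormalDist().inv_cdf(1 - alpha) * se
    # Non-inferiority is claimed if the lower bound exceeds -margin.
    return lower > -margin

# Telehealth arm 82% success vs 84% usual care, 400 per arm:
print(non_inferior(0.82, 0.84, 400, 400, margin=0.10))
```

Note that the conclusion depends heavily on the margin, which must be justified clinically and fixed before the trial starts.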
Pragmatic randomised controlled trial (P-RCT)
The term “pragmatic” for RCTs was introduced half a century ago. In contrast to “explanatory” RCTs that test hypotheses on whether the intervention causes an outcome of interest in ideal circumstances, “pragmatic” RCTs aim to provide information on the relative merits of real-world clinical alternatives in routine care. A critical aim of an explanatory RCT is to ensure internal validity (prevention of bias); conversely, a pragmatic RCT focuses on maximizing external validity (generalizability of the results to many real-world settings), but should try to preserve as much internal validity as possible.
Danaher, B. G., Annals of Behavioral Medicine (2009). doi:10.1007/s12160-009-9129-0
Dal-Ré, R., BMC Med. (2018). doi:10.1186/s12916-018-1038-2
Preference clinical trial (PCT)
In a preference clinical trial (PCT), two or more health-care interventions are compared among several groups of patients, at least some of whom have purposefully chosen the intervention to be administered to them. This stands in contrast to the randomized, controlled clinical trial (RCT), where patients are randomly assigned to receive one of the available test interventions.
Kowalski, C. J. et al., Perspect. Biol. Med. (2013). doi:10.1353/pbm.2013.0004
Pretest-posttest design
The basic premise behind the pretest–posttest design involves obtaining a pretest measure of the outcome of interest prior to administering some treatment, followed by a posttest on the same measure after treatment occurs. Pretest–posttest designs are employed in both experimental and quasi-experimental research and can be used with or without control groups. For example, quasi-experimental pretest–posttest designs may or may not include control groups, whereas experimental pretest–posttest designs must include control groups. Furthermore, despite the versatility of the pretest–posttest designs, in general, they still have limitations, including threats to internal validity. Although such threats are of particular concern for quasi-experimental pretest–posttest designs, experimental pretest–posttest designs also contain threats to internal validity.
Grigsby, J. et al., J. Telemed. Telecare (2006). doi:10.1258/135763306778393162
Salkind, N. J., Encycl. Res. Des. (2010). doi:10.4135/9781506326139.n538
Randomised controlled trial
The randomised controlled trial (RCT) is a trial in which subjects are randomly assigned to one of two groups: one (the experimental group) receiving the intervention that is being tested, and the other (the comparison group or control) receiving an alternative (conventional) treatment. The two groups are then followed up to see if there are any differences between them in outcome. The results and subsequent analysis of the trial are used to assess the effectiveness of the intervention, which is the extent to which a treatment, procedure, or service does patients more good than harm.
Kendall, J. M., Emergency Medicine Journal (2003).
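Random assignment is commonly implemented with permuted blocks so that group sizes stay balanced throughout recruitment. A minimal sketch (the block size and seed are arbitrary choices for the example):

```python
import random

# Blocked random allocation for a two-arm trial: within each permuted
# block of 4, exactly two participants go to each arm.
def block_randomise(n_participants, block_size=4, seed=42):
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        block = ["intervention", "control"] * (block_size // 2)
        rng.shuffle(block)               # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

groups = block_randomise(20)
print(groups.count("intervention"), groups.count("control"))  # 10 10
```

In practice the allocation list is generated in advance and concealed from recruiters, so that the next assignment cannot be predicted.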
Sequential Multiple Assignment Randomized Trial (SMART)
The SMART approach is a randomized experimental design that has been developed especially for building time-varying adaptive interventions. It enables the intervention scientist to address questions about the selection and sequencing of intervention components in a holistic yet rigorous manner, taking into account the order in which components are presented rather than considering each component in isolation. A SMART trial provides an empirical basis for selecting appropriate decision rules and tailoring variables. The end goal of the SMART approach is the development of evidence-based adaptive intervention strategies, which are then evaluated in a subsequent RCT.
Baker, T. B. et al., Journal of Medical Internet Research (2014). doi:10.2196/jmir.2925 Collins, L. M. et al., Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.022
Danaher, B. G. et al., Annals of Behavioral Medicine (2009). doi:10.1007/s12160-009-9129-0
Almirall, D. et al., Transl. Behav. Med. (2014). doi:10.1007/s13142-014-0265-0
Mohr, D. C. et al., J. Med. Internet Res. (2015). doi:10.2196/jmir.4391
Stepped wedge trial
In a stepped wedge design, an intervention is rolled out sequentially to the trial participants (either as individuals or clusters of individuals) over a number of time periods. The order in which the different individuals or clusters receive the intervention is determined at random and, by the end of the random allocation, all individuals or groups will have received the intervention. Stepped wedge designs incorporate data collection at each point where a new group (step) receives the intervention. Data analysis to determine the overall effectiveness of the intervention subsequently involves comparison of the data points in the control section of the wedge with those in the intervention section.
There are two key (non-exclusive) situations in which a stepped wedge design is considered advantageous compared to a traditional parallel design. First, if there is a prior belief that the intervention will do more good than harm, rather than a prior belief of equipoise, it may be unethical to withhold the intervention from a proportion of the participants, or to withdraw it as would occur in a cross-over design. Second, there may be logistical, practical or financial constraints that mean the intervention can only be implemented in stages.
Brown, C. A. et al., BMC Medical Research Methodology (2006). doi:10.1186/1471-2288-6-54
Hussey, M. A. et al., Contemp. Clin. Trials (2007). doi:10.1016/j.cct.2006.05.007
Spiegelman, D., Am. J. Public Health (2016). doi:10.2105/AJPH.2016.303068
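The rollout can be pictured as a cluster-by-period matrix in which 0 marks control and 1 marks intervention periods. A sketch assuming one all-control baseline period and one cluster crossing over per step (the cluster count and seed are arbitrary):

```python
import random

# Stepped wedge allocation: every cluster starts in the control
# condition and crosses over to the intervention in a randomly
# determined order, until all clusters are exposed.
def stepped_wedge_schedule(n_clusters, seed=0):
    order = list(range(n_clusters))
    random.Random(seed).shuffle(order)      # random crossover order
    periods = n_clusters + 1                # one all-control baseline period
    # schedule[c][t] is 1 from the period after cluster c's step onward
    return [[1 if t > order.index(c) else 0 for t in range(periods)]
            for c in range(n_clusters)]

for row in stepped_wedge_schedule(4):
    print(row)                              # the "wedge" of 0s and 1s
```

The triangular pattern of 0s and 1s is the wedge the design is named after: each column compares clusters that have and have not yet crossed over.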
Trial of intervention principles (TIPs)
Trials of Behavioral intervention technologies (BIT) should be viewed as experiments to test principles within that BIT that can then be more broadly applied by developers, designers, and researchers in the creation of BITs and the science behind technology-based behavioral intervention. As such, we refer to these trials as “Trials of Intervention Principles” (TIPs), as they test the theoretical concepts represented within the BIT, rather than the specific technological instantiation of the BIT itself.
Mohr, D. C. et al., J. Med. Internet Res. (2015). doi:10.2196/jmir.4391
Wait list control group study
A wait list control group, also called a wait list comparison, is a group of participants included in an outcome study that is assigned to a waiting list and receives intervention after the active treatment group. This control group serves as an untreated comparison group during the study, but eventually goes on to receive treatment at a later date. Wait list control groups are often used when it would be unethical to deny participants access to treatment, provided the wait is still shorter than that for routine services.
Nguyen, H. Q. et al., Canadian Journal of Nursing Research (2007).
Elliott, S. A. et al., Behav. Res. Ther. (2002). doi:10.1016/S0005-7967(01)00082-1
Cost-effectiveness analysis
Cost-effectiveness analysis (CEA) produces a numerical ratio, the incremental cost-effectiveness ratio, expressed as cost (in dollars or euros) per unit of health gained (for example, per quality-adjusted life year, QALY). This ratio is used to express the difference in cost effectiveness between new diagnostic tests or treatments and current ones.
de la Torre-Díez, I. et al., Telemed. e-Health (2015). doi:10.1089/tmj.2014.0053
Elbert, N. J. et al., Journal of Medical Internet Research (2014). doi:10.2196/jmir.2790
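The incremental cost-effectiveness ratio itself is simple arithmetic: the difference in cost divided by the difference in health gain. A sketch with made-up costs and QALY values:

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra
# unit of health gained from the new intervention.
def icer(cost_new, cost_old, effect_new, effect_old):
    return (cost_new - cost_old) / (effect_new - effect_old)

# Illustration: a telehealth service costing 12,000 vs usual care at
# 9,000, yielding 8.2 vs 8.0 QALYs, gives 3,000 / 0.2 = 15,000 per
# QALY gained (all figures invented for the example).
print(icer(12_000, 9_000, 8.2, 8.0))
```

The resulting value is then compared against a willingness-to-pay threshold to decide whether the new intervention offers acceptable value for money.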
Cross-sectional study
Observational study design that samples the exposure and outcome at one moment in time. Useful to get quick insight into possible associations. A drawback is the lack of follow-up time to study relations between exposure and outcome over time.
Hansen, A. H. et al., J. Med. Internet Res. (2018). doi:10.2196/11322
Crossover study
Randomized, parallel group clinical trials often require large groups of patients; this is expensive and takes time. A randomized cross-over trial can be an efficient and more affordable alternative. A cross-over design can be used to study chronic disorders in which treatments have temporary effects. Participants receive all treatments in consecutive periods and outcomes are measured after every period. In general, only a quarter of the total group size is needed for cross-over studies compared with parallel group studies.
Bonten, T. N. et al., Ned. Tijdschr. Geneeskd. (2013).
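The efficiency of the crossover design comes from estimating each treatment effect within the same participant. A minimal sketch of a 2x2 crossover analysis, with invented outcome scores (lower = better) and period effects ignored for simplicity:

```python
from statistics import mean

# Sketch: within-subject treatment contrast in a 2x2 crossover.
# Each tuple is (outcome on treatment A, outcome on treatment B)
# for one participant; all scores are invented.
pairs = [(6.0, 7.5), (5.5, 6.0), (7.0, 8.0), (6.5, 7.0)]

within_diffs = [a - b for a, b in pairs]
print(mean(within_diffs))  # -0.875: A scores lower (better) on average
```

Because between-person variability cancels out in the within-subject differences, far fewer participants are needed than in a parallel-group comparison.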
Methods comparison study
Two different overarching methodologies for method-comparison studies have been commonly used: equivalence studies and non-inferiority studies. In equivalence studies, we are interested in whether the new assessment does not differ from the conventional (usually in-person) assessment in either direction by a pre-specified amount (i.e. a two-sided test). In an equivalence trial the new assessment method will be selected regardless of whether it is better or worse than an existing assessment, as long as the difference falls within the predefined zone of allowable difference (and meets other criteria, such as cost-effectiveness and stakeholder satisfaction). Commonly in telehealth, the existing model of care (e.g. specialist assessment in a tertiary hospital for cognitive impairment) will not be replaced; rather, the telehealth option will be used for people who cannot access conventional services. In this case, the question is whether the telehealth assessment is 'as good' as, or rather 'not inferior' to, conventional practice.
Russell, T. G. et al., J. Telemed. Telecare (2017). doi:10.1177/1357633X17727772
Non-inferiority trial
Demonstrating superiority of the new solution in terms of quality or efficacy of treatment is not always necessary, as the telemedicine/e-health solution/application may have other types of advantages, including saved travel time or saved costs. Testing that the new solution is not inferior to a traditional counterpart may therefore be sufficient in many cases.
Kummervold, P. E. et al., Journal of Medical Internet Research (2012). doi:10.2196/jmir.2169
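The non-inferiority decision reduces to comparing the lower bound of the confidence interval for the treatment difference against a pre-specified margin. A minimal sketch, with the margin and interval bounds invented:

```python
# Sketch: non-inferiority decision rule on a difference in outcomes
# (new minus standard, higher = better). Margin and CI are invented.
def non_inferior(ci_lower, margin):
    """New treatment is non-inferior if the lower bound of the
    confidence interval for (new - standard) lies above -margin."""
    return ci_lower > -margin

print(non_inferior(ci_lower=-0.02, margin=0.05))  # True
print(non_inferior(ci_lower=-0.08, margin=0.05))  # False
```

The margin must be fixed in advance on clinical grounds; choosing it after seeing the data invalidates the test.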
Patient reported outcome measures (PROMs)
PROMs seek to ascertain patients’ views of their symptoms, their functional status, and their health related quality of life. PROMs are often wrongly referred to as so called “outcome measures,” though they actually measure health—by comparing a patient’s health at different times, the outcome of the care received can be determined. It’s important to distinguish PROMs from patient reported experience measures (PREMs), which focus on aspects of the humanity of care, such as being treated with dignity or being kept waiting.
Black, N., BMJ (2013). doi:10.1136/bmj.f167
Pragmatic randomised controlled trial (P-RCT)
The term “pragmatic” for RCTs was introduced half a century ago. In contrast to “explanatory” RCTs that test hypotheses on whether the intervention causes an outcome of interest in ideal circumstances, “pragmatic” RCTs aim to provide information on the relative merits of real-world clinical alternatives in routine care. A critical aim of an explanatory RCT is to ensure internal validity (prevention of bias); conversely, a pragmatic RCT focuses on maximizing external validity (generalizability of the results to many real-world settings), but should try to preserve as much internal validity as possible.
Danaher, B. G., Annals of Behavioral Medicine (2009). doi:10.1007/s12160-009-9129-0 Dal-Ré, R., BMC Med. (2018). doi:10.1186/s12916-018-1038-2
Preference clinical trial (PCT)
In a preference clinical trial (PCT), two or more health-care interventions are compared among several groups of patients, at least some of whom have purposefully chosen the intervention to be administered to them. This stands in contrast to the randomized, controlled clinical trial (RCT), where patients are randomly assigned to receive one of the available test interventions.
Kowalski, C. J. et al., Perspect. Biol. Med. (2013). doi:10.1353/pbm.2013.0004
Pretest-posttest design
The basic premise behind the pretest–posttest design involves obtaining a pretest measure of the outcome of interest prior to administering some treatment, followed by a posttest on the same measure after treatment occurs. Pretest–posttest designs are employed in both experimental and quasi-experimental research and can be used with or without control groups: quasi-experimental pretest–posttest designs may or may not include control groups, whereas experimental pretest–posttest designs must include control groups. Despite their versatility, pretest–posttest designs still have limitations, including threats to internal validity. Although such threats are of particular concern for quasi-experimental pretest–posttest designs, experimental pretest–posttest designs also contain threats to internal validity.
Grigsby, J. et al., J. Telemed. Telecare (2006). doi:10.1258/135763306778393162 Salkind, N. J., Encycl. Res. Des. (2010). doi:10.4135/9781506326139.n538
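With a control group, the simplest analysis of a pretest–posttest design is a difference-in-gains comparison. A minimal sketch, with all scores invented:

```python
from statistics import mean

# Sketch: gain-score analysis for a pretest-posttest design with a
# control group. Each tuple is (pretest, posttest); scores invented.
treatment = [(10, 16), (12, 17), (9, 15)]
control = [(11, 12), (10, 12), (12, 13)]

def mean_gain(pairs):
    """Average pre-to-post change within a group."""
    return mean(post - pre for pre, post in pairs)

effect = mean_gain(treatment) - mean_gain(control)
print(round(effect, 2))  # 4.33: treatment group gained more
```

Without a control group, only the raw pre-to-post change is available, which leaves maturation and regression-to-the-mean effects unaccounted for.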
Propensity score
The propensity score is the conditional probability of receiving treatment A rather than treatment B, given the observed covariates. Rosenbaum and Rubin (1983) state that the propensity score is a balancing score in the sense that it is a function of the observed covariates such that conditional on the propensity score, the distribution of observed baseline covariates will be similar between the two treatment groups. Then, the propensity score methods can be used to assess treatment group comparability with respect to patient baseline covariates and adjust for imbalances in those covariates to allow for a sensible treatment comparison in clinical outcomes. More importantly, for observational studies in regulatory settings, the methodology can be utilized to design an observational study and mimic RCT in the aspects of study design integrity and interpretability of study results.
Campbell, G. et al., J. Biopharm. Stat. (2016). doi:10.1080/10543406.2015.1092037
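One common use of the propensity score is stratification: subjects are grouped into score strata, within which treated and untreated groups should be comparable on observed covariates. A minimal sketch, assuming scores have already been estimated (e.g. by logistic regression of treatment on covariates); all data are invented:

```python
from statistics import mean

# Sketch: stratifying subjects into propensity-score strata and
# comparing a baseline covariate (age) between treatment arms
# within each stratum. Scores and ages are invented.
subjects = [
    # (propensity_score, treated, age)
    (0.2, 0, 55), (0.25, 1, 57), (0.6, 0, 63),
    (0.65, 1, 64), (0.8, 1, 70), (0.85, 0, 71),
]

def stratum(score, n_strata=5):
    """Map a score in [0, 1] to one of n_strata equal-width strata."""
    return min(int(score * n_strata), n_strata - 1)

strata = {}
for score, treated, age in subjects:
    strata.setdefault(stratum(score), {0: [], 1: []})[treated].append(age)

for s, groups in sorted(strata.items()):
    for arm in (0, 1):
        if groups[arm]:
            print(s, arm, mean(groups[arm]))
```

If the score balances the covariates, the within-stratum means for treated and untreated subjects should be close, even when the unadjusted group means differ substantially.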
Randomised controlled trial
The randomised controlled trial (RCT) is a trial in which subjects are randomly assigned to one of two groups: one (the experimental group) receiving the intervention that is being tested, and the other (the comparison group or control) receiving an alternative (conventional) treatment. The two groups are then followed up to see if there are any differences between them in outcome. The results and subsequent analysis of the trial are used to assess the effectiveness of the intervention, which is the extent to which a treatment, procedure, or service does patients more good than harm.
Kendall, J. M., Emergency Medicine Journal (2003).
Sequential Multiple Assignment Randomized Trial (SMART)
The SMART approach is a randomized experimental design that has been developed especially for building time-varying adaptive interventions. The SMART approach enables the intervention scientist to address questions like these in a holistic yet rigorous manner, taking into account the order in which components are presented rather than considering each component in isolation. A SMART trial provides an empirical basis for selecting appropriate decision rules and tailoring variables. The end goal of the SMART approach is the development of evidence-based adaptive intervention strategies, which are then evaluated in a subsequent RCT.
Baker, T. B. et al., Journal of Medical Internet Research (2014). doi:10.2196/jmir.2925 Collins, L. M. et al., Am. J. Prev. Med. (2007). doi:10.1016/j.amepre.2007.01.022
Danaher, B. G. et al., Annals of Behavioral Medicine (2009). doi:10.1007/s12160-009-9129-0
Almirall, D. et al., Transl. Behav. Med. (2014). doi:10.1007/s13142-014-0265-0
Mohr, D. C. et al., J. Med. Internet Res. (2015). doi:10.2196/jmir.4391
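The defining feature of a SMART is re-randomization driven by an intermediate response. A minimal sketch of such a decision rule; the component names, stages, and the simple responder criterion are all hypothetical:

```python
import random

# Sketch of a SMART-style adaptive decision rule. Non-responders to
# the first-stage component are re-randomized between two
# second-stage options; component names are hypothetical.
def first_stage(rng=random):
    return rng.choice(["app_only", "app_plus_coach"])

def second_stage(responded, rng=random):
    if responded:
        return "continue"  # responders stay on their first-stage component
    return rng.choice(["add_sms_reminders", "switch_to_phone_calls"])

arm = first_stage()
followup = second_stage(responded=False)
print(arm, followup)
```

Comparing outcomes across the embedded sequences then tells the investigator which decision rule (e.g. "augment" vs "switch" for non-responders) performs best.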
Stepped wedge trial
In a stepped wedge design, an intervention is rolled out sequentially to the trial participants (either as individuals or clusters of individuals) over a number of time periods. The order in which the different individuals or clusters receive the intervention is determined at random and, by the end of the random allocation, all individuals or groups will have received the intervention. Stepped wedge designs incorporate data collection at each point where a new group (step) receives the intervention. Data analysis to determine the overall effectiveness of the intervention subsequently involves comparison of the data points in the control section of the wedge with those in the intervention section. There are two key (non-exclusive) situations in which a stepped wedge design is considered advantageous when compared to a traditional parallel design. First, if there is a prior belief that the intervention will do more good than harm, rather than a prior belief of equipoise, it may be unethical to withhold the intervention from a proportion of the participants, or to withdraw the intervention as would occur in a cross-over design. Second, there may be logistical, practical or financial constraints that mean the intervention can only be implemented in stages.
Brown, C. A. et al., BMC Medical Research Methodology (2006). doi:10.1186/1471-2288-6-54 Hussey, M. A. et al., Contemp. Clin. Trials (2007). doi:10.1016/j.cct.2006.05.007
Spiegelman, D., Am. J. Public Health (2016). doi:10.2105/AJPH.2016.303068
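The characteristic "wedge" can be seen in the rollout schedule itself. A minimal sketch generating a one-cluster-per-step schedule (0 = control, 1 = intervention); the cluster names and seed are invented:

```python
import random

# Sketch: generating a stepped-wedge rollout schedule. Clusters cross
# from control (0) to intervention (1) in a random order, one per
# step; by the final period all clusters receive the intervention.
def stepped_wedge_schedule(clusters, seed=0):
    rng = random.Random(seed)
    order = list(clusters)
    rng.shuffle(order)
    periods = len(order) + 1  # baseline period plus one step per cluster
    schedule = {}
    for step, cluster in enumerate(order, start=1):
        # control before the cluster's step, intervention from then on
        schedule[cluster] = [0] * step + [1] * (periods - step)
    return schedule

for cluster, row in stepped_wedge_schedule(["A", "B", "C"]).items():
    print(cluster, row)
```

Every cluster starts in the control condition (first column all 0) and ends in the intervention condition (last column all 1), with the crossover point randomized.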
Survey methods
Surveys are commonly used in telehealth research to assess patient satisfaction, patient experiences, patient preferences and attitudes, and the technical quality of a teleconsultation. The popularity of the survey as a method of measurement can be understood through three major strengths of this technique. First, confidential survey questions are well suited to capture individuals’ experiences, perceptions and attitudes. Second, pre-existing scales can be used across studies, enabling the comparison and replication of results. Third, the validity and reliability of survey instruments can be assessed through rigorous, transparent and well-accepted validation methods, providing the researcher with confidence that the measures tap the intended constructs, and provide an accurate measurement.
Langbecker, D. et al., J. Telemed. Telecare 23, 770–779 (2017).
Trial of intervention principles (TIPs)
Trials of Behavioral intervention technologies (BIT) should be viewed as experiments to test principles within that BIT that can then be more broadly applied by developers, designers, and researchers in the creation of BITs and the science behind technology-based behavioral intervention. As such, we refer to these trials as “Trials of Intervention Principles” (TIPs), as they test the theoretical concepts represented within the BIT, rather than the specific technological instantiation of the BIT itself.
Mohr, D. C. et al., J. Med. Internet Res. (2015). doi:10.2196/jmir.4391
Wait list control group study
A wait list control group, also called a wait list comparison, is a group of participants included in an outcome study that is assigned to a waiting list and receives intervention after the active treatment group. This control group serves as an untreated comparison group during the study, but eventually goes on to receive treatment at a later date. Wait list control groups are often used when it would be unethical to deny participants access to treatment, provided the wait is still shorter than that for routine services.
Nguyen, H. Q. et al., Canadian Journal of Nursing Research (2007). Elliott, S. A. et al., Behav. Res. Ther. (2002). doi:10.1016/S0005-7967(01)00082-1
Focus group
A focus group is a group discussion on a particular topic organised for research purposes. This discussion is guided, monitored and recorded by a researcher (sometimes called a moderator or facilitator). Focus groups are used for generating information on collective views, and the meanings that lie behind those views.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192 Peeters, J. M. et al., JMIR Med. Informatics (2016). doi:10.2196/medinform.4515
Interview
There are three fundamental types of research interviews: structured, semi-structured and unstructured. Structured interviews are, essentially, verbally administered questionnaires, in which a list of predetermined questions is asked, with little or no variation and with no scope for follow-up questions to responses that warrant further elaboration. Conversely, unstructured interviews do not reflect any preconceived theories or ideas and are performed with little or no organization. Semi-structured interviews consist of several key questions that help to define the areas to be explored, but also allow the interviewer or interviewee to diverge in order to pursue an idea or response in more detail.
Gill, P. et al., Br. Dent. J. (2008). doi:10.1038/bdj.2008.192
Living lab
A Living Lab is a user-centered, open innovation ecosystem based on a systematic user co-creation approach, integrating research and innovation processes in real-life communities and settings.
Swinkels, I. C. S. et al., Journal of Medical Internet Research (2018). doi:10.2196/jmir.9110