References

Adamou M, Jones SL, Marks L, Lowe D Efficacy of Continuous Performance Testing in Adult ADHD in a Clinical Sample Using QbTest. J Atten Disord. 2022; 26:(11)1483-1491 https://doi.org/10.1177/10870547221079798

Ahn R, Woodbridge A, Abraham A, Saba S, Korenstein D, Madden E, Boscardin WJ, Keyhani S Financial ties of principal investigators and randomized controlled trial outcomes: cross sectional study. BMJ. 2017; 356 https://doi.org/10.1136/bmj.i6770

Aveyard H Doing a Literature Review in Health and Social Care, 4th edn. Berks: Open University Press; 2018

Bijlenga D, Ulberstad F, Thorell LB, Christiansen H, Hirsch O, Kooij JJS Objective assessment of attention-deficit/hyperactivity disorder in older adults compared with controls using the QbTest. Int J Geriatr Psychiatry. 2019; 34:(10)1526-1533 https://doi.org/10.1002/gps.5163

Braun V, Clarke V Thematic Analysis: A Practical Guide. SAGE Publications; 2022

Brunkhorst-Kanaan N, Verdenhalven M, Kittel-Schneider S, Vainieri I, Reif A, Grimm O The Quantified Behavioral Test-A Confirmatory Test in the Diagnostic Process of Adult ADHD?. Front Psychiatry. 2020; 11 https://doi.org/10.3389/fpsyt.2020.00216

Chan WWY, Shum KK, Sonuga-Barke EJS Attention-deficit/hyperactivity disorder (ADHD) in cultural context: Do parents in Hong Kong and the United Kingdom adopt different thresholds when rating symptoms, and if so why?. Int J Methods Psychiatr Res. 2022; 31:(3) https://doi.org/10.1002/mpr.1923

Coughlan M, Cronin P Doing a Literature Review in Nursing, Health and Social Care. London: SAGE; 2020

Coughlan M, Cronin P, Ryan F Step-by-step guide to critiquing research. Part 1: quantitative research. Br J Nurs. 2007; 16:(11)658-63 https://doi.org/10.12968/bjon.2007.16.11.23681

Critical Appraisal Skills Programme. Critical Appraisal Checklists. 2018. https://casp-uk.net/casp-tools-checklists (accessed 17 September 2024)

Edebol H, Helldin L, Norlander T Measuring adult Attention Deficit Hyperactivity Disorder using the Quantified Behavior Test Plus. Psych J. 2013; 2:(1)48-62 https://doi.org/10.1002/pchj.17

Ellis P Evidence-based practice in nursing, 4th edn. London: Learning Matters; 2019

Emser TS, Johnston BA, Steele JD, Kooij S, Thorell L, Christiansen H Assessing ADHD symptoms in children and adults: evaluating the role of objective measures. Behavioral and Brain Functions. 2018; 14:(1) https://doi.org/10.1186/s12993-018-0143-x

Eriksen MB, Frandsen TF The impact of patient, intervention, comparison, outcome (PICO) as a search strategy tool on literature search quality: a systematic review. J Med Libr Assoc. 2018; 106:(4)420-431 https://doi.org/10.5195/jmla.2018.345

Fink A Conducting Research Literature Reviews. SAGE Publications; 2019

Fridman M, Banaschewski T, Sikirica V, Quintero J, Chen KS Access to diagnosis, treatment, and supportive services among pharmacotherapy-treated children/adolescents with ADHD in Europe: data from the Caregiver Perspective on Pediatric ADHD survey. Neuropsychiatr Dis Treat. 2017; 13:947-958 https://doi.org/10.2147/NDT.S128752

Groom MJ, Young Z, Hall CL, Gillott A, Hollis C The incremental validity of a computerised assessment added to clinical rating scales to differentiate adult ADHD from autism spectrum disorder. Psychiatry Res. 2016; 243:168-73 https://doi.org/10.1016/j.psychres.2016.06.042

Hall CL, James M, Brown S Protocol investigating the clinical utility of an objective measure of attention, impulsivity and activity (QbTest) for optimising medication management in children and young people with ADHD ‘QbTest Utility for Optimising Treatment in ADHD’ (QUOTA): a feasibility randomised controlled trial. BMJ Open. 2018; 8:(2) https://doi.org/10.1136/bmjopen-2017-021104

Hanneman R, Kposowa AJ, Riddle M Basic Statistics for Social Research. San Francisco, CA: Jossey-Bass; 2013

Hollis C, Hall CL, Guo B, James M, Boadu J, Groom MJ, Brown N, Kaylor-Hughes C, Moldavsky M, Valentine AZ, Walker GM, Daley D, Sayal K, Morriss R The impact of a computerised test of attention and activity (QbTest) on diagnostic decision-making in children and young people with suspected attention deficit hyperactivity disorder: single-blind randomised controlled trial. J Child Psychol Psychiatry. 2018; 59:(12)1298-1308 https://doi.org/10.1111/jcpp.12921

Hult N, Kadesjö J, Kadesjö B, Gillberg C, Billstedt E ADHD and the QbTest: Diagnostic Validity of QbTest. J Atten Disord. 2018; 22:(11)1074-1080 https://doi.org/10.1177/1087054715595697

Jakobsen JC, Gluud C, Wetterslev J, Winkel P When and how should multiple imputation be used for handling missing data in randomised clinical trials - a practical guide with flowcharts. BMC Med Res Methodol. 2017; 17:(1) https://doi.org/10.1186/s12874-017-0442-1

Javadi M, Zarea K Understanding Thematic Analysis and Its Pitfall. Journal of Client Care. 2016; 1:33-39 https://doi.org/10.15412/J.JCC.02010107

Johansson V, Norén Selinus E, Kuja-Halkola R, Lundström S, Durbeej N, Anckarsäter H, Lichtenstein P, Hellner C The Quantified Behavioral Test Failed to Differentiate ADHD in Adolescents With Neurodevelopmental Problems. J Atten Disord. 2021; 25:(3)312-321 https://doi.org/10.1177/1087054718787034

Lennox C, Hall CL, Carter LA, Beresford B, Young S, Kraam A, Brown N, Wilkinson-Cunningham L, Reeves M, Chitsabesan P FACT: a randomised controlled trial to assess the feasibility of QbTest in the assessment process of attention deficit hyperactivity disorder (ADHD) for young people in prison-a feasibility trial protocol. BMJ Open. 2020; 10:(1) https://doi.org/10.1136/bmjopen-2019-035519

McBride D The Process of Research and Statistical Analysis in Psychology. London: SAGE; 2020

McGonnell M, Corkum P, McKinnon M, MacPherson M, Williams T, Davidson C, Jones DB, Stephenson D Doing it right: an interdisciplinary model for the diagnosis of ADHD. J Can Acad Child Adolesc Psychiatry. 2009; 18:(4)283-6

McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol. 2016; 75:40-6 https://doi.org/10.1016/j.jclinepi.2016.01.021

McLeod S Extraneous Variables In Research: Types & Examples. 2023. https://www.simplypsychology.org/extraneous-variable.html (accessed 17 September 2024)

Méndez-Freije I, Areces D, Rodríguez C Language Skills in Children with Attention Deficit Hyperactivity Disorder and Developmental Language Disorder: A Systematic Review. Children (Basel). 2023; 11:(1) https://doi.org/10.3390/children11010014

Merrill RM, Merrill AW, Madsen M Attention-Deficit Hyperactivity Disorder and Comorbid Mental Health Conditions Associated with Increased Risk of Injury. Psychiatry J. 2022; 2022 https://doi.org/10.1155/2022/2470973

Milioni AL, Chaim TM, Cavallet M High IQ May “Mask” the Diagnosis of ADHD by Compensating for Deficits in Executive Functions in Treatment-Naïve Adults With ADHD. J Atten Disord. 2017; 21:(6)455-464

Moher D, Liberati A, Tetzlaff J, Altman DG Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009; 6:(7) https://doi.org/10.1371/journal.pmed.1000097

National Institute for Health and Care Excellence. Attention Deficit Hyperactivity disorder: Diagnosis and Management. 2018. https://www.nice.org.uk/guidance/ng87/chapter/Recommendations (accessed 17 September 2024)

National Institute for Health and Care Excellence. QbTest for the assessment of attention deficit hyperactivity disorder (ADHD). 2023. https://www.nice.org.uk/advice/mib318 (accessed 17 September 2024)

NIHR Collaboration for Leadership in Applied Health Research and Care. ADHD care in the East Midlands. 2017. https://arc-em.nihr.ac.uk/arc-store-resources/adhd-care-east-midlands (accessed 17 September 2024)

Parikh R, Mathai A, Parikh S, Chandra Sekhar G, Thomas R Understanding and using sensitivity, specificity and predictive values. Indian J Ophthalmol. 2008; 56:(1)45-50 https://doi.org/10.4103/0301-4738.37595

Pellegrini S, Murphy M, Lovett E The QbTest for ADHD assessment: Impact and implementation in Child and Adolescent Mental Health Services. Children and Youth Services Review. 2020; 114 https://doi.org/10.1016/j.childyouth.2020.105032

Qbtech. Accelerating the adoption of QbTest in the NHS. 2020. https://www.qbtech.com/blog/accelerating-the-adoption-of-qbtest-in-the-nhs (accessed 26 September 2024)

Reh V, Schmidt M, Lam L, Schimmelmann BG, Hebebrand J, Rief W, Christiansen H Behavioral Assessment of Core ADHD Symptoms Using the QbTest. J Atten Disord. 2015; 19:(12)1034-45 https://doi.org/10.1177/1087054712472981

Royal College of Psychiatrists. Attention deficit hyperactivity disorder (ADHD) in adults with intellectual disability. London: RCP; 2021

Sharma A, Singh B Evaluation of the role of Qb testing in attention deficit hyperactivity disorder. Royal College Spring Meeting; 2009

Slobodin O, Davidovitch M Gender Differences in Objective and Subjective Measures of ADHD Among Clinic-Referred Children. Front Hum Neurosci. 2019; 13 https://doi.org/10.3389/fnhum.2019.00441

Söderström S, Pettersson R, Nilsson KW Quantitative and subjective behavioural aspects in the assessment of attention-deficit hyperactivity disorder (ADHD) in adults. Nord J Psychiatry. 2014; 68:(1)30-7 https://doi.org/10.3109/08039488.2012.762940

Song P, Zha M, Yang Q, Zhang Y, Li X, Rudan I The prevalence of adult attention-deficit hyperactivity disorder: A global systematic review and meta-analysis. J Glob Health. 2021; 11 https://doi.org/10.7189/jogh.11.04009

Substance Abuse and Mental Health Services Administration. DSM-IV to DSM-5 Attention-Deficit/Hyperactivity Disorder Comparison. 2016. https://www.ncbi.nlm.nih.gov/books/NBK519712/table/ch3.t3/ (accessed 26 September 2024)

Thomas R, Sanders S, Doust J, Beller E, Glasziou P Prevalence of attention-deficit/hyperactivity disorder: a systematic review and meta-analysis. Pediatrics. 2015; 135:(4)e994-1001 https://doi.org/10.1542/peds.2014-3482

Villagomez AN, Muñoz FM, Peterson RL Neurodevelopmental delay: Case definition & guidelines for data collection, analysis, and presentation of immunization safety data. Vaccine. 2019; 37:(52)7623-7641 https://doi.org/10.1016/j.vaccine.2019.05.027

Vogt C Clinical Conundrums When Integrating the QbTest into a Standard ADHD Assessment of Children and Young People. Neuropediatrics. 2021; 52:(3)155-162 https://doi.org/10.1055/s-0040-1722674

Wang XQ, Albitos PJ, Hao YF, Zhang H, Yuan LX, Zang YF A review of objective assessments for hyperactivity in attention deficit hyperactivity disorder. J Neurosci Methods. 2022; 370 https://doi.org/10.1016/j.jneumeth.2022.109479

How accurate is the QbTest for measuring symptoms of ADHD in children and adults?

02 October 2024
Volume 6 · Issue 10

Abstract

This literature review aimed to assess the reliability and validity of the quantified behavioural test (QbTest) to measure the symptoms of attention deficit hyperactivity disorder (ADHD), determining the extent to which results of the selected studies were accurate and generalisable. A literature review was undertaken, with each paper appraised using the Critical Appraisal Skills Programme randomised controlled trial checklist. Thematic analysis was also implemented to identify key themes and relationships between data sets. Three papers concluded that the QbTest increases clinical efficiency without compromising diagnostic accuracy, while others identified limitations relating to its ability to identify symptoms of impulsivity correctly. Additional issues were identified relating to external validity, generalisability and the extent to which the QbTest could differentiate ADHD from other conditions. It is concluded that the QbTest is an unreliable means of assessing ADHD in both children and adults, particularly when used as a stand-alone assessment tool. Thematic analysis highlighted concerns around diagnostic accuracy and the QbTest's inability to differentiate symptoms of ADHD from other conditions – fundamental flaws affecting the overarching fidelity of QbTesting.

Attention deficit hyperactivity disorder (ADHD) is characterised by excessive levels of hyperactivity, impulsivity and inattention (Wang et al, 2022). Thomas et al (2015) report a worldwide prevalence of between 5.29% and 7.2% in children, while prevalence in adults is estimated at 6.76% (Song et al, 2021). Changes to National Institute for Health and Care Excellence (NICE, 2018) guidance, which placed ADHD within mental health services, alongside growing recognition of the condition, have put increasing pressure on healthcare providers.

Therefore, there is an urgent need for objective assessment methods to strengthen neurodevelopmental assessment processes, to accurately diagnose ADHD in a more streamlined manner (Wang et al, 2022).

The quantified behavioural test (QbTest; Qbtech Ltd) is an objective screening tool to supplement ADHD diagnosis. Hall et al (2018) found the QbTest improves reliability and clinical decision-making speed. Hollis et al (2018) also determined the QbTest had valuable applications in the assessment of ADHD in children, with clinicians 1.44 times more likely to reach a diagnostic decision. Moreover, Bijlenga et al (2019) concluded that the QbTest was a suitable tool for assessing ADHD in older adults.

However, the evidence base is conflicting, with some researchers claiming the QbTest is unable to accurately distinguish ADHD from healthy controls (Brunkhorst-Kanaan et al, 2020). Reh et al (2015) highlighted poor concurrent validity between the QbTest and psychometric assessment tools. Factors significantly affecting QbTest reliability and validity include clinicians' capabilities to accurately interpret data from the QbTest; the extent to which quantitative measurements are reflective of real-life behaviours; and the degree to which QbTesting reliably differentiates symptoms of ADHD from comorbid conditions, such as autism (Johansson et al, 2021; Vogt, 2021).

Despite this, the QbTest is now used across the NHS in England. Arguably, this is due to benefits relating to cost-effectiveness, with QbTesting estimated to save £80 000 per year, per clinic (Qbtech, 2020). Additionally, the NIHR Collaboration for Leadership in Applied Health Research and Care (2017) estimated that the QbTest saves an average of 32.6%, due to a reduction in appointments. However, some researchers have associated financial ties with positive results in the literature (Groom et al, 2016; Ahn et al, 2017). Consequently, NICE (2023) guidance stipulates there is a need for a systematic review of the current evidence base for QbTesting, and there is a clear requirement for further research examining its effectiveness.

This literature review provides a platform to identify factors affecting diagnostic accuracy and reliability, to improve nursing practices and strengthen neurodevelopmental assessments (Hollis et al, 2018). The diagnostic criteria for ADHD are shown in Table 1.


Inattention: six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that has a direct negative impact on social and academic/occupational activities
  • Fails to pay close attention to details or makes careless mistakes in work or during other activities
  • Difficulties in sustaining attention in tasks or play activities
  • Often does not seem to listen when spoken to directly
  • Does not follow through on instructions and is easily sidetracked
  • Difficulties organising tasks and activities
  • Avoids or is reluctant to engage in tasks that require sustained mental effort
  • Loses things necessary for tasks and activities
  • Easily distracted by external stimuli
  • Often forgetful in daily activities
Hyperactivity and impulsivity: six (or more) of the following symptoms have persisted for at least 6 months to a degree that is inconsistent with developmental level and that has a direct negative impact on social and academic/occupational activities
  • Fidgets with or taps hands and feet or squirms in seat
  • Often leaves seat in situations where remaining seated is expected
  • Runs about and climbs in situations deemed inappropriate
  • Unable to play or take part in leisure activities quietly
  • Frequently ‘on the go’, acting as if ‘driven by a motor’
  • Talks excessively
  • Blurts out answers before a question has been completed
  • Trouble waiting one's turn
  • Often interrupts conversations or intrudes on others
Additional criteria
  • Several inattentive or hyperactive–impulsive symptoms were present before age 12
  • Several inattentive or hyperactive–impulsive symptoms are present in two or more settings
  • Evidence that symptoms interfere with daily living or reduce functioning
  • Symptoms do not occur exclusively during the course of schizophrenia or another psychotic disorder and are not better explained by another mental disorder
Source: Substance Abuse and Mental Health Services Administration, 2016

    Method

    As this project aimed to appraise and synthesise the results of previous research, this review was conducted systematically, using primary research published between 2013 and 2022, to evaluate the reliability and validity of QbTesting. Although all the papers were randomised controlled trials (RCTs) providing quantitative data, a thematic analysis was used to identify patterns and establish relationships between data, as recommended by Braun and Clarke (2022).

    Consequently, this review summarised the recurring themes across the body of research. Due to the quantitative nature of this review, a population, intervention, comparison, outcome (PICO) tool was used to facilitate the search strategy, identifying relevant articles through cross-database searches (Coughlan and Cronin, 2020).

    As literature searching forms the basis of systematic reviews, it was imperative the search strategy was accurate and extensive, as this has a significant impact on the quality of the review process (McGowan et al, 2016).

    Thus, the PICO tool was used with caution, due to its limited evidence base and difficulties defining the scope of the topic (Eriksen and Frandsen, 2018). Inclusion criteria were identified to refine the search process as follows:

  • Scholarly and peer-reviewed articles
  • Current articles
  • Use of synonyms
  • Use of RCTs
  • Use of articles with children and adults aged 6–80 years
  • Focused on the diagnosis of ADHD
  • Focused only on the QbTest and QbTest Plus
  • Written in English.

    Electronic databases Summon, CINAHL and Medline were used to source articles relevant to this literature review. The databases selected provided access to primary research relevant to nursing, as recommended by Aveyard (2018). Secondary sources, such as the reference lists of selected articles, were also examined. A literature search was conducted to identify relevant articles using the inclusion and exclusion criteria; due to the limited research available, 55 articles were identified (Figure 1). Owing to time restrictions and the limited number of primary studies available on QbTesting, seven articles were selected for review.

    Figure 1. PRISMA (Moher et al, 2009) diagram

    Thematic analysis is a method of identifying and analysing recurring themes, patterns and meanings throughout data (Braun and Clarke, 2022). However, thematic analyses have been criticised for their vulnerability to bias, poor coherence and overlap between themes, arising from poor research design and ambiguous guidelines for interpreting data (Javadi and Zarea, 2016). Themes were therefore established prior to conducting this review, drawing on previous research and experience within this field. Inferential statistics provide a means of accurately assessing the reliability of quantitative data, using tools derived from statistical tests (Ellis, 2019).

    Due to the quantitative nature of this review, evidence supporting findings from the thematic analysis was derived from examining inferential statistics to further scrutinise the fidelity of each study's conclusions.

    Data extraction and data synthesis

    Guidelines by Coughlan et al (2007) and the Critical Appraisal Skills Programme (CASP, 2018) randomised controlled trial checklist were used to critically appraise the articles. Findings were combined to provide an overview of key themes, using a pragmatic, structured process (Figure 2).

    Figure 2. Braun and Clarke's (2022) six-stage model of thematic analysis used to guide this review

    A total of 55 records were retrieved through initial database searches, with 52 abstracts screened after duplicates were removed to assess eligibility. The search yielded seven articles that met the inclusion criteria, conducted in England and Sweden (Table 2).


    Paper Authors Findings
    1 Edebol et al (2013) Supports the use of QbTest in adult populations, highlighting good specificity at 83% and sensitivity at 86% across all groups. Due to this study's large sample size and standardised procedure, it was deemed useful for review
    2 Hollis et al (2018) Concluded QbTest is a useful means of reducing consultation time, with clinicians 1.44 times more likely to reach a diagnostic decision. Hence, it was determined the QbTest increases clinical efficiency without compromising diagnostic accuracy
    3 Johansson et al (2021) Results highlighted the QbTest's ability to correctly classify symptoms of ADHD in children was poor. Discriminatory analysis showed sensitivity and specificity was also unsatisfactory
    4 Hult et al (2018) Supports the use of QbTest, stating that it was able to identify higher rates of hyperactivity and inattention in children with ADHD. However, scores relating to impulsivity were unaffected, requiring further examination
    5 Emser et al (2018) Supports the use of the QbTest when combined with subjective assessment methods. Researchers reported higher rates of accuracy, 79% (adults) and 78% (children), when using the QbTest to detect symptoms of ADHD. This increased when combined with self-report measures. Despite this, it was identified as being an unreliable predictor of ADHD in adults, again requiring further investigation
    6 Bijlenga et al (2019) Researchers concluded the QbTest was a suitable means of assessing ADHD in older adults, but this did not apply to impulsivity, similar to paper 4
    7 Adamou et al (2022) The QbTest was unable to differentiate symptoms of ADHD from healthy adult controls. The QbTest demonstrated 70% accuracy when identifying those with a clinical diagnosis, but only 43% specificity when detecting the absence of ADHD in those without a diagnosis

    Characteristics and quality

    All articles included were peer reviewed to increase rigour and reduce bias (Coughlan and Cronin, 2020). Key strengths across all papers included the use of experimental designs, increasing reliability. Additionally, all used inferential statistics to examine the accuracy of data. Key limitations related to poor discriminant validity and generalisability, as sample representativeness was not demonstrated.

    Discussion

    As recommended by Coughlan and Cronin (2020), themes were derived based on the inclusion criteria, relevance to the research question and identification of sub-themes throughout the text, strengthening this review's integrity.

    Thematic analysis highlighted that most studies lacked generalisability, were poorly standardised and had weak external validity. Failures related to a lack of clinician experience in interpreting and administering the QbTest, an absence of standardised testing procedures, issues regarding cross-cultural validity and the impact of comorbid conditions. Furthermore, inconsistencies were noted across three papers regarding the QbTest's ability to accurately measure impulsivity.

    The ‘gold standard’ of research design is underpinned by the five themes of reliability, validity, accuracy, standardisation and generalisability (McBride, 2020). This review explored the fidelity of QbTesting, comparing and contrasting data sets across several journal articles, in accordance with these themes.

    Reliability

    Edebol et al (2013) claim the QbTest is a reliable means of assessing ADHD in adults. These findings concur with Hollis et al (2018), who concluded the QbTest aided quicker diagnostic decisions. Emser et al (2018) support this, reporting higher rates of accuracy when using the QbTest to detect symptoms of ADHD in adults and children. Thus, as sensitivity increases, so does reliability, indicating the QbTest can yield positive results regarding ADHD symptomatology (Parikh et al, 2008).
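
    To illustrate the sensitivity and specificity figures discussed throughout this review, the short sketch below shows how these values, together with predictive values (Parikh et al, 2008), are derived from a 2x2 comparison of test classification against clinical diagnosis. The counts are hypothetical and chosen only so the resulting sensitivity is of the order reported by Edebol et al (2013); they are not taken from any of the reviewed studies.

```python
# Minimal sketch: deriving sensitivity, specificity and predictive values from
# a 2x2 comparison of test result against clinical diagnosis (Parikh et al, 2008).
# All counts are hypothetical and used only to illustrate the arithmetic.

def diagnostic_accuracy(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Return sensitivity, specificity and predictive values as proportions."""
    return {
        "sensitivity": tp / (tp + fn),  # positives correctly identified among those with ADHD
        "specificity": tn / (tn + fp),  # negatives correctly identified among those without ADHD
        "ppv": tp / (tp + fp),          # probability of ADHD given a positive test
        "npv": tn / (tn + fn),          # probability of no ADHD given a negative test
    }

# Hypothetical sample: 50 people with a clinical ADHD diagnosis and 50 controls
print(diagnostic_accuracy(tp=43, fn=7, tn=41, fp=9))
# sensitivity 0.86 and specificity 0.82 for this invented sample
```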

    Conversely, Hult et al (2018) concluded the QbTest had only moderate ability to identify ADHD when used as a ‘stand-alone’ tool. Despite the use of convenience sampling and issues related to potential researcher bias, findings were consistent with Johansson et al (2021), who concluded the QbTest was an unreliable predictor of ADHD in children, with discriminatory analysis concluding sensitivity and specificity were unsatisfactory. Additionally, Adamou et al (2022) highlighted the QbTest was unable to differentiate symptoms of ADHD in adults from healthy controls.

    Although Edebol et al (2013) and Hollis et al (2018) used large sample sizes, indicating stronger reliability, it is too simplistic to conclude that these studies' findings are more reliable. Interestingly, only Hult et al (2018), Hollis et al (2018) and Johansson et al (2021) reported confidence intervals, allowing the extent to which their samples represent the general population to be judged (Hanneman et al, 2013). Failure to report confidence intervals obscures the degree of variability within a sample, increasing the likelihood of undetected bias (Aveyard, 2018).
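
    The brief sketch below illustrates why reporting confidence intervals matters: the same observed sensitivity carries far more uncertainty when it comes from a small sample. It uses a simple normal-approximation (Wald) interval with hypothetical counts and does not reproduce any calculation from the reviewed studies.

```python
import math

def wald_ci(successes: int, n: int, z: float = 1.96) -> tuple:
    """95% normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical: the same observed sensitivity of 0.86 from samples of 50 and 500
for n in (50, 500):
    p, low, high = wald_ci(successes=round(0.86 * n), n=n)
    print(f"n={n}: sensitivity {p:.2f}, 95% CI {low:.2f} to {high:.2f}")
# n=50 gives roughly 0.76 to 0.96; n=500 narrows this to roughly 0.83 to 0.89
```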

    Thus, although these studies advocate that the QbTest is a reliable tool, implications relating to weak experimental designs reduce the clinical rigour and robustness of these claims.

    Impulsivity measurements

    Results regarding the QbTest's ability to identify symptoms of impulsivity were mixed. Bijlenga et al (2019), Hult et al (2018) and Adamou et al (2022) stress the QbTest could not reliably detect impulsivity; hence this was not a valid differentiator for adults and children. Edebol et al (2013) also recognised hyperactivity was the most common feature of ADHD, with impulsivity the least common.

    Biederman et al (2000) (cited in Emser et al, 2018) offer a useful explanation, stating that hyperactivity and impulsivity decline to a greater extent over time. Still, this does not explain why the QbTest was less sensitive to impulsivity in the sample of children studied by Hult et al (2018), with a mean age of 10 years. It is also inconsistent with findings from Bijlenga et al (2019), claiming QbTesting can accurately identify symptoms of hyperactivity in older adults.

    Validity

    Discriminant validity

    Johansson et al (2021) argued the QbTest cannot differentiate ADHD from comorbid neurodevelopmental conditions, concurring with Sharma and Singh's (2009) claims. Despite issues relating to missing data, findings by Johansson et al (2021) are supported by Vogt (2021), who stated research should measure the QbTest's ability to differentiate ADHD from difficulties related to emotional dysregulation.

    Moreover, Edebol et al (2013) reported sensitivity dropped to 36% when using the QbTest to assess ADHD in individuals with personality disorders. This is significant, as ADHD is a heterogeneous disorder, with 51.8% of individuals with ADHD exhibiting at least one comorbid condition (Merrill et al, 2022). Consequently, use of valid measurement tools to identify ADHD symptoms is vital.

    Johansson et al (2021) suggest the QbTest could not differentiate between sub-types of ADHD in children, indicating poor, at times chance-level, validity. Furthermore, Hult et al (2018) and Adamou et al (2022) concluded that, when comparing performance against healthy controls, discriminatory validity was poor across adults and children. These findings concur with claims that inconsistencies have been identified in the QbTest's convergent and discriminant validity when used with children (Emser et al, 2018).

    Internal validity

    All researchers made robust attempts at measuring QbTest accuracy, using RCTs or mixed-method approaches, as recommended by Aveyard (2018). Valid and reliable diagnostic tools were used to inform diagnostic decisions, which were compared with QbTest data. Baseline characteristics were accounted for, with QbTest results objectively compared against normative data from children of the same age and gender. This increases accuracy, reducing the likelihood of confounding variables affecting the results (McBride, 2020; Qbtech, 2020).
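
    The sketch below illustrates the general principle of norm-referenced comparison described above: an individual's raw score is expressed relative to the mean and standard deviation of an age- and gender-matched norm group. The normative values and scores used here are hypothetical placeholders and are not Qbtech's actual norms or scoring algorithm.

```python
# Minimal sketch of norm-referenced scoring: a raw score is expressed as the
# number of standard deviations above the mean of an age- and gender-matched
# norm group. The normative values below are hypothetical placeholders.

NORMS = {
    ("6-11", "male"): (100.0, 15.0),   # hypothetical (mean, standard deviation)
    ("6-11", "female"): (95.0, 14.0),
}

def norm_referenced_score(raw: float, age_band: str, gender: str) -> float:
    """Return how many standard deviations a raw score sits above the norm mean."""
    mean, sd = NORMS[(age_band, gender)]
    return (raw - mean) / sd

# A hypothetical activity score of 130 for a boy aged 6-11 sits 2 SD above the norm
print(norm_referenced_score(130.0, "6-11", "male"))  # 2.0
```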

    Ecological validity

    All studies were conducted in a healthcare setting, replicating real-life assessment processes; therefore, to some extent, good ecological validity was established. However, the QbTest is conducted in a controlled, artificial setting, without the random distractions that would be present in real-life circumstances. Hence, the extent to which QbTest results are representative of behaviours outside a laboratory setting, such as at school, is questionable (Wang et al, 2022).

    Accuracy

    Outcomes were comprehensively reported using inferential statistics, with p values objectively determining whether measured differences were due to the intervention, rather than chance (Fink, 2019). Conclusions by Hollis et al (2018) supporting QbTest hold valuable merit due to the use of single-blinding techniques and robust experimental procedure. However, a significant limitation relates to missing data.

    Psychiatrists' diagnoses were made with more than half of participants' information missing, providing an unreliable comparison against QbTest performance. This is problematic, as missing data compromises the accuracy of results and is subject to bias and inter-rater disagreement (Jakobsen et al, 2017). Similarly, despite holding opposing views, Johansson et al (2021) also reported 15 cases of missing data when measuring Qbactivity in children. This raises questions about how sensitive QbTesting is to micro-movements, and to what extent rater bias affects the interpretation of observable behaviours (Brunkhorst-Kanaan et al, 2020). Thus, results from the QbTest measuring activity are subjective and prone to error. This coincides with the findings of Emser et al (2018), who confirmed Qbactivity was not a reliable predictor of ADHD in adults and children.
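
    The sketch below illustrates why the levels of missing data described above limit certainty: when a substantial proportion of cases is missing, an observed agreement rate between QbTest classification and clinical diagnosis can only be bounded, not pinned down. This simple best-case/worst-case bounding is not the multiple imputation approach recommended by Jakobsen et al (2017); it is a hypothetical illustration with invented counts.

```python
# Minimal sketch, with invented counts: bounding an agreement rate between
# QbTest classification and clinical diagnosis when some outcomes are missing.
# The true rate lies somewhere between treating every missing case as a
# disagreement and treating every missing case as an agreement.

def agreement_bounds(agree: int, disagree: int, missing: int) -> tuple:
    """Return the complete-case estimate and its worst/best-case bounds."""
    total = agree + disagree + missing
    observed = agree / (agree + disagree)   # complete-case estimate (ignores missingness)
    worst = agree / total                   # all missing cases assumed to disagree
    best = (agree + missing) / total        # all missing cases assumed to agree
    return observed, worst, best

obs, low, high = agreement_bounds(agree=60, disagree=20, missing=40)
print(f"complete-case estimate {obs:.2f}; plausible range {low:.2f} to {high:.2f}")
# With a third of cases missing, the apparent 0.75 could plausibly be 0.50 to 0.83
```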

    Little attention has been paid to the impact of extraneous variables, such as anxiety, which may distort QbTest results (Pellegrini et al, 2020). While Hollis et al (2018), Bijlenga et al (2019), Johansson et al (2021) and Edebol et al (2013) included participants with comorbid conditions, the impact of these was not considered, despite evidence confirming anxiety influences inattention and impulsivity in children undertaking continuous performance tasks (Méndez-Freije et al, 2023). The accuracy of the results is therefore unclear, as ADHD and anxiety share similar characteristics. These claims coincide with Söderström et al (2014), who demonstrated poor discriminant validity of the QbTest when comorbid conditions were present. Furthermore, Emser et al (2018) identified that children with ADHD had a lower IQ than controls. Reh et al (2015) support these findings, identifying that children with higher IQ scores were less impulsive. Although this association has been established, the impact of IQ on QbTest performance remains unclear.

    Standardisation

    A key finding was a lack of standardisation across testing procedures. Although there are strict instructions for administering the test, the impact of environmental variables and clinician experience may distort its performance (Vogt, 2021). Implementing consistent testing procedures across all experimental conditions limits the impact of extraneous environmental variables, which may disrupt QbTest performance (McLeod, 2023). However, Hollis et al (2018), Johansson et al (2021) and Bijlenga et al (2019) conducted testing at multiple sites. Consequently, assessments were not standardised across clinics, with researchers failing to specify whether environmental controls were consistently implemented to avoid interference.

    Additionally, some clinicians had extensive experience of using the QbTest, while others had very little (Lennox et al, 2020). Furthermore, Hollis et al (2018) recognised lower sensitivity in the group where QbTest data was accessible, indicating clinicians may have been applying more stringent diagnostic criteria. Hult et al (2018) also noted QbTest results were known to some clinicians who contributed to the final diagnosis, highlighting inconsistent assessment processes across clinics.

    Therefore, neurodevelopmental assessments must be conducted by competent and skilled clinicians, with careful interpretation of QbTest results undertaken following the ‘gold standard’ of diagnostic procedures (Villagomez et al, 2019).

    Generalisability

    All studies demonstrated poor cross-cultural validity; therefore, results cannot be generalised to the whole population. Fridman et al (2017) reported that regions including North America made significantly quicker diagnostic decisions than England. Thus, results are not representative of assessment practices outside of England and Sweden.

    Additionally, Chan et al (2022) highlighted differences in diagnostic thresholds and behavioural symptoms when examining ADHD in Asian and British children. Nevertheless, few studies have examined cultural differences which may affect ADHD diagnosis and QbTest performance.

    Gender differences relating to QbTest performance are poorly understood. A quantitative study by Slobodin and Davidovitch (2019) recognised gender differences in ADHD are unclear, due to limited samples of females used in research.

    Despite issues with this study's small sample size, the researchers found females largely present with inattention, as opposed to males, who display more symptoms of hyperactivity, a finding also reported by Edebol et al (2013).

    Ultimately, QbTest performance varies depending on gender differences, requiring further investigation. Recommendations for future practice are summarised in Table 3.


    Recommendation: Justification
    Use of competent, skilled clinicians with extensive experience in neurodevelopmental assessments and QbTesting: As highlighted by Vogt (2021), there is concern over whether clinicians are competent at interpreting and administering the QbTest. This questions the extent to which data from the QbTest is accurately measured and used appropriately to assist with diagnostic decision-making. As dictated by QbTech (2020), the QbTest should not be used as a ‘stand-alone’ diagnostic tool
    Use of a multi-disciplinary team approach to aid more reliable diagnosis of ADHD: NICE (2023) guidance stipulates a multidisciplinary team approach must be used when conducting neurodevelopmental assessments. Use of multidisciplinary approaches aids more comprehensive ADHD assessments and enhances care coordination (McGonnell et al, 2009)
    Exclusion of QbTesting to detect ADHD symptoms in complex presentations: The interaction between ADHD, autism and learning disabilities is not well understood. Separating symptoms of ADHD and intellectual difficulties is challenging, and it is recommended a more holistic approach is adopted, considering all aspects of the patient's development to support diagnostic decision-making (Royal College of Psychiatrists, 2021)
    Use of consistent, standardised testing procedures to reduce the influence of extraneous variables: Despite QbTesting having strict instructions for administering the test, the impact of environmental variables and clinicians' experience still pose a risk of distorting results (Vogt, 2021). Strict controls must be implemented throughout the testing procedure to reduce interference from extraneous variables
    Further research investigating gender differences in ADHD symptomatology and QbTest performance: Gender differences in ADHD presentation are poorly understood (Slobodin and Davidovitch, 2019). Research has not examined gender differences affecting QbTest performance. Moreover, disproportionate samples do not fairly represent female test performance, requiring further examination
    Further research examining QbTest effectiveness at differentiating symptoms of autism and ADHD: Almost half of children with autism suffer from impulsivity, hyperactivity and inattention (Murray, 2010; as cited by Hult et al, 2018). Research has highlighted similarities in QbTest performance in children with ADHD and autism; therefore, the extent to which the QbTest can differentiate between the two conditions is unclear (Hall et al, 2018)
    Cross-cultural studies examining the effectiveness of QbTesting outside of Western culture: Evidence supporting QbTesting has largely been conducted across Europe, indicating poor cross-cultural validity. Cultural differences that affect ADHD presentation have not been examined, and the reliability of the QbTest is unclear when used with different populations
    Continued use of RCTs to examine the QbTest's ability to accurately detect impulsivity: Although thematic analysis highlighted the QbTest was unable to identify symptoms of impulsivity, it remains unclear as to why, requiring further examination (Hult et al, 2018; Bijlenga et al, 2019; Adamou et al, 2022)
    More research examining the effect of IQ and learning disabilities on QbTest performance in adults: Milioni et al (2017) report intellectual ability affects QbTest performance. The relationship between intellectual ability and QbTest performance is poorly understood, raising the question of whether the QbTest measures cognitive abilities rather than symptoms of ADHD (Johansson et al, 2021)

    Bias

    While this review aimed to provide a comprehensive overview of QbTesting, it was not possible to include all primary research, due to limited access to databases. Potential bias relating to the reporting of confidence intervals was noted throughout this review. Only three papers included confidence intervals; failure to report these obscures the variability within samples and increases the likelihood of undetected bias (Aveyard, 2018). Consequently, caution should be employed when interpreting the results of these studies due to weaknesses in experimental design.

    Moreover, some studies used small sample sizes, and high drop-out rates were also reported (Emser et al, 2018; Adamou et al, 2022). This is indicative of attrition bias, with Hollis et al (2018) reporting 153 cases of missing data and Johansson et al (2021) 15 cases, limiting the certainty of results. The researchers did not state whether the effects of attrition bias were examined.

    Additionally, researchers such as Hult et al (2018) did not implement blinding, increasing the potential for observer bias and demand characteristics. Bijlenga et al (2019) did not implement randomisation when using a convenience sample, increasing the likelihood of shared characteristics among participants. This reduces the generalisability of claims and may have led to further bias.

    Conclusion

    Several papers acknowledged that the QbTest was unable to identify symptoms of impulsivity, but there is no reliable explanation for this. Additionally, the QbTest cannot differentiate symptoms of ADHD from comorbid conditions, and gender differences are also poorly understood (Slobodin and Davidovitch, 2019; Johansson et al, 2021). Issues around a lack of standardisation, poor ecological validity and diagnostic accuracy contradict research supporting the QbTest. The QbTest must, therefore, be used with caution as it continues to be implemented in neurodevelopmental assessments across healthcare services. Ultimately, further careful interpretation of, and research into, QbTest reliability is required to strengthen diagnostic accuracy.

    Key Points

  • The QbTest is not an accurate measure of impulsivity in either children or adults, particularly when used as a stand-alone assessment tool
  • Issues around a lack of standardisation, poor ecological validity and diagnostic accuracy contradict research supporting the QbTest
  • Further research into the effectiveness of QbTesting is required to strengthen diagnostic accuracy

    CPD reflective questions

  • To what extent is the QbTest reflective of behaviours outside of a controlled, artificial setting, where distractions are not present as they would be in real-life settings?
  • To what extent do extraneous variables, such as anxiety, affect and distort QbTest results?
  • Are clinicians over-reliant on the QbTest to identify symptoms of ADHD in more complex presentations?