1 Introduction

“Like all therapists, I personally experience an utter inability not to believe I effect results in individual cases; but as a psychologist I know it is foolish to take this conviction at face value. In order to bring about the needed research, it will probably be necessary for therapists and administrators to get really clear on this point: Our daily therapeutic experiences, which (on good days!) make it hard for us to take Eysenck seriously, can be explained within a crude statistical model of the patient-therapist population that assigns very little specific ‘power’ to therapeutic intervention. If the majority of neurotics are in ‘unstable equilibrium’ and hence tend to improve under moderately favorable regimes, those who are in therapy while improving will be talking about their current actions and feelings in the sessions. Client and therapist will naturally attribute changes to the therapy.”

Paul E. Meehl (1955)

Over the last couple of decades, evidence-based psychotherapies have flourished, and there are now well-established therapies for a wide range of problems (Nathan and Gorman 2015). At the same time, the mental-health burden remains enormous, and challenges to the dissemination of treatments are substantial (Holmes et al. 2018). While some celebrate the enormous achievements made by the evidence-based therapy movement, others are more concerned about the quality of the evidence.

In the clinical sciences, especially the biomedical ones, the issue of research quality has had some vocal critics. In 1994, Doug Altman published his famous editorial The scandal of poor medical research, which begins with the widely cited phrase "We need less research, better research, and research done for the right reasons" (Altman 1994). In an attempt to improve the reporting of clinical trials, the CONSORT statement was published in 1996. The threat of various biases received increased attention a decade later, when Ioannidis (2005) published the highly influential and much-discussed paper Why most published research findings are false. Similarly, Chalmers and Glasziou (2009) estimated that 85% of the investment in biomedical research is wasted, due to a combination of researchers focusing on outcomes that are not important to patients and clinicians, flawed research designs, studies never being published, and underreporting or a lack of transparency when findings are reported. The Lancet published a special series on increasing value and reducing research waste, in which Macleod et al. (2014) argue that initially promising findings that fail to improve healthcare outcomes are the norm.

In psychology, the most active discussion of research quality has been led by experimental and social psychologists, perhaps most famously through the Reproducibility Project (Open Science Collaboration 2015), in which a large team of researchers tried to replicate 100 psychological experiments. In a large proportion of the replication attempts, they found much weaker evidence than in the original studies. Likewise, in several influential papers, different authors have pointed out questionable research practices that could contribute to untrustworthy studies (Simmons, Nelson, and Simonsohn 2011; Gelman 2013), and have highlighted the misaligned incentives at play (Nosek, Spies, and Motyl 2012; Bakker, van Dijk, and Wicherts 2012). Many of these concerns likely lie behind the growing open science movement, made up of people advocating for a more transparent and open science (Wallach, Boyack, and Ioannidis 2018; Nosek, Spies, and Motyl 2012).

What is the situation in clinical psychology? Are our results more robust and our trials more transparently reported? The replicability crisis has been frequently discussed in psychology in general and, as mentioned, especially in social psychology. However, clinical psychology has, according to some, been uninterested in participating in the discussion (Tackett et al. 2017; Hengartner 2018). Although the focus on open science might be new, in the subfield of psychotherapy research an active debate about the quality of the evidence has been going on for decades. In 1952, Eysenck published a review claiming that psychotherapy was ineffective and that change could largely be attributed to spontaneous remission (Eysenck 1952), which spurred a heated discussion.

Back when Eysenck published his critique, the evidence for (and against) psychotherapy was mostly based on anecdotal clinical observations. Now, several decades later, both clinicians and researchers act as if psychotherapy has clearly been established as efficacious in gold-standard, high-quality randomized controlled trials (RCTs). Although substantial gains in knowledge have been made, many issues remain unsolved, and there are many reasons to be skeptical of the current quality of the evidence. In the first part of this thesis, I give an overview of the broader discussion about the contemporary issues that concern the scientific investigation of psychotherapies and threaten the validity of psychological treatment research. After the broader overview, I present a more detailed background to the two major issues investigated in Study I and Study II: Chapter 3 covers therapist effects, and Section 4.2 covers semicontinuous gambling data. In Chapter 4, I cover gambling disorder, with a particular focus on research concerning the concerned significant others (CSOs) of problem gamblers.

Skärholmen, October 2019