Premature dropout challenges the success of clinical trials in general (Wahlbeck et al., 2001, Kemmler et al., 2005 and Rabinowitz et al., 2009). In antipsychotic clinical trials in particular, dropout rates often exceed 50% (Wahlbeck et al., 2001 and Martin et al., 2006) and, interestingly, are higher in placebo-controlled than in active-controlled trials (48.1% vs 28.3%, respectively) (Kemmler et al., 2005). Moreover, in many clinical trials dropout constitutes the primary trial outcome (Lieberman et al., 2005 and Kahn et al., 2008), since it may reflect drug inefficacy, intolerability and lack of compliance (Rabinowitz and Davidov, 2008). Despite being an important outcome, dropout is also a methodological problem, since it leads to missing information (e.g., regarding symptom change) that affects modeling and analysis. Hence, dropout endangers the validity of evidence-based conclusions, and it has become a topic of debate (Mallinckrodt et al., 2003, Molenberghs et al., 2004 and Leon et al., 2006).

There are three dropout mechanisms: missing completely at random, missing at random and missing not at random (Little and Rubin, 2002). First, missing completely at random occurs when dropout and outcome are unrelated, so that dropout occurs randomly. Statistical analyses of data that are missing completely at random do not introduce bias, although statistical power is reduced by the dropout. During a clinical trial, data that are missing completely at random may arise if a patient moves too far away from the study site to participate (Mallinckrodt et al., 2003); in this case, dropout is unrelated to the trial outcome (e.g., PANSS scores). Second, missing at random occurs when dropout is systematically related to an observed study variable (e.g., symptom severity). For example, data may be missing at random if a patient drops out because of symptom exacerbation that is visible in the data as increasingly worse PANSS scores (Mallinckrodt et al., 2003). Third, missing not at random occurs when dropout depends on information that is not observed in the data. Data may be missing not at random if a patient drops out because of an exacerbation that occurred after their last completed assessment, so that the exacerbation is never recorded in the data (Mallinckrodt et al., 2003). The last two dropout mechanisms are informative, since the missing data contain information about the outcome (e.g., response); thus, ignoring them may introduce bias.
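To make the three mechanisms concrete, the simulation sketch below (in Python, with entirely hypothetical sample sizes, column names and thresholds, not data from the trials analyzed here) generates PANSS-like trajectories and then deletes values under each mechanism: completely at random, depending on the last observed score, and depending on a score that is never observed.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_patients, n_visits = 200, 6

# Simulate PANSS-like scores: gradual average improvement plus noise
# (all parameters are illustrative only).
baseline = rng.normal(90, 10, n_patients)
slopes = rng.normal(-3, 2, n_patients)            # per-visit change
visits = np.arange(n_visits)
panss = baseline[:, None] + slopes[:, None] * visits \
        + rng.normal(0, 5, (n_patients, n_visits))
df = pd.DataFrame(panss, columns=[f"visit_{v}" for v in visits])

# Missing completely at random: every post-baseline value has the same
# chance of being missing, unrelated to any PANSS score.
mask = rng.random(df.shape) < 0.15
mask[:, 0] = False                                # keep baseline observed
mcar = df.mask(mask)

# Missing at random: dropout depends on the last *observed* score --
# patients whose previous visit showed exacerbation are more likely to leave.
mar = df.copy()
for i in range(n_patients):
    for v in range(1, n_visits):
        p_drop = 0.05 + 0.4 * (df.iloc[i, v - 1] > 100)
        if rng.random() < p_drop:
            mar.iloc[i, v:] = np.nan
            break

# Missing not at random: dropout depends on the *current, unobserved* score --
# the exacerbation that triggers dropout is never recorded.
mnar = df.copy()
for i in range(n_patients):
    for v in range(1, n_visits):
        p_drop = 0.05 + 0.4 * (df.iloc[i, v] > 100)
        if rng.random() < p_drop:
            mnar.iloc[i, v:] = np.nan             # triggering value is lost
            break
```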
Statistical methods for clinical trials should account for dropout, make use of all available data at every visit, minimize bias and maximize statistical power. Mixed modeling is a widely applied statistical approach in clinical trials (Lieberman et al., 2005). It uses the repeated clinical assessments of the outcome over time (e.g., from a longitudinal study), taking into account that measurements from the same patient are more correlated than measurements from different patients. Thus, in mixed modeling, a fixed-effect component describes the average outcome evolution over time across all patients, whereas a random-effect component describes how each patient's outcome evolution deviates from that average. Mixed modeling, however, ignores the effect of dropout on the outcome by assuming that the two are unrelated. Research has shown that this is not the case in clinical trials of schizophrenia (Rabinowitz and Davidov, 2008); thus, the use of mixed modeling has the potential to introduce bias.
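As an illustration of this fixed- and random-effect structure, the sketch below fits a linear mixed model to hypothetical long-format trial data with the statsmodels package; the file name and the panss, visit, treatment and patient columns are placeholders, not the variables of the trials analyzed here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Long-format data: one row per patient per visit, with hypothetical
# columns "patient", "visit", "treatment" and "panss".
data = pd.read_csv("trial_long.csv")

# Fixed effects: the average PANSS evolution over time and its modification
# by treatment arm. Random effects: a patient-specific intercept and slope,
# capturing the correlation of repeated measurements within a patient.
model = smf.mixedlm(
    "panss ~ visit * treatment",
    data,
    groups=data["patient"],
    re_formula="~visit",
)
result = model.fit()
print(result.summary())
```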
To address the dropout problem, there are statistical approaches that model the outcome while simultaneously accounting for dropout within a unified model-based framework (Little, 1995, Hogan and Laird, 1997 and Rizopoulos, 2010). Joint modeling is a framework that appropriately integrates the dropout and outcome processes. The framework acknowledges that there are two different, but not independent, simultaneous processes: (i) the survival event process, which refers to the time to dropout; and (ii) the longitudinal process, which refers to the outcome (e.g., the follow-up of PANSS change scores over time). First, the survival event process estimates dropout through a survival analysis. Then, adjusting for the resultant survival event process, the longitudinal outcome process is modeled, similar to a mixed model, using all available assessments. Joint modeling acknowledges that the dropout mechanism is informative, specifically that the outcome analysis (e.g., of PANSS change) depends on the dropout mechanism. The joint modeling approach has already been used to analyze clinical trial data on cancer (Li et al., 2013 and Ediebah et al., 2014) and AIDS (Baghfalaki et al., 2014), but not schizophrenia. The current manuscript aims to compare the results of joint and mixed modeling in three clinical trials of risperidone and olanzapine versus placebo in the treatment of schizophrenia.
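Full joint models are typically fitted with dedicated software (e.g., in R; Rizopoulos, 2010). Purely as an illustrative Python sketch, the code below approximates the idea in two stages: patient-specific random effects from the longitudinal mixed model are carried into a Cox model for time to dropout, so that dropout is allowed to depend on each patient's underlying PANSS trajectory. This two-stage shortcut ignores the uncertainty in the estimated random effects and is not the simultaneous estimation of a true joint model; all file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

long_df = pd.read_csv("trial_long.csv")      # repeated PANSS measurements
surv_df = pd.read_csv("trial_dropout.csv")   # per patient: time, dropout (0/1)

# Stage 1 -- longitudinal process: mixed model with a patient-specific
# random intercept and slope for PANSS over time.
lmm = smf.mixedlm("panss ~ visit", long_df,
                  groups=long_df["patient"], re_formula="~visit").fit()

# Each patient's estimated random effects, i.e. their deviation from the
# average trajectory.
re = pd.DataFrame(lmm.random_effects).T
re.columns = ["re_intercept", "re_slope"]
re.index.name = "patient"

# Stage 2 -- survival (dropout) process: Cox model for time to dropout,
# with the patient-specific effects as covariates, so that dropout can
# depend on the patient's underlying PANSS evolution.
cox_df = surv_df.merge(re.reset_index(), on="patient")
cph = CoxPHFitter()
cph.fit(cox_df[["time", "dropout", "re_intercept", "re_slope"]],
        duration_col="time", event_col="dropout")
cph.print_summary()
```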