Genome-wide analyses of common and rare genetic variations have documented the heritability of major psychiatric disorders, established their highly polygenic genetic architecture, and identified hundreds of contributing variants. In recent years, these studies have illuminated another key feature of the genetic basis of psychiatric disorders: the important role and pervasive nature of pleiotropy. It is now clear that a substantial fraction of genetic influences on psychopathology transcend clinical diagnostic boundaries. In this review, we summarize evidence in psychiatry for pleiotropy at multiple levels of analysis: from overall genome-wide correlation to biological pathways and down to the level of individual loci. We examine underlying mechanisms of observed pleiotropy, including genetic effects on neurodevelopment, diverse actions of regulatory elements, mediated effects, and spurious associations of genomic variation with multiple phenotypes. We conclude with an exploration of the implications of pleiotropy for understanding the genetic basis of psychiatric disorders, informing nosology, and advancing the aims of precision psychiatry and genomic medicine.
Keywords: Cross-disorder; GWAS; Genetic correlation; Nosology; Pleiotropy; Precision psychiatry; Psychiatric genetics.
Background: There have been considerable recent advances in understanding the genetic architecture of psychiatric disorders as well as the underlying neurocircuitry. However, little work has examined the concordance between genetic variations that increase cross-disorder vulnerability to psychiatric disorders and those that influence subcortical brain structures. We undertook a genome-wide investigation of the genetic overlap between cross-disorder vulnerability to psychiatric disorders (p-factor) and subcortical brain structures.
Methods: Summary statistics were obtained from the PGC cross-disorder genome-wide association study (GWAS) (Ncase = 232,964, Ncontrol = 494,162) and the CHARGE-ENIGMA subcortical brain volumes GWAS (N = 38,851). SNP effect concordance analysis (SECA) was used to assess pleiotropy and concordance. Linkage disequilibrium (LD) score regression and ρ-HESS were used to assess genetic correlation, and conditional false discovery rate (cFDR) analysis was used to identify variants associated with p-factor, conditional on the variants' association with subcortical brain volumes.
Results: Evidence of global pleiotropy between p-factor and all subcortical brain regions was observed. Risk variants for p-factor correlated negatively with brainstem volume. A total of 787 LD-independent variants were significantly associated with p-factor when conditioned on the subcortical GWAS results. Gene set enrichment analysis of these variants implicated actin binding and neuronal regulation.
Limitations: SECA could be biased due to the potential presence of overlapping study participants in the p-factor and subcortical GWASs.
Conclusion: Findings of genome-wide pleiotropy and possible concordance between genetic variants that contribute to p-factor and those associated with smaller brainstem volumes are consistent with previous work. cFDR results highlight actin binding and neuronal regulation as key underlying mechanisms. Further fine-grained delineation of these mechanisms is needed to advance the field.
Keywords: Concordance; Cross-disorder Vulnerability; Genetic Overlap; Pleiotropy; Shared genetic risk; Subcortical Brain Volumes.
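For readers unfamiliar with the conditional false discovery rate approach named above, the core estimator can be sketched in a few lines. This is an illustrative simplification following the common empirical cFDR definition, not the study's actual pipeline; the function and variable names are ours:

```python
def cfdr(p_primary, p_conditional, p1, p2):
    """Estimate cFDR(p1 | p2): an upper bound on the probability that a SNP
    with primary-trait p-value <= p1 is null, given that its p-value for the
    conditioning trait is <= p2.

    p_primary / p_conditional: per-SNP p-values for the two traits.
    """
    # Restrict to SNPs that pass the conditioning-trait threshold p2
    subset = [pp for pp, pc in zip(p_primary, p_conditional) if pc <= p2]
    if not subset:
        return 1.0
    # Empirical CDF of primary-trait p-values within the conditioned subset
    frac = sum(pp <= p1 for pp in subset) / len(subset)
    # cFDR estimate: nominal p-value divided by its conditional enrichment
    return min(1.0, p1 / frac) if frac > 0 else 1.0
```

In practice cFDR analyses run over millions of SNPs with LD pruning and repeated conditioning; this sketch only conveys the conditioning idea.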
Background: Definition of disorder subtypes may facilitate precision treatment for posttraumatic stress disorder (PTSD). We aimed to identify PTSD subtypes and evaluate their associations with genetic risk factors, types of stress exposures, comorbidity, and course of PTSD.
Methods: Data came from a prospective study of three U.S. Army Brigade Combat Teams that deployed to Afghanistan in 2012. Soldiers with probable PTSD (PTSD Checklist for DSM-5 score ≥ 31) at three months postdeployment comprised the sample (N = 423) for latent profile analysis using Gaussian mixture modeling with PTSD symptom ratings as indicators. PTSD profiles were compared on polygenic risk scores (derived from external genome-wide association study summary statistics), experiences during deployment, comorbidity at three months postdeployment, and persistence of PTSD at nine months postdeployment.
Results: Latent profile analysis revealed profiles characterized by prominent intrusions, avoidance, and hyperarousal (threat-reactivity profile; n = 129), anhedonia and negative affect (dysphoric profile; n = 195), and high levels of all PTSD symptoms (high-symptom profile; n = 99). The threat-reactivity profile had the most combat exposure and the least comorbidity. The dysphoric profile had the highest polygenic risk for major depression, and more personal life stress and co-occurring major depression than the threat-reactivity profile. The high-symptom profile had the highest rates of concurrent mental disorders and persistence of PTSD.
Conclusions: Genetic and trauma-related factors likely contribute to PTSD heterogeneity, which can be parsed into subtypes that differ in symptom expression, comorbidity, and course. Future studies should evaluate whether PTSD typology modifies treatment response and should clarify distinctions between the dysphoric profile and depressive disorders.
Keywords: Posttraumatic stress disorder; latent class analysis; military personnel; polygenic risk scores; typology.
Osteoarthritis (OA) and major depression (MD) are two debilitating disorders that frequently co-occur and affect millions of the elderly each year. Despite the greater symptom severity, poorer clinical outcomes, and increased mortality of the comorbid conditions, we have a limited understanding of their etiologic relationships. In this study, we conducted the first cross-disorder investigations of OA and MD, using genome-wide association data representing over 247K cases and 475K controls. Along with significant positive genome-wide genetic correlations (rg = 0.299 ± 0.026, p = 9.10 × 10⁻³¹), Mendelian randomization (MR) analysis identified a bidirectional causal effect between OA and MD (βOA→MD = 0.09, SE = 0.02, p < 1.02 × 10⁻⁵; βMD→OA = 0.19, SE = 0.026, p < 2.67 × 10⁻¹³), indicating that genetic variants affecting OA risk are, in part, shared with those influencing MD risk. Cross-disorder meta-analysis of OA and MD identified 56 genomic risk loci (Pmeta ≤ 5 × 10⁻⁸), which show heightened expression of the associated genes in the brain and pituitary. Gene-set enrichment analysis highlighted "mechanosensory behavior" genes (GO:0007638; Pgene-set = 2.45 × 10⁻⁸) as potential biological mechanisms that simultaneously increase susceptibility to these mental and physical health conditions. Taken together, these findings show that OA and MD share common genetic risk mechanisms, one of which centers on the neural response to the sensation of mechanical stimulus. Further investigation is warranted to elaborate the etiologic mechanisms of the pleiotropic risk genes, as well as to develop early intervention and integrative clinical care for these serious conditions that disproportionately affect the aging population.
Keywords: comorbidity; cross-disorder GWAS; major depression; mechanosensory behavior; osteoarthritis; pain; pleiotropy.
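The abstract above does not specify which MR estimator produced the bidirectional effects; a standard choice is the inverse-variance-weighted (IVW) combination of per-SNP Wald ratios, sketched below as a minimal illustration (all names are ours, and pleiotropy-robust refinements such as MR-Egger are omitted):

```python
def ivw_estimate(betas_exposure, betas_outcome, ses_outcome):
    """Inverse-variance-weighted causal estimate from per-SNP Wald ratios.

    Each instrument's ratio (outcome effect / exposure effect) is weighted by
    a first-order approximation of its inverse variance.
    """
    ratios = [bo / bx for bx, bo in zip(betas_exposure, betas_outcome)]
    weights = [(bx / se) ** 2 for bx, se in zip(betas_exposure, ses_outcome)]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)
```

With real summary statistics the instruments would first be filtered for genome-wide significance and clumped for LD; this sketch shows only the weighting step.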
This issue contains a thoughtful report by Gradus et al. (Am J Epidemiol. 2021;190(12):2517-2527) on a machine learning analysis of administrative variables to predict suicide attempts over 2 decades throughout Denmark. This is one of numerous recent studies that document strong concentration of risk of suicide-related behaviors among patients with high scores on machine learning models. The clear exposition of Gradus et al. provides an opportunity to review major challenges in developing, interpreting, and using such models: defining appropriate controls and time horizons, selecting comprehensive predictors, dealing with imbalanced outcomes, choosing classifiers, tuning hyperparameters, evaluating predictor variable importance, and evaluating operating characteristics. We close by calling for machine learning research into suicide-related behaviors to move beyond merely demonstrating significant prediction, which is by now well established, and to focus instead on using such models to target specific preventive interventions and to develop individualized treatment rules that can help guide clinical decisions to address the growing problems of suicide attempts, suicide deaths, and other injuries and deaths in the same spectrum.
Suicide is a major public health problem. The contribution of common genetic variants for major depressive disorder (MDD), independent of personal and parental history of MDD, has not been established. A polygenic risk score for MDD (using PRS-CS) was calculated for US Army soldiers of European ancestry. Associations between polygenic risk for MDD and lifetime suicide attempt (SA) were tested in models that also included parental or personal history of MDD. Models were adjusted for age, sex, tranche (where applicable), and 10 principal components reflecting ancestry. In the first cohort, 417 (6.3%) of 6,573 soldiers reported a lifetime history of SA. In a multivariable model that included personal history of MDD (OR = 3.83, 95% CI: 3.09-4.75) and parental history of MDD (OR = 1.43, 95% CI: 1.13-1.82 for one parent and OR = 1.64, 95% CI: 1.20-2.26 for both parents), MDD PRS was significantly associated with SA (OR = 1.22, 95% CI: 1.10-1.36). In the second cohort, 204 (4.2%) of 4,900 soldiers reported a lifetime history of SA. In a multivariable model that included personal history of MDD (OR = 3.82, 95% CI: 2.77-5.26) and parental history of MDD (OR = 1.42, 95% CI: 0.996-2.03 for one parent and OR = 2.21, 95% CI: 1.33-3.69 for both parents), MDD PRS remained marginally associated with SA (OR = 1.15, 95% CI: 0.994-1.33; p = .0601). A soldier's PRS for MDD conveys information about the likelihood of a lifetime SA beyond that conveyed by two predictors readily obtainable by interview: personal or parental history of MDD. These results remain to be extended to prospective prediction of incident SA. The findings portend a role for PRS in risk stratification for suicide attempts.
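As a minimal illustration of the polygenic scoring step described above (PRS-CS additionally re-weights GWAS effect sizes with a Bayesian continuous-shrinkage prior, which is omitted here), a PRS is a weighted sum of risk-allele dosages:

```python
def polygenic_score(dosages, weights):
    """Polygenic risk score for one individual.

    dosages: per-SNP count of the effect allele (0, 1, or 2).
    weights: per-SNP effect sizes (e.g., log odds ratios) from external
             GWAS summary statistics.
    """
    return sum(d * w for d, w in zip(dosages, weights))
```

In an analysis like the one above, scores would then be standardized within the cohort and entered into a logistic regression alongside covariates such as MDD history, age, sex, and ancestry principal components.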
Objective: Suicide is one of the leading causes of death worldwide, yet clinicians find it difficult to reliably identify individuals at high risk for suicide. Algorithmic approaches for suicide risk detection have been developed in recent years, mostly based on data from electronic health records (EHRs). Significant room for improvement remains in the way these models take advantage of temporal information to improve predictions.
Materials and methods: We propose a temporally enhanced variant of the random forest (RF) model-Omni-Temporal Balanced Random Forests (OT-BRFs)-that incorporates temporal information in every tree within the forest. We develop and validate this model using longitudinal EHRs and clinician notes from the Mass General Brigham Health System recorded between 1998 and 2018, and compare its performance to a baseline Naive Bayes Classifier and 2 standard versions of balanced RFs.
Results: Temporal variables were found to be associated with suicide risk: elevated suicide risk was observed in individuals with a higher total number of visits as well as in those with a low rate of visits over time, while lower suicide risk was observed in individuals with a longer period of EHR coverage. RF models were more accurate than the naive Bayes classifier at predicting suicide risk in advance (area under the receiver operating characteristic curve = 0.824 vs. 0.754, respectively). The proposed OT-BRF model performed best among all RF approaches, yielding a sensitivity of 0.339 at 95% specificity, compared to 0.290 and 0.286 for the other 2 RF models. Temporal variables were assigned high importance by the models that incorporated them.
Discussion: We demonstrate that temporal variables have an important role to play in suicide risk detection and that requiring their inclusion in all RF trees leads to increased predictive performance. Integrating temporal information into risk prediction models helps the models interpret patient data in temporal context, improving predictive performance.
Keywords: clinical risk; modeling; random forest; suicide; temporal.
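The "sensitivity at 95% specificity" metric used above to compare the RF variants can be computed by thresholding scores at the point where the control false-positive rate just meets the target. A simplified pure-Python sketch (ties and interpolation are handled naively, and all names are ours):

```python
def sensitivity_at_specificity(scores, labels, specificity=0.95):
    """Sensitivity at a fixed specificity.

    scores: model risk scores; labels: 1 = case, 0 = control.
    Chooses the threshold at which at most (1 - specificity) of controls
    score above it, then reports the fraction of cases above that threshold.
    """
    neg = sorted((s for s, y in zip(scores, labels) if y == 0), reverse=True)
    pos = [s for s, y in zip(scores, labels) if y == 1]
    # Number of controls allowed above the threshold (false positives)
    k = int(len(neg) * (1 - specificity))
    thresh = neg[k] if k < len(neg) else neg[-1]
    return sum(s > thresh for s in pos) / len(pos)
```

Reporting sensitivity at a clinically tolerable specificity, rather than overall accuracy, is standard for rare outcomes like suicide, where even a small false-positive rate swamps the true positives.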
Background: It is critical to promptly identify and monitor mood and anxiety symptoms in young people with substance use disorder (SUD). The primary aim of this study was to conduct a psychometric validation of the Patient Health Questionnaire-9 (PHQ-9) and Generalized Anxiety Disorder-7 scale (GAD-7) for depression and anxiety screening in young people seeking outpatient treatment for SUD. Our secondary aim was to compare the performance of the PHQ-9 and GAD-7 to their briefer two-item versions (PHQ-2 and GAD-2) in terms of detecting probable mood and anxiety disorders.
Method: Data were extracted from the electronic health records of patients (ages 14 to 26) who received a diagnostic evaluation following clinical implementation of the PHQ-9 and GAD-7 at a hospital-based outpatient SUD treatment program (N=121, average age 19.1 ± 3.1 years).
Results: The PHQ-9 and GAD-7 showed excellent internal consistency. A PHQ-9 cut score of 7 or 8 (PHQ-2 cut score: 2) and GAD-7 cut score of 6 (GAD-2 cut score: 2) had the best balance of sensitivity, specificity, and positive and negative predictive power in these data. These measures also showed good convergent and acceptable discriminant validity.
Limitations: The sample was predominantly White and non-Hispanic, and a validated (semi-)structured diagnostic interview was not used to establish mood and anxiety disorder diagnoses.
Conclusions: Results suggest the PHQ-9 and GAD-7 are reliable and potentially clinically useful screening tools for depression and anxiety in young people with SUD, and that the two-item versions may have similar clinical utility as the full measures.
Keywords: Adolescents; Anxiety; Depression; Screening; Substance use disorder; Young adults.
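The cut-score evaluation described above boils down to a 2×2 confusion table at each candidate cut. A minimal sketch (illustrative names; screen-positive is defined here as score ≥ cut):

```python
def screening_stats(scores, diagnoses, cut):
    """Sensitivity, specificity, PPV, and NPV for a screening cut score.

    scores: questionnaire totals (e.g., PHQ-9); diagnoses: 1 = disorder
    present, 0 = absent, per the reference diagnostic evaluation.
    """
    tp = sum(s >= cut and d for s, d in zip(scores, diagnoses))
    fp = sum(s >= cut and not d for s, d in zip(scores, diagnoses))
    fn = sum(s < cut and d for s, d in zip(scores, diagnoses))
    tn = sum(s < cut and not d for s, d in zip(scores, diagnoses))
    return {
        "sensitivity": tp / (tp + fn),  # cases correctly screened positive
        "specificity": tn / (tn + fp),  # non-cases correctly screened negative
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }
```

Sweeping `cut` over the score range and inspecting the resulting trade-offs is how a "best balance" cut score such as the PHQ-9 value of 7 or 8 reported above is typically chosen.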
Anxiety and depressive disorders are common psychiatric conditions with high rates of co-occurrence. Although traditional cognitive-behavioral therapy (CBT) protocols targeting individual anxiety and depressive disorder diagnoses have been shown to be effective, such "single-diagnosis" approaches pose challenges for providers who treat patients with multiple comorbidities and for large-scale dissemination of and training in evidence-based psychological treatments. To help meet this need, newer "transdiagnostic" CBT interventions targeting shared underlying features across anxiety, depressive, and related disorders have been developed in recent years. Here we provide a rationale for and description of the transdiagnostic CBT model, followed by an overview of key therapeutic strategies included in transdiagnostic CBT protocols for patients with anxiety disorders and comorbid depression. We conclude with a brief review of the empirical evidence in support of transdiagnostic CBT for individuals with anxiety and depressive disorders and identify directions for future research.
Dropout from psychotherapy is common and can have negative effects for patients, providers, and researchers. A better understanding of when and why patients stop treatment early, as well as actionable factors contributing to dropout, has the potential to prevent it. Here, we examined dropout from a large randomized controlled trial of transdiagnostic versus single-diagnosis cognitive-behavioral treatment (CBT) for patients with anxiety disorders (n = 179; Barlow et al., 2017). We aimed to characterize the timing of and reasons for dropout and to test whether participants who dropped out had different symptom trajectories than those who completed treatment. Results indicated that overall, the greatest risk of dropout was prior to the first treatment session. In single-diagnosis CBT, dropout risk was particularly elevated before the first session and after other early sessions, whereas in transdiagnostic CBT, dropout risk was low and stable before and during treatment. Participants most often dropped out due to failure to comply with study procedures, dissatisfaction with the treatment, or a desire for an alternative treatment. Results from multilevel models showed that trajectories of anxiety symptoms did not significantly differ between dropouts and completers. These findings suggest that there may be specific time windows for targeted and timely interventions to prevent dropout from CBT.
Keywords: CBT; attrition; cognitive-behavioral therapy; dropout; transdiagnostic.
Advancements in the understanding and prevention of self-injurious thoughts and behaviors (SITBs) are urgently needed. Intensive longitudinal data collection methods, such as ecological momentary assessment, capture fine-grained, "real-world" information about SITBs as they occur and thus have the potential to narrow this gap. However, collecting real-time data on SITBs presents complex ethical and practical considerations, including whether and how to monitor and respond to incoming information about SITBs from suicidal or self-injuring individuals during the study. We conducted a systematic review of protocols for monitoring and responding to incoming data in previous and ongoing intensive longitudinal studies of SITBs. Across the 61 unique studies/samples included, there was no clear most common approach to managing these ethical and safety considerations. For example, studies were fairly evenly split across three approaches: using either automated notifications triggered by specific survey responses (e.g., those indicating current suicide risk) or human monitoring of incoming responses with intervention (generally a phone-based risk assessment), but not both (36%); using both automated notifications and monitoring/intervening (35%); or using neither (29%). Certain study characteristics appeared to influence the safety practices used. Future research that systematically evaluates optimal, feasible strategies for managing risk in real-time monitoring research on SITBs is needed.
Keywords: Ecological momentary assessment; Mobile health; Self-injury; Suicide.
For many years, psychiatrists have tried to understand factors involved in response to medications or psychotherapies, in order to personalize their treatment choices. There is now a broad and growing interest in the idea that we can develop models to personalize treatment decisions using new statistical approaches from the field of machine learning and applying them to larger volumes of data. In this pursuit, there has been a paradigm shift away from experimental studies to confirm or refute specific hypotheses towards a focus on the overall explanatory power of a predictive model when tested on new, unseen datasets. In this paper, we review key studies using machine learning to predict treatment outcomes in psychiatry, ranging from medications and psychotherapies to digital interventions and neurobiological treatments. Next, we focus on some new sources of data that are being used for the development of predictive models based on machine learning, such as electronic health records, smartphone and social media data, and on the potential utility of data from genetics, electrophysiology, neuroimaging and cognitive testing. Finally, we discuss how far the field has come towards implementing prediction tools in real-world clinical practice. Relatively few retrospective studies to date include appropriate external validation procedures, and there are even fewer prospective studies testing the clinical feasibility and effectiveness of predictive models. Applications of machine learning in psychiatry face some of the same ethical challenges posed by these techniques in other areas of medicine or computer science, which we discuss here. In short, machine learning is a nascent but important approach to improve the effectiveness of mental health care, and several prospective clinical studies suggest that it may be working already.
Keywords: Computational psychiatry; electronic health records; external validation; machine learning; pharmacotherapies; prediction; psychotherapies; smartphone data; treatment outcomes.
Objective: Cannabis and alcohol use are correlated behaviors among youth. It is not known whether discontinuation of cannabis use is associated with changes in alcohol use. This study assessed alcohol use in youth before, during, and after 4 weeks of paid cannabis abstinence.
Methods: Healthy, non-treatment-seeking cannabis users (n = 160) aged 14-25 years, 84% of whom had used alcohol in the past month, were enrolled in a 4-week study with a 2-4-week follow-up. Participants were randomly assigned to 4 weeks of either biochemically verified cannabis abstinence achieved through a contingency management framework (CB-Abst) or monitoring with no abstinence requirement (CB-Mon). Participants were assessed at baseline and approximately 4, 6, 10, 17, 24, and 31 days after enrollment. A follow-up visit with no cannabis abstinence requirement for CB-Abst was conducted after 2-4 weeks.
Results: Sixty percent of individuals assigned to the CB-Abst condition increased the frequency and quantity of their alcohol consumption during the 4-week period of incentivized cannabis abstinence. As a whole, the CB-Abst group increased by a mean of 0.6 drinking days and 0.2 drinks per day in the initial week of abstinence (p's < 0.006). There was no evidence of further increases in drinking frequency or quantity during the 30-day abstinence period (p's > 0.53). There was no change in drinking frequency or quantity during the 4-week monitoring or follow-up periods in the CB-Mon group.
Conclusions: On average, 4 weeks of incentivized (i.e., paid) cannabis abstinence among non-treatment-seeking youth was associated with an increase in the frequency and amount of alcohol use in week 1 that was sustained over the 4 weeks and resolved with resumption of cannabis use. However, there was notable variability in individual-level response, with 60% increasing and 23% decreasing their alcohol use during cannabis abstinence. Findings suggest that increased alcohol use during cannabis abstinence among youth merits further study to determine whether this behavior occurs among treatment-seeking youth and what its clinical significance is.
Keywords: Abstinence; Alcohol; Cannabis; Contingency Management; Marijuana; Substitution; Youth.
There are individual differences in health outcomes following exposure to childhood maltreatment, yet constant individual variance is often assumed in analyses. Among 286 Black, South African women, the association between childhood maltreatment and neurocognitive health, defined here as neurocognitive performance (NP), was first estimated assuming constant variance. Then, without assuming constant variance, we applied Goldstein's method (Encyclopedia of Statistics in Behavioral Science, Wiley, 2005) to model "complex level-1 variation" in NP as a function of childhood maltreatment. Mean performance on some tests of information processing speed (Digit-Symbol, Stroop Word, and Stroop Color) declined with increasing severity of childhood maltreatment, without evidence of significant individual variation. Conversely, we found significant individual variation by severity of childhood maltreatment on tests of information processing speed (Trail Making Test) and executive function (Color Trails 2 and Stroop Color-Word), in the absence of mean differences. Exploratory results suggest that the presence of individual-level heterogeneity in neurocognitive performance among women exposed to childhood maltreatment warrants further exploration. The methods presented here may be used in a person-centered framework to better understand vulnerability to the toxic neurocognitive effects of childhood maltreatment at the individual level, ultimately informing personalized prevention and treatment.
Background: Approximately 56% of Kenya's population resides in informal settlements (UN-Habitat, 2016). Female residents experience a range of psychosocial stressors including chronic poverty and high rates of interpersonal violence. Despite evidence that this population has some of the worst physical health outcomes in the country (APHRC, 2014), few studies have evaluated their mental health status and its correlates.
Objective: The purpose of this study was to identify risk and protective factors associated with mental health problems (posttraumatic stress and depression) among women living in informal settlements in Kenya. Hypothesized risk factors included economic stress, a history of childhood abuse and sexual violence, and partner-perpetrated psychological and physical abuse. Hypothesized protective factors were supportive relationships with family members and friends and a sense of community connection.
Method: Local community health workers were trained to collect data via individual interviews using validated measures. Participants were recruited using systematic random sampling in two informal settlements in Nakuru County. We used path analysis to test the hypothesized model in a sample of 301 women.
Results: The model had an excellent fit (χ² = 13.391, df = 8, p = .099; GFI = .99; CFI = .99; RMSEA = .05) and explained 25% of the variance in posttraumatic stress symptoms (PTSS) and 28% of the variance in depression. All predictor variables except support from friends were statistically significant in the expected direction. Specifically, economic stress, childhood abuse, sexual violence, and physical and psychological abuse from one's partner had significant positive associations with PTSS and depression. Having supportive family members and a sense of being part of the community had significant negative associations with symptoms.
Conclusions: Results highlight the importance of addressing intimate partner and other forms of interpersonal violence in these settings and hold implications for tailoring interventions for this marginalized population.
Keywords: Informal settlements; Kenya; depression; intimate partner violence; posttraumatic stress.
Massachusetts General Hospital
Simches Research Building
185 Cambridge Street
Boston, MA 02114