2nd international conference on Current Issues in Coincidence Analysis

Abstracts

This page collects the abstracts of the presentations to be given at the 2nd international conference on "Current Issues in Coincidence Analysis", held in Prague, May 19–20, 2023.


Trond Løkling (NTNU Trondheim): What makes workplace conflict deescalate? A Coincidence Analysis on longitudinal panel data

Despite decades of research dedicated to the study of conflict dynamics in organizations, little is known about the complex configurations underlying the perception of workplace conflict, or about the mechanisms reducing its presence. Social and behavioral phenomena like workplace conflict are fundamentally complex, in the sense that subjects within a system dynamically and constantly interact with each other and are therefore shaped by many interdependent causes. Researchers and practitioners who adopt a complex-systems perspective have argued that, rather than focusing on a single causal relationship (e.g., conflict behavior) at a time, we need to investigate how the interaction or combination of different factors generates specific outcomes over time.

Building on JD-R theory, which proposes that job demands and strain may lead to maladaptive self-regulation cognitions and conflict behaviors, the main objective of this article is to investigate how interdependent job demand and/or resource factors emerge as part of causal combinations explaining the reduction of workplace conflict. We used longitudinal panel data from a large Norwegian university (n=16,991). We included nine factors within the JD-R framework that conflict research has identified as shaping the presence of workplace conflict: meaning, social community, recognition, autonomy, role conflict, role overload, goal clarity, work–home conflict, and innovation climate. To conceptualize the outcome (reduced workplace conflict), we first conducted a Latent Class Growth Analysis on aggregated data to reveal patterns of development in the outcome over time. We then selected the class of interest, the reduced-conflict class (high–low; n=103). Finally, we applied Coincidence Analysis to reveal the most parsimonious configurations leading to the outcome.

In line with previous conflict research, we expect that the presence of perceived meaning, social community, recognition, autonomy, goal clarity, and innovation climate, together with the absence of role overload, role conflict, and work–home conflict, will be part of causal combinations explaining reduced conflict levels. Unpacking these configurations can serve as guidance on how to tailor conflict interventions to mitigate the destructive potentials inherent in the multiple levels of an organization, promoting a preventive effect. Moreover, by applying a complex-systems perspective, our findings could contribute to developing the JD-R model further by identifying causal combinations that reduce workplace conflict over time, thus opening new trajectories between the loss cycle and gain cycle of the framework.

 

Ole Henning Sørensen (NFA Denmark): Simple roads to failure, complex paths to success: An evaluation of conditions explaining perceived fit of an organizational occupational health intervention

Organizational occupational health interventions (OOHIs) that are perceived by employees as relevant for their workplace are more likely to be implemented successfully, yet little is known about the conditions that produce such perceptions. This study identifies the conditions that create a perception among employees that an intervention fits their organization, as well as the conditions that result in low levels of perceived fit. We used data from a longitudinal, quasi-experimental OOHI project implemented in 64 Danish preschools. Perceived fit was assessed through employee ratings at follow-up, while survey responses from implementation team members at five time points were used to assess four context and fourteen process factors. The results of a coincidence analysis showed that high levels of perceived fit were achieved through two paths. Each path consisted of a lack of co-occurring changes together with either very high levels of managerial support (path_1) or a combination of implementation team role clarity, staff involvement, and team learning (path_2). In contrast, low levels of perceived fit were brought about by single factors: limited leader support, low degree of role clarity, or concurrent organizational changes. The findings reveal the complexity involved in implementing OOHIs and offer insights into why they frequently fail.

 

Veli-Pekka Parkkinen (University of Bergen): Measuring the robustness of CNA models

Configurational comparative methods such as CNA are known to be sensitive to various data deficiencies, and to the choice of model-fit thresholds. These issues can combine to produce spurious findings whose falsity cannot be reliably diagnosed. To deal with this problem, Parkkinen and Baumgartner (2023) propose that instead of using a single conventionally chosen threshold, one should analyze data using many different fit thresholds and then look for models that are robust in the sense that they agree in their causal ascriptions with many other models found at different fit thresholds. Since the causal ascriptions of such models do not crucially depend on a particular choice of fit threshold, the argument goes, these models are more likely to track aspects of the true data-generating mechanism than models that make idiosyncratic causal claims. The latter, by contrast, are more likely to reflect noise that only by chance becomes causally interpretable at particular fit threshold settings. This idea is implemented in the {frscore} R package, which automates the process of reanalyzing a data set and calculates a compatibility score, called fit-robustness, for the models returned in the analyses.
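The scoring idea can be illustrated with a small self-contained sketch (in Python rather than R, purely for illustration; the model representation and the reduction of compatibility to a plain subset check are simplifying assumptions, whereas the actual {frscore} package operates on CNA model objects with a more refined compatibility test):

```python
from itertools import combinations

def fr_scores(models):
    """Toy fit-robustness scoring.  Each model is a frozenset of causal
    ascriptions, e.g. ("A", "E") for "A is causally relevant to E".
    A model's score counts the other models in the pool it agrees with,
    here approximated as one ascription set containing the other."""
    scores = {m: 0 for m in models}
    for m1, m2 in combinations(models, 2):
        if m1 <= m2 or m2 <= m1:  # subset in either direction
            scores[m1] += 1
            scores[m2] += 1
    return scores

# Models hypothetically recovered at different fit thresholds:
pool = [
    frozenset({("A", "E")}),              # e.g. found at con/cov = 0.9
    frozenset({("A", "E"), ("B", "E")}),  # e.g. found at con/cov = 0.8
    frozenset({("C", "E")}),              # idiosyncratic finding
]
scores = fr_scores(pool)
```

On this toy pool, the first two models support each other while the idiosyncratic third model scores zero, which is the intuition behind preferring fit-robust models.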

Parkkinen and Baumgartner originally proposed that causal compatibility be measured as submodel relations between models, and the {frscore} package implemented this approach. While this approach quite often works, the underlying assumption that causal compatibility reduces to a submodel relation is nonetheless false. I demonstrate with examples that a submodel relation is neither necessary nor sufficient for causal compatibility of models, except in the case of atomic (single-outcome) models. The reason is that the submodel relation is defined as a mapping from each syntactically explicit, direct causal claim made by the submodel to a similarly explicit causal claim made by the supermodel. Complex models that represent causal chains make claims about the indirect relevance of distal causes for their downstream effects, and these claims are not explicitly represented in any of the atomic components of the model. Hence, the compatibility of two models where at least one represents a chain cannot be decided by a simple check for a submodel relation.

I describe an alternative method for programmatically deciding the causal compatibility of complex models. This involves explicating every putative indirect causal claim made by the models of interest by appropriate syntactic manipulation of the models, followed by minimization of the manipulated models to eliminate putative indirect claims that are in fact redundant. After that, compatibility with respect to both direct and indirect causal claims can be decided by testing for submodel relations between the atomic components of the explicated versions of the models. From version 0.3.0, {frscore} switches to this method of measuring causal compatibility of models. I describe how exactly this will influence the output of the frscore functions compared to their earlier versions, and describe changes in the functions' user interface that accompany the switch to the new scoring method. Lastly, time permitting, I discuss some remaining philosophical and practical issues, such as the problem presented by models that represent causal feedback.

References:
Parkkinen, V. P., & Baumgartner, M. (2023). Robustness and model selection in configurational causal modeling. Sociological Methods & Research, 52(1), 176-208.

 

Luna De Souter (Univ. of Bergen): Evaluating Boolean relationships in Configurational Comparative Methods

Configurational Comparative Methods (CCMs) aim to learn causal structures from datasets by exploiting Boolean sufficiency and necessity relationships. One important challenge for these methods is that such Boolean relationships are often not satisfied in real-life datasets, as these datasets usually contain noise. Hence, CCMs infer models that only approximately fit the data, introducing a risk of inferring incorrect or incomplete models, especially when data are also fragmented (have limited empirical diversity). In order to minimize this risk as much as possible, evaluation measures for sufficiency and necessity should capture all relevant evidence. I point out that the standard evaluation measures in CCMs, consistency and coverage, neglect certain evidence for these Boolean relationships. Correspondingly, I introduce two new measures to the CCM context as additions to consistency and coverage: contrapositive consistency and contrapositive coverage, which are equivalent to the binary classification measures specificity and negative predictive value, respectively.
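In crisp-set terms, all four measures for a sufficiency claim X → Y follow from simple case counts. The following Python sketch (a toy illustration assuming binary data, not the CCM software itself) makes the correspondence with specificity and negative predictive value explicit:

```python
def ccm_measures(cases, x, y):
    """Evaluate the sufficiency claim 'x -> y' on crisp-set data.
    cases: list of dicts mapping factor names to 0/1 values."""
    n_x    = sum(1 for c in cases if c[x])
    n_y    = sum(1 for c in cases if c[y])
    n_xy   = sum(1 for c in cases if c[x] and c[y])
    n_nxny = sum(1 for c in cases if not c[x] and not c[y])
    return {
        "consistency": n_xy / n_x,   # |X & Y| / |X|
        "coverage":    n_xy / n_y,   # |X & Y| / |Y|
        # consistency of the contrapositive ~Y -> ~X (= specificity)
        "contrapositive consistency": n_nxny / (len(cases) - n_y),
        # coverage of the contrapositive ~Y -> ~X (= negative predictive value)
        "contrapositive coverage":    n_nxny / (len(cases) - n_x),
    }

# 10 hypothetical cases in which X is an imperfect sufficient condition for Y:
cases = ([{"X": 1, "Y": 1}] * 3 + [{"X": 1, "Y": 0}] * 1 +
         [{"X": 0, "Y": 1}] * 2 + [{"X": 0, "Y": 0}] * 4)
m = ccm_measures(cases, "X", "Y")
```

Here consistency and coverage only use the cases where X or Y is present, while the two contrapositive measures draw on the X-absent and Y-absent cases, i.e., the evidence the standard measures neglect.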

 

Jonathan Freitas (Univ. of Minas Gerais): A CNA of CNAs: Searching for causal models of the performance of the method

Parkkinen and Baumgartner (2023) benchmarked CNA against itself under different approaches to model selection. In this presentation, I report a follow-up benchmarking study on the performance of the method under different settings of some of its parameters when adopting the fit-robustness (FR) selection approach. I systematically manipulated the granularity of consistency and coverage variation in the reanalysis series, the maximum number of unique solution types to include in FR scoring, and the upper bound for the complexity of the atomic solution formulas (asf) to be searched for. The benchmark criteria were ambiguity, correctness, fallacy-freeness, completeness, and speed. I defined the benchmarking scenarios in terms of asf complexity, sample size, and noise ratio. I focused on single-outcome structures, randomly introduced different levels of fragmentation, and held constant the number of factors. Besides presenting the benchmarking results, I explore the possibility of framing them as a configuration table. Hence, following the parametrizing recommendations derived from the study, I perform a coincidence analysis of the benchmarking scenarios with their corresponding outputs and discuss that “meta-level” analysis – i.e., a CNA of CNAs.

References:
Parkkinen, V. P., & Baumgartner, M. (2023). Robustness and model selection in configurational causal modeling. Sociological Methods & Research, 52(1), 176-208.

 

Deborah Cragun (Univ. of South Florida): Real-world examples illustrating challenges and considerations when building, selecting, and presenting CNA models

Although identifying a primary outcome is usually an important step when planning research, knowing or specifying an outcome is not necessary for CNA to determine whether data support causal relationships with one or more outcomes. Specifying an outcome in CNA and/or setting the ordering argument with strict = TRUE may be necessary to achieve interpretable results, and it can help reduce model ambiguity. However, allowing for more possibilities may reveal unique insights. I will use real-world examples to illustrate the benefits and challenges that can arise when CNA is allowed to assess multiple outcomes with no restrictions, and when and why factors may be ordered to allow only certain outcomes and causal chains to emerge.

In the first example, evaluators of a home visiting program sought to determine what explains why some parents report being extremely likely to recommend the program to others (i.e., what differentiates program promoters from non-promoters). We conducted CNA using the frscore software package and specified promoter as the outcome, with the following factors as potential difference makers: 1) presence of family circumstances that impede participation; 2) family prefers in-person visits; 3) home visitor relationship facilitates ongoing participation; and 4) family was empowered. Although the resulting models demonstrated adequate consistency and coverage, they were not very compelling. In a subsequent analysis with no outcome or ordering specified, CNA identified factors that explained the presence and the absence of family empowerment, and these models had better overall fit and provided more valuable insights for the evaluators. As part of a different study, we specified a single outcome but allowed all other factors to causally influence each other. The resulting models were difficult to explain, likely overfit, and had the same fit-robustness scores. However, after using theory to order the factors, thus limiting which factors could causally influence others, the output revealed interpretable, but relatively complex, models that simultaneously explain multiple outcomes. Visual approaches used to compare and report the most robust models will be shown, because they helped members of our team better understand the concept of model ambiguity and more accurately interpret complex models.

 

Martyna Swiatczak (Univ. of Bergen): To be or not to be a good leader? A Coincidence Analysis of identity leadership, work-related attitudes, and burnout

Identity leadership has been conceptualized as a group-based social influence process that revolves around leaders creating, embodying, promoting, and embedding a shared sense of "we" among those they lead. Identity leadership has been widely shown to promote beneficial work environments by positively influencing relationships at work, individual well-being, and organizational performance. However, previous studies investigating the relationship between identity leadership and burnout find a direct negative relationship only in part, while others reject such a relationship altogether. This suggests that identity leadership might interact with other work-related attitudes that are relevant for the prevention of burnout, such as trust, identification, and satisfaction. To unravel the complex interplay between identity leadership, work-related attitudes, and burnout, in a research project with colleagues from the Global Identity Leadership Development (GILD) project, we use CNA on a large dataset (N=7,855) from the 2020-2021 GILD wave. I will present our study and discuss some methodological challenges we faced with the large-N, cross-sectional (non-homogeneous) dataset comprising 15 potentially causally relevant factors.

 

Edward J. Miech (Regenstrief Institute): A bottom-up approach to factor selection in CNA: Findings from a reanalysis of three QCA-based studies using the "msc" routine  

Factor selection remains a crucial issue in configurational comparative methods, as results rely directly on which factors researchers choose to include in their models. For larger datasets, an oft-cited procedure within Qualitative Comparative Analysis (QCA) has been to select factors on “theoretical grounds alone.” This provides a theory-based rationale for factor selection within QCA but can lead to serious concerns about replicability, the loss of potentially valuable information, and methodological rigor.

Coincidence Analysis (CNA) offers a fundamentally different approach to factor selection that is inductive, bottom-up, and data-informed rather than deductive, top-down, and entirely theoretical. The routine presented here applies the “minimally sufficient conditions” (i.e., “msc”) function within the R package “cna” to look across all cases and all factors in the original dataset at once to identify configurations of specific conditions with especially strong connections to the outcome of interest. This exhaustive process considers every possible one-, two-, and three-condition configuration instantiated in the dataset, assesses each configuration against a prespecified consistency threshold, and retains the configurations that meet it. The routine next organizes this Boolean output in a “condition table,” where rows represent individual configurations and columns list values for outcome, conditions, consistency, coverage, and complexity. During this exploratory data analysis, the msc function can be run multiple times at different consistency levels (for example, 95%, 90%, 85%, 80%, 75%) in order to compare output at different thresholds. Researchers can then consult this condition table to identify a small number of “best of class” configurations (i.e., configurations with top coverage scores for their complexity level that also have separation from their next-nearest neighbors) that align with logic, theory, and prior knowledge, and then glean the relatively small group of factors represented by those configurations. Using this bottom-up approach, researchers can inductively analyze their original datasets in their entirety, draw upon substantive knowledge when interpreting the mathematical output generated by the routine, and ultimately identify a subset of candidate factors for model development during the next step of the CNA analysis.
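The enumeration step can be sketched as follows (a crisp-set toy illustration in Python; the actual routine uses the msc function of the R package cna, and the helper name sufficient_configs below is hypothetical):

```python
from itertools import combinations, product

def sufficient_configs(cases, outcome, max_size=3, con_threshold=0.8):
    """Enumerate every 1-, 2-, and 3-factor configuration instantiated in
    the data; keep those whose consistency for `outcome` meets the
    threshold, recording consistency and coverage for each."""
    factors = [f for f in cases[0] if f != outcome]
    n_outcome = sum(c[outcome] for c in cases)
    kept = []
    for size in range(1, max_size + 1):
        for combo in combinations(factors, size):
            for values in product((0, 1), repeat=size):
                config = dict(zip(combo, values))
                hits = [c for c in cases
                        if all(c[f] == v for f, v in config.items())]
                if not hits:
                    continue  # configuration not instantiated in the data
                n_hit_outcome = sum(c[outcome] for c in hits)
                con = n_hit_outcome / len(hits)
                cov = n_hit_outcome / n_outcome
                if con >= con_threshold:
                    kept.append((config, con, cov))
    return kept

# Toy data: A alone is sufficient for Y; B is irrelevant.
data = [{"A": 1, "B": 0, "Y": 1}, {"A": 1, "B": 1, "Y": 1},
        {"A": 0, "B": 1, "Y": 0}, {"A": 0, "B": 0, "Y": 0}]
configs = sufficient_configs(data, "Y")
```

On this toy data the routine retains A=1 (consistency 1.0, coverage 1.0) together with its more complex supersets, mirroring how the condition table lets researchers spot best-of-class configurations by comparing coverage across complexity levels.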

In this reanalysis of three different systematic reviews using Qualitative Comparative Analysis (QCA) as the primary method for data analysis, I directly compare the QCA findings originally reported in those studies with CNA results derived from applying the bottom-up “msc” approach to factor selection. These three studies were published by different teams of authors between 2016 and 2019, and used datasets with 9, 20 and 38 potential explanatory factors, respectively; the datasets were published alongside the original articles. In all three reanalyses, the bottom-up routine for factor selection yielded results that appear to show clear and compelling advantages over previous findings.

 

Reiping Huang (Northwestern): Evaluating the implementation of a multi-component surgical quality collaborative

To assess and compare the quality of surgical care and patient safety, groups of hospitals in the United States have formed various Quality Improvement Collaboratives (QICs) that implement common initiatives, share experiences, and benchmark performance. One such collaborative, the Illinois Surgical Quality Improvement Collaborative (ISQIC), comprising 51 diverse hospitals, has initiated a 21-component intervention to facilitate quality improvement in participating hospitals, their surgical QI teams, and the peri-operative microsystems.

The current study was a comprehensive evaluation of the impacts of these QI components on 5 surgical outcomes during the initial 3 years of ISQIC (2015-2017) in the 31 hospitals with complete outcome data. To this end, the study team conducted site visits, interviews, and focus groups to assess the ways and degrees to which hospitals adapted and implemented the 21 components. Clinical data and surveys were then combined to assess hospital performance on 5 outcomes or processes (death and serious morbidity, colorectal surgical site infection, colorectal venous thromboembolism, hospital safety culture, and surgical site infection bundle adherence).

Our team applied coincidence analysis (CNA) to uncover the components contributing to hospital success in QI outcomes. However, given the large number of conditions (103) representing various aspects of the 21 components, we designed a systematic workflow with additional steps before and after a regular CNA. First, we organized the 103 conditions (elements) into 5 domains (guided implementation, education, comparative reports, financial support, and networking), 17 sub-domains, and 39 composites, based on project-specific qualitative and quantitative knowledge. Ten composites that were conceptually embedded in others were set aside as proxies, while the remaining 29 composites served as the main inputs for CNA. Second, we used the ‘minimally sufficient conditions’ function in CNA to identify a subset of essential and secondary composite conditions for each outcome (defined as high, medium, or low). Third, a full multi-value CNA was carried out separately for each outcome to uncover the best-performing atomic solution formula(s) (ASFs) based on consistency, coverage, simplicity, and fit-robustness. Besides case review, four kinds of sensitivity tests were done to assist in interpreting or comparing competing ASFs: (1) substituting input with proxy composite(s), (2) substituting input with element(s) of composites, (3) a reanalysis that examined 2-level outcomes, and (4) a subset analysis of hospitals without missing composite values. Finally, we developed a scoring method for all composites suggested by the best ASFs in the individual outcome analyses.

Across multiple outcomes, the key QI components contributing to overall hospital success were the effectiveness of the collaborative coordinating center, the effectiveness of the surgeon champion, a formal QI curriculum, and the use of benchmarking reports. Our analysis provides an example of applying CNA to complicated QI evaluations by leveraging analytic strategies in data preparation, variable selection, model selection, sensitivity testing, and result summarization.