Session

Public Health Data & Structural Violence: From Big Data and Countering Algorithmic Bias to Confronting State and Corporate Surveillance

Craig Dearfield, Ph.D., Department of Epidemiology, George Washington University, Washington, DC and Zinzi Bailey, ScD, MSPH, Departments of Medicine & Public Health Sciences; Jay Weiss Institute for Health Equity, Sylvester Comprehensive Cancer Center, University of Miami Miller School of Medicine, Miami, FL

APHA's 2020 VIRTUAL Annual Meeting and Expo (Oct. 24 - 28)

Abstract

Public health data & structural violence: From big data and countering algorithmic bias to confronting state and corporate surveillance

Zinzi Bailey, ScD, MSPH1, Catherine Cubbin, PhD2, Craig Dearfield, Ph.D.3 and Nancy Krieger, PhD4
(1)University of Miami Miller School of Medicine, Miami, FL, (2)University of Texas at Austin, Austin, TX, (3)George Washington University, Washington, DC, (4)Harvard T.H. Chan School of Public Health, Boston, MA

This presentation seeks to frame the discrete, context-aware, and historically informed empirical and conceptual studies that will be presented by a number of scholars. Possible foci for presentations, all in relation to issues of health justice, include:

(1) conceptual framing of the complexities of data collection in measuring and quantifying the adverse health impacts of structural violence in its many forms;
(2) uses of Big Data to counter structural violence by the state, as in how Black Data Matters uses big data to document police violence for accountability;
(3) uses of Big Data to shed new light on health justice issues for “small” populations, e.g., American Indians and Alaska Natives;
(4) critical analysis of the non-neutrality of algorithms and their role in entrenching health inequities, especially in relation to social services, health care, education, and the carceral state;
(5) critical analysis of who owns the data and of the erasure of privacy by state and corporate surveillance: drones and devices that monitor people’s health and their every move, phone call, email, Twitter exchange, and more;
(6) public health threats associated with doxing and with challenging online hate speech and violence;
(7) coding and misclassification of deaths (in the US and elsewhere) due to violence, including after police brutality and after military actions;
(8) the politics that undercut accurate monitoring of, research on, and interventions to address gun violence; and
(9) analyses that link policies affecting voting rights, voter suppression, and political representation to health outcomes.

Public health administration or related administration; Public health or related laws, regulations, standards, or guidelines; Public health or related organizational policy, standards, or other guidelines; Public health or related public policy; Public health or related research; Social and behavioral sciences

Abstract

California's Medicaid population health management proposal: How the state’s use of risk assessment algorithms may further entrench health inequities

Michelle Grisat, PhD and Carmen Comsti, JD
National Nurses United, Oakland, CA

California is preparing to submit a new proposal for its Medicaid program waiver. Although the proposal is not yet finalized, a population health focus with risk stratification will be a central feature. In its current form, the population health management proposal requires each health plan to stratify its enrollee population using both state-mandated risk tiers and either its own algorithm or a proprietary commercial algorithm, yet does not explain how the two will interact.

In addition, despite acknowledging that focusing on utilization data may perpetuate structural inequalities, the proposal requires each health plan’s algorithm to be based, at least in part, on past utilization. Each health plan must submit a list of the data sources used to stratify its population, the algorithm itself (or its name, if proprietary), and the method it used to analyze the algorithm for bias. The initial risk stratification will be based on available data, supplemented by a subsequent individual risk assessment survey.

The authors will discuss key concerns with California’s Medicaid proposal, including the use of proprietary risk stratification algorithms, differences among risk stratification algorithms, and the decision to allow health plans to evaluate their own algorithms for bias. The authors will contextualize the discussion by providing an overview of the different types of health plans currently participating in California’s Medicaid program (including whether they are public or private, for-profit or not-for-profit) and outlining enforcement actions against health plans for inappropriately denying care, disenrolling costly members, and other revenue-maximizing improprieties.

Provision of health care to the public; Public health or related laws, regulations, standards, or guidelines; Public health or related public policy

Abstract

The racial health (in)equity implications of a machine learning-based tool for emergency department triage: Examining feature bias

Stephanie Teeple
University of Pennsylvania, Philadelphia, PA

There is increasing evidence that ‘artificial intelligence’ technologies inadvertently entrench social injustice across many sectors. One prominent scholarly response has been to conduct ‘algorithmic fairness’ research that declares a particular algorithm ‘fair’ or ‘discriminatory’. However, purely technical assessments of this kind focus narrowly on the mathematical model and fail to appreciate the broader sociotechnical system in which it is embedded. Moreover, current methods do not quantify the real-life impacts of these prediction models, which disproportionately affect people of marginalized identities. In response, this project examines the racial health equity implications of a machine learning-based clinical decision support tool for emergency department (ED) triage (E-Triage), currently in use in hospital settings. Specifically, we examine whether ‘feature bias’, or socially patterned misclassification error in predictor variables, contributes to differences in the predictive performance of the E-Triage model for patients racialized as Black versus White.

Feature bias in this study is operationalized as under-diagnosis of five key medical conditions that are important predictors in the E-Triage model. For each of these conditions, the literature demonstrates under-diagnosis of Black patients relative to White patients due to structural barriers to quality healthcare. For this project, we fix the parameters of the E-Triage model and compare its performance for Black versus White patients in (1) synthetic data with simulated feature bias and (2) real electronic health record (EHR) data. Performance metrics are calculated using a nonparametric pairwise bootstrap. This study challenges the conceptualization of ‘objective’ risk prediction models for health and highlights the social patterning of clinical data.
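The simulate-then-bootstrap design described above can be illustrated with a minimal, hypothetical sketch. Nothing here is the authors' actual pipeline: the toy risk score stands in for the frozen E-Triage model, the under-diagnosis rates, group labels, and prevalences are invented for illustration, and only NumPy is used.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_group(n, underdiagnosis_rate):
    """Simulate one patient group.

    true_condition: latent comorbidity that drives outcome risk.
    recorded:       the EHR flag, subject to group-specific
                    under-diagnosis (false negatives) -- the 'feature bias'.
    """
    true_condition = rng.binomial(1, 0.3, n)
    recorded = true_condition * rng.binomial(1, 1 - underdiagnosis_rate, n)
    outcome = rng.binomial(1, 0.1 + 0.5 * true_condition)  # depends on truth
    return recorded, outcome

def fixed_model_score(recorded):
    # Stand-in for a frozen triage model: risk score driven only by the
    # recorded (possibly biased) feature, plus a little noise.
    return 0.5 * recorded + rng.normal(0, 0.1, recorded.size)

def auc(scores, labels):
    # Mann-Whitney formulation of ROC AUC (no external libraries needed).
    pos, neg = scores[labels == 1], scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]
    return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

# Group A: little feature bias; Group B: heavy under-diagnosis.
xa, ya = simulate_group(1500, underdiagnosis_rate=0.05)
xb, yb = simulate_group(1500, underdiagnosis_rate=0.40)
sa, sb = fixed_model_score(xa), fixed_model_score(xb)

# Pairwise bootstrap: resample both groups in each replicate and keep
# the distribution of the between-group AUC gap.
gaps = []
for _ in range(300):
    ia = rng.integers(0, len(ya), len(ya))
    ib = rng.integers(0, len(yb), len(yb))
    gaps.append(auc(sa[ia], ya[ia]) - auc(sb[ib], yb[ib]))
gaps = np.array(gaps)
lo, hi = np.percentile(gaps, [2.5, 97.5])
print(f"AUC gap (A - B): {gaps.mean():.3f}  95% CI [{lo:.3f}, {hi:.3f}]")
```

Because Group B's recorded feature masks a larger share of true condition-positives, the same fixed model discriminates worse for that group, and the bootstrap distribution of the AUC gap quantifies the uncertainty of that difference without any parametric assumptions.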

Epidemiology; Social and behavioral sciences

Abstract

Queer risk data: An emerging material commodity in global PrEP science

Amaya Perez-Brumer, PhD, MSc
University of Toronto, Toronto, ON, Canada

BACKGROUND: The emergence of therapeutic prevention technologies (i.e., pre-exposure prophylaxis [PrEP]) alongside personalized diagnostic innovations (i.e., HIV self-testing) has emphasized prevention and thus shifted focus from individuals at risk for HIV to those most at risk.

METHODS: Between 2016 and 2018, 50 in-depth interviews were conducted with Peruvian and American scientists and research staff engaged in HIV biomedical prevention research targeting people categorized as MSM and transgender women, in order to query practices of data measurement and collection. Audio files were transcribed verbatim and analyzed using Dedoose (v.6.1.18).

RESULTS: Narratives described the construction of queer risk data as a relational and subjective process in which the biomedical categories of “transgender women” and “MSM” were strategically deployed in line with the interests of the global HIV industry. Queer risk data was described as accruing both material value (e.g., future grants, publications) and affective value (e.g., being an MSM researcher), yet the ability to leverage this commodity differed depending on whether the researcher originated from the US or Peru.

CONCLUSION: Findings suggest that the emphasis on enumerative evidence about the most at-risk has deeply implicated people categorized as MSM and transgender women, making queer risk data a material commodity traded on the global HIV biomedical marketplace. Unique to HIV in its fourth decade, queer risk data flattens sexual and gender diversity into categories devoid of cultural and social context, obscuring the historical and sociopolitical dynamics of HIV vulnerability. The contextual injustices that pattern health risks are inscribed onto biomedical identity categories and onto queer risk data itself, rendering structural solutions to HIV prevention unactionable.

Public health or related organizational policy, standards, or other guidelines; Public health or related research; Social and behavioral sciences