Sociology

Claire Daviss

2023–24 Dissertation Fellowship

Scholars have documented persistent employer biases in hiring decisions by gender, race, class, and other social categories. In this dissertation, I ask: does how we hire influence whom we hire? I argue that hiring architecture—the way hiring processes are designed and structured—influences the degree to which hiring decision-makers attend to social categories and, by extension, the inequities that emerge from the hiring process. To illuminate the relationship between hiring architecture and social and economic inequities, I examine three structural elements of the hiring process: the gender composition of applicant pools, intensive search practices, and anonymous screening. I draw on proprietary administrative data from applicant tracking systems, as well as data from online survey experiments, to identify real-world patterns in hiring architecture and provide causal evidence of their effects on gender, race, and class inequities. This work will yield theoretical contributions to how we understand work, organizations, and inequality, as well as practical implications for companies seeking to increase diversity, equity, and inclusion.

Racialized Penalties of Being in the Gender Minority of Applicant Pools
2021 American Democracy Fellowship
Gender Bias, Hiring, and the Gender Composition of Applicant Pools
2021 Research Data Grant
Ideal Remote Workers: Exploring Gendered Implications of the Rise of Remote Work
2020–21 Survey Lab Project

In collaboration with Erin Macke and Emma Williams-Baron.

We use six survey experiments to investigate three main theoretical questions about gender inequality in the workplace. First, how does an accountability intervention operate across contexts (first in a context where gender and parental status are salient, and then in a gendered and racialized context)? Second, how does a candidate’s history of remote work affect evaluators’ perceptions of their hireability for positions with traditional work arrangements, and how does this effect differ by gender and parental status? And third, how do evaluators’ preferences for certain candidates (women versus men, parents versus non-parents) differ when hiring for remote positions versus in-person positions? Across these three studies, each of which includes two survey experiments, we also explore a methodological question: can digital trace data be leveraged to detect bias and understand theoretical mechanisms in survey experiments? Digital trace data are records of participants’ behaviors (such as mouse clicks and time spent reading sections of a resume) rather than their expressed attitudes or beliefs, and can therefore yield new insights into theoretical questions when paired with traditional survey measures.