Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models
ACLMay 29, 2023Best Social Impact Paper
To recognize and mitigate harms from large language models (LLMs), we need to
understand the prevalence and nuances of stereotypes in LLM outputs. Toward
this end, we present Marked Personas, a prompt-based method to measure
stereotypes in LLMs for intersectional demographic groups without any lexicon
or data labeling. Grounded in the sociolinguistic concept of markedness (which
characterizes explicitly linguistically marked categories versus unmarked
defaults), our proposed method is twofold: 1) prompting an LLM to generate
personas, i.e., natural language descriptions, of the target demographic group
alongside personas of unmarked, default groups; 2) identifying the words that
significantly distinguish personas of the target group from corresponding
unmarked ones. We find that the portrayals generated by GPT-3.5 and GPT-4
contain higher rates of racial stereotypes than human-written portrayals using
the same prompts. The words distinguishing personas of marked (non-white,
non-male) groups reflect patterns of othering and exoticizing these
demographics. An intersectional lens further reveals tropes that dominate
portrayals of marginalized groups, such as tropicalism and the
hypersexualization of minoritized women. These representational harms have
concerning implications for downstream applications like story generation.