titles might be used. While these features appear neutral, they can inadvertently disadvantage certain groups. For example, heavily weighting 'years of experience' could lead to decisions that unfairly disfavor younger applicants, who may be equally skilled but have fewer years in the workforce.
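To make this concrete, consider the short sketch below. It is a hypothetical illustration only: the weights, candidates, and scores are invented for demonstration and do not depict any real screening tool. It shows how a facially neutral scoring formula that leans heavily on years of experience can rank a younger but equally skilled applicant far lower.

    # Hypothetical illustration: a simple weighted resume score.
    # The weights and candidate data are invented for demonstration only.

    WEIGHTS = {"years_experience": 0.7, "skill_score": 0.3}

    def resume_score(candidate: dict) -> float:
        """Weighted sum of screening features (illustrative, not a real product)."""
        return sum(WEIGHTS[feature] * candidate[feature] for feature in WEIGHTS)

    candidates = [
        {"name": "Applicant A (age 45)", "years_experience": 20, "skill_score": 80},
        {"name": "Applicant B (age 25)", "years_experience": 3,  "skill_score": 80},
    ]

    for c in sorted(candidates, key=resume_score, reverse=True):
        print(c["name"], round(resume_score(c), 1))
    # Both applicants have identical skill scores, yet the heavy
    # experience weight ranks the younger applicant far lower.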
The algorithms driving AI systems often assign different weights to various inputs, influencing the outcome. This prioritization, while designed to optimize decision-making, can unintentionally marginalize certain groups. For example, consider an AI system used by a public agency for allocating community development funds. If the algorithm prioritizes factors such as historical tax revenue or past project success rates, it may inadvertently disadvantage lower-income or historically underfunded communities. These areas, despite needing more resources, might receive less funding because the algorithm overlooks their potential for improvement and focuses on past performance metrics.
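The sketch below illustrates that allocation dynamic. The factors, weights, and community figures are invented for demonstration and are not drawn from any actual agency's system; it simply shows how a formula that only looks backward at performance can rank the neediest community last.

    # Hypothetical illustration of a funding formula that rewards past
    # performance. All figures are invented for demonstration only.

    def priority(community: dict) -> float:
        # Weighting historical revenue and past success ignores current need.
        return (0.5 * community["historical_tax_revenue"]
                + 0.5 * community["past_success_rate"])

    communities = [
        {"name": "Affluent district",    "historical_tax_revenue": 90,
         "past_success_rate": 85, "current_need": 20},
        {"name": "Underfunded district", "historical_tax_revenue": 30,
         "past_success_rate": 40, "current_need": 95},
    ]

    for c in sorted(communities, key=priority, reverse=True):
        print(c["name"], "priority:", priority(c), "need:", c["current_need"])
    # The district with the greatest need ranks last because the formula
    # measures only past performance, never present conditions.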
Proxy discrimination in AI occurs when an algorithm uses variables that, while not explicitly related to protected characteristics like race or gender, serve as stand-ins or proxies for these characteristics. This indirect form of discrimination can be particularly insidious because it often goes unnoticed, yet it can have profound impacts on fairness and equity. Proxy variables are attributes or factors that are not inherently discriminatory but are closely correlated with protected characteristics. For example, an AI system in a public agency might use zip code as a factor in decision-making processes, such as allocating resources or prioritizing service requests. However, since zip codes can closely correlate with racial and socioeconomic demographics, relying heavily on this factor could lead to decisions that inadvertently favor or disfavor certain groups based on where they live.
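A minimal sketch of how a proxy leaks a protected attribute appears below, assuming a toy mapping from zip codes to demographic groups. The zip codes, groups, and prioritization rule are entirely fabricated for illustration.

    # Hypothetical illustration of proxy discrimination. The zip codes
    # and their demographic makeup are fabricated for demonstration only.

    # Suppose these zip codes are strongly segregated by group.
    ZIP_DEMOGRAPHICS = {"00001": "group_x", "00002": "group_y"}

    # A rule that never mentions demographics, only zip code.
    def prioritize(request: dict) -> bool:
        return request["zip"] == "00001"   # a "high-revenue area" heuristic

    requests = [{"zip": z} for z in ["00001", "00001", "00002", "00002"]]

    served = [ZIP_DEMOGRAPHICS[r["zip"]] for r in requests if prioritize(r)]
    print("Groups served:", served)  # only group_x is ever prioritized
    # The rule is facially neutral, yet because zip code correlates with
    # group membership, outcomes split perfectly along demographic lines.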
AI-driven discriminatory feedback loops occur when AI systems, through their decisions and actions, inadvertently reinforce and amplify existing biases or inequalities. These feedback loops begin when an AI system makes decisions based on biased data or criteria. The outcomes of these decisions then become part of the new data set, which the AI continues to learn from, thereby reinforcing the initial bias. Over time, this cycle