A bank uses an AI model for loan approvals. It is discovered that the model denies loans to a disproportionately high number of applicants from a specific zip code, one strongly correlated with a protected demographic group. Even though these applicants' financial profiles are similar to those of approved applicants from other areas, their applications are rejected. What is the most likely ethical issue at play?
- A. Lack of model accuracy
- B. Insufficient training data volume
- C. Algorithmic bias
- D. Poor feature engineering
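The scenario describes a classic disparate-impact pattern: approval rates differ sharply across groups defined by a proxy feature (zip code) even when qualifications are comparable. As a minimal sketch with fabricated group labels and decision data, one common screen is the four-fifths (80%) rule, which flags a group whose selection rate falls below 80% of the highest group's rate:

```python
def approval_rate(decisions):
    """Fraction of applications approved (True = approved)."""
    return sum(decisions) / len(decisions)

# Fabricated sample decisions keyed by zip-code group (for illustration only)
decisions_by_group = {
    "zip_90210": [True, True, True, False, True, True, True, True],
    "zip_10001": [False, False, True, False, False, True, False, False],
}

rates = {group: approval_rate(d) for group, d in decisions_by_group.items()}

# Four-fifths rule: ratio of the lowest to the highest selection rate;
# a value below 0.8 is a common red flag for disparate impact.
ratio = min(rates.values()) / max(rates.values())
flagged = ratio < 0.8

print(rates)            # per-group approval rates
print(round(ratio, 3))  # selection-rate ratio
print(flagged)          # True if the disparity crosses the 0.8 threshold
```

A screen like this only surfaces the disparity; whether it constitutes unlawful or unethical bias depends on context, such as whether zip code acts as a proxy for a protected attribute, as in the scenario above.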