Authors

Joseph Blass

Publication Date

10-2019

Abstract

The ability of social media companies to precisely target advertisements to individual users based on those users’ characteristics is changing how job opportunities are advertised. Companies like Facebook use machine learning to place their ads, and machine learning systems present risks of discrimination that current legal doctrines are not designed to address. This Note explains why it is difficult to ensure such systems do not learn discriminatory functions and why it is hard to discern what they have learned so long as they appear to be performing well on their assigned task. It then shows how litigation might adapt to these new systems to provide a remedy to individual plaintiffs, but explains why deterrence is ill-suited in this context to preventing such discrimination in the first place. Preventing machine learning systems from learning to discriminate requires training those systems on broad, representative datasets that include protected characteristics, data that the corporations training these systems may not have. The Note proposes a proactive solution: a third party would safeguard a rich, large, nationally representative dataset of real people’s information and could allow corporations like Facebook to train their machine learning systems on it, while keeping the private data themselves out of those corporations’ hands.
