Digital Discrimination

Born out of our rich history as one of the country’s largest and most sophisticated civil rights law firms, our Digital Discrimination Practice Group draws on the insights of attorneys, data scientists, academics, and social justice leaders to advance digital equity and inclusion.

Over the past three decades, our attorneys have successfully tackled a broad range of pressing, systemic civil rights issues – including the U.S. government’s discriminatory use of criminal background checks, widespread gender pay inequities in corporate America, speech suppression efforts in the technology industry, and much more. Building on that record, the members of our Digital Discrimination Practice Group are leading the way on digital redlining and the harms associated with algorithmic bias in hiring decisions.

Digital Redlining

We live in a world of big data, where social media platforms, technology companies, and search engines aggressively track your digital trail. In the process, they collect a massive amount of personal data – everything from your gender, age, and race to seemingly harmless information about the shows you stream, where you go on vacation, and what you typically purchase.

But big data has a dark side.

Imagine a world where banks, employers, landlords, and other powerful interests close their physical doors to you simply because of your protected characteristics. Now imagine being denied the same opportunities online. Echoing the traditional redlining of the 20th century, which federal and state laws prohibit, online platforms and businesses use data to promote economic opportunities to certain communities while deliberately ignoring others. This new form of redlining, which affects access to financial products, jobs, housing, education, and other opportunities, is particularly harmful across racial, gender, and economic lines – making it one of the most important civil rights issues of our time.

Algorithmic Bias in Hiring

Employers are increasingly using artificial intelligence and machine learning technologies to find, screen, and select job candidates. While these technologies are designed to show employers the “best” candidates for a given position, they often perpetuate and amplify existing biases and discriminatory practices.

The attorneys in our Digital Discrimination Practice Group are at the forefront of addressing employers’ use of AI and the discriminatory practices that can emerge from it.

Through our participation in federal hearings, the legislative process, and various AI working groups, we focus on the evolving risks associated with employers’ use of algorithmic technologies – including applicant tracking systems, the use of “games” to gather psychometric data on candidates, and the use of unsupervised machine learning to source and recruit applicants on social media platforms.

These and other employment-focused AI tools are constantly changing (or “learning”); our advocacy helps ensure that, as they evolve, they remain in compliance with federal and state anti-discrimination laws.
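
To make the compliance question concrete: one benchmark commonly applied in this area is the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures – if a protected group’s selection rate is less than 80% of the highest group’s rate, the tool may be flagged for potential adverse impact. The sketch below, which uses hypothetical group names and numbers, shows the arithmetic; it is illustrative only and not a substitute for legal analysis.

```python
# Minimal sketch of the EEOC "four-fifths rule" for adverse impact.
# All group names and counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening tool advanced."""
    return selected / applicants

# Hypothetical screening outcomes per demographic group.
outcomes = {
    "group_a": {"applicants": 400, "selected": 200},  # rate = 0.50
    "group_b": {"applicants": 300, "selected": 90},   # rate = 0.30
}

rates = {g: selection_rate(o["selected"], o["applicants"])
         for g, o in outcomes.items()}
highest = max(rates.values())

for group, rate in rates.items():
    # Impact ratio: this group's rate relative to the highest group's rate.
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, "
          f"impact ratio {impact_ratio:.2f} -> {flag}")
```

Because these tools keep “learning,” an audit like this must be repeated as the model and its applicant pool change; a one-time check cannot establish ongoing compliance.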

(*Prior results do not guarantee a similar outcome.)