The Algorithm Might Hate You. Algorithms Don’t Have Feelings — But Their Bias Can Still Harm Appalachians


By Aiden Satterfield

When Fairness Isn’t Fair

I recently began my master’s studies at NYU, where a course on machine learning and algorithmic fairness opened my eyes to an unsettling reality: many of the algorithms we encounter aren’t as neutral as we’d hope. In fact, if you’re from a place like Appalachia or belong to a historically marginalized group (like African Americans), the “smart” systems making decisions about your life might not be favoring you at all. It’s not that an algorithm consciously or purposely hates anyone – but it can surely feel that way when the outcomes consistently work against certain communities. The culprit is bias embedded in the data and design. As researchers from the Greenlining Institute put it, “Poorly designed algorithms… threaten to amplify systemic racism by reproducing patterns of discrimination and bias that are found in the data.” In other words, an algorithm learns from history, and if history is biased, the algorithm’s decisions will be too. Let’s dive into two areas where this problem hits especially hard: hiring and criminal justice.

Bias in Hiring Algorithms

Tech optimists often argue that AI hiring tools could reduce human bias by focusing only on skills and merit. The reality so far is more sobering. Take the infamous case of Amazon’s experimental hiring algorithm. The story is a little older now, but it’s necessary context: the system was trained on past résumés, and most of Amazon’s past hires were men. The result? The AI learned to prefer male candidates. Even though gender was never an explicit input, the algorithm taught itself that male applicants were more “desirable” – it literally began downgrading résumés that mentioned women’s colleges. Amazon’s team eventually caught the issue and scrapped the tool, but it raises a big question: how many other hiring algorithms out there are quietly reproducing biases hidden in their training data?

Consider regional and language biases as well. If you’re a job seeker from Appalachia, the algorithm might not “hate” you – but it might not understand you either. A recent study on dialect and AI found that automated systems struggle with Appalachian English speech patterns. These tech tools, like speech recognition software or AI interviewers, are often tuned to standard accents and mainstream speech. That means candidates with a strong Appalachian accent or dialect could be misheard or misinterpreted by hiring algorithms. Worse, language can be a proxy for social bias. In an August 2024 Nature study, researchers tested large language models by having them evaluate speakers of different English dialects. The findings were startling. The AI models consistently gave speakers of African American English fewer job opportunities – often failing to match them with any occupation – or shunting them toward low-prestige, non-degree jobs. This wasn’t because of qualifications, but because the dialect differed from “standard” English. Preliminary tests showed similar bias against Appalachian English speakers, according to the University of Chicago researchers. In short, an AI sorting through job applications or scanning video interviews might unconsciously favor someone from suburban New Jersey over someone from rural Kentucky, simply due to speech or word choice. That’s algorithmic bias at work.

Biased Algorithms in Criminal Justice

If biased algorithms in hiring are troubling, their impact in criminal justice is downright alarming. Courts and law enforcement have increasingly turned to “risk assessment” software to inform decisions on bail, sentencing, and parole. These tools are supposed to remove human prejudice by using data-driven risk scores. But in practice, they can reinforce disparities under the guise of objectivity. A famous investigation by ProPublica looked at one widely used risk-scoring algorithm (known as COMPAS) and revealed a stark racial bias. The algorithm was wrong far more often for Black defendants, falsely labeling Black people as high risk at nearly twice the rate of white people with similar records. In plain terms, it frequently flagged Black individuals as likely reoffenders when they were not, leading to harsher treatment compared to white individuals. And this tool was actually used in real courtrooms.

Newer research suggests that even advanced AI can carry over these biases in subtler ways. Remember that dialect study? The same experiments also probed AI “judgment” in hypothetical court cases. The AI was asked to render verdicts and even decide sentences based on testimony given either in African American English (AAE) or in standard English. The pattern was chillingly predictable: when the testimony was in AAE, the AI convicted at higher rates – 68.7% versus 62.1% – and was more likely to hand down a death sentence. Again, the content of the case hadn’t changed one bit; only the dialect did. This suggests that an algorithm can pick up biases associating certain speech patterns with guilt or danger. While no court is (hopefully) letting an AI decide verdicts, it’s a powerful warning. Any automated system used in policing or courts – from predictive policing models to voice analysis in 911 calls – might inadvertently disfavor Black Americans or people from certain regions. Appalachians, for example, have distinct linguistic patterns and often face socioeconomic stigmas; an algorithm ill-equipped to account for those factors could produce unjust outcomes for them, too.

Looking at the Road Ahead

All this evidence paints a clear picture: the algorithm isn’t actively plotting against you, but if you come from Appalachia or belong to a marginalized racial group, its impartial façade might be hiding built-in biases. These biases seep in from historical data and a tech industry that hasn’t always prioritized diversity or fairness. The consequences are very real – lost job opportunities, lower pay, unjust prison sentences – and they often hit the same communities that have faced discrimination for decades, only now with a high-tech twist. The good news is that awareness is growing. Scholars and practitioners are calling for greater transparency and accountability in algorithmic systems, and some jurisdictions are considering regulations to audit algorithms for bias. As I continue my journey in algorithmic fairness at NYU, which houses a Center for Responsible AI, I remain cautiously optimistic. We can demand algorithms that are better trained and more inclusive, that understand an Appalachian drawl or an African American vernacular without judgment, and that truly level the playing field instead of tilting it further. But it will take concerted effort – from tech companies, policymakers, and everyday users – to ensure these “neutral” algorithms don’t keep quietly picking winners and losers. In the meantime, remember: if it feels like the algorithm hates you, it might just be doing what it was taught. And that is exactly what we need to change, before “smart” tech deepens old inequalities under a new disguise.


Author

Aiden Satterfield is a master’s student at New York University, where he studies Cybersecurity. A 7th-generation native of West Virginia, Aiden serves as co-editor and columnist for BBG Tech, where he explores the intersections of technology, innovation, and equity.

Read more of his work on Black By God, and support his vision to inspire diversity and innovation in West Virginia’s growing tech industry.

For more information or to connect, email aiden@blackbygod.org.