AI in Public Benefits: How It’s Hurting Low-Income Families

As artificial intelligence (AI) becomes more common in everyday life, it is finding its way into many government systems. One of the biggest areas where AI is being used is public benefits programs. These programs, which include unemployment insurance, Medicaid, and food assistance (SNAP), are meant to support the people who need help the most. However, many states now rely on AI to manage these programs, often with severe consequences for the vulnerable people they are meant to serve.

The AI Fraud Detection Problem: A Real-Life Example from Michigan

In Michigan, the state's unemployment agency adopted an AI-based fraud detection system in 2011, with the goal of cutting costs and quickly identifying fraudulent claims. Between 2013 and 2015, however, the system flagged nearly 63,000 cases as fraud, and about 70% of those accusations were later found to be false. Many people were wrongly accused of fraud, faced harsh penalties, and in some cases lost everything.

In one tragic case, a woman was hit with $50,000 in fraud penalties and later took her own life. The situation became so serious that the state added a suicide hotline to its unemployment website. After acknowledging that the AI system was failing, Michigan switched back to human workers to review fraud cases in 2016. By 2017, the state had refunded over $21 million to people who were wrongly accused.

AI’s Growing Role in Public Benefits Across the Country

Sadly, Michigan's experience is not an isolated one. A recent report from TechTonic Justice, a nonprofit focused on how AI affects low-income people, found that AI is now used in almost every public benefits program across the United States. These systems determine eligibility, distribute benefits, and detect fraud.

In Medicaid, for instance, AI is used to decide who gets access to mental health services or whether someone can receive the care they need. The Social Security Administration uses AI to decide who qualifies for disability benefits. And in the Supplemental Nutrition Assistance Program (SNAP), AI helps determine how much support someone receives and can even flag potential overpayments or fraud.

While the intention behind using AI in these systems is usually to improve efficiency and reduce costs, in practice it has often made things worse for the people who rely on these benefits.

How AI Leads to Delays and Denials

One of the biggest problems with AI in public benefits is delays and denials. When AI flags a case as suspicious, it can take weeks for human workers to investigate and resolve the issue. In the meantime, people who rely on these benefits can be left without the support they desperately need.

Michele Evermore, a senior fellow at The Century Foundation, pointed out that when AI makes mistakes, it can create massive backlogs in cases. Human workers who are already overwhelmed by the volume of cases may feel pressure to just follow what the AI suggests, even if it’s wrong. This can lead to unfair decisions and the denial of benefits to people who are entitled to them.

The Lack of Transparency and Accountability

Another major issue is the lack of transparency in how AI makes decisions. Many people affected by these systems have no idea that AI is involved at all. Even those who do find out often cannot access the algorithms or data behind the decisions, which means they cannot understand or challenge why their benefits were denied or reduced.

Kevin De Liban, the founder of TechTonic Justice, has seen firsthand how AI systems have hurt people. In one example, he helped a group of people in Arkansas who had their home care hours reduced because of an AI system that made decisions about their eligibility. The AI system used strict criteria that resulted in people with severe disabilities, like those who are quadriplegic or have cerebral palsy, losing up to 50% of their care. Many of these people were left in unsafe and uncomfortable conditions.

AI in Public Benefits: The Path Forward

While it’s clear that AI has caused harm in many instances, there’s also potential for it to be used in a more positive way. For example, automating certain parts of the process, like assigning cases to workers or scheduling appointments, could help improve efficiency without negatively affecting people’s benefits.

However, De Liban argues that any changes involving AI in public benefits should be implemented carefully and tested thoroughly. The stakes are too high for vulnerable individuals, and AI should not be used in a way that risks their well-being.

As AI continues to play a larger role in public benefit systems, it’s important to keep a close eye on how it impacts real people. Governments should prioritize transparency, accountability, and fairness to ensure that these technologies don’t end up doing more harm than good.

In the end, while technology can be a powerful tool, it should never come at the expense of people’s health, livelihoods, and dignity. The real question is: can we use AI responsibly, or is it time to rethink how much control we give machines over such critical areas of life?

(Source: fastcompany.com)