A Report from TechTonic Justice
Inescapable AI
The Ways AI Decides How Low-Income People Work, Live, Learn, and Survive
The use of artificial intelligence, or AI, by governments, landlords, employers, and other powerful private interests restricts the opportunities of low-income people in every basic aspect of life: at home, at work, in school, at government offices, and within families. AI technologies derive from a lineage of automation and algorithms that have been in use for decades with established patterns of harm to low-income communities. As such, now is a critical moment to take stock and correct course before AI of any level of technical sophistication becomes entrenched as a legitimate way to make key decisions about the people society marginalizes.
Employing a broad definition of AI, this report represents the first known effort to comprehensively explain and quantify the reach of AI-based decision-making among low-income people in the United States. It establishes that essentially all 92 million low-income people in the U.S. states—everyone whose income is less than 200 percent of the federal poverty line—have some basic aspect of their lives decided by AI.
Key Findings
Medicaid: 73 million low-income people are exposed to AI-related decision-making in Medicaid through the eligibility and enrollment process, the determination of home- and community-based services, or, where a state uses private companies to manage the Medicaid program, the prior authorization process for medically necessary services. As a result, people are denied health insurance, home-based care needed to avoid nursing facilities, and medically necessary treatments and medicines.
Medicare Advantage: About 16.5 million low-income people are exposed to AI-related decision-making through the prior authorization processes used in Medicare Advantage programs. As a result, people are denied medically necessary treatments and medicines.
Private Health Insurance (through employers or federal subsidies): About 30.6 million low-income people are exposed to AI-related decision-making through the prior authorization processes in private health insurance. As a result, people are denied medically necessary treatments and medicines.
Supplemental Nutrition Assistance Program, or SNAP: 42 million low-income people are exposed to AI-related decision-making in SNAP through the eligibility and enrollment process or detection of alleged fraud. As a result, people are denied vital benefits needed to buy food, wrongly disqualified from the program, and falsely accused of wrongdoing.
Social Security disability benefits (SSI and Social Security Disability Insurance): About 13.8 million people, including 10.6 million low-income people, are exposed to AI-related decision-making in the Social Security Administration’s disability benefits programs through current or planned uses of AI technologies in various parts of benefit administration, including the eligibility determination process and enforcement of asset limits. As a result, people experience temporary or permanent losses of income and are wrongly accused of being overpaid benefits.
Unemployment Insurance: About 1.1 million low-income people are exposed to AI-related decision-making in the Unemployment Insurance program through the eligibility and enrollment process, identity verification practices, and the detection of alleged fraud. As a result, people are denied critical income, experience severe delays in receiving benefits, and are falsely accused of fraud.
Housing: About 16.3 million low-income households, or 39.8 million low-income people, are exposed to AI-related decision-making through landlords’ use of background screening systems, and more are exposed through rent-setting algorithms and surveillance technologies. As a result, people are denied housing, must pay higher rents than they otherwise would, and experience the pressures of being constantly watched while at home.
Employment: At least 32.4 million low-wage workers are exposed to AI in the context of work. Of these, at least 24.4 million low-wage workers are exposed to AI-related decision-making through employers’ use of AI to determine who gets hired and to surveil, manage, and evaluate them. And an additional 8 million low-wage “gig” workers who work full-time have their wages set by AI. As a result, people are denied job opportunities, fair pay, and fair working conditions.
Education (K-12): About 13.25 million low-income children are exposed to AI-related decision-making through school districts’ use of AI to determine if they are likely to drop out or engage in criminal activity, with many more exposed through the ubiquity of AI surveillance technologies. As a result, children are labeled as failures, harassed by law enforcement, and experience the pressures of being constantly watched.
Language services: About 5 million low-income people with limited English proficiency are exposed to some form of AI-based translation in government benefit offices, schools, hospitals, medical clinics, law enforcement agencies, or courts. As a result, they experience delays, benefit denials, misunderstandings about vital information, or an inability to access services.
Domestic violence: At least 2 million low-income people are exposed to AI-related decision-making through police departments’ use of AI to assess the risk of further violence to survivors. In addition, countless more survivors are subjected to abusers’ use of AI to create deepfakes to coerce them and to falsify evidence in court. As a result, survivors find that authorities do not take the violence seriously enough, have diminished ability to escape violent situations, and face greater obstacles to obtaining any relief in court.
Child welfare (sometimes called “family policing”): At least 72,000 low-income children are exposed to AI-related decision-making through government child welfare agencies’ use of AI to determine if they are likely to be neglected. As a result, these children experience heightened risk of being separated from their parents and placed in foster care.