A.I. Is Deciding Who You Are


https://www.nytimes.com/2025/11/02/opinion/ai-privacy.html

Guest Essay

Nov. 2, 2025, 9:00 a.m. ET

Video credit: Mathieu Labrecque

By Maximilian Kasy

Dr. Kasy is the author of the book “The Means of Prediction: How AI Really Works (and Who Benefits).”

Imagine applying for a job. You know you’re a strong candidate with a standout résumé. But you don’t even get a call back.

You might not know it, but an artificial intelligence algorithm used to screen applicants has decided that you are too risky. Maybe it inferred you wouldn’t fit the company culture or you’re likely to behave in some way later on that might cause friction (such as joining a union or starting a family). Its reasoning is impossible to see and even harder to challenge.

It doesn’t matter that you practice safe digital privacy: keeping most personal details to yourself, avoiding sharing opinions online and prohibiting apps and websites from tracking you. From the scant details it has about you, the A.I. predicts how you’ll behave at work, drawing on patterns it has learned from countless other people like you.

This is increasingly life under A.I. Banks can use algorithms to decide who gets a loan, learning from past borrowers to predict who will default. Some police departments have fed years of criminal activity and arrest records into “predictive policing” algorithms that have sometimes sent officers back to patrol the same neighborhoods.

Social media platforms use our collective clicks to decide what news — or misinformation — each of us will see. In each case, we might hope that keeping our own data private could protect each of us from unwanted outcomes. But A.I. doesn’t need to know what you have been doing; it only needs to know what people like you have done before.

That’s why privacy can no longer be defended one person at a time. As we adapt to living with A.I. as a larger part of our lives, we need to exert collective control over all of our data, to determine whether it’s used to benefit or harm us.

Back in the 2000s, as concerns around digital privacy rose, computer scientists built a privacy-protection framework called “differential privacy” that could protect individuals’ identities while still collecting data to learn about users’ patterns more broadly. Algorithms that use differential privacy work by adding a small amount of randomness to the data, so no one can tell whether any particular person is in it, while leaving the overall results essentially unchanged.
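
To make the idea concrete, here is a minimal sketch in Python of one standard differential-privacy technique, the Laplace mechanism, which answers a counting question with calibrated random noise. The function name, the sample data and the privacy parameter are illustrative assumptions for this sketch, not the systems Apple or the Census Bureau actually run.

    import numpy as np

    def private_count(records, predicate, epsilon=0.5):
        # Count how many records satisfy the predicate, then add Laplace noise.
        # Adding or removing any one person changes the true count by at most 1,
        # so noise with scale 1/epsilon masks any individual's presence.
        true_count = sum(1 for r in records if predicate(r))
        noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
        return true_count + noise

    # Illustrative use: estimate how many users enabled a feature without
    # revealing whether any particular user did.
    users = [{"id": i, "feature_on": i % 3 == 0} for i in range(10_000)]
    print(private_count(users, lambda u: u["feature_on"], epsilon=0.5))

The noisy answer still reflects the overall pattern (roughly a third of users), which is exactly the point that follows: the aggregate signal survives even when no individual can be singled out.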

These protections mean people might be more willing to share their data with third parties, and these differential privacy algorithms are now quite common. Apple iPhones are built with these algorithms to collect information about user behavior and trends, without ever revealing what data came from whose phone. The 2020 U.S. census used differential privacy in its reporting on the American population to protect individuals’ personal information.

Yet the patterns in the data remain, and they are enough to guide powerful actions. The technology company Palantir is building an A.I. system called ImmigrationOS for Immigration and Customs Enforcement to identify and track people for deportation by combining and analyzing many data sources (including Social Security records, Department of Motor Vehicles and Internal Revenue Service data, license plate readers and passport activity), getting around the obstacle posed by differential privacy.

Even without knowing who any one person is, the algorithm can likely predict the neighborhoods, workplaces and schools where undocumented immigrants are most likely to be found. A.I. algorithms called Lavender and Where’s Daddy? have reportedly been used in a similar way to help the Israeli military identify and locate targets for bombardment in Gaza.

With climate change, one person’s emissions don’t alter the atmosphere, but everyone’s emissions will destroy the planet. Your emissions matter for everyone else. Similarly, sharing one person’s data seems trivial, but sharing everyone’s data, and tasking A.I. with making decisions using it, transforms society. Everyone sharing his or her data to train A.I. is great if we agree with the goals given to the A.I. It’s not so great if we don’t agree with those goals and the algorithm’s decisions might cost us our jobs, happiness, liberty or even our lives.

To safeguard ourselves from collective harm, we need to build institutions and pass laws that give people affected by A.I. algorithms a voice in how those algorithms are designed and what they aim to achieve. The first step is transparency. Much as corporations must publish financial reports, companies and agencies that use A.I. should be required to disclose their objectives and what their algorithms are trying to maximize, whether that’s ad clicks on social media, hiring workers who won’t join unions or total deportation counts.

The second step is participation. The people whose data are used to train the algorithms, and whose lives are shaped by them, should help decide their goals. On the model of a jury of peers that hears a civil or criminal case and renders a verdict together, we might create citizens’ assemblies in which a representative, randomly chosen set of people deliberates and decides on appropriate goals for algorithms. That could mean workers at a firm deliberating about the use of A.I. in their workplace, or a civic assembly that reviews the objectives of predictive policing tools before government agencies deploy them. These are the kinds of democratic checks that could align A.I. with the public good, not just private power.

The future of A.I. will not be decided by smarter algorithms or faster chips. It will depend on who controls the data — and whose values and interests guide the machines. If we want A.I. that serves the public, the public must decide what it serves.

Maximilian Kasy is a professor of economics at the University of Oxford and the author of “The Means of Prediction: How AI Really Works (and Who Benefits).”

