
Sometimes artificial intelligence can literally be a matter of life and death.
Last year, a man in Belgium died by suicide, reportedly after being encouraged by a chatbot. The Netherlands is debating whether AI should be allowed to address questions related to euthanasia. Elsewhere, researchers are using AI to predict the likelihood that patients with advanced cancer will survive the next 30 days.
My own experience suggests that the trend toward putting life-and-death questions to AI is real. When she learned that I had studied computer science, a professor at a university I was visiting immediately began asking me questions about life and death. She had no mental health issues. She was simply worried about developing Alzheimer’s disease as she aged, and she wondered whether an AI model could determine when cognitive impairment would rob her of the ability to make responsible decisions.
Fortunately, I don’t run into questions like that very often. But I do meet many people who hope that new technology can eliminate uncertainty. Earlier this year, Danish researchers created an algorithm dubbed the “doom calculator” that predicts a person’s probability of dying within four years with more than 78% accuracy. Within weeks, several copycat bots appeared online claiming to predict users’ dates of death.
The idea that advanced computers can tell us when we will die is not new; think of science fiction or horror movies. In the age of ChatGPT, though, such ideas seem more realistic than ever. As a computer scientist, I remain skeptical. The reality is that while AI is capable of many things, it should not be confused with a crystal ball.
Algorithmic forecasts, like actuarial tables, are genuinely useful: they can predict how many people in a population will die in a given period. But they cannot give a definitive answer about any individual’s life expectancy. The future is not predetermined: a healthy person may be hit by a bus tomorrow, and a smoker who has never exercised may live to be 100.
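A quick simulation (with made-up numbers, not real actuarial data) illustrates the gap between population-level accuracy and individual uncertainty:

```python
import random

random.seed(0)

# Hypothetical example: suppose an actuarial model says each person
# in a cohort has a 2% chance of dying within a given year.
p = 0.02
cohort = 100_000

# Population level: the total number of deaths is tightly predictable.
deaths = sum(random.random() < p for _ in range(cohort))
print(f"Expected deaths: {cohort * p:.0f}, simulated: {deaths}")

# Individual level: the same model says almost nothing definitive
# about any one person; each outcome is still a 98/2 coin flip.
one_person_died = random.random() < p
print(f"Did this particular person die? {one_person_died}")
```

The aggregate count lands very close to the expected 2,000 every time the simulation is run, yet no run can tell you in advance which individuals are in that 2,000. That is exactly the difference between an actuarial table and a crystal ball.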
Even if AI models could make meaningful individual predictions, knowledge about health is constantly changing. Once, no one knew that smoking caused cancer; the health landscape has changed dramatically since we discovered that it does. Likewise, new treatments can render old data obsolete: according to the Cystic Fibrosis Foundation, the average life expectancy for people with the disease has increased by more than 15 years since 2014, and new drugs and gene therapies promise even greater advances.
If you’re looking for certainty, this may be disappointing. But the more I study how people use data to make decisions, the more I believe uncertainty is not a bad thing. People want clear information, yet my research shows that when people have more information to guide their choices, they may feel less confident and make worse decisions. Bad omens can create a sense of helplessness, while uncertainty (as anyone who has played the lottery knows) can make us dream of, and work toward, a brighter future.
AI tools are certainly useful in low-stakes situations. A recommendation algorithm is a great way to find new shows to watch, and if it picks a bad one, you can always watch something else. AI is also useful in more serious situations: if a fighter jet’s onboard computer can intervene to avoid a collision, for example, an AI prediction could save a life.
The problem begins when we see AI tools replacing rather than complementing our abilities. While AI is good at recognizing patterns in data, it cannot replace human judgment. (Dating app algorithms, for example, are notorious for being terrible compatibility advisors.) Moreover, algorithms tend to make up answers with confidence rather than accept uncertainty, and can also exhibit troubling biases depending on the data sets used to train them.
What conclusion should be drawn? For better or worse, we must learn to tolerate uncertainty in life and perhaps even enjoy it. Just as doctors learn to tolerate uncertainty in order to care for their patients, we must make important decisions without knowing the specific outcomes.
This may be unpleasant. But it’s what makes us human. As I warned the woman worried about Alzheimer’s, AI cannot assess the value of a single moment in life. The problems that arise in a person’s life should not be left to insensitive models.
The poet Rainer Maria Rilke once told a young writer that we shouldn’t try to escape uncertainty; we should learn to love the questions themselves. We cannot know how long we have left to live, how long a relationship will last, or what life has in store for us. AI can’t answer these questions for us, and we shouldn’t ask it to. Instead, let’s recognize that the most difficult and meaningful decisions in life are the ones we make ourselves.