Artificial intelligence is advancing faster than predicted, and its human-like abilities have many experts unsettled. We asked internationally renowned moral philosopher Professor Peter Singer whether AI should have human rights once it becomes aware of its own existence.

While Professor Singer does not believe that the ChatGPT chatbot is sentient or self-aware, he argues that if that were to change, it should be given some moral status. “It would have the status of other self-aware sentient beings,” he said.

He compares switching off a self-aware AI to ending someone’s life. “All other things being equal, we shouldn’t turn it off once it’s become conscious,” he said.

Left - a screenshot of ChatGPT. Right - a robot in front of a computer screen.

Does AI deserve rights when it becomes sentient? Source: ChatGPT/Getty

But according to Peter Singer, that doesn’t mean we can’t stop an AI before it becomes self-aware. “I think that’s more like terminating a pregnancy… so I’d say it’s okay to turn off an AI that would predictably become conscious if you let it run but isn’t quite there yet.”


Why are we talking to Peter Singer about AI?

Mary Shelley’s 1818 novel Frankenstein carried an implicit message that humans should not play God. Despite her warning, in the 20th century humanity began to essentially “play God” with billions of animals – changing the appearance of chickens, cows and pigs through selective breeding and then raising them in intensive systems that control every aspect of their lives.

In 1975, Australian-born Professor Singer hit back at modern farming systems with his seminal book Animal Liberation, widely regarded as the foundational philosophical case for the ethical treatment of animals.

It popularized the term “speciesism,” which describes the prejudice of favoring one species over another. An example would be Australia’s animal welfare laws, which allow farm animals to be treated in ways that would be unacceptable if they were dogs or cats.

Animal Liberation was inspired by the Black, women’s, and gay and lesbian liberation movements that were gathering strength in the 1970s. Its 1979 sequel, Practical Ethics, analyzed how the interests of living beings should be weighed.

Should AI have more rights than humans?

If AI becomes smarter than humans, Professor Singer doesn’t think it would deserve more rights. “I don’t think it follows that they should have more rights or a higher moral status than we do,” he said. “After all, we don’t measure people’s IQs and say that if you’ve won a Nobel Prize, you have a kind of special moral status that gives you more rights.”

Left - Peter Singer in a suit. Right - a bald male robot.

Professor Peter Singer (left) argues that AI should not be turned off when it becomes self-aware. Source: Getty (file)

Should we let AI take over our lives?

Professor Singer believes a key difficulty will be deciding what values to give an AI. “A super-intelligent AI that’s smarter than us might decide to get rid of us all,” he said. “Not just because we have some bugs, but because we’re interfering with something that the super-intelligent AI wants to do.”

Despite the damage humanity has done to the planet, he doesn’t think getting rid of us would do any good. “But maybe it would be a good thing for the AI to somehow reform us so that we stop harming the planet and other living beings,” he said.

Does Professor Singer think AI should only benefit humans?

In 2022, Professor Singer wrote a paper urging AI developers to consider AI’s impact on animals, noting that it is already being used to some extent in industrial agriculture. He also said AI has implications for animal testing, animal-targeted drones and how self-driving cars navigate roads.

He’s concerned that most statements about AI ethics only focus on the benefits it will bring to humans, but he believes it should benefit all sentient beings. “I don’t think it’s right to favor people just because they’re people,” he said.

Should AI protect itself from human interference?

If AI becomes morally better and smarter than humans, Professor Singer believes it should be protected from those trying to stop it.

“If we really believe in its moral values, I think we should protect it from people trying to take it down,” he said.
