Would you ever kick your Roomba? Or leave a scathing review of the robot that delivered your green curry at a Thai restaurant? What about sending a mean message to ChatGPT?
What if you did any of these things and the machine responded with a single, oil-slicked tear?
This is just a thought experiment. Artificial intelligence (AI) may imitate conversation and complete tasks, but it does not feel joy, fear or pain.
As technology advances, we circle an uncomfortable question: what makes something worthy of kindness or rights, and would we know if AI ever crossed that line?
What defines a person?
Dr Clas Weber, Discovery Early Career Researcher Award fellow and Senior Lecturer in Philosophy at the University of Western Australia, helps answer this seemingly simple question.
“We do not consider rocks or mountains to be persons, nor do we consider animals, even higher primates, to be persons,” says Clas.
“What’s the distinguishing characteristic? Certain mental capacities such as self-awareness and rationality.”
But what about a baby, or someone with Alzheimer’s, whose rationality is not yet developed or is on the way out?
“We typically consider every human as a person – even if they lack the relevant mental traits,” says Clas.
“Being a person means having certain fundamental rights and being deserving of special moral considerations.”
This muddies the waters. If personhood is about cognition but also about moral standing, the definition blurs. That raises a sharper question: do non-persons – even robots – deserve kindness too?
Morality beyond ourselves
“When robots become conscious, when they also – like humans and animals – become capable of suffering or of enjoying their lives, then certainly they should have rights,” Australian philosopher Peter Singer told the ABC.
Clas agrees. “Not only persons, but other conscious creatures such as animals deserve our moral consideration.”
Given how some people treat their innocent Roombas, Clas believes this could one day extend to AI.
“Although I think that current AI systems are not yet sentient, our best theories of consciousness allow for the possibility of AI consciousness,” says Clas.
“This raises the profound problem of AI wellbeing. In such a future, we would at minimum have the duty to do our best not to harm AI systems.”
This could mean avoiding cruelty or even recognising AI interests alongside our own.
But aren’t robots nothing more than lines of code, incapable of free will? Perhaps. But what if that’s all we are too?
Rights for algorithms
Some argue that advanced AI will never be more than pattern prediction. Others note that human thought may also be just that – neurons firing as they were hardwired to do.
Clas says dismissing AI sentience on those grounds rests on a fallacy.
“Saying AI can’t be conscious because its parts aren’t is like saying individual H₂O molecules aren’t wet, therefore water itself can’t be wet.”
So would AI deserve rights? Clas has two criteria: consciousness and agency.
“When they have consciousness – genuine feelings and subjective experiences,” says Clas.
“The second is agency – when systems have interests and goals and can make plans to achieve them.”
He finds the consciousness test most convincing, but says “it is very hard, or perhaps impossible, to tell from the outside when that criterion is met”.
If it ever is, obligations follow. “Conscious AI systems would have the right not to be harmed unnecessarily,” says Clas.
And if their mental capacities surpass our own? Clas suggests the conclusion is hard to avoid.
“In this case, it would be hard to make the case that they don’t deserve the same moral status that we enjoy.”
So you are under no moral obligation to be kind to Grok – yet.
But if you ever see a trail of cellulose ether fluid dripping from the sorrowful eyes of your Roomba, perhaps it is time to be kind to the robots.