"The ethics of AI" is a category error dressed up as a philosophical crisis.
Ethics is not a property of tools. Fire has no ethics. If Vladimir Putin sets a building on fire to hold onto power, he is the evil actor, not the fire. If someone turns that same fire on Putin, it becomes a moral good. Same fire. The ethics belong entirely to the hand that holds it, or rather, to the heart that moves that hand.
Shouldn't this be obvious? When legions of academics convene with the government to debate tool ethics, you may suspect the confusion is not entirely accidental.
The confusion, one might answer, comes from the public panic: AI will surpass human intelligence and therefore poses a categorical threat to our existence.
When the Wright brothers flew the first airplane, nobody panicked that every bird species had been surpassed.
Karl Benz invented a tool that could outrun any human being. Were our ancestors wrong not to feel existentially threatened by a machine faster than every Olympic marathoner, past and future? Did the automobile end the Olympic Games?
A calculator performs multiplication faster than any person alive. Should we stop teaching children math just because a calculator can do it faster?
We build tools specifically to surpass our native capabilities. The fact that AI can process language at a scale and speed no human can match is not a warning sign. It is the very reason the tool was built.
So is there a genuine ethical question? Yes --- and it has nothing to do with the tool. It has to do with the humans deploying it: for what purpose, toward whose ends, with what accountability. But those questions don't require a $200,000-a-year ethics fellow at Google or Microsoft. They require what accountability has always required: transparency, consequence, and honesty.
Drawing public attention toward the supposed existential threat of the tool --- the fire, the car, the algorithm --- is a way of drawing it away from the wielding of the tool.
The more unprecedented and vaguely catastrophic AI can be made to sound, the more indispensable the people managing the mystique become, and the less visible the actual decisions made by actual humans.
Higher education has been especially eager to host this debate, which should surprise no one. A system dependent on controlling the transmission of knowledge has every incentive to place itself at the center of any grand moral reckoning over a technology that threatens to route around it.
Fake debates find their most enthusiastic amplifiers among those with a professional stake in keeping them going. You don't follow the money; you chase it.
Ongoing thread. More from EP Pajo to follow.

