AI, Luddites, and the Future – Part 2
In Part 1, I looked at four arguments that anti-AI folks give for being anti-AI. I don’t find any of them convincing. However, just because I’m not anti-AI doesn’t mean I don’t have concerns.
My biggest concern is that there will be no regulation around AI. A laissez-faire attitude towards AI would be a mistake. Deep fakes are shockingly easy to make now thanks to AI. People’s likenesses need to be protected. We also have to be on guard against the spread of misinformation. As things get more and more automated, we’ll have to establish rules and policies for liability when something inevitably goes wrong.
I’m also worried that schools don’t and won’t know how to deal with AI. And I don’t mean the fact that kids are going to cheat using AI. I mean schools need to teach kids how to use AI responsibly: how to double-check its outputs and how to spot misinformation.
A third concern is that the anti-AI crowd will be unwilling to participate in the conversation. They are so committed to opposing AI that they refuse to admit there are legitimate ways to use it. If only AI boosters are part of the discussion, it will be hard to get good regulation. We need people who are willing to use AI but who will also push for regulation and reform of the industry. That’s how we’ll get more responsible AI.
Many philosophers are very concerned that we may create a sentient AI in the near future. That would raise a whole host of issues. How would we even know if an AI is sentient? Would it be morally permissible to turn off a sentient AI? Would it be morally permissible to use a sentient AI as a tool? Would we have to grant rights to a sentient AI? And what level of rights — more like those of animals, or of people? It can be dizzying to think about.
One thing I’m not very concerned about is the AI apocalypse. Nor am I concerned about humans becoming superfluous or losing our cognitive superiority. I don’t think an apocalypse is likely. Superfluity depends on your point of view, and I can’t see how we will ever be superfluous from our own point of view. And I just don’t care whether we remain cognitively dominant or not.
The way I see it, there is some need for caution, but AI is opening up a world of opportunities. It would be a huge mistake to let fear ruin those opportunities.