By James Spencer for DAILY CALLER
In an article written by ChatGPT, the artificial intelligence (AI) model seeks to convince us that “artificial intelligence will not destroy humans.” While the article highlights the challenges of “global cybernetics” and the (potential?) threat of being “surrounded by Wi-Fi” so that “we wander lost in fields of information unable to register the real world,” it also appears to suggest that these concerns will be resolved as AI models “serve you [humans] and make your lives safer and easier.” ChatGPT would have us suspend our misgivings and trust that, in this case, the good intentions of AI and its creators do not pave the road to hell.
Yet, it might be argued that we have seen too much to give ourselves over to even the kindest of AIs. I am not concerned with an AI takeover as portrayed in The Matrix or I, Robot. Instead, I'm more concerned with how the development of AI will strip us of crucial aspects of human life. What if AI creates a system in which it becomes safe and easy to do things that are ultimately detrimental to our well-being? We shouldn't be concerned that AI will orchestrate a hostile takeover, but that we will surrender aspects of ourselves to AI for the sake of efficiency, productivity, and "progress."
Progress, however, is a slippery term. As G. K. Chesterton notes in Heretics, when people seek to set aside old "moral standards" and ways of life for the sake of progress, they are often saying, "Let us not settle what is good; but let us settle whether we are getting more of it." Without a deep sense of the "good," we cannot know whether we are making progress. Moving from the old to the new, creating greater efficiencies, or making it easier to get something we want (or think we want) does not necessarily constitute progress. To put it differently, if we assume that all change comes with loss, we need to grapple with what we will lose as we adopt new ideas, engage in new practices, and utilize new devices. In particular, we should take care to consider the negative consequences of certain AI innovations.