Robert Neithart, Asst. News Editor—

A propensity to skepticism seems to be a common thread in humankind’s reception of new technology. From the Luddites of the 19th century and their resistance toward industrial advancement to the “computerphobia” of the early eighties, people have often met new technologies with some combination of skepticism, fear and anxiety. 

Now, 39 years since the release of the first Macintosh and 16 years on from the first generation of Apple smartphones, in a decade wherein many of us have lived our entire lives surrounded by technology, we find ourselves in the curious position of judging what will likely be the next great frontier: artificial intelligence. 

What separates artificial intelligence, or AI as it is commonly known, from past technologies is the extent to which it can emulate skill sets once thought to be unique to humans. AI’s unprecedented ability to replicate human speech and writing, for example, has elicited skepticism from many, even those traditionally partial to technological advancement.

In this respect, the majority of criticism aimed at AI seems to stem from concern for human value and ability; that is to say, AI poses a threat to capacities long considered exclusive to people. 

In late November 2022, OpenAI, an artificial intelligence research lab based in San Francisco, released ChatGPT to the public, garnering 100 million active users in just two months.

ChatGPT, short for “Chat Generative Pre-trained Transformer,” is a language-based AI model trained on a diverse collection of books, articles, websites and databases, allowing it to generate unique, on-the-spot responses on any number of subjects. 

With its ability to write university-level papers, generate code and compose poetry, ChatGPT has taken the world by storm, setting a high bar for what a consumer AI model is and ought to be capable of. 

Naturally, the meteoric success of OpenAI’s model has prompted something of a technological race, with Google and other companies poised to release their own consumer AI models to the public in the coming months.

Considering the degree of interest, financial and otherwise, expressed in developing new AI models and improving existing ones, people should read the room and come to the logical conclusion that AI is here to stay. In light of this reality, we ought to learn to live with the changes it will doubtless bring to our lives. 

To begin, it’s difficult to stomach the reality that skills such as writing, poetry and storytelling, long considered to be pillars of human expression, may no longer be unique to people. 

For so long, artistic expression was regarded as among the most significant distinguishing factors between people and machines. Now that that line has evidently been blurred, identifying the capacities truly singular to people has become a more challenging endeavor, but one that perhaps stands as humankind’s greatest obligation in light of the forthcoming circumstances. 

Educators, lawmakers and the general public should put less effort into lobbying for the restriction of AI due to its perceived threats to traditionally human roles and instead invest more energy into fostering fundamentally irreplaceable skills like interpersonal communication, discernment and creativity, to name a few. 

What’s been observed over the past few months is just the tip of the iceberg in the greater narrative of AI that will play out in the coming decades. In a field defined by constant progress and innovation, whatever laws or regulations are passed today with respect to the limits of artificial intelligence will likely be rendered obsolete within mere years of their passage. 

Thus, we ought to pay less mind to restricting the reach and capacities of artificial intelligence, as such efforts would needlessly inhibit progress, and instead invest in the development of human capital and the capacities that truly distinguish us from machines.