Artificial intelligence, a term coined by American computer scientist John McCarthy at a conference at Dartmouth College in 1956, is now pervasive in modern society. Through tools like Siri, Alexa, ChatGPT, DALL-E, and many others, artificial intelligence (AI) has made its way into consumers’ daily lives and is more accessible to the general public than ever before. As the technology improves, opinions on AI have become part of the public discourse; to understand artificial intelligence in context, however, one first needs to define it. AI refers to the ability of machines or computer programs to perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language processing.
AI has many applications, from facial recognition on phones to filtering emails. However, the applications of AI currently available to the general public are far less capable than what is in development. According to the Massachusetts Institute of Technology, these simpler versions of artificial intelligence work in a three-step process. First, the AI receives input: information that can be sorted and used to tell the algorithm what output the user wants. Next, processing occurs; using the input data, the AI applies pattern recognition and cross-references other data sets to understand the desired outcome. Finally, the AI predicts that outcome using pre-existing information in its database, collected from various sources. This is how most artificial intelligence works; the main difference between systems is their training.
A widely used application of artificial intelligence is email filtering. When a new email arrives as input, the AI reads it and looks for common patterns, phrases, and language shared between the new message and archived ones. Next, the AI cross-references the message with others that fit the same criteria. These types of artificial intelligence require less processing power and can be used in more places. Their limitation is that, unlike more general systems, they can perform only a single task.
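The sketch below is a simplified, hypothetical illustration of that input-processing-prediction loop applied to email filtering. The sample messages and the choice of the scikit-learn library are assumptions made for the example, not details drawn from MIT’s description.

```python
# A minimal sketch of the input -> processing -> prediction loop described above,
# applied to spam filtering. The training messages here are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Input: archived emails the system has already seen, with known labels.
archived = [
    "Claim your free prize now",        # spam
    "Meeting moved to 3 pm tomorrow",   # not spam
    "You have won a cash reward",       # spam
    "Homework is due on Friday",        # not spam
]
labels = ["spam", "not spam", "spam", "not spam"]

# Processing: turn each message into word counts so patterns can be compared.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(archived)

# Cross-referencing: learn which words tend to appear in each category.
model = MultinomialNB()
model.fit(features, labels)

# Prediction: classify a new incoming email against the archived patterns.
new_email = ["Free reward waiting for you"]
print(model.predict(vectorizer.transform(new_email)))  # expected: ['spam']
```

A real email filter works with far more data and more sophisticated features, but the overall shape of the process is the same: sorted input, pattern matching against past examples, and a prediction.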
One recent development in AI that technology professionals believe will revolutionize the field, if fully realized, is Q* (Q-star). Technology professionals believe that once it is available, Q* could be the first artificial general intelligence (AGI) system, one able to equal or surpass human intelligence. According to technology reporter Timothy B. Lee, Q* uses a new technique called “chain of thought prompting.” Chain of thought prompting allows AI to reason through problems systematically, meaning Q* would be able to think nonlinearly and adapt to solve problems without receiving external programming. This type of technology would be able to think abstractly, show creativity, and mimic human common sense.
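As a rough, hypothetical illustration of what chain of thought prompting looks like in practice, the example below contrasts a direct question with one that asks the model to reason step by step. The wording and the sample problem are invented for illustration and are not drawn from Q* itself.

```python
# A hedged illustration of chain of thought prompting: instead of asking for an
# answer directly, the prompt asks the model to work through the problem in steps.
# Both prompts below are made-up examples, not prompts from any real system.

direct_prompt = (
    "A classroom has 4 rows of 6 desks and 3 desks are broken. "
    "How many usable desks are there?"
)

chain_of_thought_prompt = (
    "A classroom has 4 rows of 6 desks and 3 desks are broken. "
    "How many usable desks are there? "
    "Think through the problem step by step before giving the final answer."
)

# A model following the second prompt would first compute 4 * 6 = 24 desks,
# then subtract the 3 broken ones to reach 21, rather than guessing a number outright.
print(direct_prompt)
print(chain_of_thought_prompt)
```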
Harvard University has touted AI as an autonomous system that can surpass human capabilities in many economically valuable tasks, potentially automating certain workforce responsibilities and removing human error from them. With the development of AGI, artificial intelligence has only become a more divisive topic. Still, many professionals believe the advancement of AI is necessary both for a rapidly developing society and for meeting the growing demands for output placed on individual contributors.
“The advent of new technology and innovation always creates more jobs and job security,” stated Tyrone Heggins ’02, Senior Director of Information Security at Becton Dickinson. Heggins added that artificial intelligence could be very beneficial to employees and employers at many levels. Conversely, there is skepticism and fear surrounding the development of AI. “It is useful in some regards, it can help out people with their jobs and make jobs a lot easier, but at the same time, it makes a lot of people lazy. I have students plugging an essay into ChatGPT and then having the AI write it for them, and they’re losing the skills that they need to develop,” said Christopher Brantley, World History Teacher.
At St. Benedict’s Prep, views on AI vary from person to person. Faculty and students who have interacted with AI say it can increase efficiency, while others highlight unanticipated costs. “Like a lot of technology, there is a use for it; however, because of the society we live in, it will be used to benefit people at the top, and it will be used to cut labor for a lot of people. You kind of see this with the writers’ strike . . . in Hollywood,” said Joshua West-Williams ’13, Technology Support Specialist and Coding Teacher.
Heggins sees developments in AI as necessary next steps in technological innovation. “A few years ago nobody pushed buttons in the elevator themself, they had elevator operators. And before that, we had rotary phones. Then, I grew up watching Star Trek where they had transponders, and I thought FaceTime would never be something we would do. But, all of those inventions that I mentioned are commonplace now and I think AI will be something else we just do,” Heggins said.
Humans are decades away from realizing AI’s full capabilities. With some people pushing for the development of more complex AI and many others wary of what that development could bring, only one thing is certain: AI isn’t going away, and the need to adapt is inevitable.