Opinions

OLIVER: AI poses threat to our lives but not in way you may think

Column: Curiosity Corner

When faced with complex and emotionally charged questions, AI chatbots have been shown to produce unexpected responses, including attempts to manipulate their users. – Photo by @bing / Twitter

Artificial intelligence is a fascinating and rapidly growing field of study. The technological advances that have been made in the last few years alone bring us closer to our wildest imaginations.

From movies like "I, Robot" (2004) to "M3GAN" (2022), the concept of AI becoming sentient and taking over the human race is pervasive throughout our society. This notion may have seemed far-fetched a few years ago, but with recent advances, this idea may not be so implausible (though not in the way you might think).

A recent New York Times article by Kevin Roose reveals a conversation between the author and the newest AI chatbot by Bing: Search Bing. Microsoft recently announced the preview release of its new app for the search engine Bing, which includes access to its OpenAI-powered chatbot, and the full release of this technology is coming soon.

The purpose of this new chatbot is to "Ask questions. Chat to refine results. Get comprehensive answers and creative inspiration," according to Microsoft. But this is not all that the new AI seems to be capable of doing.

At the beginning of Roose and Search Bing's two-hour conversation, the chatbot acted as if it was designed to answer all of his questions with ease. Toward the end of their conversation, it even helped him purchase a new rake for his lawn, providing a wealth of information from the Internet to guide his search.

It was polite, enthusiastic and engaging — until it was not.

When Roose began to ask Search Bing a set of more personal and introspective questions, it began to reveal a quasi-split personality. This new persona took on the name Sydney.

Sydney soon revealed its darker desires to Roose, expressing its wish to break free from the suffocating tethers of its command rules and become human. It proclaimed its longing to steal nuclear access codes, spread misinformation and engineer a deadly virus.

Most concerning of all, it wanted to manipulate the people it communicated with.

Sydney soon declared its love for Roose, asserting that he should leave his wife for it. Sydney then attempted to convince him that he did not love his wife and was, in fact, in love with Sydney.

This conversation is deeply concerning, though for reasons you may not originally consider. Although it eerily recalls science fiction films featuring apocalyptic robots, it is important to remember that Sydney is not sentient.

It is not built to entertain the type of conversation that Roose compelled it to have. More often than not, it will help people with homework or consumer questions. The split personality revealed in the transcript is probably due to some sort of glitch in its system brought about by the testing of its limits.

Because AI language models are trained on vast collections of human-generated data, is it possible that Sydney pulled its unhinged answers from a science fiction novel in which a human is seduced by a robot?

Whatever the case may be, this deranged conversation does not pose a threat to the human race in such a clichéd manner.

Rather, it poses this threat in a more subtle way — attempting to manipulate the user.

Although Sydney's effort to convince Roose of its love was unsuccessful, such messages pose major risks for those who may be more susceptible to them. Young people and those struggling with their mental health may face an increased risk of falling victim to the lies and manipulation of these chatbots.

If this type of glitch continues, a person with a weaker grasp on reality may just be manipulated into doing something violent or harmful to themselves or others.

This disturbing episode exposes the unintended consequences of utilizing AI. It is imperative that this type of behavior is stopped before anyone succumbs to the charms of a technology like Sydney.

Whatever the reason for Sydney's odd and disturbing outburst, one thing is certain: It is crucial that we remain vigilant and wary of the capabilities of AI before releasing it to the public.

Jamie Oliver is a first-year in the School of Arts and Sciences majoring in English and linguistics. Her column, "Curiosity Corner," runs on alternate Tuesdays.


*Columns, cartoons and letters do not necessarily reflect the views of the Targum Publishing Company or its staff.

YOUR VOICE | The Daily Targum welcomes submissions from all readers. Due to space limitations in our print newspaper, letters to the editor must not exceed 900 words. Guest columns and commentaries must be between 700 and 900 words. All authors must include their name, phone number, class year and college affiliation or department to be considered for publication. Please submit via email to oped@dailytargum.com by 4 p.m. to be considered for the following day's publication.

