BrainStream


Are Artificial Intelligence Bots Like ChatGPT Basically Google’s Crazy Uncle?


ChatGPT and other artificial intelligence bots have been getting a lot of press recently, and for good reason. The potential use cases for this technology range from enhanced medical treatments to automatic computer code generation, and its limits are only beginning to be tested.

Last fall, four law professors from the University of Minnesota used a well-known artificial intelligence chatbot to produce answers to exams in four courses. These responses were then graded anonymously alongside exams from genuine students. The professors found that ChatGPT's mean performance of C+ was lower than the B+ average achieved by human students. Even so, if this level of performance were maintained throughout the entire law curriculum, the chatbot would still qualify for a law degree. Impressive, right?

Data Validation is Still Crucial

The biggest issue I’ve experienced with these AI Bots is that they answer questions with great authority, and if left unchallenged, the spread of bad information is inevitable. This week, I tested a real-time AI program that claims to access data from Google to help it answer questions about current events. In the test, I asked the Bot to make a prediction about the 2023 World Series. The response I got back was incredibly fast and detailed, a lot like the crazy uncle at the family gathering who has an answer for all of life’s questions regardless of validity. While some of the Bot’s answer had merit, much of the information it provided was flat-out wrong. This is a trivial example, but imagine the implications if I had not already known the correct information or had simply left the answer unchallenged.


The Google Generation: Blind Faith in Technology

Over the past 24 years, we have grown accustomed to asking Google, Siri, and Alexa all sorts of questions, and we usually don’t question the answers we get back. We do this because, for the most part, the answers they provide are correct and helpful. Unfortunately, this blind faith in new AI technologies like ChatGPT may lead us to accept wildly inaccurate information as true. In my World Series experiment, the AI Bot claimed that Dinelson Lamet of the Padres had won his second straight Cy Young Award as the National League’s best pitcher; in reality, he was injured and won only one game in 2022. Again, a trivial issue in the grand scheme of things, but imagine if this type of inaccuracy appeared in answers to bigger questions. We would certainly find more misinformation leaking into our news feeds.

Artificial Intelligence Bots like ChatGPT offer great promise, but one key fact remains: we must continue to possess the individual knowledge and curiosity to challenge the integrity of the information they provide.
