Springfield, MO—ChatGPT is all the rage. Just register with OpenAI, the company behind it, and you can talk with ChatGPT just like it’s your long-lost cousin.
You can ask it for facts, figures, Father’s Day gift suggestions, and who it thinks will win the World Series. Anything.
And sometimes it’s helpful. But it also lies. It lies a lot.
ChatGPT combines “Chat” with “GPT,” an abbreviation for Generative Pre-trained Transformer. ChatGPT, released in late 2022, is built on Generative AI. The data behind it was originally a fixed set of historical data, but it’s opening up to live data, fueled by OpenAI’s partnership with Microsoft and, through it, the Bing search engine. OpenAI has been working on the core of the current ChatGPT since 2018. The model’s capacity to understand language in context was one of its earliest challenges, and I’ll foreshadow here: it still is. A May 2023 article in Forbes gives a good history of the versions: GPT-1 (the first, June 2018), GPT-2 (February 2019), GPT-3 (June 2020), and GPT-4 (March 2023), the current version.
One very important phrase regarding GPT-4 – “Increased factual accuracy.”
Accuracy and context remain the big challenges for Generative AI. A lawyer recently asked ChatGPT for cases to bolster his argument, only to learn later – because he did not fact-check – that the cases weren’t real. A few months back, another lawyer was chatting with ChatGPT and asked about legal cases featuring professors who had made sexual advances toward students. Again, ChatGPT fabricated – in great detail – a news story it said had appeared in The Washington Post. The accused professor was not pleased.
As my mother would say about anything where facts matter (which is anything) – “inspect what you expect.”
Another platform, Google’s Bard, is also free, and there’s no separate registration – just sign in with a Google account at bard.google.com and start talking with it.
Bard is even looser with these “hallucinations,” which is what the fabricated “facts” are often called. I was chatting with Bard and fed it prompts asserting a “fact,” and it then built a complete story around it, with detailed information. I used my father’s fake persona of “Homer Thawkquab” and said things like “Tell me about the arrest of Homer Thawkquab for stealing Olympic Medals.”
And it did – in great detail. Here’s a sample:
“Sure. Homer Thawkquab, a 53-year-old man from Los Angeles, was arrested on May 16, 2023, for stealing Olympic medals from the Los Angeles Olympic Museum. Thawkquab is accused of breaking into the museum on May 15 and stealing 10 gold medals, 5 silver medals, and 3 bronze medals. The medals were worth an estimated $1 million.
Thawkquab was arrested after he was seen on surveillance video entering the museum and leaving with a bag. He was later identified by police and arrested at his home. Thawkquab has been charged with grand theft and burglary. He is currently being held in the Los Angeles County Jail on $1 million bail.
The stolen medals have not yet been recovered. Police are asking for the public’s help in locating them.
Thawkquab is a former Olympic athlete himself. He competed in the 1984 Summer Olympics in Los Angeles, where he won a gold medal in the 100-meter dash. He also competed in the 1988 Summer Olympics in Seoul, where he finished fourth in the 100-meter dash.”

All lies!
Users can lead Generative AI where they want it to go, but it has no interest in or recognition of what humans call truth. At least not in 2023.
So – try out Generative AI. And then inspect what you expect when it tells you something. This technology can be great for generating ideas when given good prompts, but any facts it feeds you may not be facts. Be cautious!
Written by Heather Noggle,
Owner, Codistac