Even though ChatGPT can produce some convincing-looking outputs, as shown in this case, it is still capable of making things up or getting things wrong, so here are some ways to check the facts quickly for yourself.

One recent example of how ChatGPT may become the subject of a defamation lawsuit after reportedly creating completely incorrect (and highly damaging) content is the case of Brian Hood, the mayor of Hepburn Shire, 120km northwest of Melbourne, Australia. Mr Hood reported being told by members of the public that ChatGPT had (falsely) named him as a guilty party in a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s. In reality, Mr Hood had been the whistleblower in the scandal, hadn't been convicted of a crime, and hadn't served time in prison, as claimed in ChatGPT's output about the scandal.

How Can ChatGPT Get Things Wrong? 

As an AI language model, ChatGPT uses algorithms to generate responses based on patterns and associations learned from substantial amounts of data. It is a machine learning model, not a human being, so it may not always provide perfect responses. As OpenAI says: “Current deep learning models are not perfect. They are trained with a gigantic amount of data created by humans (e.g. on the Internet, curated and literature) and unavoidably absorb a lot of flaws and biases that long exist in our society.”

How ChatGPT Could Get Things Wrong 

Clearly, it’s possible for ChatGPT to give out incorrect or inaccurate information, and even OpenAI’s boss, Sam Altman, has said in interviews that he thinks regulators and society should be involved with the new generative chatbot technology to guard against potentially negative consequences. Some of the main ways that ChatGPT can get things wrong include:

– Limited knowledge. ChatGPT’s knowledge is based on the data it was trained on, which has a cut-off date of September 2021. If new information or developments have emerged since then, ChatGPT may not be aware of them. This situation may, however, soon improve when ChatGPT uses plugins, such as a web browsing plugin, to bring its answers up to date.

– Unclear context. Since ChatGPT relies on the context of the conversation to generate appropriate responses, if the context is unclear or ambiguous, it may generate a response that is irrelevant or incorrect.

– Biased training data. The data that ChatGPT was trained on may contain biases that affect the responses it generates. For example, if the training data contains a disproportionate amount of biased information on a particular topic, ChatGPT may generate responses that lean towards that bias. Critics have also noted that gender biases and other biases may be present, which could skew answers.

Can’t Verify The Accuracy Of Its Answers 

It’s also important to note that ChatGPT does not have the ability to verify the accuracy of the information it provides. Therefore, as ChatGPT says itself, it’s always a good idea to fact-check any information received from ChatGPT against other sources to ensure its accuracy.

What Are The Best Ways To Fact Check? 

To ensure that any ChatGPT outputs that you use are accurate, there are several ways to fact-check information, including: 

– Cross-checking with other reputable sources, such as news articles or academic publications. If the information matches up across multiple sources, it is more likely to be accurate.

– Looking for supporting evidence such as statistics or quotes from experts. This can help you verify the accuracy of the information provided. 

– Checking the credibility of the source, e.g. looking for information about the author, the publisher and the publication date to ensure the source is reputable and up to date.

– Using fact-checking websites such as Fullfact.org, Snopes, or FactCheck.org to verify the accuracy of information. These websites specialise in investigating and verifying information to ensure that it is accurate (a simple way to script searches on these sites is sketched after this list).

– Consulting with experts in the relevant field (if you’re able to). If you are still unsure about the accuracy of the information provided by ChatGPT, an expert in the subject can help you confirm or correct it.
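For readers who want to semi-automate the first part of that checklist, the short Python sketch below simply opens the search pages of the fact-checking sites mentioned above for a given claim, so you can review the results yourself. It is a minimal illustration only: the search URL patterns (the "s" and "q" query parameters) are assumptions about how those sites currently accept queries and may need adjusting, and the script performs no verification of its own.

    # fact_check_helper.py - open fact-checking site searches for a claim.
    # NOTE: the query-string formats below are assumptions and may change;
    # check each site's own search page if the links don't work as expected.

    import urllib.parse
    import webbrowser

    # Assumed search URL templates for the sites named in the article.
    SEARCH_TEMPLATES = [
        "https://fullfact.org/search/?q={query}",
        "https://www.snopes.com/?s={query}",
        "https://www.factcheck.org/?s={query}",
    ]

    def open_fact_check_searches(claim: str) -> None:
        """Open a browser tab searching each fact-checking site for the claim."""
        encoded = urllib.parse.quote_plus(claim)
        for template in SEARCH_TEMPLATES:
            webbrowser.open(template.format(query=encoded))

    if __name__ == "__main__":
        # Example: a claim taken from a ChatGPT answer that needs checking.
        open_fact_check_searches("Brian Hood convicted in foreign bribery scandal")

This is only a convenience for gathering sources quickly; the actual judgement about whether a ChatGPT output is accurate still rests with the human reader.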

What Does This Mean For Your Business? 

ChatGPT is certainly a time-saving tool, but it is also just a machine learning AI language algorithm, albeit an impressive one. As such, given an incorrect data source and/or unclear context, it can get things wrong, so it’s worth spending a little time reading its outputs and carrying out some basic fact-checking before publishing them on a website or blog.

As in so many areas of business, building checks into processes can help reduce mistakes and maintain quality, and the same applies to using the output of generative chatbots. However, with the introduction of GPT-4 and the use of plugins, such as a web browsing plugin, ChatGPT may soon be able to produce answers that are more up-to-date and contain fewer mistakes.

By Mike Knight

