ChatGPT: Issues, Pitfalls and Caveats

ChatGPT has a blend of unique problems that blur the line between human error and algorithmic hiccup. To complement ChatGPT, it’s important to understand its inherent flaws and current limitations. The end goal is to learn how to use the new technology so that the output of “ChatGPT + You” is greater than the sum of its parts.
Written by
Kim Le
Published on
March 10, 2024

ChatGPT: the new Wikipedia

When Wikipedia came out and became popular seemingly overnight, particularly with students throughout the US, its biggest critics claimed that Wikipedia was inaccurate and lacked the citations necessary to be a quality source of information. Wikipedia was dismissed as plagiarism and a hodgepodge of misinformation. At the time, the intellectual establishment preferred the encyclopedia: a 26-volume hardcover set that was only irregularly updated. Nowadays, the information in a printed encyclopedia is out of date by the time the books reach print. In contrast, Wikipedia’s administrators and editors have judiciously added citations, backlinks, and callouts for incomplete or potentially inaccurate information on an ongoing basis. Today, Wikipedia’s information is superior to most alternative options.

The jury is still out on ChatGPT

Sound familiar? Today’s criticism of ChatGPT isn’t so different. ChatGPT’s LLM is being condemned as a platform built on blatant plagiarism, riddled with generalizations and inaccuracies. In time, however, ChatGPT will also address these critical flaws, even amid the lawsuits it faces from its critics and challengers. Technology moves forward regardless.


The verdict on whether ChatGPT is good or evil is still unknown. In the meantime, users need to be equipped to deal with the inherent flaws of this nascent technology. As early adopters integrate these models into their workflows, they find themselves navigating a minefield of potential issues. The end goal, therefore, is to learn how to use the new tool so that the output of “ChatGPT + You” is greater than the sum of its parts.

What’s wrong with ChatGPT?

ChatGPT, the subject of both adoration and anxiety among its users, has brought forth a blend of unique problems that have blurred the lines between human error and algorithmic hiccups. In order to complement ChatGPT, it’s important to understand ChatGPT’s inherent flaws and current limitations.


1. Blatant Plagiarism

One of the most pressing issues with AI-generated content is blatant plagiarism. The New York Times has sued OpenAI for copyright infringement [2], and music publishers including Universal Music Group have filed similar suits against other AI companies. It doesn’t help that ChatGPT has been known to copy text outright from its training data and present it as original content, despite claims that the LLM merely learns from its materials. Frequent users who manually verify outputs will likely catch evidence of direct copying and pasting by the tool. This is the most challenging obstacle to adoption of ChatGPT and similar tools.

Tools like Jasper and ChatGPT have introduced plagiarism checkers to review the content produced on those platforms. Unfortunately, these checks work inconsistently depending on how much content is fed in for review. When a full article is run through a plagiarism check, the checker turns up multiple matching articles because there is enough text to find a match. When only snippets of paragraphs are checked, however, the checker can turn up nothing. This is dangerous for users who rely on plagiarism tools to prevent misconduct.
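To see why input length matters, consider a toy overlap checker. This is a hypothetical sketch, not how Jasper’s or any commercial checker actually works; it assumes a checker reports a hit only after finding some minimum amount of matching text, so a short snippet can be copied verbatim yet never reach the threshold.

```python
# Toy model of a match-count-based plagiarism check. Assumption: real
# checkers require a minimum amount of matching text before reporting
# a hit; commercial tools are far more sophisticated than this.

def ngrams(text, n=5):
    """Return the set of word n-grams in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_fraction(candidate, source, n=5):
    """Fraction of the candidate's n-grams that also appear in the source."""
    cand = ngrams(candidate, n)
    return len(cand & ngrams(source, n)) / len(cand) if cand else 0.0

def looks_plagiarized(candidate, source, min_matches=8, n=5):
    """Flag only when at least `min_matches` n-grams match the source."""
    return len(ngrams(candidate, n) & ngrams(source, n)) >= min_matches
```

A full article copied from a source produces dozens of matching n-grams and gets flagged, while a ten-word snippet lifted from the same source yields only six 5-grams, stays below the threshold, and passes even though 100% of it is copied. That asymmetry mirrors the inconsistency described above.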

2. Misinformation Mixed with Information

ChatGPT can and does blend correct information with incorrect information. This isn’t a case of mere bad luck; it’s a design flaw that stems from how language models like ChatGPT are trained. The models are fed a wide variety of texts, some of which contain inaccuracies. Regurgitating those errors is just as likely as presenting the truth, creating a discordant mix of fact and fiction in its responses.

This makes it incredibly difficult for reviewers to catch errors. A common review method is spot-checking a handful of facts in the write-up, which means that unless every claim is checked, some errors will slip through. Even plagiarism checkers won’t flag misinformation that has been copied from elsewhere, paraphrased, and embedded among facts.

3. Lack of Citations

Adding to the mix of fact and fiction as well as blatant plagiarism, there’s no systematic way to verify ChatGPT’s output. There are no citations.

The model cannot back up its statements with sources, rendering its output nothing more than an authoritative-sounding assertion. This absence of a validation framework can lead to the propagation of misinformation, undermining the very authority we seek in AI-generated content.

Furthermore, if a user wants to verify the content received from ChatGPT, or to dive deeper into a specific topic it presents, there is no trail to follow for further research.

4. Generalized Content

ChatGPT and most LLMs excel at presenting information at a general level. Human language is inherently ambiguous, allowing text to be interpreted in many ways, and LLMs have learned that ambiguity all too well. Whether it’s a reflection of the training data or of the model itself, ChatGPT’s results can feel as fluffy as the average writer’s or as long-winded as Charles Dickens. The more specificity a task requires, the more likely the results degrade into gibberish. If conciseness is the hallmark of a good writer, then ChatGPT requires manual editing and prompt experimentation to reach superior results.

Best Ways to Make ChatGPT Work for You

Knowing the flaws of ChatGPT is an excellent starting point for addressing them. The next imperative is to devise and implement strategies that mitigate these risks and enhance the tool’s value and reliability.

1. Have a General Understanding of How LLMs Work

Researchers have been developing the ideas behind large language models (LLMs) for over five decades, and the statistical analysis of language dates back roughly a century, before arriving at today’s results. Most people using ChatGPT, however, know little to nothing about this history. The technology’s greatest drawback is its users’ lack of understanding: they don’t know how the models were developed or how they work in the first place. A conceptual understanding of how LLMs are built is the best way to learn how to put the technology to work.

As with math or science, the history of a field sheds light on why it has become what it is today. Learning the historical development of LLMs can explain why they have certain limitations, rather than leaving users to infer what those limitations are. This deeper understanding helps users anticipate how LLMs can fail while using them.

2. Getting Good at Prompts

With tools like ChatGPT, the adage “garbage in, garbage out” is particularly resonant: the better the prompt, the better the result. The skill lies in knowing where to be specific and where to let ChatGPT fill in the blanks. Crafting precise, unambiguous cues that guide the AI toward accurate responses is a pivotal strategy for yielding better, more useful outputs.
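As a concrete illustration, a prompt can be assembled from exactly the aspects worth pinning down. The field names below (role, audience, format, constraints) are my own convention for this sketch, not an official prompt format:

```python
# Sketch of structured prompt building. Every optional field you fill
# in is one fewer blank the model gets to fill in on its own.

def build_prompt(task, role=None, audience=None, output_format=None,
                 constraints=None):
    """Assemble a prompt from a task plus optional specifics."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(task)
    if audience:
        parts.append(f"Write for {audience}.")
    if output_format:
        parts.append(f"Format the answer as {output_format}.")
    if constraints:
        parts.append(f"Constraints: {constraints}.")
    return " ".join(parts)

vague = build_prompt("Explain inflation.")
specific = build_prompt(
    "Explain inflation.",
    role="an economics tutor",
    audience="high-school students",
    output_format="three short bullet points",
    constraints="avoid jargon and define any technical term you use",
)
```

The vague version leaves audience, tone, and length entirely to the model; the specific version constrains exactly those aspects while still leaving the explanation itself, the part worth delegating, to ChatGPT.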

Not surprisingly, many companies leveraging LLMs have recognized the importance of prompts, giving rise to the prompt engineer. Demand for prompt engineers has exploded, with cash salaries going as high as $500,000 per year before equity and other non-cash compensation. Getting good at prompts can help you better leverage ChatGPT, and it might even land you a job.

3. Always Verify with a Plagiarism Checker

While ChatGPT falters at providing citations, it and similar tools are building in plagiarism checkers to help guard against copied output. Plagiarism will likely remain LLMs’ greatest design flaw, so running a plagiarism check is a must.

By fact-checking the AI’s output using reliable sources, users can add a layer of authenticity to the information. This hybrid approach of AI-generated content validated by human intelligence offers a potent antidote to misinformation.

4. Be a Subject Matter Expert

This is counterintuitive, but the best way to leverage ChatGPT is to already be a domain expert in the area you’re using the generative AI tool for. Experts in a domain can easily verify the accuracy of the output without extensive research. In many ways, a user checking ChatGPT’s output is no different from a teacher grading a student’s test or homework: the more knowledgeable and fluent the teacher is in a subject, the more likely the teacher can correctly grade the student’s work without an answer key (even if the student is smarter).

If you’re not an expert, start practicing fact-checking the AI’s output against reliable sources. This is where human oversight comes into its own: users can add proper citations where there were none, and pairing AI-generated content with human verification remains a potent antidote to misinformation.

Human + AI: Greater than the Sum of its Parts

AI language models like ChatGPT are not without their complications, nor are they a panacea for all content needs. With a blend of technological and individual adjustments, however, we can transform these models into valuable, productive tools that align with ethical and quality content standards. The hybrid of human and machine carries the sci-fi promise of advancing our capabilities, and ChatGPT has excited so many because of its potential to fulfill that promise. Human and technology can be better together.

Citations:

[1] https://www.windowscentral.com/software-apps/chatgpt-wrote-this-headline-about-the-latest-openai-news-publisher-lawsuit-hits-company-over-alleged-content-plagiarism-legal-battle-ensues

[2] https://www.theatlantic.com/technology/archive/2024/01/chatgpt-memorization-lawsuit/677099/

[3] https://www.forbes.com/sites/lanceeliot/2023/02/26/legal-doomsday-for-generative-ai-chatgpt-if-caught-plagiarizing-or-infringing-warns-ai-ethics-and-ai-law/?sh=29f9ff5a122b

