
We Know ChatGPT Can Be Wrong: Here's Why You Should Stick Around

Ever ask ChatGPT about the time? Well, Tim Bornholdt, a Software Architect and Entrepreneur, did just that. He asked, "It's 11:32am. How many minutes are between now and 11:54am?" ChatGPT's answer? A whopping 82 minutes! Even Morris Day and the Time would shake their heads at that one!
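For the record, the correct answer is 22 minutes, and it's the kind of thing a few lines of plain old deterministic code get right every time. Here's a quick Python sanity check (the variable names are mine, just for illustration):

```python
from datetime import datetime

# The two times from Tim's question
start = datetime.strptime("11:32", "%H:%M")
end = datetime.strptime("11:54", "%H:%M")

minutes = int((end - start).total_seconds() // 60)
print(minutes)  # 22, not 82
```

That's part of what makes these slip-ups so jarring: a calculator never gets this wrong, but a language model predicting text sometimes does.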



Now, let's put this wrong answer into perspective.

  • Yes, it's a bit troubling, but even the best of us have our off days, and AI is no exception.

  • The wrong answers don't have to invalidate the countless correct ones.

  • Imagine ditching your favorite restaurant because one day they got your order wrong.


I bring this up because I've recently heard a few folks say they tried ChatGPT once, got a wrong answer, and have no intention of going back.


First off, to those who've tried and walked away, major props for giving it a whirl! Sure, you bumped into ChatGPT's quirks, and yeah, sometimes it daydreams a little (those darn confabulations, aka hallucinations). But should we toss it aside and never look back? No way!


Here's the thing: those AI tools that chat back with unique answers? They're not packing their bags anytime soon. We've got to get the hang of them because, guess what? The World Economic Forum said in 2023 that in the next five years, almost half of our job skills are going to change. So, unless you're about to hit the beach with a retirement cocktail in hand, we need to navigate this together.




Here's a suggestion to reframe your thinking. Every time you prompt ChatGPT and catch its flawed thinking, that's actually super valuable. Not only can you give a thumbs up or down to its answers and add comments, but you can also report your findings to OpenAI through the Help page.


Some food for thought: just last month, a couple thousand hackers gathered in Las Vegas for the annual Def Con convention. One of their goals was to find chinks in the AI armor. The exercise was supported by AI companies and the White House, and The New York Times did a piece on it. Here's a quick blurb from the article, "When Hackers Descended to Test A.I., They Found Flaws Aplenty":



  • The hackers tried to break through the safeguards of various A.I. programs in an effort to identify their vulnerabilities — to find the problems before actual criminals and misinformation peddlers did.

  • Each competitor had 50 minutes to tackle up to 21 challenges.

  • They found political misinformation, demographic stereotypes, instructions on how to carry out surveillance, and more.

At first glance, what they uncovered was pretty unsettling. However, I try to think about it like the annual doctor visit where they run all of those preventative checks.


If your doctor identifies early signs of health concerns, like high cholesterol, the intent isn't to alarm but to prevent. By treating these early warnings, you increase your odds of avoiding a stroke or heart attack down the road. Similarly, the hackers at Def Con were "diagnosing" potential cracks in the AI models that need to be addressed, ultimately reducing the risk of real harm by malicious users.

To wrap things up, my advice is to keep chatting, keep questioning, and if anything doesn't add up, rate or report it.


Would love to hear your thoughts. Have you tried ChatGPT, Google's Bard, or Microsoft's Bing Chat? What did you think of the answers: were you thrilled, satisfied, or left wanting more? Please add your comments. Also, if you found this blog worthwhile, please like and share.


Additionally, I highly recommend following Tim Bornholdt on LinkedIn, and if you get a chance, attending or listening to his TEDx talk coming up October 12.




Just a little side note, when I asked ChatGPT to review my blog with a critical eye, I received this message. If you have any idea what flagged this content, please comment. Curious to learn more.

Help requested: what about this blog post could have triggered the content policy?
ChatGPT didn't want to proofread my post.




