
What the OpenAI Shake-Up Teaches Us About Change

Updated: Nov 20, 2023

The focus of today’s blog is change.

We live in a world where change is the only constant.

There are changes we foresee, and those we never see coming.

There are changes that happen gradually, and those that happen abruptly.

Some changes come with context we understand. Others leave us wondering.

Case in point, Friday's breaking news that Sam Altman was out as the CEO of OpenAI. The world was stunned.

This story continued to develop over the weekend, taking more unexpected twists and turns.

Here's a link to the Bloomberg News Sunday headline, OpenAI Negotiations to Reinstate Altman Hit Snag Over Board Role. Maybe someday there will be a TV series (documentary or docudrama) giving us a behind-the-scenes look at what went down at OpenAI.

For now, this whole concept of change ties well into the blog I originally planned to post before the news broke. It's all about why you need to make sure to account for change with anything you do in life including when you're using AI models and generative AI tools.

How will you continue to monitor your outputs for accuracy? What's an acceptable response when the output misses the mark? What's the risk if it's wrong?

I started thinking about this based on a task I'd been giving to ChatGPT.

My usual request? "In the role of a New York Times Op-Ed Editor, critique this draft."

For a while, we were an incredible team. But in my experience over the past few months, I've noticed the quality of ChatGPT edits going downhill faster than skier Lindsey Vonn.

Was it just me? Maybe. So I set out to informally survey a handful of others and found that they too felt ChatGPT was starting to spin out the same tired phrases, sounding more and more like a robot and less like the human voice it once seemed to mirror so well.

Then this happened.

Out of the blue, someone liked a blog post I had written back in April of 2012. Upon receiving the notification, I couldn’t resist going back and rereading the old post titled, "Perfect Shmerfect, Is anything in life really perfect outside of baseball?".

As I was reading it a few things stood out. #1 - The topic I wrote about more than a decade ago had stood the test of time. It still doesn't make sense to strive for perfection, at least as a human in most instances. #2 - The blogs were all written long before ChatGPT existed and long before I ever used spellcheck or Grammarly.

Each blog post was unapologetically me – no AI-generated words or phrases.

Fast forward to 2023, where I've been using ChatGPT as my editorial sidekick. Review this, critique that, help come up with a better intro paragraph. For me, ChatGPT became like an extra strength aspirin, pop two when the brain started to hurt, and BAM, immediate writing relief.

For a while it was the perfect dose of inspiration and timely feedback to help this entrepreneur get her feet wet writing about some topics she was just learning about. It helped provide courage having a robot review and critique my work before I clicked publish.

But what started as a great mix of human and robot collaboration, was now starting to give me a headache.

Did I build up a tolerance to the creative medicine generative AI injected? I'd specifically tell ChatGPT to exclude words and phrases like, “Deep Dive”, “Delve Deep”, “Game-Changing”. No luck. ChatGPT continued to sneak those words in.
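Since telling ChatGPT to skip those phrases didn't stick, one workaround is to check its output yourself after the fact. Here's a minimal Python sketch of that idea (the phrase list and function name are my own, purely for illustration) that flags any banned phrases lurking in a draft before you click publish:

```python
# A quick post-check: scan a draft for phrases you've asked the model to avoid.
# The banned list below is just an example; swap in your own pet peeves.

BANNED_PHRASES = ["deep dive", "delve deep", "game-changing"]

def flag_banned_phrases(text, banned=BANNED_PHRASES):
    """Return the banned phrases that appear in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in banned if phrase in lowered]

draft = "Let's take a deep dive into this game-changing idea."
print(flag_banned_phrases(draft))  # ['deep dive', 'game-changing']
```

It's not a cure, but a simple check like this catches the repeat offenders regardless of which model version produced the text.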

DALL·E: A cartoon-style illustration of a human character sitting at a vintage desk. On one side of the desk is an old-fashioned typewriter producing a paper

What was I doing wrong? Why might this be happening?

I asked ChatGPT to help explain it in a way that everyday people could understand, and I found the explanations below quite helpful.

  1. Model Training and Updates: Think of AI like a smartphone app that gets regular updates. Sometimes a new update can change how your favorite feature works. For instance, imagine if after an update, your photo app started editing pictures in a style you're not used to. Similarly, updates to AI models like ChatGPT can change how they respond, and they might start giving you different types of answers than before.

  2. Adaptive Learning Limitations: Imagine teaching your dog a new trick, but instead of learning to do it better over time, your dog keeps doing the trick the exact same way, even if it's not perfect. AI models learn from interactions, but they might not improve or vary their responses like a human or a pet would. If you keep asking similar questions, the AI might keep giving you the same type of answers.

  3. Data Saturation: Consider a chef who only cooks with the same set of ingredients every day. Eventually, all the dishes start to taste similar. If you keep asking an AI the same type of questions, it's like using the same ingredients; the AI's responses might become repetitive and less creative, like a chef running out of new recipes.

  4. Increased Usage and Common Phrasing: Imagine a popular new slang term everyone starts using. Soon, you hear it everywhere, and it begins to lose its uniqueness. As more people use AI tools, the AI picks up on common phrases and starts using them more, much like how popular slang words get overused.

  5. Feedback Mechanisms: If you're learning a new skill, like playing the guitar, getting different types of feedback helps you improve. If everyone tells you the same thing, you might miss out on other ways to get better. For AI, varied and constructive feedback is important. Without it, the AI's ability to come up with different and better responses might not grow, just like a guitar player might not improve without diverse feedback.

To wrap things up, it's important for all of us to remember that nothing is static. Everything in life is evolving, from tech company leadership to the stock market to football team rosters. Who would have predicted Joshua Dobbs joining the Vikings this season, coming off the bench and immediately leading the team to two straight victories?

Life's obstacles and innovations challenge us to adapt, rethink our approaches, and in general be open to what's next. Change, after all, isn't just about the new paths we tread. It's how we evolve while walking them.

Would love your thoughts on all of this and how we can make change less scary!

If you enjoyed this blog, please click the heart to like it and share with your friends and family.

And if you have any questions you would like to ask, or something you are curious about, please don't hesitate to reach out for a 30 minute discovery call.

