If you haven’t familiarized yourself with the latest generative AI tools yet, you should probably start looking into them, because they’re about to become a much bigger part of how we connect, across a range of evolving apps and platforms.

Today, OpenAI released GPT-4, the next iteration of the AI model that ChatGPT was built on.

OpenAI says that GPT-4 can achieve ‘human-level performance’ on a range of tasks.

“For example, it passes a simulated bar exam with a score around the top 10% of test takers; in contrast, GPT-3.5’s score was around the bottom 10%. We’ve spent 6 months iteratively aligning GPT-4 using lessons from our adversarial testing program as well as ChatGPT, resulting in our best-ever results (though far from perfect) on factuality, steerability, and refusing to go outside of guardrails.”

Those guardrails are important, because ChatGPT, while an amazing technical achievement, has often steered users in the wrong direction by providing false, made-up (‘hallucinated’), or biased information.

A recent example of these flaws showed up in Snapchat, via its new ‘My AI’ chatbot, which is built on the same back-end code as ChatGPT.

Some users have found that the system can provide inappropriate information to young users, including advice on alcohol and drug consumption, and on how to hide such activity from their parents.

Improved guardrails will protect against such failures, though there are still inherent risks in using AI systems that generate responses based on such a broad range of inputs, and ‘learn’ from those responses. Nobody knows for sure what that will mean for system development over time, which is why some, like Google, have warned against wide-scale roll-outs of generative AI tools until the full implications are understood.

But even Google is now pushing ahead. Under pressure from Microsoft, which is looking to integrate ChatGPT into all of its applications, Google has also announced that it will be adding generative AI to Gmail, Docs and more. Meanwhile, Microsoft recently axed one of its key teams working on AI ethics, which seems like poor timing given the rapidly expanding usage of such tools.

That may be a sign of the times, in that the pace of adoption, from a business standpoint, outweighs concerns around regulation and responsible usage of the tech. And we already know how that goes: social media also saw rapid adoption, and widespread distribution of user data, before Meta and others realized the potential harm that it could cause.

It seems those lessons have fallen by the wayside, with immediate value once again taking priority. And as more tools come to market, and AI API integrations become commonplace in apps, one way or another, you’re likely to be interacting with at least some of these tools in the very near future.

What does that mean for your work, your job – how will AI impact what you do, and improve or change your process? Again, we don’t know, but as AI models evolve, it is worth testing them out where you can, to get a better understanding of how they apply in different contexts, and what they can do for your workflow.
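If you want to experiment beyond the ChatGPT interface itself, one way to test GPT-4 is through OpenAI’s API. As a minimal sketch, using OpenAI’s Python library as it worked at GPT-4’s launch (openai versions before 1.0), a request might look like the below; the prompt text is purely illustrative:

```python
# Minimal sketch: querying GPT-4 through OpenAI's Python library (openai < 1.0).
# Assumes `pip install openai` and an API key set in the OPENAI_API_KEY env var.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful social media marketing assistant."},
        {"role": "user", "content": "Draft three caption ideas for a product launch post."},
    ],
    temperature=0.7,  # higher values give more varied, creative output
)

print(response["choices"][0]["message"]["content"])
```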

We’ve already detailed how the original ChatGPT can be used by social media marketers, and this improved version will only build on that.

But as always, you need to take care, and ensure that you’re aware of its limitations.

As per OpenAI:

“Despite its capabilities, GPT-4 has similar limitations as earlier GPT models. Most importantly, it still is not fully reliable (it “hallucinates” facts and makes reasoning errors). Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use-case.”
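In practice, the ‘human review’ protocol OpenAI mentions can be as simple as never publishing model output without explicit sign-off. A minimal sketch of that idea, with a hypothetical generate_draft helper standing in for whatever model call you actually use:

```python
# Minimal sketch of a human-review gate around model output.
from typing import Optional

def generate_draft(prompt: str) -> str:
    # Hypothetical stand-in for a real model call (e.g. the API sketch above).
    return f"[model draft for: {prompt}]"

def reviewed_output(prompt: str) -> Optional[str]:
    """Return the model's draft only if a human explicitly approves it."""
    draft = generate_draft(prompt)
    print("--- MODEL DRAFT ---")
    print(draft)
    verdict = input("Approve for publication? [y/N] ").strip().lower()
    return draft if verdict == "y" else None

if __name__ == "__main__":
    approved = reviewed_output("Draft a caption for our product launch post.")
    print("Published." if approved else "Rejected; nothing published.")
```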

AI tools are supplementary, and while their outputs are improving fast, you do need to ensure that you understand the full context of what they’re producing, especially as it relates to professional applications.

But again, they are coming: more AI tools are appearing in more places, and you will soon be using them, in some form, within your day-to-day process. That could make you lazier, more reliant on such systems, and more willing to trust their outputs. But be wary, and use them within a managed flow, or you could quickly find yourself losing credibility.