
The arrival of AI creation tools has greatly expanded the opportunities for content creators, but concerns remain about how these tools should be used, and whether the work they spit out can actually, and legally, be used in your process.

The answer, right now, is yes – but we’re also seeing some cautionary tales emerge, which could influence your thinking on how you adopt AI creation tools in your own process.

In my view, AI creation tools should be used as supplementary elements, as tools that can help in your creation process, but they should not be relied upon as the sole facilitators of your content. Sole reliance is possible, though, and we’re undoubtedly going to see an influx of AI-generated content across the web, as spammy SEO peddlers look to make a quick buck on the back of automated options.

And really, the outputs of tools like ChatGPT will likely be better than what these scam sellers would have produced by outsourcing to human content farms anyway – but that’s still not what you want for your site. If anything, it could help to make your better quality content stand out, by providing more human, more accurate answers to people’s queries.

The trick, then, is to utilize these newer tools within a more comprehensive content process, as opposed to relying on them as a quick-hit strategy – and to do that, you need to understand the key best practices, based on evolving adoption and activity.

In order to keep you abreast of the latest, here’s a quick round-up of some of the key AI creation notes from this week.

CNET Using AI Tools for Content Creation

While all mainstream news outlets are still reliant on human journalists to provide insight and coverage, some are already leaning into AI tools for some elements of their coverage.

Outlets like Reuters have been experimenting with AI tools in an assistive capacity for years, so it’s not an entirely new concept, but recently, CNET revealed that it’s been using an AI tool to create full posts on its website. For months.

As explained by CNET:

In November, one of our editorial teams, CNET Money, launched a test using an internally designed AI engine to help editors create a set of basic explainers around financial services topics. We started small and published 77 short stories using the tool, about 1% of the total content published on our site during the same period. Editors generated the outlines for the stories first, then expanded, added to and edited the AI drafts before publishing.

This is an area where AI tools can be of assistance in creation – the posts created here were technical explainers, not news content, which dug deeper into concepts in a contextually relevant way for CNET readers.

The process also makes sense. If, for example, you ran a website that sold caravans, you could look up the most Googled questions in your niche, then feed those questions into ChatGPT and ask it to write a 500-word post on each. You could then edit the content using your own caravan expertise, which could be a faster way to generate what would largely be technical content, and could theoretically improve SEO performance.
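For illustration, here’s a minimal sketch of that kind of workflow using OpenAI’s Python library. The questions, prompt wording and model name are all placeholders, and the output is a draft only – it still needs a human edit before anything goes live.

```python
# Minimal sketch: turn frequently searched questions into draft explainer posts.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY environment variable;
# the questions, prompts and model name are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical questions pulled from your own keyword research
questions = [
    "How much does it cost to insure a caravan?",
    "What licence do I need to tow a caravan?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder - use whichever model you have access to
        messages=[
            {"role": "system", "content": "You write clear, factual explainers for a caravan retailer's blog."},
            {"role": "user", "content": f"Write a roughly 500-word explainer answering: {question}"},
        ],
    )
    draft = response.choices[0].message.content
    # Keep the output as a draft for a human editor to fact-check before publishing
    print(f"--- DRAFT: {question} ---\n{draft}\n")
```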

CNET has taken a similar approach – though it also found a few issues in implementation:

  • Some of the stories had significant errors, while minor issues, such as incomplete company names, transposed numbers or vague language, were relatively common
  • Some stories clearly plagiarized other content (CNET is looking to improve its checking tools in this respect)
  • Disclosure is important. CNET is now adding more overt disclosure notes to its AI-created content

So again, while AI tools can be helpful, they’re not perfect, and they could even be problematic, in a range of ways, if you’re not using human editors and expertise to check and edit accordingly.

There’s also a question of AI content detection, and whether Google will even index AI content.

Google’s View of AI-Generated Material

Google has been pretty clear that AI-generated content is in violation of its guidelines, which, if detected, would lead to penalties in Search.

The question then is, can Google actually detect such content, and should you use AI outputs as an SEO tool?

There are some newer processes being developed that can detect AI outputs, which will be important, in particular, for academic institutions. Those detection tools could also be utilized by online forums, and search engines like Google, but the evidence, right now, suggests that Google doesn’t have a process for detecting AI-generated content. Yet.

But again, that doesn’t mean you should take that as full license to re-publish pages and pages of AI-created material on your website as an SEO ‘strategy’. As per CNET’s experience, you need to check and edit any such material, and the claims made within it, for accuracy, plagiarism (as these tools draw on existing web content) and general readability.
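As a rough illustration of the kind of pre-publish check you could build into that process, the snippet below flags sentences in an AI draft that closely match a set of reference articles you supply. It’s a crude heuristic, not CNET’s tooling – the sample text and similarity threshold are arbitrary, and a real workflow would still need proper plagiarism and fact-checking tools on top.

```python
# Crude pre-publish check: flag AI-drafted sentences that closely resemble
# existing reference text. Illustrative only - the threshold is arbitrary.
from difflib import SequenceMatcher

def flag_similar_sentences(draft, references, threshold=0.8):
    """Return (draft sentence, reference sentence, score) tuples above the threshold."""
    draft_sentences = [s.strip() for s in draft.split(".") if s.strip()]
    ref_sentences = [s.strip() for ref in references for s in ref.split(".") if s.strip()]
    flagged = []
    for sentence in draft_sentences:
        for ref in ref_sentences:
            score = SequenceMatcher(None, sentence.lower(), ref.lower()).ratio()
            if score >= threshold:
                flagged.append((sentence, ref, round(score, 2)))
    return flagged

# Hypothetical sample data
ai_draft = "A towing licence is required for heavier caravans. Most insurers cover storage damage as standard."
published_articles = ["Most insurers cover storage damage as standard on caravan policies."]

for draft_sentence, source_sentence, score in flag_similar_sentences(ai_draft, published_articles):
    print(f"{score}: '{draft_sentence}' resembles '{source_sentence}'")
```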

AI tools can only generate outputs based on whatever’s going in, so if there’s flawed content on the web (which, of course, there is), it’s taking that in too.

It could save you time in creating specific types of material for your site, which could be of SEO benefit – but you do need to be aware of potential Google penalties for such, if detected, and you also need to be thoroughly checking and revising AI created material for possible errors.

BuzzFeed Using ChatGPT for Content

As AI creation tools evolve, it will become more critical for all web publishers to at least consider how such tools can assist in their process – or they could risk losing out to rivals that are adapting to them.

BuzzFeed, for example, has this week announced that it will be working with ChatGPT creator OpenAI on a new AI tool to enhance its quizzes, and personalize some content for its audiences.

As reported by The Wall Street Journal:

In one instance, the company said new AI-powered quizzes would produce individual results. For example, a quiz to create a personal romantic comedy movie pitch might ask questions like, “Pick a trope for your rom-com,” and “Tell us an endearing flaw you have.” The quiz would produce a unique, shareable write-up based on the individual’s responses, BuzzFeed said.

That’s an interesting use case for AI content, in providing personalized results for readers, based on certain parameters.
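BuzzFeed hasn’t detailed how its implementation will work, but conceptually the flow is simple: collect the quiz responses, slot them into a prompt, and have the model generate the shareable write-up. The sketch below shows one way that could look – the fields, prompt and model name are assumptions for the sake of illustration.

```python
# Illustrative only: one way a quiz-to-write-up flow could be wired together.
# The answer fields, prompt wording and model name are placeholders.
from openai import OpenAI

client = OpenAI()

# Hypothetical answers collected from a quiz form
quiz_answers = {
    "trope": "enemies to lovers",
    "endearing_flaw": "I talk to my houseplants",
}

prompt = (
    "Write a short, playful romantic comedy movie pitch for one reader. "
    f"Use the trope '{quiz_answers['trope']}' and give the lead character "
    f"this endearing flaw: '{quiz_answers['endearing_flaw']}'."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)  # the shareable, personalized write-up
```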

These are the types of activations you can expect to see more of moving forward, with more creative, inventive use cases for AI tools that can help websites improve their performance.

It’s worth considering in your own process. And while smaller publishers won’t have the capacity to work with OpenAI directly, there are other ways to incorporate such tools, at least in an experimental capacity, to test for potential benefits.

AI creation is evolving fast, so fast that it can feel like you’re getting left behind already, as new uses for the technology continue to emerge, and publishers explore additional, potential angles in their process.

But again, I would reiterate that AI tools are supplemental, and should not be used as the sole facilitator of content.

There are inherent risks in any such usage, and you need to keep in mind the value that you want to provide to your audience, and the trust you’re looking to establish in your brand and business.

Over-reliance on AI can erode this, but measured, considered use of AI as an assistive element could be a good way to save time and money, and provide even better content for your audience.

Also, there are reports that ChatGPT will soon move to a paid model, at around $42 per month, though a free version would still exist, in a lesser capacity. We’ll keep you updated on any developments.