Snapchat has provided an update on the development of its ‘My AI’ chatbot tool, which incorporates OpenAI’s GPT technology, enabling Snapchat+ subscribers to pose questions to the bot in the app, and get answers on anything they like.
For the most part, this is a simple, fun application of the technology – but Snap has found some concerning misuses of the tool, which is why it’s now looking to add more safeguards and protections into the process.
As per Snap:
“Reviewing early interactions with My AI has helped us identify which guardrails are working well and which need to be made stronger. To help assess this, we have been running reviews of the My AI queries and responses that contain ‘non-conforming’ language, which we define as any text that includes references to violence, sexually explicit terms, illicit drug use, child sexual abuse, bullying, hate speech, derogatory or biased statements, racism, misogyny, or marginalizing underrepresented groups. All of these categories of content are explicitly prohibited on Snapchat.”
All users of Snap’s My AI tool need to agree to its terms of service, which means that any query you enter into the system can be analyzed by Snap’s team for this purpose.
Snap says that only a small fraction of My AI’s responses thus far (0.01%) have fallen under the ‘non-conforming’ banner, but this additional research and development work will still help to protect Snap users from negative experiences with My AI.
“We will continue to use these learnings to improve My AI. This data will also help us deploy a new system to limit misuse of My AI. We are adding OpenAI’s moderation technology to our existing toolset, which will allow us to assess the severity of potentially harmful content and temporarily restrict Snapchatters’ access to My AI if they misuse the service.”
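To make that process more concrete, here’s a minimal sketch of what a moderation check along these lines might look like, using OpenAI’s publicly documented moderation endpoint. The severity threshold, restriction logic, and helper function here are illustrative assumptions, not Snap’s actual implementation.

```python
# Minimal sketch: screening a chatbot query with OpenAI's moderation
# endpoint. The threshold and restriction logic are assumptions for
# illustration, not Snap's production system.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SEVERITY_THRESHOLD = 0.8  # assumed cut-off for a temporary restriction


def screen_query(user_id: str, text: str) -> bool:
    """Return True if the query may proceed, False if it was flagged."""
    result = client.moderations.create(input=text).results[0]

    if not result.flagged:
        return True

    # category_scores holds a confidence score per category (hate,
    # violence, sexual content, etc.); the highest score stands in
    # for "severity" in this sketch.
    severity = max(result.category_scores.model_dump().values())

    if severity >= SEVERITY_THRESHOLD:
        temporarily_restrict(user_id)  # hypothetical service call
    return False


def temporarily_restrict(user_id: str) -> None:
    # Placeholder for suspending the user's chatbot access for a period.
    print(f"Temporarily restricting {user_id}'s access to My AI")
```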
Snap says that it’s also working to improve its responses to inappropriate Snapchatter requests, and it has implemented a new age signal for My AI that utilizes a Snapchatter’s birthdate.
“So even if a Snapchatter never tells My AI their age in a conversation, the chatbot will consistently take their age into consideration when engaging in conversation.”
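As a rough illustration of how an age signal like this could work, here’s a sketch that derives the user’s age from their account birthdate and injects it into the chatbot’s system prompt on every turn. The prompt wording and function names are assumptions for illustration, not Snap’s actual approach.

```python
# Sketch of an "age signal": the age is computed server-side from the
# account birthdate and passed to the model on every request, so it
# applies even if the user never mentions their age in chat.
from datetime import date


def age_from_birthdate(birthdate: date, today: date | None = None) -> int:
    """Compute a user's age in whole years from their birthdate."""
    today = today or date.today()
    # Subtract one year if this year's birthday hasn't happened yet.
    return today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )


def build_system_prompt(birthdate: date) -> str:
    """Prepend the user's age to the chatbot's system prompt."""
    age = age_from_birthdate(birthdate)
    return (
        f"You are chatting with a {age}-year-old user. "
        "Keep every response age-appropriate, and decline requests "
        "that are unsuitable for someone of this age."
    )


# Example: a user born in mid-2009 gets an age-aware prompt on every turn.
print(build_system_prompt(date(2009, 6, 15)))
```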
Snap will also soon add data on My AI interaction history into its Family Center tracking, which will enable parents to see if their kids are communicating with My AI, and how often.
Though it is also worth noting that, according to Snap, the most common questions posed to My AI have been pretty innocuous.
“The most common topics our community has asked My AI about include movies, sports, games, pets, and math.”
Still, there is a need to implement safeguards, and Snap says that it’s taking its responsibility seriously, as it looks to develop its tools in line with evolving best practices.
As generative AI tools become more commonplace, it’s still not 100% clear what the associated risks of usage may be, and how best to protect against misuse of these tools, especially by younger users.
There have been various reports of misinformation being spread via ‘hallucinations’ within such tools – cases where an AI system confidently presents false information as fact – while some users have also tried to trick these new bots into bypassing their own safeguards, to see what might be possible.
And there definitely are risks in that – which is why many experts are advising caution in the application of AI tools.
Indeed, last week, an open letter signed by over a thousand industry figures called on developers to pause development of powerful AI systems, in order to assess their potential impacts, and ensure that they remain both beneficial and manageable.
In other words, we don’t want these tools to get too smart, leading to a Terminator-like scenario, where the machines move to enslave or eradicate the human race.
That kind of doomsday scenario has long been a critical concern, with a similar open letter published in 2015 warning of the same risk.
And there is some validity to the concern, in that we’re dealing with new systems that we don’t fully understand – systems that are unlikely to get ‘out of control’ as such, but that may end up contributing to the spread of false information, or the creation of misleading content.
There are clearly risks, which is why Snap is taking these new measures to address potential concerns in its own AI tools.
And given the app’s young user base, it should be a key focus.