Hey, you know how there’s been that rumor for years that Facebook is actually listening in to your private conversations, in order to serve you ads based on what you discuss in real life?
Yeah, that’s definitely not true, and doesn’t actually happen. But Facebook’s parent company Meta may soon utilize a somewhat related process, by logging user ‘voiceprints’ as a means of authentication, and potentially more, as per a recently filed patent.
As reported by Patent Drop, Meta’s filed a new patent entitled ‘User Identification with Voiceprints on Online Social Networks’.
And for the most part, it’s straightforward enough – as Meta explains:
“A social-networking system may record and analyze a user’s voice to determine a digital voiceprint for the user. Privacy settings may allow users to opt in or opt out of having the social-networking system record or analyze the user’s voice, or having the social-networking system determine the voiceprint for the user. The user’s privacy settings also may specify that such voiceprints may be used only to facilitate voice-input purposes (e.g., to send voice messages, to improve voice recognition in order to use voice-operated features of the online social network).”
So, like the biometric ID elements on your device, Meta’s looking to enable voice ID, which could enable a range of additional in-app functionalities, while also providing a higher level of security over password-based login.
Seems okay, no big deal here.
Then there’s this:
“A client system associated with the social-networking system may detect one or more people speaking, and the people speaking may be identified as users based on comparison of their voices to voiceprints stored by the social-networking system. Upon identifying one or more of the people as users of the social-networking system, the social-networking system may provide customized content to the identified users based on their social-networking information. The customized content may be personalized to match the interests of the identified users, and may include advertisements, news feeds, push notifications, place tips, coupons, or suggestions.”
In other words, if, when using this new process, Facebook were to also hear another person speaking at the same time as someone logs in with Voice ID, it would then try to identify if that second person is also a registered Facebook user via voice analysis. And if it finds a match, it could look to show both users content – including ads – based on their preferences.
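The patent doesn’t spell out how that matching would work under the hood, but conceptually it’s a comparison of detected voice features against stored voiceprints. Here’s a very loose sketch of that step – the embeddings, threshold, and user records are illustrative assumptions, not anything from Meta’s filing (a real system would use a trained speaker-embedding model, not three-number vectors):

```python
import math

# Hypothetical stored voiceprints: user ID -> feature vector.
# Purely illustrative values.
VOICEPRINTS = {
    "user_a": [0.9, 0.1, 0.3],
    "user_b": [0.2, 0.8, 0.5],
}

# Minimum cosine similarity to count as a match (assumed value).
MATCH_THRESHOLD = 0.95

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

def identify_speakers(detected_embeddings):
    """Return IDs of registered users whose stored voiceprints
    match any voice detected in the ambient audio."""
    matches = []
    for embedding in detected_embeddings:
        for user_id, voiceprint in VOICEPRINTS.items():
            if cosine_similarity(embedding, voiceprint) >= MATCH_THRESHOLD:
                matches.append(user_id)
    return matches
```

Each matched ID could then be used to pull that user’s interest profile and serve the customized content the patent describes – ads, place tips, coupons, and so on.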
So Facebook would be listening, and showing you ads accordingly – not based on your conversations themselves, but on who you are, and the people you’re interacting with in real life.
It’s not as advanced as the ‘always listening’ myth, and it’s unlikely to lead to totally uncanny ad matches that’ll freak you out as a result. But it would be another way for Meta to gather information on your activity and interests, by tuning into background sounds, and trying to find relevant contextual matches that align with other associated users’ interests, and potentially location-based sounds that can be associated with a particular place.
Though it seems a little messy. What if you use voice ID when you’re, say, at a concert, and the mic picks up some random passerby for audio matching? Now you’re getting totally unrelated ads displayed in-stream, just because somebody happened to be near you at the time.
I’m not sure that would be overly effective, but Meta obviously has bigger plans in store, likely aligned to its AR glasses, with another element of the patent also pointing to voice commands in its apps.
“For example, a user at a client system, such as a smartphone, may establish a voiceprint by speaking several words or phrases into a microphone of the smartphone, which may record the user’s speech as audio input. A voiceprint may be generated based on the audio input and stored in the data store as the user’s voiceprint. Subsequently, when that user speaks a voice command such as ‘play music’ into a smartphone or other client system, the voice command may be compared with the user’s voiceprint to identify the user as the speaker. The smartphone may then perform an action associated with the command using the user’s identity, e.g., playing music from the user’s music library.”
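That enrollment-then-command flow could be sketched, again very loosely, like this – the function names, the averaging stand-in for a real speaker-embedding model, and the tolerance value are all my own illustrative assumptions:

```python
# Hypothetical sketch of the flow in the patent excerpt above:
# enroll a voiceprint, then identify the speaker of a later command.
enrolled = {}  # user_id -> stored voiceprint (here, a single number)

def embed(audio_samples):
    # Stand-in for a real speaker-embedding model: just average the samples.
    return sum(audio_samples) / len(audio_samples)

def enroll(user_id, audio_samples):
    """Record a user's speech and store the derived voiceprint."""
    enrolled[user_id] = embed(audio_samples)

def handle_command(audio_samples, command, tolerance=0.1):
    """Identify the speaker by voiceprint, then act under their identity."""
    voiceprint = embed(audio_samples)
    for user_id, stored in enrolled.items():
        if abs(voiceprint - stored) <= tolerance:
            return f"{user_id}: executing '{command}'"
    return "speaker not recognized"
```

So a command like ‘play music’ wouldn’t just trigger playback – it would first resolve *whose* music library to play from, based on the voiceprint match.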
Meta’s current Ray Ban Stories glasses already respond to voice commands, which Meta expanded back in May, with users able to reply to incoming WhatsApp, Messenger, and text messages just by speaking.
In this respect, this new patent likely relates to the expanded use of voice controls in the next generation of its wearables push, which suggests that this additional data collection is also being established with that shift in mind.
So it would be less about showing you more ads on Facebook, and more related to shared interests in AR, based on what your friends are also engaging with, what they’re doing, where they are.
So while it sounds a bit creepy, it seems to be more of a data insurance policy from Meta, to ensure it keeps gathering audience insights across these evolving surfaces.
To be clear, Facebook is not listening to your real life chats, and it won’t be in the future either. But it may soon have a new way to understand who you’re hanging out with, and where, in addition to device proximity, check-ins, etc.
Maybe that’s still creepy enough, but it makes sense, with a view to more people interacting with its apps via wearable devices.
You can read the full ‘User Identification with Voiceprints on Online Social Networks’ patent filing here.