Facebook and Instagram have a problem. Well, they have many, many problems, but one of the ones they feel like addressing is “celeb-bait ads and impersonation.” According to a new post from parent company Meta, the way they’re going to try solving this is through the use of facial recognition technology. Again. Woo.
In the lengthy post, Meta explains that the biggest impact of these new tools will be an expanded effort to stop scam accounts from impersonating celebrities. If you’ve used Facebook in the last year or so, you’ve probably encountered friend suggestions for attractive celebrities: obviously fake accounts, identifiable by their recycled paparazzi photos and deliberately misspelled names.
Now when Meta spots one of these impersonations, which typically shill spam or attempt to phish information out of unwary users, it’ll employ facial recognition to compare the suspect account’s photos against the relevant celebrity’s (real) Facebook or Instagram profile. The same check will be extended to advertising, and, like almost everything on social media, the system will be automated.
Fine. That seems like a worthwhile application of this problematic tech. But what if someone manages to hack your legitimate Facebook or Instagram account? Or you forget your password and lose access to your email account? Currently, you need to plead your case to Meta’s support team by uploading some form of official ID. But the company is now testing a system where you can upload a video selfie instead, with facial recognition comparing you to the photos on your account.
The demonstration shows the user tilting their head at various angles to give the tool a full view of their face for scanning. Meta says that these videos will be “encrypted and stored securely,” never posted publicly, and immediately deleted, along with any facial data generated, once the check is complete.
This isn’t Facebook’s first brush with facial recognition tech. It previously used a more basic system to automatically tag users in photos and videos, but shut down the opt-in tool in 2021 after privacy concerns were raised. This new implementation of the system is far less broad and more pointedly targeted at safety.
That said, I wouldn’t blame users for being skeptical of pretty much anything Meta and Facebook do at this point. After trying for years to disengage from Facebook’s systems, I reluctantly returned earlier this year to try to engage with some local communities… only to be immediately buried in AI-generated garbage.
A never-ending deluge of AI-generated “cozy cabins,” Dodge Power Wagons, and pets and people just on the other side of the uncanny valley assaulted my news feed on a daily basis. I tried in vain to report these accounts… but with millions and millions of likes and shares, the majority of which I’m guessing were less than human, they just kept coming.
So forgive me if I’m less than confident in Meta’s ability to stem this tide of social media impersonation, let alone its actual intention to protect users. Maybe if the company showed even a fraction of that interest in keeping AI bullshit off my feed, I’d have a little more faith in its dedication to authenticity. And since deepfakes are now easier than ever to produce, I have serious doubts about any automated system’s ability to reliably distinguish real people from scammers in a video.