Bumble added an option that will allow users of the dating app to report fake profiles that are using AI-generated images and videos.
When reporting a profile, users can now choose “Fake profile” and then select the option “Using AI-generated photos or videos”, according to a report from TechCrunch.
Other reporting options on the app include inappropriate content, underage users, scams and the use of someone else’s photos, among others.
The dating app hopes the move will deter people from using AI-generated images to lure users.
AI-generated photos on dating apps are a common method used by threat actors to lure victims into sharing personal information which can then be used to launch targeted attacks.
The new reporting option follows the launch of a tool on the platform that uses AI and human moderation to detect and remove fake profiles. Bumble claims that since the launch of the tool, named “Deception Detector”, there has been a 45% drop in user reports of spam, scams and fake profiles.
While the dating platform is working to contain the misuse of AI, its founder and executive chairman, Whitney Wolfe Herd, at a tech conference earlier this year floated the idea of an AI dating concierge that could assist users with dating while reducing the need to talk to actual people.