LONDON: Imagine a pair of glasses that could tell you the name and address of anyone you met. Harvard students Caine Ardayfio and AnhPhu Nguyen made them a reality in just four days.
The students paired Ray-Ban Meta smart glasses with computer software to create spectacles that use existing facial recognition technology to identify people in real time, underscoring the future potential of augmented reality (AR).
The glasses were able to deliver someone’s name in just under two minutes using an existing public facial recognition search engine and artificial intelligence (AI). This has raised concern about the unforeseen risks of AI and the combining of existing technologies.
“We wanted to use it as a platform to make people aware of technology and how to protect themselves,” Ardayfio told the Thomson Reuters Foundation in a video interview from Harvard.
The pair developed a shared interest in AR – the overlaying of digital information onto the real world – after they met two years ago, culminating in the glasses project, which went viral in September after Nguyen posted about it on X, formerly Twitter.
"It was pretty self-evident that the technology we had created was fairly powerful and had a lot of complications for privacy," Ardayfio said.
The science students used existing cameras on either side of the frame of Ray-Ban Meta smart glasses to send an Instagram Live stream to a computer. A computer program then took screenshots of the stream and uploaded them to PimEyes, a website that scrapes images from the internet to identify people for a fee.
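The capture stage described above – screenshotting a live stream at intervals and submitting each frame to a face search – can be sketched as a simple polling loop. This is a minimal illustration only: `grab_frame` and `submit_to_face_search` are hypothetical stand-ins, and no real stream or search service is contacted.

```python
import time

def grab_frame(stream_url: str) -> bytes:
    """Stand-in: would screenshot the current frame of a live stream."""
    return b"\x89PNG..."  # placeholder image bytes

def submit_to_face_search(frame: bytes) -> str:
    """Stand-in: would upload the frame to a reverse face-image search."""
    return "queued"

def capture_loop(stream_url: str, interval_s: float, max_frames: int) -> int:
    """Poll the stream, submitting one screenshot per interval.

    Returns the number of frames successfully handed off.
    """
    submitted = 0
    for _ in range(max_frames):
        frame = grab_frame(stream_url)
        if submit_to_face_search(frame) == "queued":
            submitted += 1
        time.sleep(interval_s)
    return submitted
```

The loop structure, rather than any particular service, is the point: once a wearable camera feeds a stream, chaining it to an image search is a few dozen lines of glue code.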
Nguyen told the Thomson Reuters Foundation the aim was to raise awareness and that they had no plans to commercialise their project.
Facial recognition glasses are not entirely new – Clearview AI partnered with the Air Force in 2022 to develop similar technology at border points – but it has not yet reached consumer technology.
When the students' project went viral, digital rights experts raised concerns about this new potential use of facial recognition tech, particularly for women and minority groups, and in countries such as the United States.
The United States has no federal law specifically governing the use of facial recognition technology in the public or private sector, even though some states have their own privacy legislation governing the tech.
Illinois has a Biometric Information Privacy Act that regulates the collection and dissemination of biometric data such as fingerprints or retinal scans, while in Massachusetts, the use of facial recognition for law enforcement is restricted.
Threat to women
Once the students had passed the images through PimEyes, they fed the results through ChatGPT to identify the person's name. That name was then put through a public records search engine to identify home addresses.
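The second half of the pipeline – face-search results in, a name and home address out – can be sketched as three chained stages. This is an illustrative skeleton, not working integration code: `search_faces`, `extract_name` and `lookup_address` are hypothetical stand-ins for a reverse face-image search, an LLM call and a public-records search, none of which are invoked for real here.

```python
from dataclasses import dataclass

@dataclass
class Identification:
    name: str
    address: str
    source_urls: list

def search_faces(frame_path: str) -> list:
    """Stand-in for a reverse face-image search on a screenshot."""
    # A real service would return URLs of pages where the face appears.
    return ["https://example.com/profile/jane-doe"]

def extract_name(urls: list) -> str:
    """Stand-in for an LLM call that pulls a likely name from page text."""
    return "Jane Doe"

def lookup_address(name: str) -> str:
    """Stand-in for a public-records search keyed on the name."""
    return "123 Example St"

def identify(frame_path: str) -> Identification:
    """Chain the three stages: face search -> name -> address."""
    urls = search_faces(frame_path)
    name = extract_name(urls)
    return Identification(name=name,
                          address=lookup_address(name),
                          source_urls=urls)
```

Each stage already exists as a separate consumer service; the students' contribution was simply wiring them together behind a camera.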
UK privacy non-profit Big Brother Watch has described PimEyes as stalkerware – software that can be used for cyberstalking – a claim rejected by PimEyes. Giorgi Gobronidze, the head of PimEyes, told the Thomson Reuters Foundation that such accusations were "baseless" and reflect "a significant misunderstanding" of its technology.
Donald Bell, policy counsel at independent watchdog Project On Government Oversight (POGO), said the "ability to collect massive data – billions of images – and then use that technology to essentially identify anybody is problematic" and that women and minority groups could be especially vulnerable.
Revealing the name and address of any woman on the street could open the door to stalking, he said. Facial recognition technology has a poor track record identifying people with darker skin, leading to instances of wrongful arrest.
"There are no guardrails. There is no real federal privacy law that protects against (facial recognition). It is still a Wild West," Bell said.
Amanda Manyame, a digital rights advisor for human rights NGO Equality Now, said smart glasses pose numerous concerns for women's safety including non-consensual data collection, the potential to use photos for deepfakes, and stalking.
She said the Harvard students' experiment exposed some of these risks.
"Policymakers need to see the harm that could be caused by AI because they don't fully understand why you need safety at the design level," she said.
‘Wild West’
The students said that with full control of the tech stack – including the camera feed or a database of information, as would be available to larger technology companies such as Meta – recognition would take ten seconds.
Using ChatGPT is also inefficient, but fine-tuning the large language model or building specialised tools would make the process faster, they said.
These improvements could come in the near future, either from tech giants or copycat companies that want to make similar products with existing hardware, they said.
"I think definitely people will try and recreate this," Ardayfio said, adding that there was "a possibility that some people will use this for bad purposes".
Technology giants such as Meta, which owns Facebook and Instagram, and Snap, which owns messaging app Snapchat, are developing AR glasses, believing them to be the next stage in development after smartphones.
Snap told the Thomson Reuters Foundation that its AR glasses, Spectacles, did not use facial recognition. It did not comment on whether it would allow third parties to develop identity recognition technology using the glasses, as the students did.
A Meta spokesperson said in an e-mail that the students "are simply using publicly available facial recognition software on a computer that would work with photos taken on any camera, phone or recording device".
The spokesperson did not comment on whether Meta had tested any form of identity recognition technology with its products or smart glasses, or whether Meta would allow such technology to be compatible with its upcoming AR glasses, Orion.
Gobronidze said the Harvard students had violated PimEyes' terms of service and that their accounts had been blocked. He said developers had introduced additional safeguards to "prevent future abuse".
"It is essential to highlight that the responsibility for such content lies with the publishers, not the search engines indexing it," he said, criticising Ardayfio and Nguyen for misusing the technology.
"They have not only demonstrated their point but have inadvertently shown how easily available tools can be misused."
Asked about Gobronidze's comments, the students said their "goal with this was simply to raise awareness about these technologies."
Manyame warned that AI advances and future interaction between existing systems could create risks that have not been previously considered or legislated against.
For Bell, a general prohibition on facial recognition technology would safeguard against the growing risks.
"(Smart glasses) take it to another level where any individual has the ability to track the intimate details of someone in real time," he said. – Thomson Reuters Foundation