“Police barred from using facial recognition in San Francisco, but that’s not the end of the story.”
“Playing devil’s advocate: Facial recognition has a place in our very near future.”
“Facial recognition has room for improvement, but will make our lives easier in due time.”
“Beyond the headlines: Facial recognition is not as scary as you think.”
These are a few of the headlines that churned through my head as I decided on a title for this piece. In an era in which people read only headlines—59 percent of all links shared on social media are never clicked—how we choose to title our articles ultimately frames the opinions that readers adopt.
Ever since police and local government agencies were barred from using facial recognition technology in San Francisco, overly sensationalized headlines have been popping up all over the news. Many of the headlines I scoured through frame facial recognition technology as a menacing, diabolical force drawing us closer to an Orwellian dystopia. While legitimate concerns about facial recognition technology do exist, there is a fine line between pushing an agenda and rolling out irresponsible journalism.
These headlines err on the irresponsible side either by failing to tell the complete story or inciting unwarranted fear about facial recognition.
“San Francisco bans facial recognition technology”
This headline is misleading because San Francisco did not outright ban facial recognition, as the headline implies. The newly enforced regulation, known as the “Stop Secret Surveillance Ordinance,” applies specifically to the use of facial recognition technology by police and local government agencies. People who just read headlines will see this and assume the worst when businesses, for example, showcase facial recognition for consumer use cases.
“Amazon shareholders join the chorus of critics worried about facial recognition technology”
This article was released amid mounting criticism of the licensing of Amazon Rekognition to government and law enforcement agencies. Despite valid concerns about facial recognition in law enforcement, this headline is misleading. For a proposal to pass, it needs the support of more than 50% of shareholder votes. Proposal 1, to stop sales of facial recognition technology to government customers, received the support of only 2.4% of shareholders. Proposal 2, to carry out an independent human rights assessment of Rekognition, also failed, though with a more respectable 27.5% of shareholders voting in its favour. The headline makes it seem as if Amazon shareholders are overwhelmingly on board with critics of facial recognition technology, which, based on the results of the vote, they are not.
“Amazon encourages police to use untested facial recognition technology”
Diction is important, especially in journalism. In this headline, the word “untested” is loosely tossed around and does not appear at all in the article itself. “Untested” implies that the facial recognition technology being used by police has never been screened before, which is untrue. I think most people would agree that deploying untested commercial technology of any kind would be highly irresponsible. A more accurate word for this headline would be “unregulated.”
On the flip side, these headlines do a good job articulating concerns about facial recognition without provoking unbridled panic.
“The NYPD uses altered images in its facial recognition system, new documents show”
This headline points to the questionable use of facial recognition technology by the New York Police Department and raises questions about the lack of legal regulations that exist. This is a valid concern as it was discovered that some NYPD officers abused the facial recognition system by altering photos or using non-suspect images to produce a match—unreliable inputs result in unreliable outputs.
“Bias in facial recognition technology needs to be fixed”
Facial recognition technology needs improvement
In spite of the exaggerated headlines floating around, there is legitimate concern that facial recognition systems in their current state feed racial discrimination. The crux of the problem lies not with facial recognition AI itself, but with the data used to train these systems. By nature of availability, facial recognition technology is primarily trained on datasets in which white men are grossly overrepresented. The consequence is higher false-positive rates, particularly when identifying women and people with darker skin tones. Toss this inherently biased facial recognition system into a country where racially fueled police brutality is endemic—of course there’s going to be backlash.
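The statistical trap here is easy to see with a toy calculation. Every number below is invented purely for illustration, not measured from any real system: when one group dominates the evaluation data, its error rate dominates the headline figure and hides a far worse rate for the underrepresented group.

```python
# Hypothetical evaluation numbers, for illustration only. A face-matching
# model trained mostly on one demographic group tends to perform worse on
# underrepresented groups, yet the aggregate metric can still look good.

results = {
    # group: (false_positives, total_non-matching comparisons)
    "lighter-skinned men":  (8, 8000),   # heavily represented in training data
    "darker-skinned women": (90, 1000),  # underrepresented
}

total_fp = sum(fp for fp, _ in results.values())
total_n = sum(n for _, n in results.values())
overall_fpr = total_fp / total_n  # looks small in aggregate

for group, (fp, n) in results.items():
    print(f"{group}: false-positive rate = {fp / n:.1%}")

print(f"overall: false-positive rate = {overall_fpr:.1%}")
```

In this made-up example the overall false-positive rate is about 1%, while the underrepresented group suffers a 9% rate, which is why per-group evaluation (not just aggregate accuracy) matters.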
However, these concerns about bias in facial recognition will drive data companies to invest more time and money into R&D and push the boundaries of what is possible. Already, innovation is happening. For example, IBM recently released an annotated dataset of 1 million human faces, called Diversity in Faces (DiF), with the goal of encouraging impartiality and accuracy in facial recognition technology. IBM conducted research into the many dimensions along which facial features vary, not only age, gender, and skin tone, but also more quantifiable aspects of structure such as face symmetry, facial contrast, and the length and width of facial attributes.
The current state of consumer-driven use cases
As a digital innovation studio, TTT Studios is tasked with building the software that does the heavy lifting. While data companies continue to diversify the set of available data for training facial recognition AI, we are focused on harnessing facial recognition to improve user experience in consumer use cases through our platform, Amanda AI. Below is a list of several consumer-centric applications of facial recognition that the tech community has engineered.
China has long been a leader in leveraging facial recognition technology for consumer use cases. Marriott International announced its partnership with Alibaba last summer to launch facial-recognition kiosks that allow guests to pull up their reservations and check in by simply scanning their faces. This cuts the check-in time from three minutes to one and frees the hotel concierge for other duties.
First we got Interac Flash. Then we got mobile Interac Flash. In China, they got “Smile to Pay.” In 2017, KFC China teamed up with Ant Financial, an affiliate of Alibaba, to launch the first physical store to accept payments using facial recognition technology.
Amanda AI was at this year’s SingularityU Canada Summit, an event where innovators come to showcase technologies that impact human lives around the globe. Our Amanda AI Events solution—an event sign-in tool that employs facial recognition technology—greeted attendees as they entered the venue. Integrated into the online event registration was the option for attendees to include a self-portrait that would enable them to use our facial recognition system at check-in. Thanks to significant opt-in for our facial recognition tool, we were able to streamline the check-in process by taking pen, paper, and long wait times out of the equation. In addition, Amanda AI Events allowed event organizers to manage attendance metrics and automatically print event name badges on the spot.
In total, 1,200 attendees made it into the summit with essentially no line-up. For an event of this scale, it was remarkable to see the efficiency that facial recognition brought to the table.
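An opt-in check-in flow like the one described above can be sketched in a few lines. This is a simplified illustration, not our actual implementation: the embeddings would come from a real face-recognition model, and the names, toy vectors, and similarity threshold here are all invented for the example.

```python
import math

# Assumed threshold: minimum cosine similarity to accept a face match.
THRESHOLD = 0.9

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def check_in(face_embedding, attendees):
    """Return the best-matching opted-in attendee, or None for manual check-in."""
    best_name, best_score = None, 0.0
    for name, reference in attendees.items():
        score = cosine(face_embedding, reference)
        if score > best_score:
            best_name, best_score = name, score
    if best_score >= THRESHOLD:
        return best_name   # confident match: print badge, record attendance
    return None            # fall back to a manual sign-in

# Toy reference embeddings for attendees who opted in at registration.
attendees = {"Ada": [0.9, 0.1, 0.2], "Grace": [0.1, 0.8, 0.5]}
print(check_in([0.88, 0.12, 0.21], attendees))  # close to Ada's reference
```

The key design point is the fallback: anyone below the confidence threshold, or who never opted in, simply uses the conventional check-in line.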
Office virtual assistant
We have also designed and built an office virtual assistant that employs facial recognition for access control and visitor management. The Amanda Office solution can greet and sign-in office visitors, prompt them to sign non-disclosure agreements, and integrate with the office calendar system and messaging service to notify staff when visitors arrive. Amanda AI virtual assistant is currently being piloted at several corporate offices, including the KPMG Ignition Centre.
These are just a few examples of how facial recognition technology can be leveraged for consumer use cases, and how our team at Amanda AI is leading the way. There is potential in even more applications and industries, including shoplifting prevention in retail and safety management at construction sites.
Compliance with privacy guidelines
There’s an elephant in the room that we’ve yet to address. As we continue to develop consumer use cases that harness facial recognition technology, privacy concerns become a priority. By virtue of its novelty, there are very few regulations that govern responsible facial recognition data management. It is largely up to the companies that use the technology to hold themselves accountable.
General Data Protection Regulation (GDPR)
The GDPR is a set of regulations that govern data protection and privacy for citizens of the EU and EEA. These guidelines apply to any firm that conducts business with citizens of the EU and EEA. The general idea is that privacy should be the default setting on the internet. The GDPR grants individuals rights such as the right to request erasure of personal data, the right not to have personal data collected and processed for secondary uses without explicit consent, and the right to easily obtain and transfer personal information. The GDPR identifies facial recognition data as biometric data, which is classified as “sensitive personal data.” The rules are stringent and prohibit the processing of biometric data unless one of the conditions set out in the regulation applies.
Personal Information Protection and Electronic Documents Act (PIPEDA)
Canada also has its own data privacy law, known as PIPEDA. It is not as rigorous as the GDPR, but could soon be subject to revisions that clarify how facial recognition data should be handled, thanks to a 10-principle digital charter that the federal government recently unveiled. The charter does not have any legal standing, but the federal government has promised to carry it into future legislation and regulation.
Our policy at Amanda AI
Adoption is bound to increase
Facial recognition is nearing the chasm between early adopters and the mainstream. As use cases continue to emerge, more businesses will realize the potential improvement that facial recognition brings to the user experience. As training datasets continue to diversify and privacy regulations continue to mature, facial recognition technology will become safer and more secure. Facial recognition data is no different from any other sensitive data stored by companies whose services you already use.
For the laggards and non-adopters, there will always be the option to opt-out of facial recognition and employ the status quo alternative—similar to airline passengers who have the option to choose between physical and digital (and soon facial recognition) boarding passes.
Like any other tool, facial recognition is a double-edged sword. We would be doing consumers a disservice by banning it rather than trying to reforge a safer and more secure variant. Our developers at Amanda AI are always looking to build technical solutions to business problems. If you have any questions about integrating AI into your business operations workflow, hit us up!