London Or San Francisco – Who Is On The Right Track With Facial Recognition?

Is It Safe To Use Facial Recognition Technology?

Decades ago, people saw the wonders of facial recognition only in science-fiction films. In fact, the technology came into existence in the 1980s, well before many sci-fi movies brought it to the public eye. Now, as we approach 2020, facial recognition has become a ubiquitous technology. People use it on their smartphones to unlock their devices or to authorise high-risk transactions.

However, beyond these everyday personal uses, the application of facial recognition to smart city safety remains controversial. Some smart cities praise the technology, while others are concerned about its ethics. So, we have picked out some of the developments taking place in different smart cities to provide a glimpse of how the technology may be used in the future.

London Is Concerned About The Ethics Of Policing

The Metropolitan Police (the Met) in London has been testing facial recognition technology to understand its effectiveness and its potential to assist police operations. The technology automatically scans the faces of people in a crowd as they pass a camera in a public place and cross-checks them against a ‘watch-list’ specially compiled from police databases.

Following this, an independent advisory panel conducted a six-month investigation. The resulting Ethics Panel report examined the Met’s use of the technology to assess whether the trials were progressing in a manner that maintains public trust.

Going further, the report found that images which appear to match those on the watch-list are first checked by police officers and, if a credible match is confirmed, an operational decision is taken to speak to the person identified. The report also noted that recordings are retained for up to 30 days so that a technical assessment can be carried out, after which they are deleted. It concluded that, on the question of legality, there is a lack of clarity about the use of the technology and how it is regulated. The panel therefore recommended that the Met publish its view on the legality of its use before any further trials.
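To make that workflow a little more concrete, here is a minimal, illustrative sketch of such a watch-list pipeline in Python. This is not the Met’s actual system: the function names, the similarity threshold and the stubbed matching logic are all assumptions made purely to show the shape of the process (automated candidate match, human review, operational decision, time-limited retention).

```python
from datetime import datetime, timedelta

# All names and numbers below are illustrative assumptions, not the Met's system.
REVIEW_THRESHOLD = 0.80                 # assumed score above which an officer reviews a match
RETENTION_PERIOD = timedelta(days=30)   # report: recordings retained for up to 30 days


def detect_faces(frame):
    """Stub: a real system would run a face detector over the camera frame."""
    return frame.get("faces", [])


def similarity(face, entry):
    """Stub: a real system would compare face embeddings; here we fake a score."""
    return 1.0 if face["id"] == entry["id"] else 0.0


def best_watchlist_match(face, watchlist):
    """Return the watch-list entry with the highest similarity score."""
    if not watchlist:
        return None, 0.0
    scored = [(entry, similarity(face, entry)) for entry in watchlist]
    return max(scored, key=lambda pair: pair[1])


def officer_confirms(face, entry):
    """Human-in-the-loop step: candidate matches are checked by an officer first."""
    print(f"Officer review requested: camera face {face['id']} vs watch-list {entry['id']}")
    return True  # stand-in for the officer's judgement


def purge_expired(recordings):
    """Delete recordings older than the retention period."""
    cutoff = datetime.utcnow() - RETENTION_PERIOD
    recordings[:] = [r for r in recordings if r["captured_at"] >= cutoff]


def process_frame(frame, watchlist, recordings):
    """Scan faces in a frame, cross-check the watch-list, and act only on confirmed matches."""
    for face in detect_faces(frame):
        entry, score = best_watchlist_match(face, watchlist)
        if entry and score >= REVIEW_THRESHOLD and officer_confirms(face, entry):
            print(f"Operational decision: speak to person matching {entry['id']}")
    # Footage is kept only long enough for a technical assessment, then deleted.
    recordings.append({"frame": frame, "captured_at": datetime.utcnow()})
    purge_expired(recordings)


if __name__ == "__main__":
    watchlist = [{"id": "W-001"}, {"id": "W-002"}]
    recordings = []
    process_frame({"faces": [{"id": "W-002"}, {"id": "unknown"}]}, watchlist, recordings)
```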

There is no national commissioner or framework for the use of facial recognition, so the panel concluded that the Met should work closely with the relevant commissioners to ensure proper oversight of its use.

In addition, the panel provided a list of recommendations based on its view that the trials would be more effective with public support. These include:

  • Citizens looking for information related to trials should be able to find it easily on the Met’s website.
  • The Met should inform the citizens of the questions the trials are meant to address and why public involvement is important.
  • Trial sites should be selected in a way that minimises perceptions of bias against certain communities.
  • When informing citizens about the trials, the Met should make clear that declining to be scanned will not be treated as grounds for suspicion.
  • Citizens should be informed about where and when the technology will be used and how it will engage with them.

Furthermore, the panel has commissioned a public opinion survey on citizens’ views of the technology. The findings will be published later this year.

According to the Met, some of these recommendations have already been implemented, and it is working on a proper legal and ethical framework to support the use of facial recognition.

San Francisco Is Almost Against The Use Of Facial Recognition

Although San Francisco is the innovation hub of the US, facial recognition technology does not seem to impress the city. In January 2019, a San Francisco lawmaker introduced legislation that could make the city the first in the US to ban the use of facial recognition technology.

The bill, known as the “Stop Secret Surveillance Ordinance”, argues that the downsides of the technology outweigh its benefits. It also states that the technology will exacerbate racial injustice and threaten our ability to live free of constant government monitoring.

The allegations of “racial injustice” refer to some of the questionable aspects of facial recognition, particularly its tendency to misidentify African-Americans, other people of colour, and women. Law enforcement is expanding its use of facial recognition, as already seen with ‘Rekognition’ – facial recognition software developed by Amazon. Hence, if the bill is passed into law, the city will ban the purchase or use of comparable facial recognition technology.

On the other hand, China is in favour of using the technology to enhance smart city services and is embracing facial recognition to support public services.

Why The Controversy?

The area of biometrics, and its potential intrusiveness in smart city life, is complicated. Since the 9/11 attacks in 2001, the US federal government has invested heavily in facial recognition technology with the aim of preventing criminal conduct. But mass surveillance can go wrong: it can lead to innocent people being misidentified as potential criminals, ruining lives.

For example, the flaws of facial recognition technology were exposed after the Boston Marathon bombings, when the technology could not match surveillance footage to database images even though the database contained images of the suspects. While sophisticated facial recognition technology can drastically improve a smart city’s security system, its weaknesses are daunting.

Here are some points that show how the technology can make mistakes.

How Can Face Recognition Technology Make Mistakes?

1. Camera Angle – The camera angle has a huge impact on whether a face is processed correctly. For an accurate match, the technology requires images from multiple angles, including 45-degree, frontal and profile views. Obstructions such as facial hair or a hat can also get in the way. Keeping databases updated with images from several angles helps prevent such failures.

2. Image Quality – The quality of the image affects how well facial recognition algorithms work. Surveillance video is low quality compared with a still from a digital camera: even HD video tops out at 1080p, roughly the equivalent of 2 megapixels on a digital camera (the sketch after this list works through these numbers). As with camera angle, the angle of the face in the frame also affects the results.

3. Image Size – If a camera captures the target from a distance, the detected face may be only 100 to 200 pixels on a side, a very small image. When the face-recognition algorithm finds a face in an image or video, the size of that face relative to the enrolled database images can affect the results. Moreover, scanning an image for faces of varying sizes is a processor-intensive activity.

4. Processing and storage – HD video occupies a significant amount of disk space, even though its resolution is low by still-camera standards. Processing every frame of video takes tremendous effort, so generally only a fraction (10-25%) of frames is actually run through the facial recognition software. To reduce overall processing time, agencies can use clusters of computers, but adding computers also requires considerable data transfer over the network, which runs into input-output restrictions and limits processing speed. The sketch below illustrates the kind of arithmetic and frame sampling involved.
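To make the numbers in points 2-4 concrete, here is a minimal back-of-the-envelope sketch in Python. The resolution, face sizes and sampling fraction are the figures quoted above; everything else (the frame rate, the function names, the one-hour example) is an assumption made purely for illustration.

```python
# Back-of-the-envelope numbers behind points 2-4 above (illustrative assumptions only).

FRAME_WIDTH, FRAME_HEIGHT = 1920, 1080   # 1080p HD video frame
FPS = 30                                  # assumed surveillance frame rate
SAMPLE_FRACTION = 0.15                    # only ~10-25% of frames are typically processed


def megapixels(width: int, height: int) -> float:
    """Resolution expressed in megapixels, as a digital camera would quote it."""
    return width * height / 1_000_000


def face_share_of_frame(face_side_px: int) -> float:
    """Fraction of the frame occupied by a square face crop of the given side length."""
    return (face_side_px ** 2) / (FRAME_WIDTH * FRAME_HEIGHT)


def frames_to_process(duration_s: int, fraction: float = SAMPLE_FRACTION) -> int:
    """Number of frames actually run through recognition when only a fraction is sampled."""
    return int(duration_s * FPS * fraction)


if __name__ == "__main__":
    # Point 2: even HD video is only about 2 MP per frame.
    print(f"1080p frame = {megapixels(FRAME_WIDTH, FRAME_HEIGHT):.1f} MP")

    # Point 3: a distant face of 100-200 px per side is a tiny slice of the frame.
    for side in (100, 200):
        print(f"{side}px face = {face_share_of_frame(side):.1%} of the frame")

    # Point 4: sampling only a fraction of frames keeps the workload manageable.
    one_hour = 3600
    print(f"Frames in one hour of video: {one_hour * FPS:,}")
    print(f"Frames processed at {SAMPLE_FRACTION:.0%} sampling: {frames_to_process(one_hour):,}")
```

Running this prints roughly 2.1 MP per frame, a 100-pixel face covering under half a percent of the image, and 16,200 of the 108,000 frames in an hour of footage actually being analysed under these assumptions, which is why distant faces and processing budgets matter so much in practice.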

What’s Next?

As with every technology, facial recognition will improve over time. As the technology matures and higher-definition video cameras become available, we will be able to deploy it for meaningful and accurate results. Computer networks will also become capable of transferring more data, allowing processors to work faster. Facial-recognition algorithms will get better at picking out faces from an image and matching them against a database. And obstructions like a hat or sunglasses will be more easily overcome.

So, what do you think? Should smart cities keep using the technology for safety and security while developing a legal and ethical framework, like London? Or should they ban its use until the technology reaches full maturity?
