The horror of the events at the Boston Marathon two days ago is still very raw. People are united in their sympathy for the victims and their families, their revulsion at these despicable acts and their solidarity in not succumbing to terror. The FBI vows to “…go to the ends of the Earth to find the bomber”, with President Obama openly stating the “…heinous and cowardly…” event to be “…an act of terror”.
The investigation into the bombing is in its nascent phases, with Boston Police Commissioner Ed Davis admitting that they are dealing with the “…most complex crime scene that we have dealt with in the history of our department.” Still, authorities are already homing in on crucial evidence and beginning to release details; BBC News reports that a source close to the investigation told the AP news agency that the bombs consisted of explosives placed in 1.6-gallon pressure cookers, one with shards of metal and ball bearings, the other with nails, and placed in black bags that were left on the ground. Images of what appears to be a trigger mechanism have already been released.
Forensic investigators have a long and daunting task ahead of them, with countless hours of CCTV footage to pore over, and some people are already suggesting that the application of face recognition technology can play a crucial role in identifying potential suspects. However, CCTV footage, especially from older systems that have not been specifically configured for the task, is notoriously unreliable as a source for face recognition.
Perhaps more useful at an event attended by so many, most of whom will have been carrying and using mobile phones and cameras, is the footage acquired by members of the public. Images and video captured by these high-quality devices will potentially be of much greater use than CCTV and authorities have appealed for people to turn in photographs and videos they have taken in the hope that they will contain useful intelligence. Much of this media will already have been uploaded to public sites such as Facebook and YouTube.
An Automated Media Processing Cloud
A solution to automate the processing of this staggering amount of media to quickly and efficiently unlock actionable intelligence is required to save significant time and human capital. The ability to automate this would allow the more efficient application of resources as well as massively speed up a time-critical investigation.
However, the need goes far beyond the simple application of face recognition technology.
What is needed is a server-based system that can process vast amounts of media quickly to transform files from mobile phones, flash memory devices, online sources, confiscated computers and hard drives and video surveillance systems into searchable resources. This would enable forensic investigators to work more efficiently and effectively by automatically finding, extracting and matching faces from very large collections of media to discover, document and disseminate information in real-time.
Such a powerful video and photograph processing architecture should automatically ingest, process, analyse and index hundreds of thousands of photographs and videos in a centralised repository to glean associations in a cloud environment. Instrumental would be the ability to:
Automatically find, extract and index faces to enable the biometric and biographic searching of media.
Create and manage watchlists of people of interest via a web-based interface.
Find all instances of photos and videos where a person of interest has been seen.
Quickly review and process media to identify, locate, and track persons of interest, their associates and their activities.
Discover, document and view associations between people of interest, their activities and networks.
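The ingest-and-search workflow described above can be sketched in a few lines. This is a minimal illustration, not an actual product implementation: the embedding vectors, file names, timestamps and similarity threshold are all invented, and a real deployment would use a trained face detection and embedding engine rather than hand-written toy vectors.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two face embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

class MediaIndex:
    """Centralised repository: maps each extracted face back to its source media."""

    def __init__(self):
        self.entries = []  # (embedding, media_id, timestamp)

    def ingest(self, embedding, media_id, timestamp):
        """Index one face found in one photo or video frame."""
        self.entries.append((embedding, media_id, timestamp))

    def search(self, probe, threshold=0.9):
        """Return all media in which a face similar to `probe` appears."""
        return [(media_id, ts) for emb, media_id, ts in self.entries
                if cosine_similarity(probe, emb) >= threshold]

# Toy data: the same (invented) face appears in two of three media files.
index = MediaIndex()
index.ingest([0.9, 0.1, 0.2], "video_0001.mp4", "14:02:10")
index.ingest([0.1, 0.9, 0.3], "photo_0417.jpg", "14:05:33")
index.ingest([0.88, 0.12, 0.21], "photo_0522.jpg", "14:09:01")

hits = index.search([0.9, 0.1, 0.2])  # probe from a watchlist photo
```

A real system would replace the linear scan with an approximate nearest-neighbour index to scale to hundreds of thousands of faces.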
Finally, a public-facing interface to such a system would enable members of the public to upload their media in a self-service manner to enable quick and ready access by the authorities to this raw data for automatic processing.
Across Europe, governments and law enforcement agencies are increasingly unable to combat a deterioration in public safety. The economic crisis that is increasingly fuelling public disorder is also paralysing our police and intelligence agencies with draconian budget cuts.
Having previously invested heavily in infrastructure, these agencies have at their disposal huge volumes of data in the form of media, but have no way to unlock the potential intelligence bonanza it contains. Vast sums are being spent allocating experienced and expensive human capital to the rote task of watching countless hours of media in the hope of randomly finding useful information.
A solution to automate this processing to quickly and efficiently unlock actionable intelligence from this staggering amount of data is required. The potential to improve public safety whilst simultaneously enabling the more efficient use of our public finances is huge.
There has been an explosion in digital media. Law enforcement and intelligence agencies have amassed large collections of video and photographs from multiple sources that are stored in multiple file formats. There is a need to automate the processing of this raw data to turn it into actionable intelligence to enable you to “connect the dots”.
Discover how solutions available from Allevate can dramatically save you time and help you to operate more efficiently by applying data mining principles to digital media:
Automatically find and match faces from huge stores of videos and photos.
Identify individuals from watchlists and track them across multiple videos.
Extract faces from video and automatically cross-reference with all other video.
Associate multiple videos and photos based upon their active content and the individuals they contain.
Apply enhanced link analysis to identify an individual across multiple video sources.
Automatically build links between different individuals based on their associations in media, whether they be known or unknown.
Automatically and graphically display web-based drill down link analysis diagrams.
Determine “Pattern of Life” analysis for specific individuals and flag deviations from the norm.
Manage and access your entire video and photo repository from a single web interface, automatically transforming multiple video formats.
Apply powerful analytical tools to your digital media content.
Work more efficiently. Get more results. Exploit the masses of raw media from multiple sources to create actionable intelligence with less manpower.
The accuracy of face recognition has increased dramatically. Though biometric technologies have typically been deployed by governments and law enforcement agencies to ensure public, transport and border safety, this improvement in accuracy has not gone unnoticed by retailers and other commercial organisations. Niche biometric companies are being snapped up by internet and social media behemoths to further their commercial interests, and retailers and other enterprises are experimenting with the technology to categorise customers, analyse trends and identify VIPs and repeat spenders. Whilst the benefits to business are clear and seductively tantalising, it has been impossible to ignore the increasing murmurs of discontent amongst the wider population. Concerns over intrusion of privacy and the constant monitoring of our daily lives threaten to tarnish the reputation of an industry which has endeavoured to deliver significant benefit to society through improved public safety. Can the industry be relied upon to self-regulate? Will commercial enterprise go too far in their quest to maximise profits? How far is too far? How can organisations ethically make use of face recognition technology to increase efficiencies and drive revenue, whilst respecting and preserving privacy and maintaining the trust of their clientele and society?
Having previously written on the subject of the application of face recognition in airports as applied by law enforcement and border control, this article looks at the increasing exploitation of the technology for commercial advantage. As well as contrasting the different use-cases defined by commercial exploitation versus public safety applications, this article also touches upon the very different agendas of those using the technology and the privacy issues that arise.
Most people accept that the reality of the world today necessitates certain inconveniences and intrusions. We tolerate and increasingly expect surveillance technology to be deployed wisely in situations where there is demonstrable benefit to public safety, such as at transport hubs, large gatherings, public events or areas of critical national infrastructure. The key factor behind such tolerance is comprehension; we understand the reasoning behind these uses and the benefits to ourselves, namely our safety. Though we don’t necessarily like it, we generally accept it.
However, it has been difficult to avoid the increasing coverage in the media of the use of face recognition by commercial organisations. The single most common term that is bandied about in reference to these deployments tends to be “creepy”. The technology being deployed is very often similar, if not identical, to the technology deployed for public safety applications. So precisely what is it about this use of technology that people are averse to?
In order to understand this, it is useful to consider in each case whom people perceive to benefit from the system. In the case of public safety, the people perceived to benefit are us: the citizens. In the case of commercial use, people perceive the commercial organisation deploying the technology as the beneficiary. In this scenario the term “benefit” generally means profit, either by increasing revenues or decreasing costs. There is often a general distrust within society of large corporations profiting from the exploitation of the populace, and this is especially true in times of prolonged economic difficulty. This is further complicated by the fact that our biometric traits are viewed as being something that are intrinsically ours and that are a constituent part of our definition.
3 Examples: Uses to Reduce Cost and Generate Revenue
It hasn’t taken long for business-minded technology companies to devise a whole range of new uses of face recognition, all focussed on delivering bottom-line business benefit. An important characteristic of face recognition is that it is only useful if you have something to match a photograph (probe) against, whether it is another photograph, or a database of photographs (reference set). It is the management, control of access to and often the creation of these reference sets that generate the most privacy concerns.
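The probe-versus-reference-set relationship can be illustrated with a toy 1:N search: one probe image is compared against every entry in the reference set and the best match above a threshold is returned. The names, vectors and threshold below are hypothetical; real face embeddings are high-dimensional vectors produced by a trained model.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def identify(probe, reference_set, threshold=0.9):
    """1:N identification: return the best-matching name above the
    threshold, or None if nobody in the reference set is close enough."""
    best_name, best_score = None, threshold
    for name, embedding in reference_set.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Invented reference set; real embeddings are high-dimensional.
reference_set = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
```

Note that the threshold embodies the trade-off between false matches and missed matches discussed later: set it too low and strangers match; too high and genuine matches are missed.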
Let us briefly discuss some of the manners in which the technology is currently being deployed.
3.1 Efficiently Identifying Customers and Staff
This perhaps is the most traditional use of biometrics within commercial organisations. The ability to positively identify people, whether they are your staff or increasingly your customers, is absolutely necessary for the day-to-day operation of business and indeed society. Biometrics can be applied to ensure identity in a more cost-effective and positive manner, thereby introducing efficiencies into the business. It is an unfortunate reality that staff are responsible for a significant amount of theft. Adopting biometric technology can eliminate password theft and help mitigate the risks of identity sharing, thereby reducing fraudulent and unauthorised transactions and ensuring relevant personnel are physically present at the time of a transaction. Additionally, customers can be identified positively before conducting transactions. Cashless payments provide numerous efficiency opportunities by allowing elimination of cash and credit cards at point of payment altogether.
3.1.1 Privacy Considerations
These examples are usually only possible with the consent and approval of the individuals in question. Customers typically register for a biometric payment system, for example, in order to realise a benefit offered by the enterprise. The enterprise in turn must satisfy the customer that their biometric reference data will be kept and managed securely and only for the stated purpose.
The advent of face recognition provides new manners in which you can identify your customers, for example from CCTV cameras as they enter shops or as they view public advertising displays. It is when these activities are performed without the individual’s knowledge or consent that concerns arise.
3.2 Identifying Who is Entering Your Premises
These solutions are designed to integrate with existing surveillance systems; faces are extracted in real-time from a CCTV video feed and matched against a database of individuals. When the system identifies an individual of interest it can raise an alert that can be responded to rapidly and effectively, or log where and when the individual was seen for the formation of analytical data.
This can be used to provide valuable real-time or analytical intelligence to organisations, such as:
Notification of the arrival of undesirables, such as banned individuals or known shoplifters.
Notification of the arrival of valued or VIP customers.
Collation of behaviour data of known customers, such as how frequently they visit, which stores they visit and integration with loyalty programmes.
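The alerting and logging behaviour described above might be collated along these lines, assuming a matching engine has already resolved each camera detection to a known identity (or None for an unknown face). All identifiers and sample data here are invented for illustration.

```python
from collections import defaultdict

# Hypothetical watchlists maintained via the web interface.
BANNED = {"known_shoplifter_17"}
VIP = {"vip_member_3"}

def process_detections(detections):
    """detections: iterable of (camera_id, timestamp, identity-or-None).
    Returns real-time alerts plus a per-identity sighting log for analytics."""
    alerts = []
    sightings = defaultdict(list)  # identity -> [(camera, time), ...]
    for camera, ts, identity in detections:
        if identity is None:
            continue  # unknown face: nothing is logged against an identity
        sightings[identity].append((camera, ts))
        if identity in BANNED:
            alerts.append(("BANNED", identity, camera, ts))
        elif identity in VIP:
            alerts.append(("VIP", identity, camera, ts))
    return alerts, sightings

detections = [
    ("door_cam", "09:00", None),
    ("door_cam", "09:05", "known_shoplifter_17"),
    ("till_cam", "09:20", "vip_member_3"),
]
alerts, sightings = process_detections(detections)
```

The sighting log is what feeds the loyalty-programme integration mentioned above; the alerts are what staff respond to in real time.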
3.2.1 Privacy Considerations
There are a number of potential issues with regards to privacy that need to be considered here, most notably:
How is the reference set obtained? Who is in it?
Do you have the permission of the individuals in the reference set?
How are the photographs in the reference set stored and secured?
Are the members of the reference set aware of how and when their photos will be searched?
Are the people crossing the cameras aware that their photos are being searched against pre-defined reference sets?
What action is taken if a probe image matches against the reference set? What are the implications of a match or a false match?
What is done with the probe images after searching the reference set? Are they discarded or stored?
The number of possible uses of this functionality and the resulting business benefits are too large to enumerate here, but very careful consideration must be given to the proportionality of the solution when measured against the requirement. Additionally, the views of the individuals whose images you are processing, both the people within the reference set and the people whose faces you are sampling as probe images, should be well understood and considered; approval should be sought for inclusion in a reference set.
3.3 Analysing How People Move Through Your Premises
Face recognition can also be used to determine how people move through premises, such as a department store. Understanding peak and quiet times is essential to enable sufficient and efficient staffing and resourcing. Raising alerts to manage unforeseen queues is critical for ensuring customer satisfaction.
Face recognition applied to CCTV can timestamp when individuals are detected at known camera locations, thereby providing highly accurate information on people flows such as:
How long on average does it take to move between two or more points (such as from the entrance of a store to a checkout or exit)?
What are the average flow times across the day and when are the peaks?
How does this vary with the time of day?
This can be used to determine how people typically move through the premises, and how long on average they linger in specific areas. You can also analyse this data across different age and gender demographic categories.
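The transit-time calculation described above amounts to differencing first-seen timestamps for the same anonymous face track at two camera positions. A minimal sketch, with invented camera names and track data:

```python
def average_transit_seconds(tracks, origin, destination):
    """tracks: anonymous track_id -> {camera_id: first_seen_time_in_seconds}.
    Mean time taken to move from `origin` to `destination`, or None if no
    track was seen at both cameras in that order."""
    durations = [seen[destination] - seen[origin]
                 for seen in tracks.values()
                 if origin in seen and destination in seen
                 and seen[destination] > seen[origin]]
    return sum(durations) / len(durations) if durations else None

# Three anonymous visitors re-identified across two cameras (toy data).
tracks = {
    "track_1": {"entrance": 0, "checkout": 120},   # 120 s transit
    "track_2": {"entrance": 60, "checkout": 300},  # 240 s transit
    "track_3": {"entrance": 90},                   # never reached checkout
}
```

Bucketing the same durations by hour of day would yield the peak-time analysis mentioned above; no identity information is needed at any point.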
3.3.1 Privacy Considerations
Importantly, no personally identifying information is recorded. There is no interest in identifying who the individuals moving through the premises are or in taking any specific action on any specific individual. There is no need to search against any pre-defined reference sets.
However, there are some issues you should consider when deploying such systems:
Biometric matching of people crossing the cameras still occurs. The probe photos are matched against other anonymous people that have previously crossed the cameras.
You should carefully consider how long this data will be retained for matching (generally hours), and the nature of the premises being monitored.
Generally the privacy considerations of this application are minimal.
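The short retention window mentioned above might be enforced along these lines. This is an illustrative sketch only; the retention period and the embedding values are invented, and a real system would purge on a schedule rather than on every observation.

```python
class AnonymousFlowMatcher:
    """Retains anonymous face embeddings only for a short window (hours,
    not days), so transit times can be computed without accumulating a
    permanent biometric database. No identities are ever stored, only
    embedding vectors and the time they were seen."""

    def __init__(self, retention_seconds=4 * 3600):
        self.retention_seconds = retention_seconds
        self.recent = []  # list of (embedding, seen_at_seconds)

    def purge(self, now):
        """Discard any embedding older than the retention window."""
        self.recent = [(emb, t) for emb, t in self.recent
                       if now - t <= self.retention_seconds]

    def observe(self, embedding, now):
        """Record a newly seen face, purging expired data first."""
        self.purge(now)
        self.recent.append((embedding, now))

matcher = AnonymousFlowMatcher(retention_seconds=4 * 3600)
matcher.observe([0.1, 0.2], now=0)           # seen at opening time
matcher.observe([0.3, 0.4], now=3 * 3600)    # seen three hours later
matcher.purge(now=5 * 3600)                  # five hours in: first face expires
```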
3.4 Building Databases of People Visiting Your Premises
As previously mentioned, face recognition is only useful if you have images to match against. Previous examples have dealt with matching the faces of people crossing the camera against known databases of individuals. A potentially far more valuable practice to enterprise is to dynamically build reference databases consisting of the people who cross the camera. Unfortunately, this is also the practice that riles the populace the most and is rife with potential privacy intrusions.
The increase in the use of CCTV cameras has led to an ever increasing volume of archived video footage. The intelligence in this footage typically remains inaccessible unless appropriately analysed and indexed. Such systems can be used to populate databases of “seen” individuals, thereby enabling searching for specific people of interest to determine if, when and where they have been present. This then allows the collation of data such as how frequently individuals visit your premises, how long they stay and when they last visited, as well as which of your locations an individual frequents and which is the most common.
If this functionality is combined with the ability to search and cross-reference against databases of known individuals, for example a subscribed customer database, this can then allow you to build very valuable analytical data on specific individuals, thereby enabling you to predict future behaviour and market more specific services and products.
3.4.1 Privacy Considerations
Tread very carefully. Some of the most vocal opposition to the application of face recognition technology results from the capture of biometric data of potentially large numbers of people without their knowledge or consent, especially if the people are then identified and profiled against existing databases. In many jurisdictions around the world, the retention of such data may be in contravention of privacy legislation.
3.5 Analysing Who is Viewing What to Target Your Advertising
There have been many examples in recent months of retail and advertising organisations using technology to determine the approximate age and gender of people entering premises or viewing advertising walls. Though not technically face recognition, it is still worth mentioning here as often the distinction between the two uses is blurred. The premise is simple: such solutions can count the number of people watching an advert at any given time, and even estimate their age, dwell time, sex and race. While providing invaluable information for the advertiser, it can also allow them to dynamically change the adverts in real time to more appropriately target the demographic of the current viewer(s). Such solutions are increasingly being deployed in Japan and it is only a matter of time until they are more widely considered in Europe and North America.
3.5.1 Privacy Considerations
The key consideration here is that this form of technology is not actually identifying anybody or extracting personally identifiable information. There does appear to be some opposition to this, though none of it very vocal or serious. It is difficult to see any infringement of privacy, and the technology may often be advantageous to the consumer, as advertising may be more specifically tailored to their needs.
3.6 Matching People on Your Premises with Social Media Accounts
Both Google and Facebook have acquired face recognition technology companies over the past year. Facebook’s users, for example, publish over 300 million photos onto the site every day, thereby making Facebook the owner of the largest photographic database in the world.
Facebook is already trialling a new service called Facedeals which enables its users to automatically check in at participating retail sites equipped with specially enabled cameras. In order to entice users to participate, the participating retailer can offer special deals to Facebook users when they arrive. The flow of information can be bi-directional: such automatic check-in data, coupled with users’ manual check-ins, can be used by Facebook to hone their profile of individuals, allowing them to target users with more relevant advertising. The system is entirely voluntary, and the reference sets searched by retailers only contain photos of users who have opted into the service.
A recent paper on face recognition and privacy observes that several trends are converging:
High levels of accuracy now attainable by face recognition algorithms.
Ubiquity of social networking with its inherent large photographic databases.
Availability of cheap computer processing and the advent of cloud computing.
…coupled with the fact that “face recognition occupies a special place [within the family of biometrics in that] it can be surreptitiously performed from a distance, without subject cooperation and works from ordinary photographs without the need for special enrolment…” is “ … creating an environment … that threatens privacy on a very large scale…”.
One of the main premises of the paper is that this issue “… will require the active cooperation of social media providers and the IT industry to ensure the continued protection of our reasonable expectations of privacy, without crippling use of this powerful technology”.
5 Can All This be Done Ethically? (What About Privacy?)
Can organisations ethically make use of face recognition technology to increase efficiencies and drive revenue, whilst respecting and preserving privacy and maintaining the trust of their clientele and society?
Reputable organisations such as the Biometrics Institute have gone so far as to publish invaluable privacy charters to act as a “…good executive guide operating over a number of jurisdictions…” which should be reviewed and seriously considered before any deployment of biometric technology.
Some of these fundamental principles are outlined below within context of the subject matter of this article and specifically within the context of commercial use of the technology. These will not necessarily apply when discussing matters of public safety, law enforcement and national security.
5.1 Limit the Collection of Data
A fundamental principle of privacy concerns the limitation of the collection of data to that which is necessary. Organisations should not collect more personal information than they reasonably need to carry out the stated purpose. Biometric data by its very nature is sensitive and absolute assurance must be provided that it will be managed, secured and used appropriately. However, a key consideration in the use of this technology should be proportionality; is the collection of such sensitive data justified for the benefit realised?
5.2 Educate and Inform
People on the whole generally resent not being informed, especially in matters that involve them. History is littered with IT projects that have failed because key stakeholders were not involved from the outset, were not sufficiently informed and whose buy-in to the process was not obtained. Customers are one of the most important stakeholders and these issues are even more critical when dealing with their personal and biometric data.
There is a very interesting video on YouTube that illustrates this point nicely. It shows a man walking around filming random strangers without explanation. The reaction is predictably negative and sometimes hostile. The point the video is trying to make is obvious: most people do not approve of being filmed, so why do we so readily accept surveillance cameras? The message that actually comes across is clearer: people object when they do not understand intent, purpose or benefit to themselves. The cameraman offered no explanation of his intent, even when challenged. Objection was guaranteed.
5.3 Be Truthful and Accurate when Describing the Business Purpose and Benefit
As part of the process of informing, organisations should also be direct and open in disclosing not only the existence of the systems, but the scope, intent and purpose of the solutions. Why are you utilising an individual’s biometric data? What benefit does it serve? What is the scope of the use of this data?
Importantly, steer well clear of “scope creep”. All too often it is tempting, once you have data, to use it for purposes other than the stated purpose for which it was collected. Such endeavours will inevitably lead to a loss of trust.
5.4 Provide Benefit to the Customer
Simply understanding the scope, purpose and intent of a system generally will not be sufficient to garner acceptance of the system. While people are generally astute enough to realise that businesses are in the business of making money, they’ll want to know what is in it for them. What is their benefit?
An example with which most of us will be familiar is grocery store loyalty or “club” cards. Whilst we all understand that the objective of the grocery store is to profile and analyse our spending in order to better market to us, a majority of us still subscribe in order to receive the enticements and benefits on offer.
Within the context of face recognition, Facebook’s Facedeals programme demonstrates this principle nicely. Users understand the benefit to Facebook and the retailer, yet they still may choose to opt in to the programme because there is a clear and discernible benefit for them to do so as well, namely targeted discounts and offers at retail outlets.
This is also affirmed by a 2012 IATA survey, which found that “… most travellers are receptive to the idea of using biometrics within the border control process.” Why? Because there is clear and discernible benefit to them in the form of a more efficient passenger process and increased levels of security.
5.5 Seek Consent and Operate on an Opt-in Principle Where Appropriate
Biometric enrolment into such systems should not be mandatory. Individuals should be allowed the ability to opt in, with an opt-out status being the default. Clearly this is not always feasible when considering people in public places crossing the cameras. However, if they are being identified against reference sets, the individuals in the reference sets should be there only with consent. Automatic enrolment into reference sets or biometric databases should involve the consent and approval of those enrolled.
Importantly, people should not be penalised should they choose not to opt-in; they should still be allowed a mechanism of transacting and conducting their business.
The accuracy of face recognition has increased dramatically. Retailers and other commercial organisations are investigating ways to exploit this technology to increase revenues, improve margins and enhance efficiency. Social media companies own the largest photographic databases in existence and are under pressure from shareholders to find ways to monetise these assets. As these explorations gather pace, so does the discontent of privacy advocates.
This article has outlined a number of ways face recognition can be used by enterprise and highlights potential privacy issues. Is it possible to ethically use face recognition technology and respect privacy? This will only be possible if enterprise maintains the trust and respect of its customers. Open and honest discourse is the best manner in which to achieve this. This should be accompanied by delivering real benefit to all parties involved in a manner that also empowers the customer; nobody should be forced to enrol into biometric systems or be disenfranchised from refusing to do so.
How far is too far? History has shown that there is no absolute answer to such questions. The exact location of the line to be crossed is always a factor of and changes with the times we live in. History has also shown, especially as it pertains to technology, that it is next to impossible to put the genie back into the bottle once released. It is now the collective responsibility of all to ensure the proper and ethical use of this technology in a manner that delivers the maximum benefit. This will require the active cooperation of social media, enterprise, the IT industry and civil liberty groups to ensure the continued protection of our reasonable expectations of privacy without crippling the use of this powerful technology. In the end, the people have the loudest voice. If enterprise crosses the line, customers will pass judgement with their wallets.
7 About the Author
Carl is the founder of Allevate Limited (http://allevate.com), an independent consultancy specialising in market engagement for biometric and identification solutions. With over 20 years’ experience working in the high-technology and software industry globally, he has significant experience with identification and public safety technologies including databases, PKI and smartcards, and has spent the past 10 years enabling the deployment of biometric technologies to infrastructure projects. Carl started working with biometrics whilst employed by NEC in the UK and has subsequently supported NEC’s global and public safety business internationally.
Residing in the UK, Carl was born and raised in Canada and holds a Bachelor of Science degree in Computer Science and Mathematics from the University of Toronto.
Especially interesting is the prospect of combining biometrics over CCTV with artificial intelligence and behavioural recognition, as this does appear to be the way things are moving.
I agree that biometrics, and especially face recognition, can provide huge benefit to society. I also agree that there is a certain level of concern and distrust by large swathes of the population, some of it well-founded, and some of it based on misperception and incorrect knowledge.
In either case, I think it is dangerous to simply dismiss these concerns and objections simply because we feel “we know best”. I believe society can be much better off with the well placed and controlled use of this technology, but I also believe that we should be working with the civil liberties groups rather than fighting them. Ultimately, these systems need to be accepted if they are to succeed, and in order for this to happen, the public has to better understand the benefit to themselves, and have trust in the people using them.
I am a firm believer in the use of biometric technology to further public safety and efficiency.
However, a key consideration in the use of this technology should be proportionality; is the collection of such sensitive data justified for the benefit realised?
Biometric data by its very nature is sensitive, and absolute assurance must be provided that it will be managed, secured and used appropriately. Given this, the consent of those whose data will be captured should be sought, and the use of such systems should not be mandated without such consent (with caveats for government, law-enforcement and public-safety deployments).
Minors, by definition, are unable to supply consent, so the responsibility to do so (or to withhold consent) must fall upon the parents, after they have been given the opportunity to satisfy themselves that their child’s data is appropriately safeguarded and that all privacy concerns have been considered within the context of the benefit to their child.
I do not believe (though I’m not a legal expert) that the person filming did anything illegal, yet people clearly took offence at his actions. The point the cameraman is obviously trying to make is why then do people so willingly accept being recorded by surveillance cameras?
The main point this film misses, in my opinion, is that people do not understand the purpose or intent of the cameraman’s actions; they assume malfeasance, which understandably provokes a negative response.
In contrast, for the most part, most people understand the intent and purpose of a surveillance camera in a public place (such as a store or train station): to protect public safety.
The main lesson to be learnt from this (in my opinion) is the importance of education and awareness, and ensuring your users / key stakeholders are aware of proceedings and bought into the concept from the outset.
NEC Europe, leaders in biometric technology, and SITA, the air transport IT specialist, announced an agreement to jointly provide an automated border control (ABC) gate solution. It incorporates sophisticated biometrics technology for use at immigration control points at airports in the European Union. The agreement comes as EU member states implement recommendations to move to self-service border control using ABC gates.
The speed and accuracy of the SITA/NEC automated border control gate accelerates passenger flows at border control checkpoints while improving security and resource management. It incorporates face recognition, and optionally fingerprint verification, against e-passport data. Passengers can be processed through the SITA/NEC ABC gate in ten seconds or less.
“SITA has significant experience in dealing with the challenges facing border control authorities around the globe and automated border control gates are recognized as a potential solution to the combined goals of improving the passenger journey and increasing border security,” said Dan Ebbinghaus, SITA Vice President, Government Solutions. “Working with NEC, our ABC gates combine SITA’s air transport industry experience and market knowledge with the fastest and most accurate face recognition software in the market. This combination will provide significant benefits to border control and airport authorities.”
ABC gates are less resource intensive, as they only require manual intervention by an immigration officer in the rare cases when a match is unsuccessful. This frees up border security staff for other activities. In addition to the improved traveller experience, reduced waiting times can attract more airlines and generate increased revenue for the airport authority.
A core element in this ABC solution is NEC’s “NeoFace” face recognition algorithm which provides speed, accuracy and performance regardless of the database size and image quality. NEC face recognition technologies were ranked No. 1 in the MBE Still-Face Track in 2010 carried out by the National Institute of Standards and Technology (NIST), commissioned by the Department of Homeland Security.
Chris de Silva, Vice President IT Solutions, NEC, said: “NEC has a long history in innovation and with NeoFace we have extremely fast and accurate face recognition software, ideal for security applications. We have incorporated our software in a variety of security-based applications, but by integrating it into this new ABC gate, we believe it will significantly improve the efficiency of processing people through control checkpoints.”
He further added: “SITA has a wealth of experience as an IT integrator in the air transport industry and we are well-placed with our combined expertise to deliver a market-leading ABC solution across Europe.”
The Bexar County Sheriff’s Department in San Antonio, Texas used a latent examiner’s workstation from NEC to search a criminal database using a partial latent print to solve a cold case in a matter of minutes.
The accuracy of face recognition has increased dramatically. The top performing algorithm in independent evaluations by the US National Institute of Standards and Technology (NIST) is now capable of providing reliable results in real-world environments; the technology is being deployed in airports today to enable everything from automated immigration processes, improved surveillance, security and seamless passenger travel, to the gathering of valuable statistical information pertaining to passenger movements.
1.0 The Business Environment
Airports are complex environments involving multiple stakeholders with conflicting requirements:
Government and border control.
All parties must comply with all Government regulations and utilise the latest documents and passports from multiple issuing states while adhering to all security requirements.
1.1 More Passengers, Same Resources
Passenger numbers are relentlessly increasing; border crossings into the European Union by air alone are expected to increase to 720 million by 2030. The need to mitigate risk is constantly weighed against the requirement to ensure passenger mobility, whilst accurately and unambiguously identifying all those who move through this complex environment.
Biometrics is playing an ever-increasing role in response to these multi-faceted requirements.
2.1 Order of Magnitude Improvements between Subsequent Tests
Most remarkable is the rate of improvement in the accuracy of face recognition algorithms. NIST testing has demonstrated an order-of-magnitude improvement in False Reject Rates (FRR) every four years. Whilst maintaining a False Accept Rate (FAR) of 0.001, the FRRs across three tests spanning eight years were:
2002: FRR of 0.2
2006: FRR of 0.01
2010: FRR of 0.003
Put simply, in the latest tests, if the best performing algorithm was set so that it would not falsely match two images of different people more than once in every 1,000 attempts, it would fail to match two images of the same person only three times in every 1,000 attempts. In contrast, the 2002 tests mismatched 200 times in every 1,000 attempts.
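To make these rates concrete, here is a small illustrative calculation over a nominal 1,000 attempts (the attempt count is simply a convenient denominator, not a figure from the NIST tests):

```python
# Illustrative arithmetic only: expected error counts at the NIST operating
# point described above, over a nominal 1,000 comparison attempts.
FAR = 0.001        # false accepts: two different people wrongly matched
FRR_2002 = 0.2     # false rejects, 2002 test
FRR_2010 = 0.003   # false rejects, 2010 test

attempts = 1_000
print(f"False accepts per {attempts} imposter attempts:        {round(FAR * attempts)}")
print(f"False rejects per {attempts} genuine attempts in 2002: {round(FRR_2002 * attempts)}")
print(f"False rejects per {attempts} genuine attempts in 2010: {round(FRR_2010 * attempts)}")
```

This prints 1, 200 and 3 respectively, matching the prose above: the false-reject burden fell from one in five to three in a thousand while holding the false-accept rate constant.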
This arguably outperforms the accuracy of human beings.
2.2 Laboratory Testing versus Real World Environments
Although the above results are excellent, the controlled conditions of a laboratory environment are not representative of real-world conditions. They are a good indicator of the results that may be attained when comparing photos of similar quality taken under similar conditions (e.g. verifying one passport photo against a database of passport photos). However, photographs taken in an automatic border control e-gate or from a CCTV camera are not captured under the same controlled conditions. These are commonly termed non-compliant captures.
Traditionally, face recognition software suffers degradation in accuracy when dealing with challenges such as variable lighting conditions or non-frontal images of the subject. Vendors that can better deal with these challenges deliver systems that perform in a consistently more reliable fashion in the field.
The latest NIST test indicates that the ability of the software to deal with the challenges of non-compliant photos has drastically increased. Face recognition software can now be reliably deployed in airport environments to deliver real and tangible business benefits.
2.3 Increased Tolerance to Angle / Pose
One way to predict how well a face recognition algorithm will perform in a real-world environment when dealing with non-compliant captures is to measure how well it performs in the laboratory with non-frontal photographs (where the subject’s image is captured at an angle).
These lab results are an indicator as to which solutions will perform better when applying face recognition to CCTV cameras.
The recent NIST tests showed that the most accurate algorithm is highly tolerant to changes in pose. This indicates that detection rates from CCTV cameras should provide tangible benefits whilst minimising the level of false alarms.
2.4 Increased Tolerance to Time Between Photographs
Additionally, it is often the case that the reference photographs we are comparing the live captures to are not recent. For example, most passports are valid for 10 years, so it is essential that we can still maintain a high level of accuracy when verifying photographs against older reference sets.
The NIST MBE 2010 study demonstrated that the highest performing algorithm was able to maintain accuracy rates that deliver quantifiable benefits in these circumstances.
2.5 Lower Resolution Photos
It is also common for non-compliant face captures from CCTV cameras to involve photographs in which the subject’s face constitutes a small percentage of the overall frame of the picture or where the face resolution is not particularly high. This may be due to the use of a lower resolution camera or due to distance between the subject and the camera.
In December of 2011, NIST published another report entitled The Performance of Face Recognition Algorithms on Compressed Images. Although not the primary driver of this study, the results clearly show that the same top performing algorithm was able to maintain the same high levels of accuracy with inter-eye distances down to 24 pixels, thereby providing another indicator of expected accuracy in real-world environments.
2.6 Real-World Results
There are numerous factors in a live deployment that need to be considered, such as lighting, camera position, distance of the subjects from the camera and the angles at which the sample photographs are taken.
Recently, a national government conducted an evaluation of an e-gate solution at an airport. As part of this evaluation, e-passport holders were invited to use the gates. A sufficient number of passengers were subsequently processed through the gates to provide proper statistical significance. Algorithms from three separate face recognition vendors were tested in the gate.
In this real-world scenario, passport photos of passengers were verified against a lesser quality livescan photo taken within the e-gate itself. The results were presented at the Biometrics 2010 Conference in London: the top performing vendor in the NIST test achieved a real-world FRR of 1.1%. This is arguably a better result than can be obtained by a live border guard manually comparing passport photos against the passport holders.
3 Current Applications of Face Recognition in Airports
Face recognition has evolved significantly over the past decade and has now attained a level of accuracy that provides real and quantifiable business benefit to all stakeholders in an airport environment. Solutions incorporating face recognition are already being deployed today.
3.1 Automated Border Control Gates at Immigration
Many nations world-wide have deployed e-Passports which are being carried by an ever-increasing percentage of the world’s population. This enables governments to deploy Automated Border Control (ABC) gates. In EU nations for example, these gates:
are for EU passport holders only.
do not require pre-enrolment.
perform a 1:1 face verification of a live scan against the JPG on the passport chip.
In the UK these gates are being widely deployed at entry ports and seemingly form the backbone of the government’s strategy for automatically clearing EU passengers.
In Asia the three largest ABC deployments in the world (Singapore, Macau and Hong Kong) each process hundreds of thousands of passengers daily, maximising the efficiency of live border guards.
3.1.1 How it Works
The process involved in an ABC gate is fairly simple:
The passenger approaches the gate and has their passport read by the e-gate.
The validity of the data page on the passport is verified using a variety of tests.
The information in the machine readable zone (MRZ) is verified against the data read off the chip.
The passport information is sent to the appropriate government systems for the appropriate checks.
If there are any problems thus far, the passenger is re-directed to a manned border lane, otherwise …
A live photo is captured of the passenger (with appropriate liveness checks).
Face recognition is used to verify the live capture with the photograph read off the passport’s chip.
If the photo does not match, the passenger is assisted by a live border guard, otherwise…
The passenger is allowed to proceed.
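The decision flow above can be sketched in outline. This is a hypothetical illustration only: every helper name below (`read_passport`, `face_matches`, and so on) is a placeholder standing in for the document checks, government lookups and biometric match, not a real gate API:

```python
# Hypothetical sketch of the ABC gate decision flow described above.
# All helper methods on `gate` are illustrative placeholders.

def process_passenger(gate, passenger):
    # Step 1: read the passport presented at the gate.
    passport = gate.read_passport(passenger)

    # Steps 2-5: document validity, MRZ-vs-chip consistency and
    # government system checks; any failure redirects the passenger.
    if not (gate.data_page_valid(passport)
            and gate.mrz_matches_chip(passport)
            and gate.government_checks_pass(passport)):
        return "redirect_to_manned_lane"

    # Steps 6-7: live capture (with liveness checks) and a 1:1 face
    # verification against the JPG stored on the passport chip.
    live_photo = gate.capture_live_photo(passenger)
    if not gate.face_matches(live_photo, passport.chip_photo):
        return "assist_by_border_guard"

    # Step 9: all checks passed.
    return "admit"
```

The key design point is that every failure path falls back to a human officer; the automation only ever admits a passenger when all checks succeed.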
In this use of face recognition:
FAR represents the percentage of passengers holding a passport that does not belong to them that are wrongly admitted.
FRR represents the percentage of legitimate passengers who are wrongly re-directed to a live border guard due to the photographs not matching.
There have been no published studies of the FAR and FRR achieved by a live border guard, but it is generally accepted that face recognition operates at a higher level of accuracy, especially when a border guard has been on operational duty for more than 2 hours or has to deal with visual verification of multiple races of passengers. Most e-gate deployments in Europe today operate with an FRR of approximately 6% set against a corresponding FAR of 0.1%.
Recently, an officer responsible for a large deployment of e-gates in an international airport indicated that in his view, most imposters attempting fraudulent entry into the country prefer to try their luck with manned border guards rather than use automated gates.
3.1.2 The Business Benefit
You don’t have to look far today to read of the burgeoning deficits of most western nations. Austerity is the order of the day. Even in light of the expected year-on-year growth in passenger numbers, budgets are being cut. More and more often, improved efficiencies introduced by the sensible deployment of technology are being relied on to address these budget shortfalls.
Border guards are highly skilled and experienced staff deployed at the front line of our nations’ defences. 99% of travellers entering a country are benign. Routine checking of travel documents and verification of valid ownership are tasks that can now be better performed by technology, thereby enabling the automated passage of legitimate travellers and allowing border guards to focus on finding the 1% of travellers they really want to speak with. In effect, removing the haystack to reveal the needle.
It is also relevant to note that the higher the accuracy of the face recognition solution deployed, the lower the FRR realised, thereby resulting in fewer passengers redirected to a live border guard and a lower cost of total ownership.
3.1.3 An Example
Another nation that has recently trialled the deployment of 4 ABC lanes determined the following:
Without the ABC lanes, 8 manual lanes required 8 border guards.
With the ABC lanes, the same 8 border guards were able to monitor 12 lanes.
Without the 4 ABC lanes, 8 border guards oversaw the entry of 950 passengers per hour.
With the 4 ABC lanes, 8 border guards oversaw the entry of 2,400 passengers per hour.
Even with the deployment of a limited number of ABC lanes a real and tangible benefit was realised.
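The trial figures above reduce to simple arithmetic:

```python
# Back-of-envelope figures from the trial described above: the same
# 8 border guards, with and without the 4 ABC lanes.
guards = 8
passengers_manual = 950       # per hour, 8 manual lanes only
passengers_with_abc = 2_400   # per hour, 8 manual + 4 ABC lanes

per_guard_manual = passengers_manual / guards   # ~119 passengers/hour/guard
per_guard_abc = passengers_with_abc / guards    # 300 passengers/hour/guard
uplift = passengers_with_abc / passengers_manual

print(f"Throughput uplift: {uplift:.1f}x")  # prints "Throughput uplift: 2.5x"
```

In other words, the same staffing cleared roughly two and a half times as many passengers per hour once the four ABC lanes were added.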
3.2 Trusted Traveller Systems
Most ABC solutions deployed today take one of two forms:
Non-Registered, for holders of e-Passports from authorised countries (as discussed above).
Registered, for holders of passports from countries not authorised to use the Non-Registered lanes (or holders of older passports without a chip).
Examples of the latter include the US Global Entry, Dutch Privium (collectively FLUX) and the UK IRIS systems.
As non-registered systems become more commonplace and the number of e-passport holders continues to rise, the business case for governments to provide separate free-to-use Trusted Traveller systems becomes vague. Ideally, given the limited space available in airports, the best scenario involves these passengers using the same physical e-gates as users of the non-registered systems.
Existing e-gates can be modified to accommodate holders of e-Passports from other nations. An additional step in the process flow allows the e-gate to cross-reference against a database of pre-enrolled and vetted Trusted Travellers. An additional face verification can be performed against the stored face details of the enrolled passenger.
3.3 Departure and Boarding Gates
The previous example depicts the use of biometrics to facilitate passenger processing at immigration and to introduce efficiencies to the tasks of border control officials. Airport operators and airlines are also increasingly turning to biometrics to facilitate the flow of outbound passengers through airport terminals.
Simplifying Passenger Travel (SPT) was an initiative led by airlines, airports, governments and technology providers which proposed the “Ideal Process Flow”. The goal was to combine e-passports, biometrics and network infrastructure to enable the automatic identification and processing of passengers to move them through the airport seamlessly while freeing up staff to concentrate on security threats and customer service.
While the full ambition of assigning a single biometric identifier to a passenger’s entire airport journey, from booking, to check-in, bag drop through to security and eventually boarding is yet to be realised, key elements are already being implemented by airport operators.
3.3.1 The Problem
Many airport terminals have a single common departure lounge for both domestic and international passengers. Here exists the potential for a departing domestic traveller to swap boarding cards with an arriving international traveller, thereby enabling the arriving traveller to transit to a domestic airport and bypass immigration processes.
3.3.2 The Solution
This problem can be remedied by introducing automatic gates with face recognition at the entry to the common departure area and at the gate prior to airplane boarding. The automated gate at plane boarding captures the passenger’s face and verifies it against the face captured and associated with the boarding card when the passenger entered the departure area, thereby detecting whether a boarding card has been swapped.
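A minimal sketch of this check, using an in-memory dictionary in place of the airport’s real association store; all names are illustrative, and simple equality stands in for a real biometric match score and threshold:

```python
# Hypothetical sketch of the boarding-card swap check described above.
# A dict stands in for the real boarding-card/face association store.

entry_records: dict[str, str] = {}  # boarding-card ID -> face template

def enter_departure_area(card_id: str, face_template: str) -> None:
    """Gate at the departure-area entrance: associate the captured
    face with the boarding card."""
    entry_records[card_id] = face_template

def board_aircraft(card_id: str, live_face: str) -> bool:
    """Gate at boarding: verify the live face against the face stored
    when this boarding card entered the departure area."""
    enrolled = entry_records.get(card_id)
    # Equality stands in for a biometric similarity comparison.
    return enrolled is not None and enrolled == live_face
```

If two travellers swap cards inside the lounge, the face presented at boarding no longer matches the face associated with the card at entry, and the gate refuses passage.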
3.4 Surveillance: Real Time Watchlist Alerts
Matching faces captured from CCTV against photographic databases has long been the holy grail of face recognition. These systems are now being deployed today.
Although the results obtained in the NIST evaluations do not reflect the results that can be obtained in a live surveillance environment, it stands to reason that solutions that incorporate the best performing algorithms will also yield the highest accuracy results when matching CCTV images against a watchlist.
3.4.1 What it Delivers
These solutions are designed to integrate with existing surveillance processes; faces are extracted in real-time from the CCTV video feed and matched against a watchlist of individuals. When the system identifies an individual of interest, it raises an alert that can be responded to rapidly and effectively.
In this application of face recognition:
FAR represents the percentage of people captured by a CCTV camera that are falsely matched against the watchlist (in essence the number of false alarms raised by the system).
FRR represents the percentage of people captured by a CCTV camera who are in the watchlist but for which no alarm is raised.
An alert is only useful if it is acted upon. If the system raises too many false alarms, it will quickly be ignored by those tasked with responding to it. The objective of these systems is to minimise false alerts to a manageable level while detecting the highest possible percentage of watchlisted people moving past the cameras (the true ID rate).
It is essential that expectations are set appropriately. Scenarios where thousands of cameras are scanning large crowds of people in day and night environments and from a distance to identify individuals of interest are still largely unrealistic. The best results are obtained:
Using newer high definition cameras (3-5 megapixels).
Indoors with uniform lighting or outside during daylight in the absence of specific glare.
Where people are generally facing the same direction and moving towards the camera.
In a suitable pinch-point, such as in a corridor, lane or access gate / turnstile (not large crowds of people).
Where cameras are positioned in such a manner as to minimise the angle to the face (ideally < 20 degrees).
Additionally, as the system is comparing poorer quality photos captured from CCTV, it is imperative that the highest quality reference photos are inserted into the watchlist. Systems comparing poor photos against poor photos operate at significantly reduced accuracy levels.
Even with the above considerations in mind, there exist substantial opportunities and environments in which these solutions may be deployed to deliver significant results.
3.4.3 Technical Considerations
These solutions are typically deployed in environments where large numbers of people may be crossing the cameras. As such, depending on the size of the watchlist, a very large number of face verifications need to take place. Such solutions potentially require intensive use of server infrastructure.
Typically, the main considerations that determine the server infrastructure required are:
The size of the watchlist.
(Typically, these would only contain key or significant individuals.)
The number of people moving across the camera(s).
(This represents the number of transactions or searches against the watchlist.)
The response time required in which to raise an alert.
The number of frames per second which are being captured by the cameras.
(The higher the frame rate, the more times you capture the same person walking past the camera.)
Real-time searching of an entire criminal database is not typically feasible; careful consideration should be given to who is inserted into the watchlist in order to minimise its size. Typical watchlist sizes are in the hundreds or thousands.
The two major areas of processing inherent in such a system include:
Creating biometric templates of all the faces moving across the CCTV camera.
Matching these biometric templates against the watchlist.
Of these, template creation generally requires the most CPU power and time.
Very careful consideration must therefore be given to the number of frames per second (fps) at which the cameras run. Many systems typically run at 5-10 fps; while this significantly reduces the processing power required, it also reduces the overall accuracy of the system, as the lower the fps, the more likely it is that frames containing a high-quality image of an individual’s face will be discarded.
To obtain optimal accuracy, cameras should run at up to 20 fps. However, this results in more images of the same person being captured, and therefore a higher volume of template creations and searches.
Solutions must be designed with scalability in mind, allowing the most efficient use of server power available.
3.4.4 An Example
An example of an existing live deployment in an airport environment consists of:
Up to 10 five megapixel cameras running at 25 fps.
A peak transaction rate of 1,000 people per minute moving across the cameras.
A watchlist of up to 1,000 people.
An alert response time of 5 seconds.
Each person is captured tens of times, resulting in tens of thousands of template creations per minute and tens of millions of biometric verifications per minute.
In this environment, assuming suitable environmental conditions and positioning of the cameras, this system identifies people in the watchlist up to 90% of the time (true ID rate) with only one false alarm per day. If operators are willing to accept more false alarms, the true ID rate can be increased by tuning system parameters and lowering matching thresholds.
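These figures can be sanity-checked with simple arithmetic; the per-person capture count below is an assumption consistent with “each person is captured tens of times”, not a measured figure:

```python
# Sanity check of the deployment figures quoted above.
people_per_minute = 1_000     # peak transaction rate across the cameras
captures_per_person = 30      # assumed: "captured tens of times"
watchlist_size = 1_000        # enrolled individuals

templates_per_minute = people_per_minute * captures_per_person
verifications_per_minute = templates_per_minute * watchlist_size

print(f"{templates_per_minute:,} template creations/min")   # 30,000
print(f"{verifications_per_minute:,} verifications/min")    # 30,000,000
```

At 30 captures per person this gives 30,000 template creations and 30 million 1:1 verifications per minute, consistent with the “tens of thousands” and “tens of millions” stated above, and illustrating why server sizing dominates the design of these systems.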
Such systems are already running today.
3.5 Surveillance: Forensic Video Analysis
The increase in the use of CCTV cameras has led to an ever increasing volume of archived video footage. The intelligence in this footage typically remains inaccessible unless appropriately analysed and indexed. Reducing investigation hours when limited resources are available is essential. Such systems can be used to populate databases of “seen” individuals, thereby enabling authorities to search for specific people of interest to determine if, when and where they have been present.
3.5.1 How it Works
Faces of individuals are captured from CCTV and archived in a database.
Authorities can search the archive using a photo to determine a camera ID and timestamp.
Playback of the relevant recording can be enabled by storing pointers into the video archive.
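The three steps above can be sketched as follows. All names are illustrative, and template equality stands in for a real biometric similarity score:

```python
# Minimal sketch of the forensic archive described above: each detected
# face is stored with its camera ID, timestamp and a pointer into the
# archived video for playback.
from dataclasses import dataclass

@dataclass
class Sighting:
    template: str      # biometric template of the detected face
    camera_id: str     # which camera saw the face
    timestamp: str     # when the face was detected
    video_offset: int  # pointer into the archived recording

archive: list[Sighting] = []

def record(sighting: Sighting) -> None:
    """Step 1: capture a face from CCTV and archive it."""
    archive.append(sighting)

def search(probe_template: str) -> list[Sighting]:
    """Step 2: search the archive with a probe photo, returning the
    camera IDs, timestamps and playback pointers of every match."""
    return [s for s in archive if s.template == probe_template]
```

A real deployment would index the templates for fast 1:N search rather than scanning a list, but the data model (template, camera, time, playback pointer) is the essential point.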
3.5.2 Usage Example: Passengers without Documentation
One usage already deployed today is to quickly and accurately determine the point of origin of arriving passengers without documentation, such as asylum seekers.
If a passenger presents themselves to immigration without documentation and does not provide accurate or complete information about themselves, authorities can capture a photograph of the person and search the database of archived faces. If cameras are placed in aerobridges to record disembarking passengers, it is then a simple process to identify on which flight the passenger arrived.
3.6 Queue Management and Flow Analysis
It is becoming increasingly important for airlines and airport operators to monitor queue lengths and passenger flows within the airport. Understanding peak and quiet times is essential to enable sufficient and efficient staffing and resourcing. Raising alerts to manage unforeseen queues is critical for ensuring passenger satisfaction as well as for ensuring that all SLAs with other stakeholders, such as airlines or government agencies, are adhered to.
A common solution thus far has involved tracking the Bluetooth-enabled devices, such as PDAs and smartphones, carried by passengers. However, a relatively low percentage of passengers (approximately 15%) carry such a device, let alone have Bluetooth activated on it.
A solution that provides a much more comprehensive data set and accurate information is needed.
3.6.1 The Application of Face Recognition
Solutions using CCTV with face recognition can timestamp when individuals are detected at known camera locations, thereby providing highly accurate information on passenger flows such as:
How long does it take to move between two or more points? (such as check-in to security)
What are the averages and when are the peaks?
How does this vary with time of day?
…as well as providing invaluable insight on how passengers move through the airport:
What percentage of passengers move from security to duty free?
How many of these are male / female?
How long does the average passenger spend shopping in duty free?
How is this impacted by queue lengths?
Importantly, no specific passenger identifying information is recorded.
3.6.2 How it Works
As passengers enter an area of interest they are acquired by a camera and anonymously enrolled into the system:
CCTV cameras enabled with biometric technology are installed at appropriate areas of interest.
Passengers are automatically searched against the database of enrolled individuals.
The passenger’s anonymous record is updated with a camera number and timestamp.
The database is automatically purged as required at regular pre-defined intervals.
The system can raise the appropriate alerts as required (i.e. queues too long).
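A minimal sketch of this anonymous flow measurement, assuming detections arrive as (anonymous ID, camera, timestamp) events; all names are hypothetical, and no personally identifying information is stored:

```python
# Hypothetical sketch of anonymous flow analysis: each passenger is
# known only by an anonymous ID, timestamped at each camera location,
# and purged after a retention window.

sightings: dict[str, list[tuple[str, float]]] = {}  # anon ID -> [(camera, time)]

def detect(anon_id: str, camera: str, t: float) -> None:
    """Record a detection of an anonymously enrolled passenger."""
    sightings.setdefault(anon_id, []).append((camera, t))

def transit_time(anon_id: str, cam_a: str, cam_b: str):
    """Seconds between first detection at cam_a and first at cam_b
    (e.g. check-in to security); None if either point is missing."""
    first_seen: dict[str, float] = {}
    for camera, t in sightings.get(anon_id, []):
        first_seen.setdefault(camera, t)
    if cam_a in first_seen and cam_b in first_seen:
        return first_seen[cam_b] - first_seen[cam_a]
    return None

def purge(before: float) -> None:
    """Regularly drop records older than the retention cut-off."""
    for pid in list(sightings):
        sightings[pid] = [(c, t) for c, t in sightings[pid] if t >= before]
        if not sightings[pid]:
            del sightings[pid]
```

Aggregating `transit_time` over all anonymous IDs yields the averages, peaks and time-of-day variations described above, while the periodic `purge` keeps the database transient.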
4 Privacy Considerations
Any article on face recognition would be seriously remiss without at least mentioning privacy. There are a multitude of sources available for detailed discussions on privacy versus benefit of this technology, including the views of this article’s author; readers should familiarise themselves with this issue before considering any deployment of face recognition.
5 What’s Next?
As the use of face recognition continues to prove its worth, more inventive applications will be deployed. Cloud-based services will enable the transfer of expensive computing power out of the airport into shared server facilities. Face recognition will assign a passenger a single unique and transient identity during their movement through the airport, thereby allowing them to be processed by multiple applications seamlessly and effortlessly. Passenger movement through an airport environment can be tracked up to the point of departure. Personalised way-finding solutions can guide individual passengers to their specific gate, thereby reducing flight delays, and passengers who are delaying flights can be quickly and easily located.
The accuracy of face recognition has increased dramatically over the past years. The top performing algorithm in independent evaluations by the US National Institute of Standards and Technology is now capable of providing reliable results in real-world environments and the technology is being deployed today in airports to enable everything from automated immigration processes, improved surveillance and security, seamless passenger travel and the gathering of valuable statistical information pertaining to passenger movements. The number of potential applications of this technology will continue to deliver benefits in creative ways we have yet to imagine.
The business benefit is real and quantifiable.
7 About the Author
Carl is the founder of Allevate Limited (http://allevate.com), a consultancy focused on providing strategic expertise for identity projects that incorporate biometrics, automation and analytic technologies. With over 20 years’ experience working in the high-technology and software industry, he has spent the past 10 years enabling the deployment of biometric technologies to national infrastructure projects. Carl started working with biometrics whilst employed by NEC in the UK. Allevate continues to work closely with NEC on identification projects in Europe for government, border control and law enforcement.
This is the author’s original version of a work that was accepted for publication in Biometrics Technology Today (BTT). Changes resulting from BTT’s publishing process are not reflected in this original version, and as such this article may differ from the version subsequently published in Biometrics Technology Today, VOL: 2012, ISSUE: 7, Date: July, 2012, DOI: http://dx.doi.org/10.1016/S0969-4765(12)70148-0
WIN is a non-profit organisation that provides identification services to law enforcement in: Alaska, Idaho, Montana, Nevada, Oregon, Utah, Washington, Wyoming, and California (as an interface member).
WIN has been a long-standing customer of NEC America, and this contract was re-competed last year.
The re-award of the contract to NEC is a testament to the skill and efforts of their team in Sacramento, and the quality of the NEC AFIS solutions.
Interestingly, NEC is providing this capability to WIN as a service, thereby eliminating the need for any upfront capital expenditure, and has been doing so long before “cloud” became fashionable. The solution is entirely owned by NEC and hosted in NEC data centers.
The comments that this represents a failure of biometric systems started to fly almost immediately.
“Multi-million pound eye scanners, billed as a key tool in securing Britain’s borders, have been scrapped.”
“…the technology has been beset by problems,…”
… are typical of the comments and headlines making their rounds.
I admit the gates were not perfect and did require some getting used to in order to navigate your way through quickly.
But I think the systems were far from a failure, and the reality is a little bit more subtle than the headlines may suggest.
Let’s not forget the system was originally introduced in 2004, initially as a pilot. At that time, such use of Iris technology was fairly innovative. That the footprint of the pilot was gradually extended and the system became permanent indicates that it was fairly well received. The fact that over 380,000 people have voluntarily enrolled (myself included) makes it difficult to argue that the system is derided.
In my opinion, the turning off of the system at these two locations is more in line with a planned phasing out of this particular solution, for some rather more mundane reasons:
The system no longer fits the UK’s border-automation strategy moving forward. It has largely been overtaken by the momentum to accommodate EU e-Passport holders, whose passports hold an electronic copy of their facial photograph.
The initial deployment was meant to be limited, and the contract has undoubtedly been extended numerous times. A complete and expensive technology refresh (as is required) without an open and competitive re-tender would undoubtedly not rest on firm legal ground.
The business model was never well thought out. It is completely funded by the UK government and can be used by any nationality completely free of charge.
This Iris system is intended for pre-registered Trusted Travellers, who are pre-vetted before they can use the system. At point of use, it is a 1:n Iris check and no travel documents are required.
Since the system was deployed, most European Union (EU) nations have introduced e-Passports, and an ever-increasing percentage of the EU population now carries a chip passport. The Iris gates have gradually been superseded by a new breed of e-Gates that:
are for EU passport holders only.
do not require pre-enrolment.
perform a 1:1 face check against the JPG on the passport chip.
These gates are now being widely deployed at UK ports of entry and seemingly form the backbone of the government’s strategy for automated passenger clearance. This is only natural, as by far the bulk of passengers entering the UK are EU citizens.
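The operational difference between the two generations of gate can be sketched in a few lines of Python. This is purely illustrative: the "templates" are toy numbers and `toy_match` is a hypothetical stand-in for a real iris or face matcher, not anything from an actual gate.

```python
def verify_1_to_1(probe, chip_template, match, threshold):
    """e-Gate style 1:1 check: compare the live capture against the
    single template read from the traveller's passport chip."""
    return match(probe, chip_template) >= threshold

def identify_1_to_n(probe, gallery, match, threshold):
    """Iris-gate style 1:n check: search the live capture against every
    pre-enrolled template; return the best match above threshold, if any.
    No travel document is needed at point of use."""
    best_id, best_score = None, threshold
    for traveller_id, template in gallery.items():
        score = match(probe, template)
        if score >= best_score:
            best_id, best_score = traveller_id, score
    return best_id

# Toy 'templates' are single numbers and similarity is just closeness;
# real systems use high-dimensional feature vectors.
toy_match = lambda a, b: 1.0 - abs(a - b)
gallery = {"alice": 0.80, "bob": 0.30}

verify_1_to_1(0.78, 0.80, toy_match, 0.9)       # chip and live capture agree
identify_1_to_n(0.78, gallery, toy_match, 0.9)  # best pre-enrolled hit
```

The sketch also shows why 1:n needs pre-enrolment and a hosted gallery while 1:1 needs neither, which goes some way to explaining the shift in strategy.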
If the remaining Iris gates are end-of-life'd, this will clearly leave a hole in the border automation strategy, namely for those passengers who:
are not EU citizens.
are EU citizens but do not yet have an e-passport.
Arguably, the second of the two will become less of a problem as time passes, as holders of older passports have their passports renewed.
The former, however, will remain a minority of arriving passengers, and the business case for the government to provide a free-to-use Trusted Traveller system remains vague. More likely than not, any replacement system will take the form of a paid subscription requiring pre-enrolment with vetting.
Ideally, given the limited space available at airports, the best scenario would involve these passengers using the same physical e-gates as EU passport holders.
In my view, allowing these systems to reach their end-of-life is not an argument for the failure of biometrics deployed at the border. The fact that a system that was only ever meant to have a limited deployment lasted this long and was only replaced by a government strategy that is more harmonised across EU nations, is a testament to the value this technology provides.
Thank you project IRIS, but I won’t miss you. I use the new e-Passport e-Gates now.
Turkey, Brunei, Nigeria and Poland are just some of the countries that have already announced biometric ATMs, for example. The use of biometrics at the till for payment is also on the rise.
Some cite the fact that there has not been a massive uptake of biometrics in consumer-facing applications as evidence that the technology does not yet perform to an adequate level. Every large biometric deployment I have been involved in has entailed rigorous and exhaustive testing to demonstrate accuracy against clearly and aggressively pre-defined test parameters, in real-world environments, using customer data; I don't expect financial customers to be any less demanding.
I do agree that lab testing / data is insufficient, and solution providers who are unwilling or unable to demonstrate predictable and repeatable accuracy SLAs in real-world environments should be treated with caution.
Is a biometric system fallible? Yes. The question is: is it less fallible than the existing precautions already in place, and does the deployment of such a system, in simple financial terms, demonstrate a clear ROI? Again, the answer is yes.
Rather, I believe that the reluctance thus far in Western societies to deploy such systems en masse for consumer identification is due more to banks' concerns over how such systems will be perceived by their clientele; the UK populace, for example, is ever suspicious of Big Brother, its government and large institutions.
However, these “perception barriers” are already lowering, and there is mounting evidence that public opposition, where clear benefit is realised, is eroding.
Banks are now increasingly becoming aware of the value of biometric identification, of both their internal staff and their external clientele, especially in the area of high net-worth individuals and high-value transactions, and I expect we will see many exciting developments in identification solutions for this market.
Of late there have been repeated calls on Twitter for biometric suppliers to publicly release statistics pertaining to the performance of their biometric algorithms, specifically False Accept Rates (FAR) and False Reject Rates (FRR).
Whilst not a response to those calls, this post is in part motivated by them.
Those repeatedly calling for the release of these figures know in advance that their calls will not be heeded. As they are already well-versed in the technology, they already understand the reasons why. Yet I believe they persist so they can cite the non-responsiveness of suppliers as “evidence” that the technology does not work.
Let’s examine why suppliers keep this information secret.
1. They Don’t
I have been involved in negotiating multiple contracts for the deployment of biometric technology, ranging from large government infrastructure programmes through to enterprise access control solutions. I can emphatically state that in every one of these instances, the customer has been fully aware of the performance metrics of the technology they are deploying, from accuracy through to hardware requirements. In fact, before securing any contract, it is very common for the supplier to have to benchmark their technology on customer-supplied data, and adherence to pre-defined accuracy SLAs is often written into the contract, with penalties for non-performance.
2. There is no single correct answer
Anybody versed in biometrics knows that the answer is almost always “It depends”. The accuracy is dependent upon multiple factors, many of which will be under the control of the customer, not just the supplier, such as:
- Quality of the data being matched against
- Representative population
- Environmental conditions
- Performance required
Again, the required levels of accuracy will often be pre-agreed with the client, and frequently come down to how much budget is available: faster and / or more accurate matching requires more computing power, and the determination is often a matter of cost-benefit analysis.
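To illustrate why "it depends", here is a minimal Python sketch showing how FAR and FRR both fall out of a single operating threshold applied to two score distributions; shift the threshold and one rate improves at the expense of the other. The scores are made up purely for illustration.

```python
def far_frr(genuine_scores, impostor_scores, threshold):
    """Compute (FAR, FRR) at a given similarity threshold.

    A comparison is 'accepted' when its score meets the threshold.
    FAR = fraction of impostor comparisons wrongly accepted.
    FRR = fraction of genuine comparisons wrongly rejected.
    """
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Illustrative (made-up) similarity scores on a 0-1 scale.
genuine = [0.91, 0.85, 0.78, 0.95, 0.66, 0.88]
impostor = [0.12, 0.30, 0.55, 0.42, 0.08, 0.61]

# A strict threshold trades a lower FAR for a higher FRR, and vice versa.
strict = far_frr(genuine, impostor, 0.70)   # fewer false accepts
lenient = far_frr(genuine, impostor, 0.50)  # fewer false rejects
```

A single quoted FAR or FRR figure is therefore meaningless without the threshold, the data and the environment behind it, which is exactly the point made above.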
3. It is Competitive Confidential Information
Accuracy of biometric technology can pose a strong competitive advantage, and suppliers often don’t want this information to be in the public domain (or more specifically, available to their competitors). Though the release of this information is often required, for example to prospective clients, it will almost always be under a non-disclosure agreement.
4. There is no Commercial Reason to do so
Suppliers, like anybody, don’t like having their time wasted. They’ll apply their resources to those who wish to engage with them seriously, and as mentioned above, they will have no problem in releasing the information as required. A car salesman will spend his or her resources on the individual who wants to buy a car, and ignore the tire kickers.
Only ever arguing the facts on one side of a debate in pursuit of a predefined agenda generally results in a loss of credibility. The irony is that people who do so often have valid concerns or issues that quite rightly should be aired and considered, but which end up falling by the wayside.
These are my own personal opinions, and not necessarily the opinions of any suppliers I may happen to work with.
Without passing any judgement, I understand in part both why there may have been pressure to do so, and the government’s decision to undertake suspensions. The latter is easier to address. Whatever concerns may have existed, freedom to exercise authority cannot fly in the face of direct ministerial guidance.
Having said that, I’m sure the reasons for doing so were well intentioned, and may have resulted from trying to meet conflicting requirements, namely ensuring:
High security and appropriate passenger screening.
Passenger throughput and avoidance of queues / delays.
While I’m not close to the environment in question, at initial glance it appears that the former requirement may have been sacrificed to an extent to ensure the latter during busy periods.
Delays and queues, in a very real and commercial sense, cost money, and it is easy to quantify exactly how much. So it appears the dilemma faced was the age-old one: “What is an acceptable cost for increased security?”
It appears that some at least felt the benefit delivered did not warrant the disruption to existing processes. Unfortunately it also appears that the decision was taken without due process and consultation.
This situation highlights the importance of understanding the overall cost of any new security system (which invariably is significantly higher than the cost of procuring it), and the benefits it delivers. Invariably, any system will have an impact on existing workflows, and if carefully designed, should deliver an improvement in workflow in addition to an increase in security.
Biometric security: More bottom-line benefits, less James Bond
Carl Gohringer December 03, 2003
Bond movies will always be associated with state-of-the-art technology, but few of the products he uses or encounters ever make it into the real world.
A car that turns into a submarine might be nice to have or an umbrella that transforms into a rope ladder useful on the odd occasion, but their uses in everyday life are limited.
There is one exception to the James Bond rule – biometrics – the technology that uses unique, physical geometry to identify and authenticate individuals.
According to market research group Frost & Sullivan, the biometrics market will reach a phenomenal $2.05 billion by 2006 (it was valued at just $93.4 million last year).
Concrete evidence for the growth in biometrics is starting to proliferate. The Home Office has announced that it is planning to install biometrics in 10 UK airports by the middle of next year to assist immigration control. The Nationwide Building Society is running extensive biometrics tests using iris scans in place of PINs at cash machines. Most recently, the Home Secretary announced that national ID cards – to be phased in over the next five years – will incorporate biometric data access via fingerprint recognition.
However, for most organisations, there are two understandable questions that need to be answered before biometric identification will reach the boardroom agenda:
“When budgets are tight, what is the business case for investing in yet more security technology?”
“Aren’t there fundamental drawbacks with biometric technology?”
The second issue is currently the source of most controversy in the media. For years films such as Minority Report have presented a rather superficial interpretation of biometrics. Eyes have been gouged out to gain access to computer networks and “fake” or severed fingers used to access a building.
The reality is far less dramatic. As the use of biometrics becomes more commonplace, people will realise that the risk is no greater than being forced to reveal a password or to hand over an access swipe card. Indeed, the risk is much less, representing an improvement over the existing solutions. In fact, one of the key benefits of biometrics is that even if an ‘identity’ such as an access card or password is stolen, without the correct authenticating biometric, access will be denied. The same applies to the sharing of passwords, helping businesses and organisations control who can and cannot access certain areas.
In addition to the physical risk, with biometrics comes the perceived threat of ‘Big Brother’, with concerns of data compilation and movement monitoring. While there is no escaping the fact that in the wrong hands this could be the case, in reality the threat is no greater than your bank recording the cash points you have accessed, mobile phones being used to track your whereabouts, a supermarket using loyalty cards to track your spending patterns or in fact, a security company monitoring the comings and goings of staff via CCTV.
There is no doubting that to dispel the notion of a Big Brother state an education programme is needed to highlight the benefits of biometric security (e.g. the ability to protect a person’s identity, the near elimination of passport fraud and the ability to store important data without the threat of unauthorised access). However, the greatest support will be won once biometric security is fully integrated into daily processes, whether logging on to the network at work or withdrawing cash without the threat of skimming from a cash machine.
The business case for biometrics, once explained, clearly demonstrates three primary reasons as to why a business should adopt biometrics:
To improve an organisation’s security by providing positive identification of individuals accessing your premises and networks
To save large sums of money by eliminating user provisioning and password management
To increase usability and convenience to staff
What’s the point of spending a vast amount of money protecting and securing your networks if you still can’t positively identify who is accessing them? Obviously none, but this is exactly what most companies are currently doing.
Standard corporate user IDs and passwords used to govern physical and virtual access to a company and / or network tend to follow the same format, the most common being the first letter of the user’s first name and the whole of their surname, i.e. cgohringer for Carl Gohringer. The bottom line for a business is that IDs can generally be guessed in one or two attempts. So, assuming there is little or no security around IDs, a company’s security depends solely on the strength of its passwords.
Again, if you know a little about the people whose passwords you are trying to guess, it often does not take much to figure it out. There are plenty of available password cracking utilities easily accessible on the Internet to help you out.
The question is how big an issue are ID/password breaches? It’s difficult to be precise, but we do know that 60-70% of hacking attacks have an internal source (i.e. are conducted by people who know something about each other and for whom, ID/password theft would be relatively simple). And, to give you an idea of the financial impact, last year 39% of Fortune 500 companies suffered an electronic security breach at an average cost of $50,000.
Biometrics tackle this problem by providing a truly unique individual identifier. If access to either a building or network is controlled by a smartcard containing biometric templates, you can be sure that only the valid owner of the card will be able to access those resources. Access rights to different buildings and rooms can also be set – via the smartcard – for each individual; and with emails increasingly being used as legally binding documents, biometrics can guarantee identity by requiring the user to supply their fingerprint when digitally signing them.
Ant Allen, research director at analyst house, Gartner Group, sums up the benefits of biometric human authentication: “It is unique to the individual, not something that somebody else decides will be your password, shared secret or token. Passwords can be learnt by various means and tokens can be stolen, but biometrics cannot.”
Increased convenience, less money wasted
The ID/password combination is also inconvenient for staff and financially inefficient for companies to manage.
Just think about the number of passwords you may have to remember in a given day: the password for your office network; the number to access voicemail on your phone; the ‘unlock’ code for your PDA and so on.
Inevitably, passwords are forgotten or compromised on a daily basis, which results in the IT department being pestered for a new code. Maintaining passwords is costly, and with this in mind, the ROI on biometrics is commonly realised in less than a year. IT staff are then freed up to focus on other, potentially revenue-generating, issues.
In place of this often forgotten, easily hacked, regularly shared password, a biometric smartcard gives employees single-sign-on access to the corporate network, which eliminates the need to remember numerous passwords and PINs and removes the cost of managing them for the IT department.
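As a rough sanity check on the sub-year ROI claim, here is a back-of-envelope sketch in Python. Every figure below is a hypothetical assumption for illustration, not a quoted cost from any real deployment.

```python
# Hypothetical ROI sketch: all numbers are illustrative assumptions.
staff = 5_000
resets_per_person_per_year = 4       # assumed forgotten-password incidents
cost_per_reset = 25.0                # assumed fully-loaded help-desk cost

# Annual spend eliminated if password resets disappear.
annual_saving = staff * resets_per_person_per_year * cost_per_reset

rollout_cost = 350_000.0             # assumed one-off biometric rollout
payback_years = rollout_cost / annual_saving  # under one year on these figures
```

On these assumed figures the rollout pays for itself in roughly eight months; the point is not the specific numbers but that the dominant saving is recurring while the rollout cost is one-off.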
The present and future of security
The benefits of biometrics can potentially run much deeper. For example, many public sector organisations see biometrics as a useful tool for improving customer service. In a hospital environment, facial recognition can identify a patient on arrival and ensure their medical records are ready for when they arrive at reception, enabling them to be instantly directed to the appropriate ward.
However, the purpose of this piece is to examine the impact on bottom line. In this respect, the case for biometrics is extremely powerful. Not only are they an essential tool to prevent your business losing large sums of money to cyber crime, on a day-to-day basis biometrics can dramatically reduce management and administration costs.
So next time you see James Bond or Tom Cruise battling biometrics in the movies, consider their potential for saving you money and giving your business robust insurance against the financial risk of hacking.
It is becoming increasingly important for airlines and airport operators to monitor queue lengths and passenger flows within the airport. Airport operators have invested significant time and money on investigating technologies that can provide useful metrics.
Understanding your peak and quiet times is essential to enable sufficient and efficient staffing and resourcing. Raising of alerts when unforeseen queues arise is critical for ensuring passenger satisfaction, as well as for ensuring that all SLAs with other stakeholders, such as airlines or government agencies, are adhered to.
Thus far, a common solution has involved tracking Bluetooth-enabled devices, such as PDAs and smartphones, carried by passengers. The obvious drawback is that only a relatively low percentage of passengers carry such devices, let alone have Bluetooth activated on them.
However, even a penetration rate of 10-15% can provide a large enough sample for statistical significance. Even so, a solution that provides a much more comprehensive and accurate data set is needed.
The Application of Face Recognition
Using CCTV integrated with face recognition biometrics enables a solution that timestamps when individuals are detected at known camera locations, thereby providing highly accurate passenger flow information, such as average and peak queue times:
How long on average does it take to go from Checkin to Security?
How does this vary with time of day?
When are the peaks?
… as well as providing invaluable insight into how passengers move through the airport:
What percentage of passengers move from security to duty free?
How many of these are male / female?
How long does the average passenger spend shopping?
How is this impacted by queue lengths?
Importantly, no specific passenger identifying information need be recorded, and data can be purged at regular intervals.
Airports, such as London City, are already deploying such technology.
How does it work?
As passengers enter an area of interest and are acquired by a camera, they are automatically enrolled into the system:
CCTV cameras enabled with biometric technology are installed at appropriate areas of interest.
Passengers are automatically searched against the database of enrolled individuals.
The passenger’s record is updated with a camera number and timestamp.
The data is automatically aggregated to provide real-time analysis of passenger flows and movements.
The database is automatically purged at regular intervals (e.g. overnight).
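The aggregation step in the workflow above can be sketched as follows. The event tuples, camera names and timings are hypothetical; the anonymous IDs stand in for opaque enrolment tokens and carry no personal details.

```python
from collections import defaultdict

def average_transit_time(events, origin, destination):
    """Average time taken to move from `origin` camera to `destination`
    camera, computed from (anonymous_id, camera, timestamp) detection
    events. IDs are opaque tokens holding no personal details."""
    first_seen = defaultdict(dict)
    for anon_id, camera, ts in events:
        # Keep only the first sighting of each individual at each camera.
        first_seen[anon_id].setdefault(camera, ts)
    transits = [seen[destination] - seen[origin]
                for seen in first_seen.values()
                if origin in seen and destination in seen
                and seen[destination] >= seen[origin]]
    return sum(transits) / len(transits) if transits else None

# Illustrative events; timestamps are minutes since the terminal opened.
events = [
    ("p1", "checkin", 0), ("p1", "security", 12),
    ("p2", "checkin", 3), ("p2", "security", 21),
    ("p3", "checkin", 5),  # not yet seen at security
]
avg = average_transit_time(events, "checkin", "security")  # 15.0 minutes
```

Purging the database simply means discarding the `first_seen` tokens; the aggregated averages survive without any per-passenger record.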
Using face recognition for such an application provides many tangible benefits, including:
Aggregated passenger flow data.
Average time to move between two or more points.
Average time staying in a specific area.
Real-time reporting information.
Reporting over specific time frames.
Historical data comparison.
Alerting mechanisms (e.g. queues too long).
Does not capture passenger personal details.
Passenger data is purged regularly.
No personal details are stored, minimising data-protection concerns.
Unobtrusive and requires no passenger interaction.
Does not require special devices, such as Bluetooth phones.
High sample set and penetration rate.
Airports are complex environments involving multiple stakeholders, often with conflicting requirements. Their efficient operation requires real-time, reliable operational data. It was only a matter of time before operators turned to advanced technologies such as face recognition to provide such measurable and quantifiable data.
Clearly, the more accurate the technology, the more reliable the data on which the operator is basing critical business decisions. Independent studies by NIST clearly indicate that face recognition from suppliers such as NEC is now operating at a level of accuracy to enable such decision making.
The quality of the aggregated data provided by face recognition by far surpasses that of traditional application of technologies to this problem, such as bluetooth monitoring.
Major improvements have been realised in iris capture capability, enabling capture on the move or from a distance. While this is not an improvement in the SDK’s matching per se, it has a significant influence on the matching performance and usability of the system.
Face recognition has seen significant and drastic improvements in matching quality and accuracy over a very short period in the last few years, as demonstrated by recent NIST tests as well as other independent testing. This rate of improvement is not anticipated to level off any time soon; expect further drastic gains in the coming years.
Fingerprint accuracy is still continually improving, though not at the same drastic rate as face recognition, as this is a much older technology. The major improvements are in the automated processing of latent prints (automated ridge and minutiae identification, feature extraction, and automated ten-print-to-latent matching). This has the potential to enable enhanced functionality at verification points, such as border crossings, for example real-time checking against latent watchlists.
Multi-Biometric Record Level Fusion
Another area where developments are aiding in accuracy improvements is multi-biometric fusion, occurring at the record level. Rather than merging multiple candidate lists from multiple biometrics post search, fusing biometrics and biographics in-record has the potential to provide multi-biometric record-level scores. However, this has more of an impact in very large scale identification systems, as opposed to verification systems, or small scale databases, such as watchlist checking.
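A heavily simplified sketch of the idea: each gallery record receives one fused score across modalities, rather than separate per-modality candidate lists being merged after the search. The weights and scores below are illustrative assumptions, not any supplier's actual scheme, and real record-level fusion is considerably more sophisticated.

```python
def fuse_record_score(scores, weights):
    """Weighted-sum fusion of per-modality match scores for one gallery
    record. All scores are assumed pre-normalised to [0, 1]; the weights
    are illustrative only."""
    return sum(weights[m] * scores[m] for m in weights)

# Hypothetical scores of one probe against a single enrolled record.
record_scores = {"face": 0.92, "finger": 0.60, "iris": 0.88}
weights = {"face": 0.4, "finger": 0.3, "iris": 0.3}

# One fused score per record, ready for a single ranked candidate list.
fused = fuse_record_score(record_scores, weights)
```

Because each record yields one score, a single ranked list falls out of the search directly, which is where the accuracy benefit in very large-scale identification systems comes from.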
Biometric Matching as a Service
Biometric matching as a service is supported by trends such as cloud computing, data center consolidation, shared infrastructure and virtualisation. See here for more.
While I understand the premise of the “Occupy” demonstrations, I can’t help but feel that they would be more effective if they were also able to propose a solution instead of simply voicing discontent with capitalism.
In my view, this is a topic that is ripe for discussion, given the current levels of indebtedness of our governments.
With the current wave of austerity sweeping the world’s nations at the moment, most programmes entailing large capital expenditure are out, unless they demonstrate significant return on investment in the same fiscal year; large government IT projects take years to re-coup investment.
Suppliers are looking at recovering this loss of business by self-financing other business models, and one that is becoming increasingly popular is selling transactional services. Basically this entails moving the up-front investment from the customer to the supplier, as well as the onus to realise the ROI over the life of the programme.
Such models are increasingly supported by trends such as cloud computing, data center consolidation, shared infrastructure and virtualisation.
In today’s economic climate, the ability to move an initial large up-front capital expenditure to a long-term annual operating expenditure spread over the life of the programme is understandably attractive to customers.
On the flip-side, these same economic conditions will make it more difficult for suppliers to structure such deals, and they will remain the preserve of the larger suppliers with pockets deep enough to weather the current economic storm.
That this business model is attractive to larger government biometric identity programmes is no surprise.
In fact, this arrangement is nothing new. The Western Identification Network (WIN) is a collaboration of eight US states, and is one of the larger criminal / law enforcement AFIS systems in existence. It is hosted, run and owned by the supplier, with the states paying for match services.
Recent advances in the accuracy of face recognition are resulting in an explosion of its use, coupled with increasingly vociferous cries from privacy advocates. The benefits from the uses of this technology are clear. But does it enable even further and easier harvesting of private information about us as individuals, without our knowledge or consent? This presentation does not attempt to analyse the adherence of face recognition to the nuances of privacy legislation. Rather, it explores the emerging trends in the application of face recognition, from law enforcement and security / surveillance, through to commercial applications, to enable each of us to form our own views on where the boundary between face recognition and privacy lies.
I’m sat in Heathrow waiting for an early morning departure for a business trip. Sipping my coffee, I look casually around trying to spot the cameras. They’re cleverly hidden. Am I being watched? Doubtful. Am I being recorded? Almost certainly.
This is a daily fact of life for most Londoners. It’s widely known that our city is one of the most heavily recorded in the world; a fact that is consistently debated and often criticized. Yet for all the discussion, the fact remains. We don’t like it, but we accept it. Why? Personally, my true dislike is of the necessity of this fact rather than of the fact itself.
Carol Midgley wrote an excellent opinion piece (The Times, Sat 27th August, 2011) entitled “I’ll pick Big Brother over a hoody every time”. I recommend a read. Though clearly biased, and seemingly designed to stoke the debate with anti-CCTV campaigners, her conclusion was simple: In the wake of the London riots, the privacy-versus-necessity debate of CCTV is now all but dead. Do I agree? Let me come back to this.
Face Recognition and CCTV
Enter Biometrics. Face recognition technology to be precise. This technology, along with the wider field of video analytics, is set to transform CCTV surveillance. Video analytics is arguably a nascent technology, but face recognition on the other hand is here. Ready to deploy. Now. A recent study by the US National Institute of Standards and Technology (NIST) demonstrated that the accuracy achieved by the first place vendor (NEC) can provide clear and measurable benefits to a range of applications, including surveillance.
It seems that every new technology brings a realisation of new benefits and efficiencies, countered by a plethora of malicious uses of the technology by the less desirable elements of our global society, quickly followed by counter-measures and protections. This is a saga that we are all already familiar with in our daily lives. Examples range from the severe and extreme of nuclear medicine versus atomic weapons, through to online credit-card shopping versus financial identity theft. I’ve recently had a credit card used for over £3,500 of illegal transactions. Though this incident was highly inconvenient and disruptive to my life, I did not hesitate to accept a replacement card. Not to do so would have unacceptably disenfranchised me from modern society.
Back to face recognition. It hasn’t taken long for business-minded technology companies to devise a whole range of new uses for this technology, all focussed on delivering bottom-line business benefit. Almost as quickly arrive the cries of the privacy advocates. I’ve been reading with interest the sudden explosion in mainstream news over the past few months highlighting new uses of face recognition, while very carefully considering the concerns vociferously raised by the technology’s opponents. A key fact often cited is that the technology is not 100% accurate. Even an excellent identification rate of 97% can produce a significant number of false identifications and / or missed identifications in a large sample population.
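A quick back-of-envelope illustration of that last point, with entirely hypothetical numbers:

```python
# Back-of-envelope only: the throughput figure is an assumption for
# illustration, not data about any real deployment.
daily_passengers = 200_000   # hypothetical throughput of a large hub
accuracy = 0.97              # the "excellent" 97% rate cited above

# Errors in absolute terms: passengers either missed or wrongly flagged.
errors_per_day = round(daily_passengers * (1 - accuracy))
```

A 3% error rate sounds small, yet on these assumed volumes it translates to thousands of erroneous outcomes every day, which is why absolute numbers matter as much as percentages in large populations.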
Let’s take a look at some examples.
Public Safety and Policing
While sat here in the terminal waiting for my flight, I’ve already grudgingly accepted that images of me sipping my coffee are almost undoubtedly being recorded. I may not be aware, however, that when I passed through security my photograph was taken. This wasn’t immediately obvious or openly advertised, but it happened. Shortly, my photograph will be taken again when I board my aircraft and compared to the photograph taken at security. International and domestic passengers share a common departure area, and this is done to ensure boarding cards aren’t swapped, thereby potentially enabling an international passenger to transit through to a domestic airport and bypass immigration controls. On a 1:1 verification, false matches are very low. If I’m a legitimate passenger, my concern is that the two photographs do not match, for which the worst case scenario is inconvenience.
Perhaps the borders agency is also comparing my photograph against a known watchlist of suspect individuals. This nature of deployment is usually used to enhance existing procedures, and not replace them. The system will provide increased security, in turn further protecting my safety while flying. I’m OK with this. Of course, there is also the prospect of misidentifying benign travellers. Though unavoidable, as long as the number of false matches are kept sufficiently low to ensure the cost of dealing with these exceptions doesn’t obliterate the benefit realised from the system, it can be argued that the greater good justifies the inconvenience faced by the occasional innocent passenger while their true identity is verified.
Upon my arrival at my destination, I may very well be offered the opportunity to use my new e-passport to speed through immigration at one of the many shiny automatic e-Gates springing into operation. In the early stages these definitely were a great benefit, allowing me to march past the long queues of travellers and expedite my passage through the airport. No complaint from me. As long as false matches are lower than what is achieved by a live border guard (which many studies suggest they are), then security should be improved. And false matches only apply to illegal passengers travelling on a false or stolen passport. Exceptions generated by valid travellers who do not match with their passport will generate some inconvenience by necessitating they speak to a live border guard. As e-gates become more commonplace, I predict I’ll just be queuing in front of an automatic barrier instead of a manned immigration booth. However, the efficiencies achieved should enable the border guards to concentrate on more intelligence-led activities, rather than simple rote inspection of passports, thereby increasing security and putting my taxes to more efficient use.
As I move through the airport, or for that matter in any public location such as a stadium or railway station, law enforcement authorities may be using my captured image to search against a database of suspects. Does this trouble me? Let’s look at a couple of scenarios.
I’m already being recorded. If I were to commit a crime, then it is likely that the video would be retrieved and officers would try to identify me. This is already happening and I doubt anybody would argue that this is an invasion of privacy. If face recognition technology can assist them with this arduous and tedious task, perhaps by automatically trying to match my face against databases of known offenders, and saving countless hours of police time, I’m all for it. Too bad for the criminal.
(I was incensed by the meaningless violence and destruction demonstrated during the recent riots in London. Newspaper reports have indicated that the UK’s police will be examining CCTV footage for years to come in their efforts to bring the perpetrators to justice. I am absolutely in favour of anything that can be done to expedite this process and save police time.)
But as a law-abiding citizen carrying on with my own business, how do I feel about having my face automatically captured and compared against a watchlist database of “individuals of interest”? There is potential to cause disruption to an individual’s life or place them under undue suspicion if they are falsely identified. That my face is being actively processed rather than just recorded gives more cause to pause and consider.
Having paused to consider, I am prepared to accept this use case, provided the technology operates at a sufficient level of accuracy to ensure that the chances of being misidentified while conducting my daily activities remain low. I also expect the technology to be deployed wisely in situations where there is demonstrable benefit to public safety, such as at transport hubs, large gatherings, public events or areas of critical national infrastructure.
Most people already accept that the reality of the world today necessitates certain infringements on our liberties. The introduction of technology is a key tool in the fight against crime. No system is perfect, and the potential for an undesirable outcome of a system should not always result in the abolishment of that system. Few would argue, for example, to abolish our judicial systems and close our prisons to eliminate the possibility of a miscarriage of justice. Similarly, the benefits to public safety from face recognition are too great to ignore, though we must continuously strive to minimise the false identifications.
I agree with Ms. Midgley on this one.
Most criticism that I have been reading in the press in the past few months appears to be levelled at the widening application of face recognition in business-related or commercial applications, not at public safety.
My flight is about to board, so let’s continue my journey through the terminal. As I saunter to my gate, my attention is caught by an impressive advertising display: a multi-plasma video wall. It was the amazing technology that caught my attention rather than the advert itself. Just as I’m about to glance away, the sunlit beach and blue ocean depicting the under-30s surfing holiday fades away, to be replaced by a two-for-one spectacle offer, followed by a distinguished gentleman telling me how easy it was for him to “wash that grey away”.
As I self-consciously stroke the hair at my temples, I wonder: Was this a mere coincidence? Multiple vendors delivering solutions for advertising have announced technology that can count the number of people watching an advert at any given time, and even estimate their age, dwell time, sex and race. While providing invaluable information for the advertiser, it can also allow them to dynamically change the adverts in real time to more appropriately target the demographic of the current viewer(s). Recent reports in the Los Angeles Times (21st August 2011) suggest that this is already widely deployed in Japan, and is being considered by the likes of Adidas and Kraft in the UK and the US.
While this is not technically face recognition, it is still worth noting as much of what I have been reading has been lumping the two technologies together. The key consideration here is that this form of technology is not actually identifying anybody, or extracting personally identifiable information. This doesn’t bother me in the least. Businesses have always tried to use whatever edge they can to more tightly tailor their message to their customer’s specific needs and wants. It may even benefit me by alerting me to more relevant products or services.
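The kind of demographic targeting described above might be sketched as follows. Every name, threshold and advert category here is hypothetical, chosen only to illustrate the principle that the system acts on anonymous estimates rather than on anyone’s identity:

```python
# Sketch of demographic-targeted advert selection: the display acts on
# estimated, anonymous attributes and never identifies the viewer.

from dataclasses import dataclass

@dataclass
class ViewerEstimate:
    age: int            # estimated by the camera, not verified
    gender: str         # "male" / "female" (an estimate, too)
    dwell_seconds: float  # how long the viewer has been watching

def choose_advert(viewer: ViewerEstimate) -> str:
    """Pick an advert category from anonymous demographic estimates."""
    if viewer.dwell_seconds < 1.0:
        return "generic-brand"       # viewer barely glanced; play it safe
    if viewer.age < 30:
        return "surfing-holiday"
    return "grey-away-hair-dye"

print(choose_advert(ViewerEstimate(age=45, gender="male", dwell_seconds=3.0)))
# prints "grey-away-hair-dye"
```

Note that nothing personally identifiable enters or leaves this logic; the privacy question only arises if the captured image itself is passed to a third party, as discussed next.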
What if, on the other hand, the advertiser had negotiated an arrangement with another organisation, for example a social networking site such as Facebook? If the advertiser supplied Facebook with an image of my face, along with information on which portion of the advert caught my attention, Facebook might be able to identify me from its database of photographs, enabling it to harvest valuable information about me. While I can see this would present a huge commercial advantage to Facebook, and to whomever it chose to sell this information on to, I can only hope that the commercial damage from the backlash of incensed users would outweigh the gain.
If I have some leisure time while on my business trip, there will doubtlessly be many activities at my destination to occupy me. I may have a quiet drink in a bar, or perhaps take a punt at the tables in the local casino. And yes, face recognition technology is being used even in these places. It’s been reported that bars and clubs are using gender- and age-distinguishing cameras to count people in and out, and make this information available over mobile phone apps. The youth of today can now determine before they set out which establishment holds their best chance of success. While I am well beyond having any use for this particular application, I can see how it may catch on in certain demographics of society. Any reputable establishment should clearly disclose that such technology is in use, and should make no attempt to harvest or make available any personally identifying information. Are all establishments reputable?
More concerning to me is the increasing use of face recognition by social network sites. Both Google and Facebook are actively exploring uses. Automatic tagging of photographs being uploaded to Facebook is already occurring. Being inadvertently photographed while on my business trip and automatically tagged when the photographer uploads it does not appeal to me, no matter how innocuous my activities at the time may happen to be.
Recent studies published by Carnegie Mellon University demonstrating the potential to use large databases of photographs on social networking sites to glean confidential information should also be a cause for concern. The younger generation of today appear more and more willing to share intimate and private details online, without any thought (in my view) of the longer-term or wider ramifications of doing so. This is an issue that is much larger than face recognition, but I can understand the worry that face recognition can help to tie it all together.
Improved Benefit or Erosion of Privacy?
When I first entered the biometrics field, I was attracted by the “neatness” factor of the technology, and of the potential for it to deliver benefits to society. I have to admit I paid scant attention to privacy concerns. Over time, as the voices of privacy advocates grew louder and more numerous, I started to listen and then to actively seek out their opinions. I am still a firm believer in this amazing technology, and endeavour to play an active role in its application for the positive transformation of society. However, I am grateful for the messages and insight provided by these campaigners; they have definitely transformed my thinking, and have made me consider much more carefully the application of biometrics.
From a law-enforcement and public safety viewpoint, face recognition holds great potential to increase the security of our society. By its very nature, our government holds power over us and our society, which is why it is our responsibility to choose our governments carefully. We have no choice but to hold a certain level of trust and faith in our law-enforcement organisations. Our society today contains more checks and balances than ever before, and our politicians are more in tune with and responsive to the public mood. If this faith breaks down, then so does society.
In commercial applications, I also believe there is the potential for significant benefit to be realised from face recognition to both the consumer and businesses, but I am more concerned about the potential for abuse. To a certain level, the market will decide if the application of the technology is appropriate or not. Ventures people don’t like will fail. However we cannot always rely on market forces, and it is our collective responsibility to speak out when the need arises. Though it often lags behind, over time legislation keeps up with the advancement of technology. As our society changes with technical innovation, so too will the rules we collectively decide to govern our society. We will settle into an equilibrium reflecting the needs and views of all. But there will be a learning curve, and we will make mistakes along the way. That’s how society works.
So, does face recognition represent an improved benefit, or an erosion of privacy? I suggest it has the potential to be both. It is everybody’s responsibility to ensure the benefit is worth the price paid. I absolutely believe we must have both the proponents of this technology and the advocators of privacy; we all have a role to play to decide how face recognition will be applied over time.
The abolishment of either the technology or the voices of those monitoring its use and advocating our privacy would be to the detriment of society.
Just before I board my flight, let me leave you with this final thought. Imagine for a moment that a loved one of yours has come to harm. The authorities can use face recognition to aid in their recovery, and/or to ensure that justice is done. Are you concerned with privacy?