Opinion


On the Recent Terror Attack in Manchester

Following the horrendous events in Manchester, our thoughts are foremost with those who have lost friends and family members and with those who will be struggling to recover from unimaginable physical, emotional and mental trauma.

It is increasingly clear that these are no longer isolated events and we must unfortunately brace ourselves to the fact that they may be becoming the new norm in our society, at least for the foreseeable future.

We must also remind ourselves that, unlike in other parts of the world where such atrocities are a daily occurrence, they are still remarkably rare here. Indeed, this week is only the third time the UK Threat Level has been raised to CRITICAL, the first being in 2006 and the second in the following year.

However, this time the UK’s security chiefs feel they have no choice other than to say another attack may be imminent. Nobody at this stage can say for sure whether the suicide bomber Salman Abedi acted alone or with the help of others.

Countering this threat environment is a hugely complicated and multi-faceted endeavour. Perhaps the most important and longest term (and consequently most difficult) element is the geopolitical: harmonising and reducing friction between culturally and religiously diverse societies, embracing this diversity and rejecting isolationism and xenophobia. Here we must trust our politicians and our voice is through the ballot box. This implies, however, an absolute responsibility as individuals to constantly embrace learning and tolerance of the world we live in so we can best instruct those we elect to high office.

Within the near to medium term of our current world reality, defining the appropriate domestic strategy and links with those of our key allies in the fight against extremism is essential. The four pillars of the UK’s Counter Terrorism Strategy (CONTEST) demonstrate a sound awareness of the complexity this entails, deploying a range of short- to long-term tactics:

  • PREVENT: to stop people from becoming terrorists or supporting terrorism in the first place.
  • PURSUE: to stop terrorist attacks by detecting, prosecuting and otherwise disrupting those who plot to carry out attacks.
  • PROTECT: to strengthen protection against a terrorist attack and reduce our vulnerability.
  • PREPARE: to mitigate the impact of a terrorist attack where that attack cannot be stopped.

The effectiveness of PURSUE has been demonstrated multiple times in recent history, notably with plot-disrupting arrests in just the past few weeks. We should all be immensely proud of and thankful for our security and public services. The reality, though, is that as the threat landscape becomes more fragmented and complicated, consisting of a greater number of smaller and less sophisticated plots feeding off the radicalisation of disenfranchised individuals, it becomes more and more difficult to identify and stop every attack. The importance of PREVENT has been horrendously demonstrated in the past few days and with the recent attack on Westminster Bridge.

The nature of this most recent attack, disturbingly, demonstrates the hallmarks of a more complex and orchestrated suite of activities likely conducted by multiple individuals supporting Abedi. This has precipitated the Prime Minister’s decision to elevate the threat level to its highest level. The challenges faced by the police and security services in identifying and apprehending these individuals at such times, when speed and timeliness of response are of the essence, are vast.

This investigation is unfolding rapidly, with multiple arrests already being made. The well-rehearsed procedures and tools acquired through years of experience are paying dividends. It is also apparent that, at times such as these, any method or technique to assist over-stretched human resources conduct their activities more efficiently and more rapidly must be considered and welcomed. New and advancing technologies undoubtedly have a role to play and will help shape our ability to both prevent and respond to these threats.

We must not allow the events of the past week to shape or define us. Although of no consolation to those directly affected by these events, we must remain grateful that we live in one of the safest and freest societies in the world. Clearly, we must foster an environment where such extreme beliefs are not allowed to take hold and root them out where and when they do. But, as individuals, we must espouse and remain true to the values that have enabled our society in the first place. Openness, tolerance and above all knowledge of the world, its peoples, cultures and societies will always prevail over ignorance, isolationism, fear and hatred.

Our deepest and most sincere condolences to those who have suffered and lost this week; we cannot even begin to imagine your suffering.

Our utmost gratitude and respect to our public services, police and security services, who fight so tirelessly and put themselves at direct risk to keep us safe.


Merry Christmas from Allevate and Looking Forward to what the New Year Brings!

2013 has been a pivotal year for Allevate.

It has been a year of new challenges and new focus.

A successful business is defined by more than monetary success; it also encompasses positive social and societal impact.

Whilst it has not been easy, I firmly believe our endeavours have real potential to make a difference by contributing to the safety of our society and enhancing the efficiency of our public services.

Looking back at 2013, new relationships and partnerships have been established, new friendships forged and a nascent and growing team established with synergistic enthusiasm, vision, drive and skills.

To the new team and everybody else who has supported me along the way: I’m honoured and grateful for your support.

To my family: Thank you for your understanding.

We are leaving 2013 wiser but with definite and strong momentum to carry us into 2014.

I’m very much looking forward to continued hard work and a successful 2014.

Merry Christmas and Happy New Year to all!!


Could Automating Media Processing Aid the Forensic Investigation into the Boston Marathon Bombing?

The horror of the events at the marathon in Boston two days ago is still very raw. People are united in their sympathy for the victims and their families, their revulsion at these despicable acts and their solidarity in not succumbing to terror. The FBI vows to “…go to the ends of the Earth to find the bomber”, with President Obama openly stating the “…heinous and cowardly…” event to be “…an act of terror”.

The investigation into the bombing is in its nascent phases, with the Boston Police Commissioner Ed Davis admitting that they are dealing with the “…most complex crime scene that we have dealt with in the history of our department.” Still, authorities are already homing in on crucial evidence and beginning to release details; BBC News reports that a source close to the investigation told AP news agency that the bombs consisted of explosives placed in 1.6-gallon pressure cookers, one with shards of metal and ball bearings, the other with nails, and placed in black bags that were left on the ground. Images of what appear to be a trigger mechanism have already been released.

Face Recognition?

Forensic investigators have a long and daunting task ahead of them, with countless hours of CCTV footage to pore over, and some people are already suggesting that the application of face recognition technology can play a crucial role in identifying potential suspects. However, CCTV footage, especially from older systems that have not been specifically configured for the task, is notoriously unreliable as a source for face recognition.

Perhaps more useful at an event attended by so many, most of whom will have been carrying and using mobile phones and cameras, is the footage acquired by members of the public. Images and video captured by these high-quality devices will potentially be of much greater use than CCTV and authorities have appealed for people to turn in photographs and videos they have taken in the hope that they will contain useful intelligence. Much of this media will already have been uploaded to public sites such as Facebook and YouTube.

An Automated Media Processing Cloud

A solution to automate the processing of this staggering amount of media to quickly and efficiently unlock actionable intelligence is required to save significant time and human capital. The ability to automate this would allow the more efficient application of resources as well as massively speed up a time-critical investigation.

However, the need goes far beyond the simple application of face recognition technology.

What is needed is a server-based system that can process vast amounts of media quickly to transform files from mobile phones, flash memory devices, online sources, confiscated computers and hard drives and video surveillance systems into searchable resources. This would enable forensic investigators to work more efficiently and effectively by automatically finding, extracting and matching faces from very large collections of media to discover, document and disseminate information in real-time.

Such a powerful video and photograph processing architecture should automatically ingest, process, analyse and index hundreds of thousands of photographs and videos in a centralised repository to glean associations in a cloud environment. Instrumental would be the ability to:

  • Automatically find, extract and index faces to enable the biometric and biographic searching of media.
  • Create and manage watchlists of people of interest via a web-based interface.
  • Find all instances of photos and videos where a person of interest has been seen.
  • Quickly review and process media to identify, locate, and track persons of interest, their associates and their activities.
  • Discover, document and view associations between people of interest, their activities and networks.

Finally, a public-facing interface to such a system would enable members of the public to upload their media in a self-service manner to enable quick and ready access by the authorities to this raw data for automatic processing.



Article: Face Recognition: Profit, Ethics and Privacy 2

You can download a PDF copy of this article by clicking this link.

The accuracy of face recognition has increased dramatically. Though biometric technologies have typically been deployed by governments and law enforcement agencies to ensure public, transport and border safety, this improvement in accuracy has not gone unnoticed by retailers and other commercial organisations. Niche biometric companies are being snapped up by internet and social media behemoths to further their commercial interests, and retailers and other enterprises are experimenting with the technology to categorise customers, analyse trends and identify VIPs and repeat spenders. Whilst the benefits to business are clear and seductively tantalising, it has been impossible to ignore the increasing murmurs of discontent amongst the wider population. Concerns over intrusion of privacy and the constant monitoring of our daily lives threaten to tarnish the reputation of an industry which has endeavoured to deliver significant benefit to society through improved public safety. Can the industry be relied upon to self-regulate? Will commercial enterprise go too far in their quest to maximise profits? How far is too far? How can organisations ethically make use of face recognition technology to increase efficiencies and drive revenue, whilst respecting and preserving privacy and maintaining the trust of their clientele and society?

Having previously written on the subject of the application of face recognition in airports as applied by law enforcement and border control, this article looks at the increasing exploitation of the technology for commercial advantage. As well as contrasting the different use-cases defined by commercial exploitation versus public safety applications, this article also touches upon the very different agendas of those using the technology and the privacy issues that arise.

1  Advances in Face Recognition Technology

Face recognition is increasingly transforming our daily lives. A study by the US National Institute of Standards and Technology (NIST) in 2010 demonstrated that the technology has improved by two orders of magnitude in accuracy over 10 years, and further tests currently being conducted by NIST are expected to demonstrate its continued relentless advance. Those interested in reading about these astonishing improvements are encouraged to refer to “Advances in Face Recognition Technology and its Application in Airports”, first published in Biometrics Technology Today (BTT) in July 2012, which summarises the 2010 NIST results in detail.

2  Public Safety versus Generating Profit

Most people accept that the reality of the world today necessitates certain inconveniences and intrusions. We tolerate and increasingly expect surveillance technology to be deployed wisely in situations where there is demonstrable benefit to public safety, such as at transport hubs, large gatherings, public events or areas of critical national infrastructure. The key factor behind such tolerance is comprehension; we understand the reasoning behind these uses and the benefits to ourselves, namely our safety. Though we don’t necessarily like it, we generally accept it.

However, it has been difficult to avoid the increasing coverage in the media of the use of face recognition by commercial organisations. The single most common term that is bandied about in reference to these deployments tends to be “creepy”. The technology being deployed is very often similar, if not identical, to the technology deployed for public safety applications. So precisely what is it about this use of technology that people are averse to?

In order to understand this, it is useful to consider in each case whom people perceive to benefit from the system. In the case of public safety, the people perceived to benefit are us: the citizens. In the case of commercial use, people perceive the commercial organisation deploying the technology as the beneficiary. In this scenario the term “benefit” generally means profit, either by increasing revenues or decreasing costs. Often there is a general distrust within society of large corporations profiting from the exploitation of the populace, and this is especially true in times of prolonged economic difficulty. This is additionally complicated by the fact that our biometric traits are viewed as being something that are intrinsically ours and that are a constituent part of our definition.

3  Examples: Uses to Reduce Cost and Generate Revenue

It hasn’t taken long for business-minded technology companies to devise a whole range of new uses of face recognition, all focussed on delivering bottom-line business benefit. An important characteristic of face recognition is that it is only useful if you have something to match a photograph (probe) against, whether it is another photograph, or a database of photographs (reference set). It is the management, control of access to and often the creation of these reference sets that generate the most privacy concerns.

Let us briefly discuss some of the manners in which the technology is currently being deployed.

3.1  Efficiently Identifying Customers and Staff

This perhaps is the most traditional use of biometrics within commercial organisations. The ability to positively identify people, whether they are your staff or increasingly your customers, is absolutely necessary for the day-to-day operation of business and indeed society. Biometrics can be applied to ensure identity in a more cost-effective and positive manner, thereby introducing efficiencies into the business. It is an unfortunate reality that staff are responsible for a significant amount of theft. Adopting biometric technology can eliminate password theft and help mitigate the risks of identity sharing, thereby reducing fraudulent and unauthorised transactions and ensuring relevant personnel are physically present at the time of a transaction. Additionally, customers can be identified positively before conducting transactions. Cashless payments provide numerous efficiency opportunities by allowing elimination of cash and credit cards at point of payment altogether.

3.1.1  Privacy Considerations

These examples are usually only possible with the consent and approval of the individuals in question. Customers typically register for a biometric payment system, for example, in order to realise a benefit offered by the enterprise. The enterprise in turn must satisfy the customer that their biometric reference data will be kept and managed securely and only for the stated purpose.

The advent of face recognition provides new manners in which you can identify your customers, for example from CCTV cameras as they enter shops or as they view public advertising displays. It is when these activities are performed without the individual’s knowledge or consent that concerns arise.

3.2  Identifying Who is Entering Your Premises

These solutions are designed to integrate with existing surveillance systems; faces are extracted in real-time from a CCTV video feed and matched against a database of individuals. When the system identifies an individual of interest it can raise an alert that can be responded to rapidly and effectively, or log where and when the individual was seen for the formation of analytical data.

This can be used to provide valuable real-time or analytical intelligence to retail and other organisations, such as:

  • Notification of the arrival of undesirables, such as banned individuals or known shoplifters.
  • Notification of the arrival of valued or VIP customers.
  • Collation of behaviour data of known customers, such as how frequently they visit, which stores they visit and integration with loyalty programmes.
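The alerting flow just described can be sketched as follows. The watchlist entries, categories, embeddings and distance threshold are all hypothetical placeholders; a real deployment would use templates and match scores produced by the recognition engine, and would raise an operational alert rather than return a value.

```python
import math

# Hypothetical watchlist: identity -> (category, face embedding).
WATCHLIST = {
    "banned_001": ("banned", [0.2, 0.9, 0.1]),
    "vip_007":    ("vip",    [0.7, 0.3, 0.6]),
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def check_face(probe, threshold=0.2):
    """Match a face seen on camera against the watchlist.

    Returns (identity, category) on a match, or None. A live system would
    instead raise an alert (banned individual) or notify staff (VIP), and
    log the sighting for analytics.
    """
    name, (category, emb) = min(WATCHLIST.items(),
                                key=lambda kv: euclidean(probe, kv[1][1]))
    if euclidean(probe, emb) <= threshold:
        return name, category
    return None

print(check_face([0.21, 0.88, 0.12]))  # close to banned_001
print(check_face([0.5, 0.5, 0.5]))     # matches nobody -> None
```

The choice of threshold is the critical operational parameter: too loose and innocent shoppers trigger false alerts, too strict and genuine matches are missed.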


3.2.1  Privacy Considerations

There are a number of potential issues with regards to privacy that need to be considered here, most notably:

  • How is the reference set obtained? Who is in it?
  • Do you have the permission of the individuals in the reference set?
  • How are the photographs in the reference set stored and secured?
  • Are the members of the reference set aware of how and when their photos will be searched?
  • Are the people crossing the cameras aware that their photos are being searched against pre-defined reference sets?
  • What action is taken if a probe image matches against the reference set? What are the implications of a match or a false match?
  • What is done with the probe images after searching the reference set? Are they discarded or stored?

The number of possible uses of this functionality and resulting business benefits are too large to enumerate here, but very careful consideration must be made with regards to the proportionality of the solution when measured against the requirement. Additionally, the views and considerations of the individuals whose images you are verifying, both the people within the reference set and the people whose faces you are sampling as probe images, should be well understood and considered; approval should be sought for inclusion into a reference set.

3.3  Analysing How People Move Through Your Premises

Face recognition can also be used to determine how people move through premises, such as a department store. Understanding peak and quiet times is essential to enable sufficient and efficient staffing and resourcing. Raising alerts to manage unforeseen queues is critical for ensuring customer satisfaction.

Face recognition applied to CCTV can timestamp when individuals are detected at known camera locations, thereby providing highly accurate information on people flows such as:

  • How long on average does it take to move between two or more points?
    (such as from the entrance of a store to a checkout or exit)
  • What are the average flow times across the day and when are the peaks?
  • How does this vary with the time of day?

This can be used to determine how people typically move through the premises, and how long on average they linger in specific areas. You can also analyse this data across different age and gender demographic categories.
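A minimal sketch of this kind of flow analytics is given below. The detection log, camera names and anonymous track identifiers are invented for illustration; the assumption is that the face matching step has already linked repeat sightings of the same (unidentified) person under a shared track ID.

```python
from datetime import datetime
from collections import defaultdict

# Hypothetical detection log: (anonymous_track_id, camera, timestamp).
# Track IDs link sightings of the same face without identifying anyone.
detections = [
    ("t1", "entrance", datetime(2013, 1, 5, 10, 0, 0)),
    ("t1", "checkout", datetime(2013, 1, 5, 10, 12, 0)),
    ("t2", "entrance", datetime(2013, 1, 5, 10, 3, 0)),
    ("t2", "checkout", datetime(2013, 1, 5, 10, 21, 0)),
]

def average_transit_minutes(log, start_cam, end_cam):
    """Mean time (in minutes) taken to move from start_cam to end_cam."""
    seen = defaultdict(dict)
    for track, cam, ts in log:
        seen[track][cam] = ts
    times = [(cams[end_cam] - cams[start_cam]).total_seconds() / 60
             for cams in seen.values()
             if start_cam in cams and end_cam in cams]
    return sum(times) / len(times) if times else None

print(average_transit_minutes(detections, "entrance", "checkout"))  # 15.0
```

Aggregating such figures per hour or per day yields the peak-time and dwell-time statistics described above, without ever recording who the tracked individuals are.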

3.3.1  Privacy Considerations

Importantly, no personally identifying information is recorded. There is no interest in identifying who the individuals moving through the premises are or in taking any specific action on any specific individual. There is no need to search against any pre-defined reference sets.

However, there are some issues you should consider when deploying such systems:

  • Biometric matching of people crossing the cameras still occurs. The probe photos are matched against other anonymous people that have previously crossed the cameras.
  • You should carefully consider how long this data will be retained for matching (generally hours), and the nature of the premises being monitored.

Generally the privacy considerations of this application are minimal.

3.4  Building Databases of People Visiting Your Premises

As previously mentioned, face recognition is only useful if you have images to match against. Previous examples have dealt with matching the faces of people crossing the camera against known databases of individuals. A potentially far more valuable practice to enterprise is to dynamically build reference databases consisting of the people who cross the camera. Unfortunately, this is also the practice that riles the populace the most and is rife with potential privacy intrusions.

The increase in the use of CCTV cameras has led to an ever increasing volume of archived video footage. The intelligence in this footage typically remains inaccessible unless appropriately analysed and indexed. Such systems can be used to populate databases of “seen” individuals, thereby enabling searching for specific people of interest to determine if, when and where they have been present. This then allows the collation of data such as how frequently individuals visit your premises, how long they stay and when was the last time the individual visited your premises, as well as which of your locations any individual frequents and which is the most common.

If this functionality is combined with the ability to search and cross-reference against databases of known individuals, for example a subscribed customer database, this can then allow you to build very valuable analytical data on specific individuals, thereby enabling you to predict future behaviour and market more specific services and products.
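The dynamic-enrolment practice described here can be sketched as follows. Again, the embeddings and the 0.95 similarity threshold are toy assumptions standing in for a real recognition engine; the point is the structure: every face crossing the camera either matches an existing anonymous record (a repeat visitor) or is enrolled as a new one.

```python
import math

def similar(a, b, threshold=0.95):
    """Toy cosine-similarity test between two face 'embeddings'."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return bool(na and nb and dot / (na * nb) >= threshold)

class SeenDatabase:
    """Dynamically built reference set: each record is an anonymous
    individual with a representative embedding and a visit count."""

    def __init__(self):
        self.records = []  # each record is [embedding, visit_count]

    def observe(self, embedding):
        for rec in self.records:
            if similar(rec[0], embedding):
                rec[1] += 1                  # repeat visitor
                return
        self.records.append([embedding, 1])  # newly enrolled individual

db = SeenDatabase()
db.observe([0.9, 0.1, 0.2])   # first visit
db.observe([0.1, 0.9, 0.2])   # a different person
db.observe([0.9, 0.1, 0.2])   # the first person returns
print([count for _, count in db.records])  # [2, 1]
```

Note that as soon as such records are cross-referenced against a named customer database, the "anonymous" counts become a profile of an identified individual, which is exactly where the privacy concerns below begin.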

3.4.1  Privacy Considerations

Tread very carefully. Some of the most vocal opposition to the application of face recognition technology results from the capture of biometric data of potentially large numbers of people without their knowledge or consent, especially if the people are then identified and profiled against existing databases. In many jurisdictions around the world, the retention of such data may be in contravention of privacy legislation.

3.5  Analysing Who is Viewing What to Target Your Advertising

There have been many examples in recent months of retail and advertising organisations using technology to determine the approximate age and gender of people entering premises or viewing advertising walls. Though not technically face recognition, it is still worth mentioning here as often the distinction between the two uses is blurred. The premise is simple: such solutions can count the number of people watching an advert at any given time, and even estimate their age, dwell time, sex and race. While providing invaluable information for the advertiser, it can also allow them to dynamically change the adverts in real time to more appropriately target the demographic of the current viewer(s). Such solutions are increasingly being deployed in Japan and it is only a matter of time until they are more widely considered in Europe and North America.

3.5.1  Privacy Considerations

The key consideration here is that this form of technology is not actually identifying anybody or extracting personally identifiable information. There does appear to be some opposition to this, though none of it very vocal or serious. It is difficult to see any infringement of privacy, and it may often be advantageous to the consumer as advertising may be more specifically tailored to their needs.

3.6  Matching People on Your Premises with Social Media Accounts

Both Google and Facebook have acquired face recognition technology companies over the past year. Facebook’s users, for example, publish over 300 million photos onto the site every day, thereby making Facebook the owner of the largest photographic database in the world.

Facebook is already trialling a new service called Facedeals which enables its users to automatically check in at participating retail sites equipped with specially enabled cameras. In order to entice users to participate, the participating retailer can offer special deals to Facebook users when they arrive. The flow of information can be bi-directional. Such automatic check-in data coupled with the users’ manual check-ins can be used by Facebook to hone their profile of individuals, allowing them to target users with more relevant advertising. The system is entirely voluntary, and the reference sets searched by retailers only contain photos of users who have opted into the service.

3.6.1  Privacy Considerations

Making data from social media sites available to other commercial organisations is a potential privacy minefield and should only ever be done with users’ consent. Defining these as opt-in services is exactly the right way forward. Likewise the profiling of users of social media sites based upon automatic tagging of images uploaded to those sites should be strictly controlled and only enabled on an opt-in basis. The privacy concerns over such activities have recently been very aptly illustrated by Facebook’s withdrawal of its controversial auto-tagging feature from use in Europe after pressure from privacy campaigners and regulators.

4  Social Media, Cloud Computing and Face Recognition

Dr. Joseph J. Atick of the International Biometrics and Identification Association has written a thought-provoking paper entitled “Face Recognition in the Era of the Cloud and Social Media: Is it Time to Hit the Panic Button?”. The paper raises several interesting points that merit mention here. In it Dr. Atick argues that the convergence of several trends including the:

  • High levels of accuracy now attainable by face recognition algorithms.
  • Ubiquity of social networking with its inherent large photographic databases.
  • Availability of cheap computer processing and the advent of cloud computing.

…coupled with the fact that “face recognition occupies a special place [within the family of biometrics in that] it can be surreptitiously performed from a distance, without subject cooperation and works from ordinary photographs without the need for special enrolment…” is “ … creating an environment … that threatens privacy on a very large scale…”.

One of the main premises of the paper is that this issue “… will require the active cooperation of social media providers and the IT industry to ensure the continued protection of our reasonable expectations of privacy, without crippling use of this powerful technology”.

5  Can All This be Done Ethically? (What About Privacy?)

Can organisations ethically make use of face recognition technology to increase efficiencies and drive revenue, whilst respecting and preserving privacy and maintaining the trust of their clientele and society?

The premise of “privacy-by-design” should be used to ensure that privacy is considered from the outset of any deployment of face recognition technology. In fact, the European Union’s 22-month Privacy Impact Assessment Framework (PIAF) project advises that “Privacy impact assessments should be mandatory and must engage stakeholders in the process” for all biometric projects.

Reputable organisations such as the Biometrics Institute have gone so far as to publish invaluable privacy charters to act as a “…good executive guide operating over a number of jurisdictions…” which should be reviewed and seriously considered before any deployment of biometric technology.

Some of these fundamental principles are outlined below within context of the subject matter of this article and specifically within the context of commercial use of the technology. These will not necessarily apply when discussing matters of public safety, law enforcement and national security.

5.1  Proportionality

A fundamental principle of privacy concerns the limitation of the collection of data to that which is necessary. Organisations should not collect more personal information than they reasonably need to carry out the stated purpose. Biometric data by its very nature is sensitive and absolute assurance must be provided that it will be managed, secured and used appropriately. However, a key consideration in the use of this technology should be proportionality; is the collection of such sensitive data justified for the benefit realised?

5.2  Educate and Inform

People on the whole generally resent not being informed, especially in matters that involve them. History is littered with IT projects that have failed because key stakeholders were not involved from the outset, were not sufficiently informed and whose buy-in to the process was not obtained. Customers are one of the most important stakeholders and these issues are even more critical when dealing with their personal and biometric data.

There is a very interesting video on YouTube that illustrates this point very nicely. It shows a man walking around with a camera, filming random strangers without explanation. The reaction is predictably always negative and sometimes hostile. The point the video is trying to make is obvious: most people do not approve of being videoed, so why do we so readily accept surveillance cameras? The message that comes across is actually clearer: people object when they do not understand intent, purpose or benefit to themselves. The cameraman offered no explanation of his intent, even when challenged. Objection was guaranteed.

5.3  Be Truthful and Accurate when Describing the Business Purpose and Benefit

As part of the process of informing, organisations should also be direct and open in disclosing not only the existence of the systems, but the scope, intent and purpose of the solutions. Why are you utilising an individual’s biometric data? What benefit does it serve? What is the scope of the use of this data?

Importantly, stay well clear of “scope creep”. All too often it is tempting, once you have data, to use it for purposes other than the stated purpose for which it was collected. Such endeavours will inevitably lead to loss of trust.

5.4  Provide Benefit to the Customer

Simply understanding the scope, purpose and intent of a system generally will not be sufficient to garner acceptance of the system. While people are generally astute enough to realise that businesses are in the business of making money, they’ll want to know what is in it for them. What is their benefit?

An example with which most of us will be familiar is grocery store loyalty or “club” cards. Whilst we all understand that the objective of the grocery store is to profile and analyse our spending in order to better market to us, a majority of us still subscribe in order to receive the enticements and benefits on offer.

Within the context of face recognition, Facebook’s Facedeals programme demonstrates this principle nicely. Users understand the benefit to Facebook and the retailer, yet they still may choose to opt in to the programme because there is a clear and discernible benefit for them to do so as well, namely targeted discounts and offers at retail outlets.

This is also affirmed by a 2012 IATA survey, which found that “… most travellers are receptive to the idea of using biometrics within the border control process.” Why? Because there is a clear and discernible benefit to them in the form of a more efficient passenger process and increased levels of security.

5.5  Seek Consent and Operate on an Opt-in Principle Where Appropriate

Biometric enrolment into such systems should not be mandatory. Individuals should be given the ability to opt in, with an opt-out status being the default. Clearly this is not always feasible when considering people in public places passing in front of cameras. However, if they are being identified against reference sets, the individuals in the reference sets should be there only with consent. Automatic enrolment into reference sets or biometric databases should involve the consent and approval of those enrolled.

Importantly, people should not be penalised should they choose not to opt-in; they should still be allowed a mechanism of transacting and conducting their business.

The recent decision by the UK Department for Education to prohibit schools from taking pupils’ fingerprints or other biometric data without gaining parents’ permission is a prime example of the backlash that can result when such systems are made mandatory without providing any alternative mechanism of transacting. In many cases in UK schools, pupils were left with no way of buying their school lunch unless they enrolled into a biometric system.

6  Summary

The accuracy of face recognition has increased dramatically. Retailers and other commercial organisations are investigating ways to exploit this technology to increase revenues, improve margins and enhance efficiency. Social media companies own the largest photographic databases in existence and are under pressure from shareholders to find ways to monetise these assets. As these explorations gather pace, so does the discontent of privacy advocates.

This article has outlined a number of ways face recognition can be used by enterprise and highlights potential privacy issues. Is it possible to ethically use face recognition technology and respect privacy? This will only be possible if enterprise maintains the trust and respect of its customers. Open and honest discourse is the best manner in which to achieve this. This should be accompanied by delivering real benefit to all parties involved in a manner that also empowers the customer; nobody should be forced to enrol into biometric systems or be disenfranchised from refusing to do so.

How far is too far? History has shown that there is no absolute answer to such questions. The exact location of the line to be crossed is always a factor of and changes with the times we live in. History has also shown, especially as it pertains to technology, that it is next to impossible to put the genie back into the bottle once released. It is now the collective responsibility of all to ensure the proper and ethical use of this technology in a manner that delivers the maximum benefit. This will require the active cooperation of social media, enterprise, the IT industry and civil liberty groups to ensure the continued protection of our reasonable expectations of privacy without crippling the use of this powerful technology. In the end, the people have the loudest voice. If enterprise crosses the line, customers will pass judgement with their wallets. 

7  About the Author

Carl is the founder of Allevate Limited (http://allevate.com), an independent consultancy specialising in market engagement for biometric and identification solutions. With over 20 years’ experience working in the high-technology and software industry globally, he has significant experience with identification and public safety technologies including databases, PKI and smartcards, and has spent the past 10 years enabling the deployment of biometric technologies in infrastructure projects. Carl started working with biometrics whilst employed by NEC in the UK and has subsequently supported NEC’s global and public safety business internationally.

Residing in the UK, Carl was born and raised in Canada and holds a Bachelor of Science Degree in Computer Science and Mathematics from the University of Toronto.


 ————————————————————————————————————————

[i] Multiple Biometric Evaluation (2010): Report on Evaluation of 2D Still Image Face Recognition. Patrick J. Grother, George W. Quinn and P. Jonathon Phillips. http://biometrics.nist.gov/cs_links/face/mbe/MBE_2D_face_report_NISTIR_7709.pdf

[ii] Advances in Face Recognition Technology and its Application in Airports. Carl Gohringer, Allevate Limited, July 2012. http://allevate.com/blog/index.php/2012/07/17/advances-in-face-recognition-technology-and-its-application-in-airports/

[iii] Face Recognition in the Era of the Cloud and Social Media: Is it Time to Hit the Panic Button? Dr. Joseph Atick, International Biometrics and Identification Association. http://www.ibia.org/download/datasets/929/Atick%2012-7-2011.pdf

[iv] http://www.piafproject.eu/

[v] Privacy Charter. Biometrics Institute. http://www.biometricsinstitute.org/pages/privacy-charter.html

[vi] 2012 IATA Global Passenger Survey Highlights. The International Air Transport Association (IATA). http://www.iata.org/publications/Documents/2012-iata-global-passenger-survey-highlights.pdf


“From grainy CCTV to a positive ID: Recognising the benefits of surveillance”

Interesting article in London’s Independent newspaper on CCTV surveillance and face biometrics.

Especially interesting is the view on combining biometrics over CCTV with artificial intelligence and behavioural recognition, as this does appear to be the way things are moving.

I agree that biometrics, and especially face recognition, can provide huge benefit to society. I also agree that there is a certain level of concern and distrust by large swathes of the population, some of it well-founded, and some of it based on misperception and incorrect knowledge.

In either case, I think it is dangerous to dismiss these concerns and objections simply because we feel “we know best”. I believe society can be much better off with the well-placed and controlled use of this technology, but I also believe that we should be working with the civil liberties groups rather than fighting them. Ultimately, these systems need to be accepted if they are to succeed, and in order for this to happen, the public has to better understand the benefit to themselves, and have trust in the people using them.


UK Schools banned from fingerprinting pupils without parental consent

The UK Department for Education has announced that schools will no longer be permitted to take pupils’ fingerprints or other biometric data without gaining parents’ permission.

I am a firm believer in the use of biometric technology to further public safety and efficiency.

However, a key consideration in the use of this technology should be proportionality; is the collection of such sensitive data justified for the benefit realised?

Biometric data by its very nature is sensitive, and absolute assurance must be provided that it will be managed, secured and used appropriately. Given this, the consent of those whose data will be captured should be sought, and the use of such systems should not be mandated without such consent (with caveats for government, law-enforcement and public safety deployments).

Minors, by definition, are unable to supply consent, so the responsibility to do so (or to withhold consent) must fall upon the parents AFTER they have been given the opportunity to ensure they are satisfied that their child’s data is appropriately safeguarded and all privacy concerns have been considered within the context of the benefit to their child.

I absolutely applaud this move.


Does turning off the Iris system at Manchester and Birmingham represent a failure of biometrics?

News that the Iris biometric gates at Manchester and Birmingham airports have been turned off has been widely reported. (BBC: Eye scanners at England airports turned off, Register: Two UK airports scrap IRIS eye-scanners)

The comments that this represents a failure of biometric systems started to fly almost immediately.

  • “Multi-million pound eye scanners, billed as a key tool in securing Britain’s borders, have been scrapped.”
  • “…the technology has been beset by problems,…”

… are typical of the comments and headlines making their rounds.

I admit the gates were not perfect and did require some getting used to in order to navigate your way through quickly.

But I think the systems were far from a failure, and the reality is a little bit more subtle than the headlines may suggest.

Let’s not forget the system was originally introduced in 2004, initially as a pilot.  At this time, such use of Iris technology was fairly innovative.  That the footprint of the pilot was gradually extended and became a permanent system is indicative that the system was fairly well received. The fact that over 380,000 people have voluntarily enrolled (myself included) makes it difficult to argue that the system is derided.

In my opinion, the turning off of the system at these two locations is more in line with a planned phasing out of this particular solution, for some rather more mundane reasons:

  1. The system no longer fits the UK’s border-automation strategy moving forward. It has largely been overtaken by the momentum to accommodate EU e-Passport holders, whose passports hold an electronic copy of their facial photograph.
  2. As innovative as the technology was in 2004, it is now woefully out-of-date. Iris technology has moved on in leaps and bounds in the 8 years since (as demonstrated by the iris-at-a-distance e-gate solutions for departing passengers at Gatwick airport). The initial investment has undoubtedly long since been written off, and the technology needs a refresh.
  3. The initial deployment was meant to be limited, and the contract has undoubtedly been extended numerous times. A complete and expensive technology refresh (as is required) without an open and competitive re-tender would not rest on firm legal ground.
  4. The business model was never well thought out. The system is completely funded by the UK government and can be used by any nationality completely free of charge.

This Iris system is intended for pre-registered Trusted Travellers, who are pre-vetted before they can use the system. At point of use, it is a 1:n Iris check and no travel documents are required.

Since the system was deployed, most European Union (EU) nations have introduced e-Passports and an ever-increasing percentage of the EU population now carries a chip passport. The Iris gates have gradually been superseded by a new breed of e-Gates that:

  • are for EU passport holders only.
  • do not require pre-enrolment.
  • perform a 1:1 face check against the JPG on the passport chip.

These gates are now being widely deployed at UK ports of entry and seemingly form the backbone of the government’s strategy for automated passenger clearance. This is only natural, as by far the bulk of passengers entering the UK are EU citizens.

If the remaining Iris gates are end-of-life’d, this will clearly leave a hole in the border automation strategy, mainly for those passengers that:

  • are not EU citizens.
  • are EU citizens but do not yet have an e-passport.

Arguably, the second of the two will become less of a problem as time passes, as holders of older passports have their passports renewed.

The former, however, will form a minority of arriving passengers, and the business case for the government to provide a free-to-use Trusted Traveller system remains vague. More likely than not, any replacement system will take the form of a paid subscription requiring pre-enrolment with vetting.

Ideally, given the limited space available at airports, the best scenario would involve these passengers using the same physical e-gates as EU passport holders.

In my view, allowing these systems to reach their end-of-life is not an argument for the failure of biometrics deployed at the border. The fact that a system that was only ever meant to have a limited deployment lasted this long, and was only replaced by a government strategy that is more harmonised across EU nations, is a testament to the value this technology provides.

Thank you project IRIS, but I won’t miss you. I use the new e-Passport e-Gates now.


Biometrics in Banking


Turkey, Brunei, Nigeria and Poland are just some of the countries that have already announced biometric ATMs, for example. The use of biometrics at the till for payment is also on the rise.

Some cite the fact that there has not been a massive up-take in the use of biometrics in consumer-facing applications as evidence that the technology does not yet function to an adequate level of performance. Every large biometric deployment I have been involved in has entailed rigorous and exhaustive testing to clearly demonstrate accuracy against clearly and aggressively pre-defined test parameters, in real-world environments, using customer data; I don’t expect financial customers would be any less demanding.

I do agree that lab testing / data is insufficient, and solution providers who are unwilling or unable to demonstrate predictable and repeatable accuracy SLAs in real-world environments should be treated with caution.

Is a biometric system fallible? Yes. The question is: is it less fallible than the existing precautions already in place, and does the deployment of such a system, in simple financial terms, demonstrate a clear ROI? Again, the answer is: yes.

Rather, I believe that the reluctance thus far in Western societies to deploy such systems en masse for consumer identification is due more to the banks’ concern about how such systems will be perceived by their clientele; the UK populace, for example, is ever suspicious of Big Brother, their government and large institutions.

However, these “perception barriers” are already lowering, and there is mounting evidence that public opposition, where clear benefit is realised, is eroding.

Banks are now increasingly becoming aware of the value of biometric identification, of both their internal staff and their external clientele, especially in the area of high net-worth individuals and high-value transactions, and I expect we will see many exciting developments in identification solutions for this market.


On Biometric Suppliers Publishing Accuracy Figures

Of late there have been repeated calls on Twitter for biometric suppliers to publicly release statistics pertaining to the performance of their biometric algorithms, specifically False Accept Rates (FAR) and False Reject Rates (FRR).

Whilst not a response to those calls, this post is in part motivated by them.

Those repeatedly calling for the release of these figures know in advance that their calls will not be heeded. As they are already well-versed in the technology, they already understand the reasons why. Yet I believe they persist so they can cite the non-responsiveness of suppliers as “evidence” that the technology does not work.

Let’s examine why suppliers keep this information secret.

1. THEY DON’T

I have been involved in negotiating multiple contracts for the deployment of biometric technology, ranging from large government infrastructure programmes through to enterprise access control solutions. I can emphatically state that in every one of these instances, the customer has been fully aware of the performance metrics of the technology they are deploying, from accuracy through to hardware requirements. In fact, before securing any contract, it is very common for the supplier to have to benchmark their technology on customer-supplied data, and often adherence to pre-defined accuracy SLAs is written into the contract, with penalties for non-performance.

2. There is no single correct answer

Anybody versed in biometrics knows that the answer is almost always “It depends”. The accuracy is dependent upon multiple factors, many of which will be under the control of the customer, not just the supplier, such as:

  – Quality of the data being matched against
  – Representative population
  – Environmental conditions
  – Performance required
  – Budget
Again, required levels of accuracy will often be pre-agreed with the client, and it often comes down to how much budget the client has available. Faster and / or more accurate will require more computing power, and the determination is often down to a cost-benefit analysis.
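The interplay between threshold, FAR and FRR can be sketched in a few lines. The match scores below are invented purely for illustration; real figures come from benchmarking on customer data in the target environment, as described above:

```python
# Illustrative sketch: how a single decision threshold trades False Accept
# Rate (FAR) against False Reject Rate (FRR). Scores are invented examples.

def far_frr(genuine_scores, impostor_scores, threshold):
    """FRR: fraction of genuine (same-person) comparisons scoring below
       the threshold. FAR: fraction of impostor (different-person)
       comparisons scoring at or above it."""
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    return far, frr

genuine = [0.91, 0.88, 0.95, 0.79, 0.85, 0.90, 0.72, 0.93]   # same person
impostor = [0.20, 0.35, 0.15, 0.41, 0.28, 0.55, 0.10, 0.33]  # different people

for t in (0.3, 0.5, 0.8):
    far, frr = far_frr(genuine, impostor, t)
    print(f"threshold={t:.1f}  FAR={far:.2%}  FRR={frr:.2%}")
```

Raising the threshold drives FAR down and FRR up, and vice versa; where on that curve a deployment should sit is exactly the kind of parameter agreed with the client, not a single number a supplier could publish.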

3. It is Competitive Confidential Information

Accuracy of biometric technology can pose a strong competitive advantage, and suppliers often don’t want this information to be in the public domain (or more specifically, available to their competitors). Though the release of this information is often required, for example to prospective clients, it will almost always be under a non-disclosure agreement.

4. There is no Commercial Reason to do so

Suppliers, like anybody, don’t like having their time wasted. They’ll apply their resources to those who wish to engage with them seriously, and as mentioned above, they will have no problem releasing the information as required. A car salesman will spend his or her resources on the individual who wants to buy a car, and ignore the tyre kickers.

My Point

To argue the facts on only one side of a debate in order to follow a predefined agenda generally results in a loss of credibility. The irony is that people who do so often have valid concerns or issues that quite rightly should be aired and considered, but these end up falling by the wayside.

These are my own personal opinions, and not necessarily the opinions of any suppliers I may happen to work with.


UK BA Suspensions

The news last week that Brodie Clark and Graeme Kyle were suspended from the UK Border Agency following claims that identity checks were relaxed during busy periods at Heathrow raises some interesting questions.

Without passing any judgement, I understand in part both why there may have been pressure to do so, and the government’s decision to undertake suspensions. The latter is easier to address. Whatever concerns may have existed, freedom to exercise authority cannot fly in the face of direct ministerial guidance.

Having said that, I’m sure the reasons for doing so were well intentioned, and may have resulted from trying to meet conflicting requirements, namely ensuring:

• High security and appropriate passenger screening.
• Passenger throughput and avoidance of queues / delays.

While I’m not close to the environment in question, at first glance it appears that the former requirement may have been sacrificed to an extent to ensure the latter during busy periods.

Delays and queues, in a very real and commercial sense, cost money, and it is easy to quantify exactly how much. So it appears the dilemma faced was the age-old one: “What is an acceptable cost for increased security?”

It appears that at least some felt the benefit delivered did not warrant the disruption to existing processes. Unfortunately it also appears that the decision was taken without due process and consultation.

This situation highlights the importance of understanding the overall cost of any new security system (which invariably is significantly higher than the cost of procuring it), and the benefits it delivers. Invariably, any system will have an impact on existing workflows and, if carefully designed, should deliver an improvement in workflow in addition to an increase in security.


Biometric security: More bottom-line benefits, less James Bond

Carl Gohringer, December 03, 2003

Bond movies will always be associated with state-of-the-art technology, but few of the products he uses or encounters ever make it into the real world.

A car that turns into a submarine might be nice to have or an umbrella that transforms into a rope ladder useful on the odd occasion, but their uses in everyday life are limited.

There is one exception to the James Bond rule – biometrics – the technology that uses unique, physical geometry to identify and authenticate individuals.

According to market research group Frost & Sullivan, the biometrics market will reach a phenomenal $2.05 billion by 2006 (it was valued at just $93.4 million last year).

Concrete evidence for the growth in biometrics is starting to proliferate. The Home Office has announced that it is planning to install biometrics in 10 UK airports by the middle of next year to assist immigration control. The Nationwide Building Society is running extensive biometrics tests using iris scans in place of PINs at cash machines. Most recently, the Home Secretary announced that national ID cards – to be phased in over the next five years – will incorporate biometric data accessed via fingerprint recognition.

However, for most organisations, there are two understandable questions that need to be answered before biometric identification will reach the boardroom agenda:

• “When budgets are tight, what is the business case for investing in yet more security technology?”
• “Aren’t there fundamental drawbacks with biometric technology?”

The second issue is currently the source of most controversy in the media. For years films such as Minority Report have presented a rather superficial interpretation of biometrics. Eyes have been gouged out to gain access to computer networks and “fake” or severed fingers used to access a building.

The reality is far less dramatic. As the use of biometrics becomes more commonplace, people will realise that the risk is no greater than being forced to reveal a password or to hand over an access swipe card. Indeed, the risk is much less, thus representing an improvement over the existing solution already in place. In fact, one of the key benefits of biometrics is that even if an ‘identity’ such as an access card or password is stolen, without the correct authenticating biometric, access will be denied. The same applies to the sharing of passwords, helping businesses and organisations control who can and cannot access certain areas.

In addition to the physical risk, with biometrics comes the perceived threat of ‘Big Brother’, with concerns of data compilation and movement monitoring. While there is no escaping the fact that in the wrong hands this could be the case, in reality the threat is no greater than your bank recording the cashpoints you have accessed, mobile phones being used to track your whereabouts, a supermarket using loyalty cards to track your spending patterns or, in fact, a security company monitoring the comings and goings of staff via CCTV.

There is no doubting that to dispel the notion of a Big Brother state an education programme is needed to highlight the benefits of biometric security (e.g. the ability to protect a person’s identity, the near elimination of passport fraud and the ability to store important data without the threat of unauthorised access). However, the greatest support will be won once biometric security is fully integrated into daily processes, whether logging on to the network at work or withdrawing cash without the threat of skimming from a cash machine.

The business case for biometrics, once explained, clearly demonstrates three primary reasons why a business should adopt biometrics:

• To improve an organisation’s security by providing positive identification of individuals accessing your premises and networks
• To save large sums of money by eliminating user provisioning and password management
• To increase usability and convenience for staff

Robust security

What’s the point of spending a vast amount of money protecting and securing your networks if you still can’t positively identify who is accessing them? Obviously none, but this is exactly what most companies are currently doing.

Standard corporate user IDs and passwords used to govern physical and virtual access to a company and / or network tend to follow the same format. The most common is the first letter of the user’s first name and the whole of their surname, i.e. cgohringer for Carl Gohringer. The bottom line for a business is that IDs can generally be cracked with one or two educated guesses. So, assuming there is little or no security around IDs, a company’s security depends solely on the strength of passwords.

Again, if you know a little about the people whose passwords you are trying to guess, it often does not take much to figure them out. There are plenty of password-cracking utilities easily accessible on the Internet to help you out.

The question is how big an issue are ID/password breaches? It’s difficult to be precise, but we do know that 60-70% of hacking attacks have an internal source (i.e. are conducted by people who know something about each other and for whom ID/password theft would be relatively simple). And, to give you an idea of the financial impact, last year 39% of Fortune 500 companies suffered an electronic security breach at an average cost of $50,000.

Biometrics tackle this problem by providing a truly unique individual identifier. If access to either a building or network is controlled by a smartcard containing biometric templates, you can be sure that only the valid owner of the card will be able to access those resources. Access rights to different buildings and rooms can also be set – via the smartcard – for each individual; and with emails increasingly being used as legally binding documents, biometrics can guarantee identity by requiring the user to supply their fingerprint when digitally signing them.

Ant Allen, research director at analyst house Gartner Group, sums up the benefits of biometric human authentication: “It is unique to the individual, not something that somebody else decides will be your password, shared secret or token. Passwords can be learnt by various means and tokens can be stolen, but biometrics cannot.”

Increased convenience, less money wasted

The ID/password combination is also inconvenient for staff and financially inefficient for companies to manage.

Just think about the number of passwords you may have to remember in a given day: the password for your office network; the number to access voicemail on your phone; the ‘unlock’ code for your PDA and so on.

Inevitably, passwords are forgotten or compromised on a daily basis, which results in the IT department being pestered for a new code. Maintaining passwords is costly and, with this in mind, the ROI on biometrics is commonly realised in less than a year. IT staff are then freed up to focus on other, potentially revenue-generating issues.

In place of this often forgotten, easily hacked, regularly shared password, a biometric smartcard gives employees single-sign-on access to the corporate network, which eliminates the need to remember numerous passwords and PINs and removes the cost of managing them for the IT department.
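The sub-one-year payback claim can be sanity-checked with back-of-envelope arithmetic. Every figure below (headcount, reset volume, helpdesk cost, deployment cost) is an assumption for illustration, not survey data:

```python
# Illustrative ROI sketch for replacing helpdesk password resets with
# biometric single sign-on. All figures are assumptions, not vendor data.

employees = 1_000
resets_per_employee_per_year = 4   # assumed helpdesk password resets
cost_per_reset = 25.0              # assumed helpdesk cost per reset ($)
deployment_cost = 60_000.0         # assumed cards, readers, integration ($)

annual_saving = employees * resets_per_employee_per_year * cost_per_reset
payback_years = deployment_cost / annual_saving

print(f"Annual saving:  ${annual_saving:,.0f}")
print(f"Payback period: {payback_years:.2f} years")
```

Under these assumed figures the deployment pays for itself well inside a year; with different reset volumes or deployment costs the picture changes, which is why each organisation needs to run the numbers itself.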

The present and future of security

The benefits of biometrics can potentially run much deeper. For example, many public sector organisations see biometrics as a useful tool for improving customer service. In a hospital environment, facial recognition can identify a patient on arrival and ensure their medical records are ready for when they arrive at reception, enabling them to be instantly directed to the appropriate ward.

However, the purpose of this piece is to examine the impact on the bottom line. In this respect, the case for biometrics is extremely powerful. Not only are they an essential tool to prevent your business losing large sums of money to cyber crime; on a day-to-day basis biometrics can dramatically reduce management and administration costs.

So next time you see James Bond or Tom Cruise battling biometrics in the movies, consider their potential for saving you money and giving your business robust insurance against the financial risk of hacking.


Occupy

While I understand the premise of the “Occupy” demonstrations, I can’t help but feel that they would be more effective if they were also able to propose a solution instead of simply voicing discontent with capitalism.


In the wake of the London riots, is the privacy versus security debate now all but dead?

Allevate Presenting at Biometrics 2011

Synopsis

Recent advances in the accuracy of face recognition are resulting in an explosion of its use, coupled with increasingly vociferous cries from privacy advocates. The benefits from the uses of this technology are clear. But does it enable even further and easier harvesting of private information about us as individuals, without our knowledge or consent? This presentation does not attempt to analyse the adherence of face recognition to the nuances of privacy legislation. Rather, it explores the emerging trends in the application of face recognition, from law enforcement and security / surveillance through to commercial applications, to enable each of us to form our own views on where the boundary between face recognition and privacy lies.

Article: Face Recognition: Improved Benefit? Or Erosion of Privacy?


Face Recognition: Improved Benefit? Or Erosion of Privacy?

A Surveillance Society?

I’m sat in Heathrow waiting for an early morning departure for a business trip. Sipping my coffee, I look casually around trying to spot the cameras. They’re cleverly hidden. Am I being watched? Doubtful. Am I being recorded? Almost certainly.

This is a daily fact of life for most Londoners. It’s widely known that our city is one of the most heavily recorded in the world; a fact that is consistently debated and often criticised. Yet for all the discussion, the fact remains. We don’t like it, but we accept it. Why? Personally, my true dislike is more of the necessity of this fact than of the fact itself.

Carol Midgley wrote an excellent opinion piece (The Times, Sat 27th August, 2011) entitled “I’ll pick Big Brother over a hoody every time”. I recommend a read. Though clearly biased, and seemingly designed to stoke the debate with anti-CCTV campaigners, her conclusion was simple: in the wake of the London riots, the privacy-versus-necessity debate over CCTV is now all but dead. Do I agree? Let me come back to this.

      Face Recognition and CCTV

      Enter Biometrics. Face recognition technology to be precise. This technology, along with the wider field of video analytics, is set to transform CCTV surveillance. Video analytics is arguably a nascent technology, but face recognition on the other hand is here. Ready to deploy. Now. A recent study by the US National Institute of Standards and Technology (NIST) demonstrated that the accuracy achieved by the first place vendor (NEC) can provide clear and measurable benefits to a range of applications, including surveillance.

      It seems that every new technology brings a realisation of new benefits and efficiencies, countered by a plethora of malicious uses of the technology by the less desirable elements of our global society, quickly followed by counter-measures and protections. This is a saga that we are all already familiar with in our daily lives. Examples range from the severe and extreme of nuclear medicine versus atomic weapons, through to online credit-card shopping versus financial identity theft. I’ve recently had a credit card used for over £3,500 of illegal transactions. Though this incident was highly inconvenient and disruptive to my life, I did not hesitate to accept a replacement card. Not to do so would have unacceptably disenfranchised me from modern society.

      Back to face recognition. It hasn’t taken long for business-minded technology companies to devise a whole range of new uses of this technology, all focussed on delivering bottom-line business benefit. Almost as quickly arrive the cries of the privacy advocates. I’ve been reading with interest the sudden explosion in mainstream news over the past few months highlighting new uses of face recognition, while very carefully considering the concerns vociferously raised by the technology’s opponents. A key fact often cited is that the technology is not 100% accurate. Even an excellent identification rate of 97% can produce a significant number of false and/or missed identifications in a large sample population.
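That base-rate arithmetic is easy to check for yourself. The sketch below uses purely illustrative numbers (the accuracy, error rate and population figures are my own assumptions, not measured results from NIST or any vendor) to show how even a small per-face error rate yields many false identifications once the screened population is large:

```python
# Illustrative assumptions only -- not real vendor performance figures.
true_match_rate = 0.97     # 97% of genuine watchlist hits correctly identified
false_match_rate = 0.001   # 0.1% chance an innocent face is wrongly flagged

population = 100_000       # faces screened per day at a busy transport hub
suspects_present = 10      # genuine watchlist subjects within that population

# Expected misses among genuine suspects, and expected false alarms
# among everyone else -- roughly 100 false alarms for 10 real suspects.
missed = suspects_present * (1 - true_match_rate)
false_alarms = (population - suspects_present) * false_match_rate

print(f"Expected missed identifications: {missed:.2f}")
print(f"Expected false identifications:  {false_alarms:.1f}")
```

The point is not the particular numbers but the shape of the result: when genuine suspects are rare, false alarms can outnumber true hits many times over, which is exactly the concern the technology’s opponents raise.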

      Let’s take a look at some examples.

      Public Safety and Policing

      While sat here in the terminal waiting for my flight, I’ve already grudgingly accepted that images of me sipping my coffee are almost undoubtedly being recorded. I may not be aware, however, that when I passed through security my photograph was taken. This wasn’t immediately obvious or openly advertised, but it happened. Shortly, my photograph will be taken again when I board my aircraft and compared to the photograph taken at security. International and domestic passengers share a common departure area, and this is done to ensure boarding cards aren’t swapped, thereby potentially enabling an international passenger to transit through to a domestic airport and bypass immigration controls. In a 1:1 verification such as this, false matches are very rare. If I’m a legitimate passenger, my only risk is that the two photographs fail to match, for which the worst-case scenario is inconvenience.

      Perhaps the borders agency is also comparing my photograph against a known watchlist of suspect individuals. This kind of deployment is usually used to enhance existing procedures, not replace them. The system will provide increased security, in turn further protecting my safety while flying. I’m OK with this. Of course, there is also the prospect of misidentifying benign travellers. Though unavoidable, as long as the number of false matches is kept sufficiently low to ensure the cost of dealing with these exceptions doesn’t obliterate the benefit realised from the system, it can be argued that the greater good justifies the inconvenience faced by the occasional innocent passenger while their true identity is verified.
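The difference between the 1:1 boarding-gate check and this 1:N watchlist search is worth making concrete. In a rough sketch (the per-comparison false-match rate and watchlist size below are illustrative assumptions, not operational figures), the chance of at least one false match compounds with the size of the watchlist:

```python
# Illustrative assumption: a per-comparison false-match rate of 0.01%.
fmr = 0.0001

# 1:1 verification (boarding photo vs security photo): a single comparison,
# so the false-match probability is just the per-comparison rate.
p_false_1to1 = fmr

# 1:N watchlist search: the probe face is compared against every entry,
# so the chance of at least one false match grows with gallery size N.
watchlist_size = 5_000
p_false_1toN = 1 - (1 - fmr) ** watchlist_size

print(f"1:1 false-match probability: {p_false_1to1:.4%}")
print(f"1:N false-match probability: {p_false_1toN:.1%}")  # ~39% per probe
```

This compounding is why the exception-handling cost matters so much more for watchlist search than for simple verification, and why the match threshold must be set far more conservatively in the 1:N case.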

      Upon my arrival at my destination, I may very well be offered the opportunity to use my new e-passport to speed through immigration at one of the many shiny automatic e-Gates springing into operation. In the early stages these definitely were a great benefit, allowing me to march past the long queues of travellers and expedite my passage through the airport. No complaint from me. As long as false matches are lower than what is achieved by a live border guard (which many studies suggest they are), then security should be improved. And false matches only apply to illegal passengers travelling on a false or stolen passport. Exceptions generated by valid travellers who do not match with their passport will generate some inconvenience by necessitating they speak to a live border guard. As e-gates become more commonplace, I predict I’ll just be queuing in front of an automatic barrier instead of a manned immigration booth. However, the efficiencies achieved should enable the border guards to concentrate on more intelligence-led activities, rather than simple rote inspection of passports, thereby increasing security and putting my taxes to more efficient use.

      As I move through the airport, or for that matter in any public location such as a stadium or railway station, law enforcement authorities may be using my captured image to search against a database of suspects. Does this trouble me? Let’s look at a couple of scenarios.

      I’m already being recorded. If I were to commit a crime, then it is likely that the video would be retrieved and officers would try to identify me. This is already happening and I doubt anybody would argue that this is an invasion of privacy. If face recognition technology can assist them with this arduous and tedious task, perhaps by automatically trying to match my face against databases of known offenders, and saving countless hours of police time, I’m all for it. Too bad for the criminal.

      (I was incensed by the meaningless violence and destruction demonstrated during the recent riots in London. Newspaper reports have indicated that the UK’s police will be examining CCTV footage for years to come in their efforts to bring the perpetrators to justice. I am absolutely in favour of anything that can be done to expedite this process and save police time.)

      But as a law-abiding citizen carrying on with my own business, how do I feel about having my face automatically captured and compared against a watchlist database of “individuals of interest”? There is potential to cause disruption to an individual’s life, or to place them under undue suspicion, if they are falsely identified. That my face is being actively processed rather than merely recorded gives me more cause to pause and consider.

      Having considered this, I am prepared to accept this use case, provided the technology operates at a sufficient level of accuracy to ensure that the chance of being misidentified while conducting my daily activities remains low. I also expect the technology to be deployed wisely in situations where there is demonstrable benefit to public safety, such as at transport hubs, large gatherings, public events or areas of critical national infrastructure.

      Most people already accept that the reality of the world today necessitates certain infringements on our liberties. The introduction of technology is a key tool in the fight against crime. No system is perfect, and the potential for an undesirable outcome should not always result in the abolition of that system. Few would argue, for example, for abolishing our judicial systems and closing our prisons to eliminate the possibility of a miscarriage of justice. Similarly, the benefits to public safety from face recognition are too great to ignore, though we must continuously strive to minimise false identifications.

      I agree with Ms. Midgley on this one.

      Commercial Applications

      Most of the criticism that I have been reading in the press over the past few months appears to be levelled at the widening application of face recognition in business-related or commercial applications, not at public safety.

      My flight is about to board, so let’s continue my journey through the terminal. As I saunter to my gate, my attention is caught by an impressive advertising display: a multi-plasma video wall. It was the amazing technology that caught my attention rather than the advert itself. Just as I’m about to glance away, the sunlit beach and blue ocean depicting the under-30s surfing holiday fades away, to be replaced by a two-for-one spectacle offer, followed by a distinguished gentleman telling me how easy it was for him to “wash that grey away”.

      As I self-consciously stroke the hair at my temples, I wonder: was this mere coincidence? Multiple vendors delivering solutions for advertising have announced technology that can count the number of people watching an advert at any given time, and even estimate their age, sex, race and dwell time. While providing invaluable information for the advertiser, it can also allow them to dynamically change the adverts in real time to more appropriately target the demographic of the current viewer(s). A recent report in the Los Angeles Times (21st August 2011) suggests that this is already widely deployed in Japan, and is being considered by the likes of Adidas and Kraft in the UK and the US.

      While this is not technically face recognition, it is still worth noting, as much of what I have been reading has been lumping the two technologies together. The key consideration here is that this form of technology is not actually identifying anybody, or extracting personally identifiable information. This doesn’t bother me in the least. Businesses have always tried to use whatever edge they can to more tightly tailor their message to their customers’ specific needs and wants. It may even benefit me by alerting me to more relevant products or services.

      What if, on the other hand, the advertiser had negotiated an arrangement with another organisation, for example a social networking site such as Facebook? If the advertiser supplied Facebook with an image of my face, along with information on which portion of the advert caught my attention, Facebook might be able to identify me from its database of photographs, enabling it to harvest valuable information about me. While I can see this would present a huge commercial advantage to them, and to whomever they chose to sell this information on to, I can only hope that the commercial damage from a backlash of incensed users would outweigh the gain.

      If I have some leisure time while on my business trip, there will doubtlessly be many activities at my destination to occupy me. I may have a quiet drink in a bar, or perhaps take a punt at the tables in the local casino. And yes, face recognition technology is being used even in these places. It’s been reported that bars and clubs are using gender- and age-distinguishing cameras to count people in and out, and make this information available over mobile phone apps. The youth of today can now determine before they set out which establishment holds their best chance of success. While I am well beyond having any use for this particular application, I can see how it may catch on in certain demographics of society. Any reputable establishment should clearly display that such technology is in use, and should make no attempt to harvest or make available any personally identifying information. But are all establishments reputable?

      More concerning to me is the increasing use of face recognition by social network sites. Both Google and Facebook are actively exploring uses. Automatic tagging of photographs being uploaded to Facebook is already occurring. Being inadvertently photographed while on my business trip and automatically tagged when the photographer uploads it does not appeal to me, no matter how innocuous my activities at the time may happen to be.

      Recent studies published by Carnegie Mellon University demonstrating the potential to use large databases of photographs on social networking sites to glean confidential information should also be a cause for concern. The younger generation of today appear more and more willing to share intimate and private details online, without any thought (in my view) of the longer-term or wider ramifications of doing so. This is an issue that is much larger than face recognition, but I can understand the worry that face recognition can help to tie it all together.

      Improved Benefit or Erosion of Privacy?

      When I first entered the biometrics field, I was attracted by the “neatness” factor of the technology, and of the potential for it to deliver benefits to society. I have to admit I paid scant attention to privacy concerns. Over time, as the voices of privacy advocates grew louder and more numerous, I started to listen and then to actively seek out their opinions. I am still a firm believer in this amazing technology, and endeavour to play an active role in its application for the positive transformation of society. However, I am grateful for the messages and insight provided by these campaigners; they have definitely transformed my thinking, and have made me consider much more carefully the application of biometrics.

      From a law-enforcement and public safety viewpoint, face recognition holds great potential to increase the security of our society. By its very nature, our government holds power over us and our society, which is why it is our responsibility to choose our governments carefully. We have no choice but to hold a certain level of trust and faith in our law-enforcement organisations. Our society today contains more checks and balances than ever before, and our politicians are more in tune with and responsive to the public mood. If this faith breaks down, then so does society.

      In commercial applications, I also believe there is the potential for significant benefit to be realised from face recognition to both the consumer and businesses, but I am more concerned about the potential for abuse. To a certain level, the market will decide if the application of the technology is appropriate or not. Ventures people don’t like will fail. However we cannot always rely on market forces, and it is our collective responsibility to speak out when the need arises. Though it often lags behind, over time legislation keeps up with the advancement of technology. As our society changes with technical innovation, so too will the rules we collectively decide to govern our society. We will settle into an equilibrium reflecting the needs and views of all. But there will be a learning curve, and we will make mistakes along the way. That’s how society works.

      So, does face recognition represent an improved benefit, or an erosion of privacy? I suggest it has the potential to be both. It is everybody’s responsibility to ensure the benefit is worth the price paid. I absolutely believe we must have both the proponents of this technology and the advocates of privacy; we all have a role to play in deciding how face recognition will be applied over time.

      The abolition of either the technology itself or the voices of those monitoring its use and advocating for our privacy would be to the detriment of society.

      Final Thought

      Just before I board my flight, let me leave you with this final thought. Imagine for a moment that a loved one of yours has come to harm. The authorities can use face recognition to aid in their recovery, and/or to ensure that justice is done. Are you still concerned with privacy?