Key Points

  • This article describes the advances in face mining technologies and aligns these with Australian data privacy law, explaining the division of facial recognition usage into five generations. The article analyses the risks and gaps in data privacy laws, particularly within the Australian context, and offers recommendations for reform.

  • Australian law regulates biometric data, including facial biometric data (FBD), as sensitive information. However, the legislation and regulatory focus are primarily on what may be termed the second-generation use of FBD; that is, for automated verification or identification.

  • Australia is undergoing a reform of its data privacy law, which may impact the regulation of FBD. The reform proposals include redefining personal information to include inferred information and creating a non-exhaustive list of personal information types. The reform also suggests updating the categories of sensitive information and considering the regulation of biometric technologies.

  • To address the challenges surrounding biometric data privacy, several solutions are proposed. These include legislation to cover all generations of facial recognition usage and future technologies. Further, a provision should be included to prevent tech companies from circumventing the law by obtaining user consent. Guidelines on data storage should be established, including a maximum retention period, encryption, and government audits for compliance. Non-compliance should result in penalties comparable to the European Union General Data Protection Regulation.

Introduction

Many things come to mind when people hear ‘facial recognition’, including but not limited to using a facial image to verify age when purchasing alcohol at self-checkouts, unlocking a mobile phone, using a security camera to flag possible shoplifters,1 tagging people on social media platforms,2 and passing through an airport’s e-passport gate. While these everyday activities all involve facial recognition biometrics, each differs significantly in how the data are used, stored, handled, and controlled, and in the implications of that processing.

To claim that facial biometric data (FBD) may be abused and constitute a data privacy concern is merely a truism. Indeed, concerns about this matter were discussed as early as the very first issue of this journal.3 Further, the fact that data privacy laws around the world generally apply to, and provide rules for, the processing of facial biometrics is well established. However, it is equally well established that this regulation contains grey zones, and it is frequently observed that the current regulation of FBD is inadequate.4

This article contributes to the discourse on the regulation of FBD by proposing a novel classification of the different uses of facial biometric data, which we divide into five ‘generations of use’. Having provided an overview of facial biometrics and the first three generations of facial biometric data use, we discuss and analyse the fourth- and fifth-generation use of FBD in more detail. This is because we see the use of these later generations as insufficiently explored and particularly invasive. In doing so, we point to some of the key risks that such use poses to data privacy.

We then seek to assess how well prepared data privacy law is to address these risks, through the lens of Australia’s federal Privacy Act 1988 (Cth)—Australia’s main data privacy law. At the time of writing, Australia is in the process of reforming its Commonwealth data privacy law. This makes focusing on Australian law particularly timely and compelling.

The ‘gap analysis’ is followed by a set of recommendations. The ultimate goal of this article is to spark, guide, and facilitate law reform in relation to what we see as one of the greatest threats to data privacy the world is currently facing.

FBD—a brief overview

The claim that the face is a mirror of the soul is more than a well-known cliché. The face has always revealed—with a reasonable degree of accuracy—matters such as age, gender, and ethnicity; that is, data that traditionally are classed as ‘personal’ under data privacy laws around the world. These types of data are ‘static’ in that they generally do not change quickly, and in the case of ethnicity not at all.

Our faces may also reveal other types of data. For example, our facial expressions may say quite a lot about how we are feeling; whether we are sad, happy, surprised, concerned, angry, irritated, and so on.5 These types of data are, for obvious reasons, highly ‘dynamic’. In particular, such dynamic data may reveal more than we as individuals intend or wish to reveal.

Apart from utilizing some form of face-covering, there is not much we as individuals can do to prevent our faces from revealing these types of data. Furthermore, our sensitive FBD are costly and difficult to change. When an individual’s password or credit card information falls into the hands of third parties, the individual may quickly change them. However, if a third party gains access to an individual’s photo or video, the individual is unlikely to be willing, or able, to change their face, making the resulting data privacy breach permanent and irreversible.

However, until relatively recently, the data revealed by our faces were arguably of comparatively limited concern from a data privacy perspective. The use of such data was in most cases: (i) location-dependent and (ii) limited in time. Only persons physically around us at a given moment could collect data from our faces. In addition, any permanent collection via photography or video was then only small scale.

This is all changing. The collection of FBD—driven by behavioural changes, technological developments, security concerns, and commercial incentives—is rapidly increasing. The pervasiveness of FBD collection for security purposes has gained a considerable degree of attention.6 In this article, we consider such collection undertaken for commercial purposes and discuss a recent high-profile matter from Australia.

The explosion of social media, and the rate at which photos showing people’s faces are being posted online, have created serious concerns. Social networking has become part of everyday life, as over 5.11 billion individuals have access to smartphones for accessing the internet and social media sites.7 According to Karki,8 more than 1.4 trillion images exist online. Additionally, 3.2 billion photographs are uploaded daily, creating a vast pool of personal photos online.9 Dutton and others10 reported that online behaviour has shifted from activities such as listening to music towards sharing and posting personal photos. Daily Facebook picture uploads have reached 350 million, while Snapchat records an average of as many as 700 million photo uploads per day.11 Over 1 billion videos and 4.5 billion photos are shared on WhatsApp alone, every day.12

All this leads us to believe that the regulation of the use of FBD has never been more important. Yet, the current discourse is hindered by a lack of nuance. The use of FBD is largely treated as a single matter, when in fact it is far more varied than that. In addition, our current regulation fails to address the most recent developments in how FBD are utilized.

The five generations of FBD usage

FBD use may conveniently be divided into five different generations. The purpose of this categorization is to aid in identifying gaps in the current regulation of FBD use.

First-generation facial recognition usage: identification (who am I)

In answering the question, ‘Who am I?’, the first generation of face biometrics signifies the most ‘novel’ period of its evolution. Few biometric identification systems existed during this period, largely owing to the limited scope of technological research in the area.13 The first generation of facial recognition technology (FRT), which dates back to the mid-20th century (1963), aimed to provide a solid computational understanding of facial images that could later be integrated with other technologies.14 However, the complexity and diversity of facial structure slowed development in this era, as researchers took time to understand how to encode and link different mathematically represented facial features.15 In this regard, the first generation of face biometrics can be best understood as an identification phase in which machines attempted to encode faces, making it easier to link them with other computational technologies.

In 1963, Woodrow W Bledsoe presented the first report on automated facial recognition systems. To demonstrate the technology’s promise, Bledsoe outlined challenges, and possible solutions, relating to the wide variability of facial images.16 In one series of images, he showed how two different people could wrongly be perceived as similar, owing to the rotational tilts of their heads. Later solutions to the issue of variability included eigenfaces, which utilized statistical principal component analysis (PCA) to encode variation in a smaller number of dimensions.17 First-generation FBD use, commonly referred to as facial recognition, is the initial usage of face biometrics for the purpose of linking a face with an existing database of identifiable persons.
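
To make the eigenfaces idea concrete, the following is a minimal sketch assuming synthetic data in place of real face images: PCA learns a small set of ‘eigenfaces’, each face is encoded as a short vector of coefficients, and first-generation identification becomes a nearest-neighbour search over those vectors. The image size, number of components, and gallery are illustrative assumptions, not details taken from the cited studies.

```python
# Eigenfaces sketch (illustrative only): PCA compresses face images into a
# low-dimensional space; identification is a nearest-neighbour search there.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
gallery = rng.random((100, 64 * 64))   # 100 enrolled faces, flattened 64x64 images
probe = rng.random((1, 64 * 64))       # an unidentified face image

pca = PCA(n_components=50)             # keep 50 'eigenfaces'
gallery_codes = pca.fit_transform(gallery)
probe_code = pca.transform(probe)

# First-generation use: link the probe face to the closest enrolled identity.
distances = np.linalg.norm(gallery_codes - probe_code, axis=1)
print("Closest gallery identity:", int(np.argmin(distances)))
```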

It is worth noting that, although it commenced in an early generation, work on identifying faces has not stopped. Facial recognition technology currently achieves accuracy of up to 99.97 per cent.18 In 2019, this technique overtook all other biometrics, such as iris and fingerprint recognition.19

Second-generation facial recognition usage: authentication (am I who I claim to be)

Although the start of the 21st century was marked by increased knowledge of machine-learning systems, facial recognition systems were applied mainly to authentication, making second-generation FBD use about answering the question, ‘Am I who I say I am?’ The success of first-generation facial biometric technologies eventually set the path for further integration with more complex technologies. More subtly, research models were increasingly designed to mimic human-like intelligence.20 From 1990, research also built on the first generation’s success in improving learning and facial feature extraction. The widespread uptake of the internet, and in particular social media, also propelled advances in this era, particularly through subsequent developments that allowed people to upload unlimited images to already growing databases.21 Unlike first-generation FBD use, which adopted shallow learning, second-generation FBD use utilizes advanced ‘deep learning’.

Even though second-generation FBD use differs significantly from the first, most advancements involved modification of the initial models. These models applied deep learning to link the input vector with a pre-referenced image in a database.22 Such implementations are notable during the early 2000s, with evidence from Havran and others,23 who proposed independent component analysis as an improvement on the traditional PCA. The growth of machine and deep learning in second-generation FBD use also influenced FBD authentication.24 For example, Zhang and others25 proposed a deep Convolutional Neural Network integrating both the LNet26 and ANet27 machine learning models. Such iterative learning models allowed facial biometric technologies to manage variations according to the initially learned face features.28 Although second-generation FBD use was invasive, it could not predict the biological and neuropsychological features of the facial biometric data’s owner, opening the way for third-generation facial biometric data use. Second-generation applications include unlocking doors and opening devices, using one-to-one (verification) or one-to-many (identification) comparisons.
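
As a rough illustration of the one-to-one and one-to-many comparisons mentioned above, the sketch below assumes that a deep model has already produced fixed-length face embeddings (random vectors stand in for them here) and shows that verification and identification differ only in what the presented face is compared against. The embedding size and the similarity threshold are assumptions made purely for illustration.

```python
# Illustrative sketch (not any specific vendor's system): second-generation
# use compares deep-learning face embeddings produced by an upstream model.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
enrolled = {name: rng.normal(size=128) for name in ["alice", "bob", "carol"]}
probe = rng.normal(size=128)            # embedding of the presented face
THRESHOLD = 0.6                         # assumed decision threshold

# One-to-one (verification): am I who I claim to be?
claimed = "alice"
verified = cosine_similarity(probe, enrolled[claimed]) >= THRESHOLD
print("verified as", claimed, ":", verified)

# One-to-many (identification): who am I?
best = max(enrolled, key=lambda name: cosine_similarity(probe, enrolled[name]))
print("best match:", best)
```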

Third-generation facial recognition usage: profiling (what am I profiled to be)

The third-generation facial biometric iteration has been described as face-to-data (‘face2data’, or F2D)29; essentially, it involves using FBD to look up, and link to, additional data. Thus, third-generation FBD use is about answering the question, ‘what information may be linked to a person, based on their face?’. More specifically, F2D refers to at least partially automated processes for accessing personal information about a person, from an external source, based on an image of that person’s face.

Typically, F2D depends on three relatively recent developments. The first is publicly available off-the-shelf face recognition software. The second is cloud computing, which provides sufficient computing power. The third is social networking sites, which facilitate the linking of faces to names or, alternatively, the linking of captured FBD to an image available online.

As outlined elsewhere, the F2D process can be broken down into six steps:

First, an image, or images, must be captured of the data subject. Second, face-recognition software must be acquired. Third, data must be acquired that link an image of the data subject to a name (such as the publicly available Facebook profiles of many Facebook users). Fourth, sufficient data processing power must be acquired (typically via cloud computing). Fifth, the data processing power must be utilised to allow the face-recognition software to match the collected image with the ‘identifying images’ collected e.g. from Facebook. Finally, having identified the data subject by name, that name can be used to search for information about the data subject—the face has been turned into personal data! While all of these steps must take place, the order in which they take place may obviously vary somewhat from case to case. Importantly, these steps may be automated in the sense of e.g. an app allowing the user to capture an image that then is automatically used in the manner outlined above.30
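
Purely to illustrate the shape of this pipeline, the short sketch below strings the six steps together in code. Every function in it is a hypothetical placeholder rather than a real capture, matching, or search API; the point is simply how, once the steps are chained, a captured face becomes a key that unlocks further personal data.

```python
# A hedged sketch of the six F2D steps as a single pipeline. Every function
# here is a hypothetical placeholder (there is no real capture or matching
# behind it); the point is the chain that turns a face into personal data.
def capture_image(camera_id: str) -> str:
    return f"image-from-{camera_id}"                  # step 1: capture an image of the data subject

def match_face(image: str, labelled_gallery: dict) -> str:
    # steps 2-5: face-recognition software, a name-labelled image set
    # (eg public profiles) and sufficient computing power yield a name
    return next(iter(labelled_gallery))               # placeholder "match"

def search_by_name(name: str) -> list:
    return [f"public record mentioning {name}"]       # step 6: the name unlocks further data

gallery = {"Jane Citizen": "profile-photo"}
print(search_by_name(match_face(capture_image("cam-1"), gallery)))
```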

Acquisti and others31 carried out a pioneering experiment and demonstrated that third parties could identify users using publicly available FBD. In the study, the authors matched users on dating sites with their respective Facebook profiles using FBD. Google’s Application Programming Interface was used to search pictures on Facebook, and, using the PittPatt recognizer, the authors found that 1 out of every 10 dating-site users could be identified using FBD from Facebook profiles. An example of third-generation FBD use is the Swedish start-up Polar Rose’s ‘augmented ID’ app, which allowed users to retrieve a person’s social media information by pointing a camera at that person’s face, though only if the person had opted into the service and uploaded a photo to it themselves.32

While the described third-generation use of FBD has been debated in academic literature for more than 10 years, the law has yet to catch up, as discussed below.

Fourth-generation facial recognition usage: correlation (what am I assumed to be)

Previous generations set the pace for future artificial intelligence (AI) integration within face recognition technologies. Unlike first-, second-, and third-generation FBD use, fourth-generation FBD use attempts to confirm the cliché that the face is a mirror of the soul. It answers the question ‘What am I?’, implying that FBD could be used to link body and self; in effect, ‘show me your body, and I will show you yourself’. In this sense, the fourth-generation use could be characterized as Face-Is-Data (FID).

Fourth-generation FBD use signifies an era of complex machine learning that deviates from previous supervised models. Instead, these models apply machine intelligence to predict soft-biometric, biological, and neuropsychological information from the data captured by face recognition technologies.33

Advanced AI now uses data mined from faces to predict genetic, biological, and neuropsychological features with claims of remarkable accuracy. Examples of predicted features are occupation,34 attractiveness, humour, perfectionism, self-reliance, openness to change, warmth, reasoning, emotional stability, dominance, rule consciousness, liveliness, sensitivity, vigilance, abstractedness, privateness, apprehension, social boldness, sleep disorder,35 ethnicity,36 sexual orientation,37 social relations,38 kinship,39 body mass index,40 mental health disorder,41 openness, conscientiousness, extraversion, agreeableness, neuroticism,42 hypertension,43 depression, anxiety, stress,44 gender, age,45 and political orientation.46 Fourth-generation FBD use, by allowing the prediction of neuropsychological and biological features, elevates data privacy threats to heights never anticipated. According to Haamer and others,47 fourth-generation FBD use reveals private intimate characteristics, making it more invasive than its predecessors.
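
Mechanically, such predictions are usually framed as supervised classification over facial features or embeddings. The following is a minimal, purely illustrative sketch of that framing, using random vectors and random labels in place of real data; it makes no claim about whether any particular trait can validly be inferred in this way.

```python
# Purely illustrative sketch of fourth-generation 'correlation': a classifier
# maps face embeddings to a claimed trait label. Embeddings and labels here
# are random stand-ins, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
embeddings = rng.normal(size=(500, 128))        # face embeddings from some upstream model
labels = rng.integers(0, 2, size=500)           # a binary "trait" label (assumed training data)

clf = LogisticRegression(max_iter=1000).fit(embeddings, labels)
new_face = rng.normal(size=(1, 128))
print("predicted trait probability:", clf.predict_proba(new_face)[0, 1])
```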

Furthermore, taking a person’s likeness and using it, through face biometrics, as data for creating and training algorithms may constitute a dual data privacy breach: first, the non-consensual use of one’s personal profile (genetic, biological, psychological); secondly, the non-consensual use of one’s likeness (face and personality profile), replicated in part or in full, in likeness or non-likeness.

While social media activity leaves a tremendous digital data footprint (in FBD), with implications for societal and personal sovereignty over data privacy, it also opens opportunities, as reflected in the award of new patents. For instance, in February 2022, an Ivy League university located in the United States was awarded a patent for a machine learning methodology capable of manipulating read images to predict different social traits with remarkable accuracy for random images.48 Further, the intelligence and law enforcement communities adopt FBD use (including fourth-generation use) for identifying suspects and victims, as seen in the case of Clearview AI (discussed below). Health care is another area in which fourth-generation FBD use opens opportunities. For instance, Martin and others49 ascertained that facial emotion recognition deficits are common among people with schizophrenia, suggesting the feasibility of fourth-generation FBD use in supporting the diagnosis of schizophrenia.

Having discussed the opportunities around fourth-generation FBD use, it is crucial to understand how FBD use has evolved and whether current data privacy laws address the advancements in facial biometric technology (FBT). The following subsection discusses the risks associated with fourth-generation FBD use in more detail.

Fourth-generation FBD use: associated risks

Despite FBD finding new uses in different areas, today FBD leaves a tremendous digital, biological, and neuropsychological data footprint with implications for society and personal privacy.50 Fourth-generation FBD use goes beyond identification and authentication: FBD are used to predict the neuropsychological and biological features of the people concerned. Gurnani and others51 reported that FBD were used to reveal the private intimate characteristics of a person with remarkable accuracy. Wang and Kosinski52 affirmed that, by using FBD, deep neural networks can reveal sexual orientation (information most people do not wish to publish) with unprecedented precision.

The rapidly changing technology around facial biometrics means that existing data privacy laws are not keeping pace with data privacy concerns.53 Considering that existing FBD laws do not adequately address third-generation FBD use, fourth-generation FBD use could risk undermining data privacy to such a degree that it becomes impossible to exercise the right to data privacy. The Economist, in its 2017 article titled ‘Nowhere to Hide…’, explored technology’s role in data privacy and reported that deep neural networks could infer aspects of an individual’s self from their body, for instance rule consciousness from their face. Simply put, an individual’s facial image could be used to estimate personality traits such as rule consciousness. With fourth-generation FBD use, AI could be used to target individuals with certain neuropsychological or biological features, opening opportunities for discrimination, cyberbullying, and more severe outcomes in jurisdictions where sexual orientation or political affiliation could result in criminal charges.54

The picture portrayed above points to numerous obvious data privacy risks. Importantly, these risks are serious regardless of whether fourth-generation FBD use is accurate. After all, inaccurate data may also be personal data; thus, the use of fourth-generation FBD is the foundation of the concerns, not the accuracy of that use. Taking account of the risks associated with inaccurate fourth-generation FBD use is essential for understanding the full picture.

Current data privacy laws do not adequately address FBD collection, and this makes it possible for third- and fourth-generation FBD users to misuse personal data. Although Personally Identifiable Information is a standard for personal data privacy,55 there is no clearly established relationship between it and FBD. As most social media platforms are not anonymous, users must themselves take care of their Private Sensitive Information to safeguard their data privacy, and this depends on each user’s perception of which information falls into that category.

Data privacy policies remain woefully underdeveloped against rapidly advancing FBD use.56 Tightening data privacy policies around first- and second-generation FBD use is essential to addressing the data privacy challenges of later generations of FBD.

Fifth-generation facial recognition usage: replication (what is my data being used to create)

In recent years, there has been a growing focus on utilizing human face biometrics as a valuable source of data for the development and creation of AI human likeness. This section introduces the concept of Face-to-Machine (F2M) and describes the potential of leveraging state-of-the-art technology to generate algorithms that can replicate human features and behaviours. While these advances hold promise, they also raise important concerns regarding data privacy.

In the 1927 German expressionist science-fiction film Metropolis,57 based on Thea von Harbou’s novel of the same name, a captivating storyline is presented in which the protagonist, Maria, falls victim to a sinister scheme. She is kidnapped and her facial features are stolen to craft a robot in her likeness, known as the Maschinenmensch, one of the earliest depictions of robots in cinema. The design of the Maschinenmensch, influenced by the actress Brigitte Helm’s appearance as Maria, left a lasting impact on later robotic designs. For instance, Ralph McQuarrie’s design of C-3PO, the Maschinenmensch’s male counterpart in the iconic film Star Wars,58 drew on Art Deco styling inspired by the Maschinenmensch.

In a modern reflection of early 20th-century science fiction, face biometrics leverage fourth-generation techniques so that faces can be used as data by AI, providing the training material for algorithms that create models replicating human likeness, whether in mixed-reality digitalized form, such as avatars (the current state-of-the-art technological capability), or in physically bio-engineered form (a future, emerging technological capability). This may constitute a data privacy breach through the non-consensual use of one’s personal profile (genetic, biological, psychological) as data for creating and training algorithms.

We are on the bio-engineering technological cusp of making real and artificially generated humans indistinguishable. The unauthorized usage of one’s likeness (face and personality profile replication) could result in the creation of models that replicate human likeness in mixed-reality digitalized form, such as avatars, the current state-of-the-art technological capability. As to the bio-engineered form, we are not yet at the stage of bio-engineering human beings, as per Blade Runner.59 However, the relationship between facial features and DNA has now been properly established, meaning that, in theory, an individual’s facial features could be deconstructed into their component DNA60 and subsequently reconstructed. While the latter is currently science fiction, we have entered the digitalized human era with the development of the metaverse, where human likeness replication, consented to or not, is already taking place.

To understand the importance of this, Harari61 discusses three major existential challenges which humanity is facing in the 21st century: nuclear war, ecological collapse, and technological disruption. While the first two challenges are well-known, he argues that the less familiar threat is the potential disruption caused by technology. Harari warns about the potential for unprecedented global inequality and the rise of digital dictatorships that can monitor and manipulate individuals on a large scale.

Harari emphasizes the equation of ‘Biological knowledge × Computing power × Data = Ability to hack humans’, indicating the ability to understand and manipulate individuals on a deep level. He discusses the risks associated with such power falling into the wrong hands and the potential loss of human freedom and control over our own lives. Harari’s speech also explores the impact of technology on decision making, where algorithms already play a significant role in various aspects of society. He argues that if sufficient biological knowledge, computing power, and data are available, the body, the brain, and life can be hacked, allowing for a deeper understanding of individuals which surpasses their self-perception. Therefore, personality types, political views, sexual preferences, and mental vulnerabilities can be comprehended to an extent that surpasses the individual’s own awareness.

Taking the steps to address these data privacy concerns of the future is a task we must engage with now.

Australian data privacy law and the risks of third-, fourth-, and fifth-generation FBD use

Australian law clearly regulates biometric data including FBD. Indeed, biometric information is classed as ‘sensitive information’ under the Privacy Act 1988 (Cth) and thus afforded extra protection. Sensitive information may only be collected with consent unless an exception applies, and more stringent requirements apply to its use or disclosure.

However, the mindset of the legislator in relation to biometric information is clearly bound to the second-generation use of FBD. As far as our discussion is concerned, the Privacy Act’s definition of sensitive information only refers to ‘biometric information that is to be used for the purpose of automated biometric verification or biometric identification’ and ‘biometric templates’62; the latter being a digital representation of biometric samples typically stored in a biometric database.

Similarly, and unsurprisingly given the text of the legislation, the Office of the Australian Information Commissioner (the key regulator) is also focused on the second-generation use of FBD. To see that this is so, one need only consider this passage from the brief webpage addressing ‘Biometric scanning’:

An organisation or agency may only scan your biometric information as a way to identify you or as part of an automated biometric verification system, if the law authorises or requires them to collect it or it’s necessary to prevent a serious threat to the life, health or safety of any individual.63 [emphasis added]

While the Privacy Act 1988 fails to provide a definition of biometric information, the Office of the Australian Information Commissioner’s webpage on ‘Biometric scanning’ states that biometric information scanning occurs ‘when an organization or agency takes an electronic copy of your biometric information, which includes any features of your: face, fingerprints, iris, palm, signature, voice’.64 First, it may be noted that this list seems incomplete as it overlooks, for example, gait.65 Further, and more on-point for our discussion, this definition does not engage with the third-, fourth-, or fifth-generation use of FBD. The result is that, while FBD falling within the second-generation use is classed as sensitive information, FBD falling within the much more sensitive third- and fourth-generation use is not. Perversely then, the Privacy Act 1988 (Cth) provides less protection for FBD falling within the highly intrusive third- and fourth-generation use than it does for the still serious, but comparatively less intrusive, second-generation use.

While third- and fourth-generation use of FBD falls outside the definition of sensitive information, it still falls within the definition of ‘personal information’:

personal information means information or an opinion about an identified individual, or an individual who is reasonably identifiable:

(a) whether the information or opinion is true or not; and

(b) whether the information or opinion is recorded in a material form or not.66

As a result, such FBD enjoys some protection under the Privacy Act—namely that bestowed upon all information classed as personal. For example, limitations are placed on the collection, use, disclosure, and transborder transfer of such data. This protection is obviously woefully inadequate given the extreme sensitivity of third- and fourth-generation use of FBD. The need for reform is obvious. To make matters worse, Australian law still maintains the ‘small business’ exemption, meaning that most Australian businesses do not need to consider the Privacy Act at all. Combined with the lack of any statutory, or common law, cause of action for serious invasions of privacy, this places Australians at serious risk.

Recent cases have also stress-tested the current legislation. For example, in the Clearview AI case,67 the Commissioner concluded that Clearview AI (Clearview) violated the privacy rights of Australians by breaching its obligations under the Australian Privacy Principles (APPs). This violation occurred through Clearview’s practice of collecting sensitive facial images from publicly available websites, including social media platforms, without obtaining valid consent from the individuals involved.

The Clearview database contained a vast collection of over 3 billion images. Additionally, the Commissioner’s investigation, conducted under section 40(2) of the Privacy Act, revealed that Clearview had not taken adequate measures to establish practices, procedures, and systems that would ensure compliance with the APPs.

Furthermore, another investigation by the Commissioner found that, between 15 June 2020 and 24 August 2021, 7-Eleven Stores Pty Ltd interfered with the privacy of the individuals whose facial images and faceprints it collected, within the meaning of the Privacy Act 1988 (Cth).68 This interference occurred through the collection of individuals’ sensitive information without their consent, in circumstances where the collection was not reasonably necessary for the respondent’s functions and activities. Such collection breached Australian Privacy Principle (APP) 3.3.

Finally, reasonable steps were not taken by the respondent to notify individuals about the collection process, including the circumstances and purposes of the information collection. This failure to provide adequate notification was in breach of APP 5.

Ongoing data privacy law reform

At the time of writing, Australia is in the process of reforming its Commonwealth data privacy law. Importantly, several aspects of the law reform initiative may impact the regulation of FBD, and the reform documents hint at a willingness also to reconsider the most fundamental aspects of the Privacy Act 1988 (Cth). Released on 16 February 2023, the Attorney-General’s Privacy Act Review Report—which builds upon two years of consultation and was preceded by a 2020 Issues Paper69 and a 2021 Discussion Paper70—notes, for example:

The widespread adoption of digital technology and the opportunities it has created for large scale collection, use and disclosure of information arising out of individuals’ communicating with each other, transacting, consuming, creating content and engaging in all manner of daily activities in digital contexts, has generated questions about how the definition [of personal information] applies to information in our economy today.71

For our purposes, the most significant reform proposal regarding the definition of ‘personal information’ relates to inferred information about an individual. As noted above, it may be said that fourth-generation use involves information being inferred from facial attributes. It is, thus, a form of inferred information. The Privacy Act Review Report proposes to amend ‘the definition of ‘collects’ to make clear that inferred information is collected at the point the inference is made’.72 If adopted, this reform proposal would provide a valuable clarification.

Furthermore, the Privacy Act Review Report proposes to include a non-exhaustive list of the types of information that can be personal information. For our purposes, it is relevant to note that, under the proposal, that list would include ‘inferred information, including predictions of behaviour or preferences, and profiles generated from aggregated information’ and ‘one or more features specific to the physical, physiological, genetic, mental, behavioural, economic, cultural or social identity or characteristics of a person’.73

The reform proposal also envisages updating the existing categories of sensitive information. However, while it is noted that biometric information that is not used for the purpose of automated biometric verification or biometric identification can nevertheless carry risks of harm,74 the only directly relevant proposal is that of clarifying ‘that sensitive information can be inferred from information that is not sensitive information’.75

Data privacy law reform: potential solutions

In addition to the formal law reform discussion noted above, some interesting proposals have already been published. Specifically targeted towards Facial Recognition Technology (FRT), the Facial Recognition Technology: Towards a Model Law report76 by researchers at the University of Technology Sydney suggests a number of reforms to the current legislation. The Model Law takes a risk-based approach to FRT while prioritizing human rights. It requires developers and users of FRT applications to assess the human rights risks associated with their specific application. Factors such as functionality, deployment context, accuracy, and impact on decision making are considered. This assessment, called a Facial Recognition Impact Assessment (FRIA), assigns a risk rating to the FRT application, ranging from base-level to high risk. The rating can be challenged by the public and the regulator.
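
As a purely hypothetical illustration of what a factor-based FRIA rating might look like in practice, the sketch below maps the kinds of factors the report mentions (functionality, deployment context, impact on decision making, accuracy) to a rating. The specific factors, weights, thresholds, and the intermediate tier name are invented for illustration and are not drawn from the Model Law.

```python
# Hypothetical sketch only: a factor-based risk rating for an FRT application.
# Factors, weights, thresholds and tier names are invented for illustration.
def fria_rating(one_to_many: bool, public_space: bool,
                affects_legal_rights: bool, accuracy: float) -> str:
    score = 0
    score += 2 if one_to_many else 0            # identification is riskier than verification
    score += 2 if public_space else 0           # collection in publicly accessible spaces
    score += 3 if affects_legal_rights else 0   # impact on decision making about individuals
    score += 1 if accuracy < 0.99 else 0        # poor accuracy adds risk of misidentification
    return "high risk" if score >= 5 else "elevated" if score >= 2 else "base-level"

print(fria_rating(one_to_many=True, public_space=True,
                  affects_legal_rights=True, accuracy=0.97))   # -> "high risk"
```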

The Model Law imposes legal requirements, limitations, and prohibitions based on the assessed risk. Procedural requirements include registering and publicly disclosing FRIAs for transparency. Substantive requirements extend data privacy law obligations to FRT applications. A new FRT technical standard with the force of law is proposed.

High-risk FRT applications are generally prohibited, except in cases authorized by the regulator, genuine research, and specific legal rules for law enforcement and national security agencies, including a ‘face warrant’ scheme.

The report recommends empowering and providing resources to a suitable regulator, such as the Office of the Australian Information Commissioner (OAIC), to oversee FRT development and use in Australia. Further, the formation of an Australian Government taskforce on FRT is suggested. The proposed taskforce would have two main functions. First, it would collaborate with various Australian Government departments and agencies, including the Australian Federal Police, to ensure that their development and use of FRT align with legal and ethical standards. This would involve creating training programs and policy materials to support the goals outlined in the FRT Model Law. Second, the taskforce would lead Australia’s international engagement on FRT. It aims to positively influence the development of international standards and other assurance mechanisms for FRT. Additionally, it would work towards ensuring that Australian FRT laws are consistent with international law and best practices.

The aim is to harmonize the approach across federal, state, and territory jurisdictions. Overall, the Model Law aims to balance FRT usage with human rights considerations. By assessing risks, imposing legal requirements, and establishing a regulatory framework, it seeks to ensure transparent and responsible development and deployment of FRT applications in Australia.

The taskforce would provide advice to the government on streamlining the operation of Australian law in this area, which is pertinent considering that many FRT applications are developed in other countries. This could involve mechanisms for mutual recognition of impact assessments for FRT applications conducted under comparable laws in other jurisdictions or under the International Standards Organization’s auspices. These assessments should substantively apply the elements of the Model Law’s FRIA process.

These noted reform proposals are doubtless all important. Yet, as far as FBD is concerned, perhaps the most significant reform proposal is found in the initiative to specifically address ‘high privacy risk activities’. The Privacy Act Review Report proposes that all entities falling within the law should be required to complete a Privacy Impact Assessment (PIA) if undertaking a ‘high privacy risk activity’ (defined as ‘any function or activity that is likely to have a significant impact on the privacy of individuals’).77 Significantly, the examples of ‘high privacy risk activities’ include ‘online tracking, profiling and the delivery of personalized content and advertising to individuals’,78 and ‘the use of biometric templates or biometric information for the purpose of verification or identification, or when collected in publicly accessible spaces’.79 While it is troubling that the reference to, and focus on, ‘the purpose of verification or identification’ remains, it is at least encouraging that biometric information is covered more generally ‘when collected in publicly accessible spaces’.

The discussion of ‘high privacy risk activities’ also includes a section specifically on FBD, and the following proposal is made:

Consider how enhanced risk assessment requirements for facial recognition technology and other uses of biometric information may be adopted as part of the implementation of Proposal 13.1 [ie, the PIA requirement discussed above] to require privacy impact assessments for high privacy risk activities. This work should be done as part of a broader consideration by government of the regulation of biometric technologies.

These are promising proposals, and they align with an earlier recommendation by the Australian Human Rights Commission.80 To this may be added a number of other suggestions.

First, it is imperative to enact comprehensive biometric data privacy legislation that encompasses all existing AI and biometric technology capabilities, specifically addressing all five generations of facial recognition usage. Moreover, the legislation should be designed with enough flexibility to incorporate future technologies, ensuring its long-term relevance.

Secondly, in order to prevent technology companies from circumventing the effect of such an Australian biometric law, a provision should be included to make it illegal for them to contract out of the law’s application by seeking users’ permission to collect their personal biometrics. Any such agreement ought to be without legal effect. This approach aims to avoid situations similar to that of TikTok’s current legal circumvention of the Biometric Information Privacy Act (BIPA) and other biometric laws in the United States.

As part of the new legislation, a series of rules should be set on data storage. These should, for example, include a requirement that biometric data be stored for a maximum period of 30 days, that the data be adequately encrypted, and that government audits be enabled to assess compliance. Non-compliance with the law should result in penalties comparable to those stipulated in the European Union General Data Protection Regulation (GDPR), amounting to up to 4 per cent of a company’s worldwide annual turnover.81
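
To illustrate how the proposed 30-day retention ceiling might be operationalized in practice, the following is a minimal sketch of a hypothetical compliance check; the field names and audit context are illustrative assumptions rather than features of any existing or proposed statute.

```python
# Hypothetical retention check against the proposed 30-day ceiling
# (illustrative only; not drawn from any real system or statute).
from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_RETENTION = timedelta(days=30)      # the proposed maximum retention period

def overdue_for_deletion(collected_at: datetime, now: Optional[datetime] = None) -> bool:
    """True if a biometric record has been held longer than the ceiling allows."""
    now = now or datetime.now(timezone.utc)
    return now - collected_at > MAX_RETENTION

record_collected = datetime(2023, 1, 1, tzinfo=timezone.utc)
print(overdue_for_deletion(record_collected))   # True: the record should already be deleted
```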

Foreign companies operating on Australian soil could be prohibited from using biometric mining and harvesting technologies unless they have obtained a specific operating license. This regulation should encompass both physical capture devices and remote methods using electronic devices, such as social media platforms. Additionally, the legislation should ensure accountability through auditable algorithms.

All companies, regardless of their origin, should be obligated to store their FBD within Australian borders, subject to Australian jurisdiction and Australian law. Relatedly, it should be deemed illegal for companies to export face biometric data. By treating the personal data of Australians as a restricted export commodity, such legislation would aim to protect the privacy and security of individuals’ data. While recognizing that such a data location requirement is not uncontroversial,82 it is clearly justified given the sensitivity of the type of data involved. We argue that such measures are necessary and should be implemented within the Australian national security framework to help counter the threat of foreign interference posed by the large-scale open-source acquisition of such data. This is necessary in recognition of the threat posed by state-of-the-art advances in AI, which currently enable malign foreign state actors to conduct effective cognitive warfare and influence operations, surgically targeted at the individual or societal level, through the weaponization of algorithms.

Thus, three concluding observations may be made here: (i) the current regulation of FBD is inadequate, (ii) there is recognition that the current regulation of FBD is inadequate, and (iii) it remains to be seen how Australia tackles the pressing issues addressed in this article.

Conclusion

This article highlights the importance of regulating FBD, particularly in the context of its fourth-generation use. The rapid advancements in technology have led to increasingly invasive uses of FBD, posing significant risks to data privacy. The article categorizes FBD use into five generations, with each generation representing a different level of sophistication and potential data privacy implications.

The fourth-generation use of FBD—which we have described as FID—is identified as particularly invasive and capable of revealing private intimate characteristics. This generation goes beyond identification and authentication, delving into the prediction of neuropsychological, genetic, and biological features. The risks associated with this level of data usage include heightened data privacy concerns, discrimination, cyberbullying, and other potential negative outcomes, including foreign interference. We take this opportunity to reiterate the risks that may flow from inaccurate fourth-generation use of FBD.

The existing data privacy laws and regulations are not adequately equipped to address the challenges posed by fourth-generation FBD use. The technology is evolving faster than the legislation, leaving gaps in protection. There is a need for law reform to ensure that data privacy laws keep pace with the advancements in FBD technology and adequately safeguard individuals’ rights. Fourth-generation FBD use—FID—marks the end of privacy if data privacy laws address no more than the data privacy concerns raised by second-generation FBD use. Though all generations are invasive, fourth-generation FBD use is more invasive than the earlier generations. Second-generation FBD use answers the question, ‘Am I who I say I am?’. In contrast, both third- and fourth-generation FBD use could be said to answer the question ‘What am I?’, and consequently go beyond using one’s body to confirm who one is, towards answering questions about what one is. The third-generation use does so by linking externally sourced data to a face (profiling), while the fourth-generation use does so by making assumptions directly from facial features, based on predetermined models (correlation).

We have briefly introduced the fifth-generation use of facial recognition, which focuses on replication and the creation of AI human likeness. While this technological advance holds promise, it simultaneously raises significant concerns regarding data privacy and the potential for unprecedented global inequality and the rise of digital dictatorships.

The article recommends updating the meaning of existing FBD-related data privacy terminology and creating new terminology around FBD use to ensure that data privacy laws keep pace with advances in FBT and in the collection and use of FBD.

Most importantly, we have emphasized the importance of developing clear definitions and terminology related to FBD via the distinctions we have drawn between the five generations of use. This will hopefully help provide a shared language for experts, policymakers, corporations, lawyers, and the public. This clarity can facilitate discussions and enable effective regulation and protection of data privacy rights.

In conclusion, the regulation of FBD is of utmost importance due to its increasing invasiveness and potential data privacy risks. It is crucial for policymakers and legislators to address the gaps in current regulations, develop clear definitions and terminology, and ensure that data privacy laws keep pace with the rapid advancements in FBD technology. Only through comprehensive and up-to-date regulation can individuals’ data privacy rights be protected in the face of evolving FBD use.

Conflict of interest

The authors declare that they have no conflict of interest.

Footnotes

1

Elias Wright, ‘The Future of Facial Recognition is Not Fully Known: Developing Privacy and Security Regulatory Mechanisms for Facial Recognition in the Retail Sector’ (2019) 29 Fordham Intell Prop Media & Ent LJ 611.

2

Yale Omer and others, ‘What is a Face? Critical Features for Face Detection’ (2019) 48 Perception 437.

3

Omer Tene, ‘Privacy: The New Generations’ (2011) 1 Int Data Priv Law 15. The topic has since re-emerged to various degrees in a range of excellent articles such as: Frederik JZ Borgesius, ‘Personal Data Processing for Behavioural Targeting: Which Legal Basis?’ (2015) 5 Int Data Priv Law 163; N Ni Loideain, ‘Cape Town as a Smart and Safe City: Implications for Governance and Data Privacy’ (2017) 7 Int Data Priv Law 314; Andreas Häuselmann, ‘Fit for Purpose? Affective Computing Meets EU Data Protection Law’ (2021) 11 Int Data Priv Law 245; Nadezhda Purtova, ‘From Knowing by Name to Targeting: The Meaning of Identification Under the GDPR’ (2022) 12 Int Data Priv Law 163; and Catherine A Jasserand, ‘Avoiding Terminological Confusion Between the Notions of “Biometrics” and “Biometric Data”: An Investigation into the Meanings of the Terms from a European Data Protection and a Scientific Perspective’ (2016) 6 Int Data Priv Law 63.

4

See eg, AHRC, Human Rights and Technology (Final Report, March 2021).

5

Chia-Yuan Hsu, Lu-En Lin and Chang Hong Lin, ‘Age and Gender Recognition with Random Occluded Data Augmentation on Facial Images’ (2021) 80 Multimed Tools Appl 11631.

6

Marcus Smith and Seumas Miller, ‘The Ethical Application of Biometric Facial Recognition Technology’ (2021) 37 AI Soc 167.

7

Chen Yang, ‘Research in the Instagram Context: Approaches and Methods’ (2021) 7 Soc Sci Res 15.

8

Bijay Karki, ‘Open-Source Photogrammetric Tools for 3D Urban Modelling—A Case Study Using Mobile Phone Images’ (Geoinformatics Master’s thesis, Aalto University 2022).

9

Stuart A Thompson and Charlie Warzel, ‘Twelve Million Phones, One Dataset, Zero Privacy’ in Kirsten Martin (ed), Ethics of Data and Analytics (Taylor & Francis, Oxford, 2022) 161-169.

10

William H Dutton, Grant Blank and Darja Groselj, ‘Cultures of the Internet: The Internet in Britain’ (Oxford Internet Survey 2013 Report, Oxford Internet Institute) <https://oxis.oii.ox.ac.uk/wp-content/uploads/sites/16/2014/11/OxIS-2013.pdf> accessed 22 August 2023.

11

Fenghua Li and others, ‘Hideme: Privacy-Preserving Photo Sharing on Social Networks’ (2019) in IEEE INFOCOM 2019 – IEEE Conference on Computer Communications (Paris, France, 29 April – May 2019) 154.

12

Mohammad Irfan and Sonali Dhimmar, ‘Impact of Whatsapp Messenger on the University Level Students: A Psychological Study’ (2019) 6 Int J Res Anal Rev 572.

13

Jiewen Xiao and others, ‘Taxonomy and Evolution Predicting Using Deep Learning in Images’ (2022) arXiv <https://doi-org.libproxy.ucl.ac.uk/10.48550/arXiv.2206.14011> accessed 21 August 2023.

14

Kelly Gates, The Past Perfect Promise of Facial Recognition Technology (Arms Control, Disarmament and International Security Occasional Paper, Institute of Communications Research University of Illinois at Urbana-Champaign, 2004) <https://hdl-handle-net.libproxy.ucl.ac.uk/2142/38> accessed 21 August 2023.

15

Yongsheng Gao and MKH Leung, ‘Face Recognition Using Line Edge Map’ (2002) 24 IEEE Trans Pattern Anal Mach Intell 764.

16

Lila Lee-Morrison, ‘A Portrait of Facial Recognition: Tracing a History of a Statistical Way of Seeing’ (2018) 9 Philos Photogr 107.

17

SK Shiji, ‘Biometric Prediction on Face Images Using Eigenface Approach’ (2013) in IEEE Conference on Information Communication Technologies (Thuckalay, India, 11-12 April 2013) 104; Matthew Turk and Alex Pentland, ‘Eigenfaces for Recognition’ (1991) 3 J Cogn Neurosci 71.

18

William Crumpler, ‘How Accurate are Facial Recognition Systems—And Why Does It Matter?’ (Centre for Strategic and International Studies 14 April 2020) <https://www.csis.org/blogs/strategic-technologies-blog/how-accurate-are-facial-recognition-systems-and-why-does-it> accessed 21 September 2023.

19

Abd El Rahman Shabayek and others, ‘3D Deformation Signature for Dynamic Face Recognition’ (2020) in ICASSP 2020–2020 IEEE International Conference on Acoustics, Speech and Signal Processing (Barcelona, Spain 4-8 May 2020).

20

Kelly Gates, The Past Perfect Promise of Facial Recognition Technology (Arms Control, Disarmament and International Security Occasional Paper, Institute of Communications Research University of Illinois at Urbana-Champaign, 2004) <https://hdl-handle-net.libproxy.ucl.ac.uk/2142/38> accessed 21 August 2023.

21

Camille Paloque-Bergès and Valerie Schafer, ‘Arpanet (1969-2019)’ (2019) 3 Internet Hist 1.

22

Shervin Minaee and others, ‘Biometric Recognition Using Deep Learning: A Survey’ (2019) arXiv <https://doi-org.libproxy.ucl.ac.uk/10.48550/arXiv.1912.00271> accessed 21 August 2023; Ignacio Serna and others, ‘Algorithmic Discrimination: Formulation and Exploration in Deep Learning-Based Face Biometrics’ (2019) arXiv <https://doi-org.libproxy.ucl.ac.uk/10.48550/arXiv.1912.01842> accessed 21 August 2023.

23

C Havran and others, ‘Independent Component Analysis for Face Authentication’ (n.d.) <https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=19024782154718f925c2f17a99206e25b609225b> accessed 14 June 2023.

24

Soad M Almabdy and Lamiaa A Elrefaei, ‘An Overview of Deep Learning Techniques for Biometric Systems’ in Aboul Hassanien, Roheet Bhatnagar and Ashraf Darwish (eds), Artificial Intelligence for Sustainable Development: Theory, Practice and Future Applications (Springer, New York, 2021) 127-170 <https://doi-org.libproxy.ucl.ac.uk/10.1007/978-3-030-51920-9_8> accessed 21 August 2023.

25

Zhanpeng Zhang and others, ‘Learning Social Relation Traits From Face Images’ in Proceedings of the IEEE International Conference on Computer Vision 3631 (Santiago, Chile, 7-13 December 2015) <https://doi-org.libproxy.ucl.ac.uk/10.48550/arXiv.1509.03936> accessed 21 August 2023.

26

LNet is engineered to locate the entire face region within an image. Its training regimen is distinguished by its weakly supervised approach, relying solely on image-level attribute tags, simplifying the data preparation phase. This is in contrast to many contemporary methods that necessitate detailed face bounding boxes and landmark positions. The LNet training incorporates a two-pronged approach: an initial pre-training phase classifying several general object categories, ensuring robust handling of diverse backgrounds and clutter, followed by a fine-tuning phase employing attribute tags. This fine-tuning equips LNet with the prowess to discern even subtle variances, distinguishing human faces from analogous patterns, like a cat’s face.

27

In tandem with LNet, ANet takes over once the face region is earmarked. Its primary function is to extract intricate face representations from this demarcated region, thereby predicting the face’s attributes. The training of ANet mirrors LNet’s depth, commencing with a pre-training phase where it learns to classify a number of face identities, enabling it to tackle complex facial variances. Subsequent fine-tuning with attribute tags sharpens its accuracy. To enhance prediction accuracy, SVM classifiers are utilized, with results being an average of SVM scores across all patches. In essence, the inter-operability between LNet and ANet offers a robust solution: LNet zeroes in on the face region, after which ANet extrapolates and predicts the associated face attributes.

28

Muhammad Z Khan and others, ‘Deep Unified Model for Face Recognition Based on Convolution Neural Network and Edge Computing’ (2019) 7 IEEE Access 72622.

29

Author: citation blinded for review.

30

ibid 21.

31

Alessandro Acquisti, Laura Brandimarte and George Loewenstein, ‘Privacy and Human Behavior in the Age of Information’ (2015) 347 J Sci 509.

32

ibid.

33

Phillip Terhörst and others, ‘On Soft-Biometric Information Stored In Biometric Face Embeddings’ (2021) 3 IEEE Trans Biom Behav 519.

34

Wei-Ta Chu and Chih-Hao Chiu, ‘Predicting Occupation from Images by Combining Face and Body Context Information’ (2016) 13 ACM Trans Multimed Comput Commun Appl 1.

35

Asghar T Balaei and others, ‘Automatic Detection of Obstructive Sleep Apnea Using Facial Images’ (2017) in 2017 IEEE 14th International Symposium on Biomedical Imaging (Melbourne, Australia, 18-21 April 2017) 215.

36

Sarfaraz Masood and others, ‘Prediction of Human Ethnicity from Facial Images Using Neural Networks’ in Suresh C Satapathy and others (eds), Data Engineering and Intelligent Computing (Advances in Intelligent Systems and Computing 542, Springer, New York, 2018) 217-226.

37

Yilun Wang and Michal Kosinski, ‘Deep Neural Networks are More Accurate than Humans at Detecting Sexual Orientation from Facial Images’ (2018) 114 J Pers Soc Psychol 246.

38

Xin Guo and others, ‘Social Relationship Recognition Based on a Hybrid Deep Neural Network’ (2019) in 2019 14th IEEE International Conference on Automatic Face & Gesture Recognition (Lille, France, 14-18 May 2019).

39

Abdelhakim Chergui and others, ‘Deep Features for Kinship Verification from Facial Images’ (2019) in 2019 International Conference on Advanced Systems and Emergent (Hammamet, Tunisia, 19-22 March 2019) 64.

40

Chong Y Fook and others, ‘Investigation on Body Mass Index Prediction from Face Images’ (2021) in 2020 IEEE-EMBS Conference on Biomedical Engineering and Sciences (Langkawi Island, Malaysia, 01-03 March 2021) 543.

41

Yoga AI Nanda and Bety W Sari ‘Naïve Bayes Algorithm Implementation to Detect Human Personality Disorders’ (2020) 17 Techno Nusa Mandiri 9.

42

Alexander Kachur and others, ‘Assessing the Big Five Personality Traits Using Real-Life Static Facial Images’ (2020) 10 Sci Rep 8487.

43

Lin Ang and others, ‘A Novel Method in Predicting Hypertension Using Facial Images’ (2021) 11 J Appl Sci 2414.

44

Satyajit Nayak and others, ‘Estimation of Depression Anxieties and Stress through Clustering of Sequences of Visual and Thermal Face Images’ (2021) in 2021 IEEE 18th India Council International Conference (Guwahati, India, 19-21 December 2021).

45

Ahmad B Hassanat and others, ‘Deep Learning for Identification and Face, Gender, Expression Recognition Under Constraints’ (2021) arXiv <https://doi-org.libproxy.ucl.ac.uk/10.48550/arXiv.2111.01930> accessed 21 August 2023.

46

Michal Kosinski, ‘Facial Recognition Technology Can Expose Political Orientation from Naturalistic Facial Images’ (2021) 11 Sci Rep 100.

47

Rain E Haamer and others, ‘Review on Emotion Recognition Databases’ in Gholamreza Anbarjafari and Sergio Escalera (eds), Human-Robot Interaction—Theory and Application (InTech, Rijeka Croatia, 2018) 39-63.

48

Alexander Todorov and others, ‘Social Attributions from Faces: Determinants, Consequences, Accuracy, and Functional Significance’ (2014) 66 Annu Rev Psychol 519.

49

David Martin and others, ‘Systematic Review and Meta-Analysis of the Relationship Between Genetic Risk for Schizophrenia and Facial Emotion Recognition’ (2020) 218 Schizophr Res 7.

50

Ryan Steed and Aylin Caliskan, ‘Image Representations Learned with Unsupervised Pre-Training Contain Human-Like Biases’ (2020) arXiv <https://doi-org.libproxy.ucl.ac.uk/10.48550/arXiv.2010.15052> accessed 21 August 2023.

51

Ayesha Gurnani and others, ‘Saf-Bage: Salient Approach for Facial Soft-Biometric Classification-Age, Gender, and Facial Expression’ (2019) in 2019 IEEE Winter Conference on Applications of Computer Vision (Waikoloa Village, USA 7-11 January 2019) 839.

52

Yilun Wang and Michal Kosinski, ‘Deep Neural Networks are More Accurate than Humans at Detecting Sexual Orientation from Facial Images’ (2018) 114 J Pers Soc Psychol 246.

53

Elias Wright ‘The Future of Facial Recognition is Not Fully Known: Developing Privacy and Security Regulatory Mechanisms for Facial Recognition in the Retail Sector’ (2019) 29 Fordham Intell Prop Media & Ent LJ 611.

54

Elias Aboujaoude, ‘Protecting Privacy to Protect Mental Health: The New Ethical Imperative’ (2019) 45 J Med Ethics 604.

55

Chien-Cheng Huang, Kwo-Jean Farn and Frank Yeong-Sung Lin, ‘A study on information security management with personal data protection’ (2011) in 2011 IEEE 17th International Conference on Parallel and Distributed Systems (Tainan, Taiwan 7-9 December 2011) 624.

56

Kevin Sack, ‘Patient Data Posted Online in Major Breach of Privacy’ New York Times (Manhattan, New York, 8 September 2011) <https://www.nytimes.com/2011/09/09/us/09breach.html> accessed 22 August 2023.

57

Fritz Lang (Dir) and Thea von Harbou (writ), Metropolis [film] (Universum Film AG (UFA) 1927).

58

George Lucas (Dir, writ) Star Wars [film] (Lucasfilm Ltd 1977).

59

Ridley Scott (Dir), Hampton Fancher (writ) and David Peoples (writ), Blade Runner [film] (Ladd Company, Shaw Brothers and Blade Runner Partnership 1982).

60

Ricky S Joshi and others, ‘Look-Alike Humans Identified by Facial Recognition Algorithms Show Genetic Similarities’ (2022) 40 Cell Rep 111257.

61

Yuval Harari, ‘How to Survive the 21st Century’ (Speech at the World Economic Forum Annual Meeting, Davos, Switzerland, 21 January 2020) <https://www.weforum.org/agenda/2020/01/yuval-hararis-warning-davos-speech-future-predications/> accessed 22 August 2023.

62

Privacy Act 1988 (Cth), s 6.

63

Office of the Australian Information Commissioner (OAIC), ‘Biometric Scanning’ (Office of the Australian Information Commissioner (OAIC) website) accessed 22 August 2023.

64

ibid.

65

Rama Chellappa, Ashok Veeraraghavan and Narayanan Ramanathan, ‘Gait Biometrics, Overview’ in Stan Z Li and Anil K Jain (eds), Encyclopedia of Biometrics (Springer, New York, 2009) 628.

66

Privacy Act 1988 (Cth), s 6.

67

Commissioner initiated investigation into Clearview AI, Inc. (Privacy) [2021] AICmr 54 (14 October 2021). See also: Clearview AI Inc and Australian Information Commissioner [2023] AATA 1069 (8 May 2023).

68

Commissioner initiated investigation into 7-Eleven Stores Pty Ltd (Privacy) (Corrigendum dated 12 October 2021) [2021] AICmr 50 (29 September 2021).

69

Australian Government Attorney General’s Department, ‘Privacy Act Review: Issues Paper’ (Attorney General’s Department 2020).

70

Australian Government Attorney General’s Department, ‘Privacy Act Review: Discussion Paper’ (Attorney General’s Department 2021).

71

Australian Government Attorney General’s Department, ‘Privacy Act Review Report’ (Attorney General’s Department 2022) 31.

72

ibid 33.

73

ibid 38.

74

ibid 55.

75

ibid 57.

76

Nicholas Davis, Lauren Perry and Edward Santow, ‘Facial Recognition Technology: Towards a Model Law’ (Human Technology Institute, The University of Technology Sydney 2022) <https://www.uts.edu.au/sites/default/files/2022-09/Facial%20recognition%20model%20law%20report.pdf> accessed 22 August 2023.

78

ibid.

79

ibid.

80

Sophie Farthing and others, ‘Human Rights and Technology Final Report’ (Australian Human Rights Commission 2021) 116.

81

Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1, art 83.

82

See further: Author: citation blinded for review.
