iProov Highlights Role of Biometrics in Combatting Deepfakes and Misused Generative AI

Generative AI, when used correctly, can be a great tool that aids innovation. In the wrong hands, however, it can be incredibly harmful, especially with the emergence of deepfakes. A new survey from iProov, the biometric solutions provider, has revealed attitudes towards the threat of generative AI and deepfakes across the UK, US, Brazil, Australia, New Zealand and Singapore.

The Good, The Bad, and The Ugly is a global survey commissioned by iProov that gathered the opinions of 500 technology decision makers from those six markets on the threat of generative AI and deepfakes.

The survey revealed that 47 per cent of firms have encountered a deepfake, while 70 per cent believe deepfakes created using generative AI will have a big impact on their organisations.

Andrew Bud, founder and CEO, iProov

“We’ve been observing deepfakes for years but what’s changed in the past six to twelve months is the quality and ease with which they can be created and cause large-scale destruction to organisations and individuals alike,” said Andrew Bud, founder and CEO, iProov.

“Perhaps the most overlooked use of deepfakes is the creation of synthetic identities which, because they’re not real and have no owner to report their theft, go largely undetected while wreaking havoc and defrauding organisations and governments of millions of dollars.”

What regions are most impacted?

Europe (53 per cent) and Latin America (53 per cent) are the regions most likely to encounter a deepfake, according to the survey. They are followed by Asia Pacific (51 per cent) and North America (34 per cent). Despite encountering comparatively fewer deepfakes, over seven in 10 (71 per cent) North American firms believe deepfakes will have an impact on their organisations.

A similar sentiment was shared by 81 per cent of Asia Pacific (APAC), 72 per cent of European and 54 per cent of Latin American (LatAm) firms.

AI: friend or foe?

While organisations recognise the increased efficiencies that AI can bring, these benefits are equally available to the developers of threat technology and other bad actors.

Seventy-three per cent of firms are implementing solutions to address the deepfake threat. Unfortunately, confidence is low, with the study identifying an overriding concern that not enough is being done by organisations to combat them. Nearly two-thirds (62 per cent) worry their organisation isn’t taking the threat of deepfakes seriously enough.

The survey shows that organisations recognise deepfakes as a real and present threat. They can be used against people in numerous harmful ways, including defamation and reputational damage, but perhaps the most quantifiable risk is financial fraud.

Here they can be used to commit large-scale identity fraud by impersonating individuals in order to gain unauthorised access to systems or data, initiate financial transactions, or deceive others into sending money on the scale of the recent Hong Kong deepfake scam.

The stark reality is that deepfakes pose a threat to any situation where an individual needs to verify their identity remotely but those surveyed worry that organisations aren’t taking the threat seriously enough.

Combatting deepfakes

A variety of tools have been developed by organisations to combat identity fraud and deepfakes. However, the survey also identified other means of attack that fraudsters are still using. Sixty-four per cent of respondents identified password breaches as a common means of attack, while ransomware (63 per cent) and phishing/social engineering attacks (61 per cent) also continue to be methods used by criminals.

Interestingly, deepfakes were not the most prevalent concern, instead tying with phishing for third place at 61 per cent.

Despite these fears, many organisations across the globe still view generative AI as a force for good. They recognise that generative AI is innovative, secure and reliable, and helps them to solve problems. They view it as more ethical than unethical and believe it will have a positive impact on the future.

As a result, 83 per cent of firms have increased their budgets for programmes that address the risks of AI. Additionally, most have introduced policies on the use of new AI tools.

Introducing biometrics

One way organisations are combatting deepfakes is with biometric solutions. Many firms have turned to facial and fingerprint biometrics, depending on the task, to protect user data and their organisations from deepfake attacks.

For example, the study found organisations consider facial biometrics to be the most appropriate additional mode of authentication to protect against deepfakes for account access/log-in, changes to personal account details, and typical transactions.

It’s clear from the study that organisations view biometrics as a specialist area of expertise, with nearly all (94 per cent) agreeing that a biometric security partner should be more than just a software product.

Organisations surveyed stated that they are looking for a solution provider that evolves and keeps pace with the threat landscape. Continuous monitoring (80 per cent), multi-modal biometrics (79 per cent) and liveness detection (77 per cent) all featured highly among their requirements for adequately protecting biometric solutions against deepfakes.

Bud continued: “Despite what some might believe, it’s now impossible for the naked eye to detect quality deepfakes. Even though our research reports that half of the organisations surveyed have encountered a deepfake, the likelihood is that this figure is a lot higher because most organisations are not properly equipped to identify deepfakes.

“With the rapid pace at which the threat landscape is innovating, organisations can’t afford to ignore the resulting attack methodologies and how facial biometrics have distinguished themselves as the most resilient solution for remote identity verification.”
