EU Launches AI Act: Offenders Must Be Made an Example Of, Explains Industry

The European Union’s AI Act, published yesterday, reflects the region’s forward-thinking, innovation-driven attitude as it looks to regulate the way organisations develop, use and apply artificial intelligence (AI).

First proposed in 2021, the Act aims to govern the AI space in Europe by classifying AI systems according to the level of risk they pose, based on how they are used. The European Union has created four categories in the AI Act into which firms’ systems will fall: minimal risk, specific transparency risk, high risk, and unacceptable risk.

Firms in the first category are those using AI for things like spam filters; these systems face no obligations under the AI Act because they pose minimal risk to citizens’ rights and safety. The specific transparency risk category covers AI systems like chatbots: firms must clearly disclose to users that they are interacting with a machine, and must inform people when technologies such as deepfakes, biometric categorisation and emotion-recognition systems are being used.

In addition, providers will have to design their systems so that synthetic audio, video, image and text content is marked in a machine-readable format and detectable as artificially generated or manipulated.
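The Act leaves the exact marking mechanism to providers. As a purely illustrative sketch (the tag names and the choice of tooling are assumptions, not anything the Act prescribes), here is how provenance metadata might be embedded in a generated PNG using Python’s Pillow library:

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Hypothetical provenance tags: the AI Act mandates machine-readable
# marking but does not prescribe these field names or this format.
metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model-v1")  # invented model name

image = Image.new("RGB", (256, 256))  # stand-in for real generated output
image.save("output.png", pnginfo=metadata)

# Any downstream tool can now detect the marking mechanically.
with Image.open("output.png") as reloaded:
    print(reloaded.text)  # {'ai_generated': 'true', 'generator': 'example-model-v1'}
```

In practice, providers are more likely to adopt richer provenance standards such as C2PA content credentials, which serve the same machine-readable purpose.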

The high-risk category applies when firms use AI in areas that could significantly affect people’s safety or rights. These systems must comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity.

Any AI system deemed to pose an unacceptable risk, meaning one that threatens people’s fundamental rights, will be banned outright.

The majority of the AI Act’s rules will start applying on 2 August 2026. However, prohibitions on AI systems deemed to present an unacceptable risk will apply after just six months, while the rules for so-called general-purpose AI models will apply after 12 months.

How does the AI Act fit in with current regulation?

AI is now an undeniable part of nearly every ecosystem. The automation it brings far outstrips the manual work and resources previously needed to complete the same tasks. But as the AI Act comes into play, organisations are considering how it will sit alongside existing regulations.

Examining the extent of this, Moody’s, the data, intelligence and analytical tools provider, set out to discover how organisations are preparing for the change. Its research identified entity verification as one of the key factors in achieving greater trust and accuracy when using AI.

According to Moody’s study, more than a quarter (27 per cent) of respondents see entity verification as critical for improving AI accuracy in risk and compliance activities, and a further 50 per cent say it has value in enhancing accuracy. Hallucinations can hinder compliance processes, where assessing the whole risk picture and thoroughly understanding who a firm is doing business with are essential.
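To make the idea concrete, here is a minimal, entirely hypothetical sketch (Moody’s does not publish its implementation; the register and all names below are invented) of how an entity-verification check might gate AI-generated findings before they reach a compliance workflow:

```python
# Toy register standing in for a verified entity data source such as
# a corporate registry. All names and identifiers are invented.
VERIFIED_ENTITIES = {
    "acme holdings ltd": "GB-000123456",
    "globex gmbh": "DE-HRB-98765",
}

def verify_entity(name: str) -> str | None:
    """Return the registry identifier for a verified entity, or None."""
    return VERIFIED_ENTITIES.get(name.strip().lower())

def accept_ai_finding(entity_name: str, finding: str) -> str:
    """Pass an AI-generated finding downstream only if the entity resolves."""
    entity_id = verify_entity(entity_name)
    if entity_id is None:
        # A hallucinated or mistyped counterparty never reaches the case file.
        return f"REJECTED: '{entity_name}' not found in verified register"
    return f"ACCEPTED ({entity_id}): {finding}"

print(accept_ai_finding("ACME Holdings Ltd", "elevated sanctions exposure"))
print(accept_ai_finding("Acme Holdngs", "elevated sanctions exposure"))
```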

Interestingly, the report also found that AI adoption in risk and compliance is on the rise. Eleven per cent of organisations surveyed by Moody’s are now actively using AI, up two percentage points since Moody’s last examined AI adoption in compliance in 2023. Furthermore, 29 per cent of respondents are currently trialling AI applications, an eight-percentage-point increase on Moody’s findings last year.

Are firms ready?

As evident from Moody’s findings, AI adoption is on the rise, meaning more organisations will need to align with the AI Act. So how is the fintech industry responding to this rise and to the impact of the new regulation?

Ramyani Basu, global lead, AI and data at Kearney, the management consulting firm, says: “While some elements of the EU AI Act may seem premature or vague, significant strides have been made for open source and R&D.

“However, development teams must ensure that their AI systems comply with these standards – or risk hefty fines of up to seven per cent of their global sales turnover. Equally, the introduction of the new regulation means that organisations and internal AI teams will have to proactively consider how the new rules will not just impact the deployment of AI products or solutions, but the development and data collection, too.

“Teams working across different regions may initially struggle to realign their AI strategies due to varying tech standards in Europe. That being said, embracing the EU AI Act’s guidelines not only minimises these challenges and risks, but also unlocks opportunities for these businesses in new markets. While compliance might seem daunting at first, teams that adapt to the new regulations effectively will find it a catalyst for growth and innovation.

“A particularly positive aspect of the regulation is its empowerment of end-users. The Act not only allows EU citizens to file complaints about AI systems, but they can also receive explanations on how they work. This transparency is critical to building confidence in the technology, especially given the immense amount of data being shared.”

Sending a message to offenders

Jamil Jiva, global head of asset management at Linedata, the global software provider, compares the new AI Act to the General Data Protection Regulation (GDPR) and argues that firms that fail to abide by the Act must be made an example of.

“The EU showed through GDPR that they could flex their regulatory influence to mandate data privacy best practices to the global tech industry. Now, they want to do the same with AI.

“With GDPR, it took a few years for the big tech companies to take compliance seriously, and some companies had to pay significant fines due to data breaches. The EU now understands that they need to hit offending companies with significant fines if they want regulations to have an impact.

“Companies who fail to adhere to these new AI regulations can expect large penalties as the EU tries to send a message that any company operating inside their jurisdiction should comply with EU law. However, there is always a question around how you can enforce borders on the internet, with VPNs and other workarounds making it difficult to determine where a service is delivered.

Customers will set the standard

“I believe that industry standards around AI will be set by customers as companies are forced to self-regulate their practices to align with what their clients accept as ethical and transparent.

“To ensure that they are operating within acceptable standards, companies should start by distinguishing between AI as a sweeping technology, and the countless possible use cases. Whether AI usage is ethical and compliant will depend on what a model is being used for, and what data is used to train it. So, the main thing global tech companies can do is provide a governance framework that ensures that every different use case is both ethical and practical.”

A step in the right direction

Steve Bates, chief information security officer at Aurum Solutions, the data-driven digital transformation firm, notes that AI hype has driven many organisations to adopt the technology regardless of whether they need it. He argues that organisations must re-evaluate whether implementing AI is truly necessary; where it is not, adoption can create needless regulatory complications.

“The act is a positive step towards improving safety around use of AI, but legislation isn’t a standalone solution. Many of the act’s provisions don’t come into effect until 2026, and with this technology evolving so rapidly, legislation risks becoming outdated by the time it actually applies to AI developers.

“Notably, the act does not require AI model developers to provide attribution to the data sources used to build models, leaving many authors of original material unable to assert and monetise their rights over copyrighted material. Alongside legislative reform, businesses need to focus on educating staff on how to use AI safely, where it should and shouldn’t be deployed, and identifying targeted use cases where it can boost productivity.”

“AI isn’t a silver bullet for everything. Not every process needs to be overhauled by AI and in some cases, a simple automation process is the better option. All too often, firms are implementing AI solutions just because they want to jump on the bandwagon. Instead, they should think about what problems need to be solved, and how to do that in the most efficient way.”

Banks must be aware of how to remain compliant 

Shaun Hurst, principal regulatory advisor at Smarsh, the software development firm, said: “As the world’s first legislation specifically targeting AI comes into law today, financial services firms will need to ensure compliance when deploying such technology for the purpose of providing their services.

“Banks utilising AI technologies categorised as high-risk must now adhere to stringent regulations focusing on system accuracy, robustness and cybersecurity, including registering in an EU database and maintaining comprehensive documentation to demonstrate adherence to the AI Act. For AI applications like facial recognition or summarising internal communications, banks will need to maintain detailed logs of the decision-making process. This includes data inputs, the AI model’s decision-making criteria and the rationale behind specific outcomes.

“While the aim is to ensure accountability and the ability to audit AI systems for fairness, accuracy and compliance with privacy regulations, the increase in regulatory pressure means financial institutions must assess their capabilities in keeping abreast with these changes in legislation and whether existing compliance technologies are up to scratch.”
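The Act does not prescribe a log format, but a minimal sketch of the kind of machine-readable decision record Hurst describes might look like the following (the schema, model name and scenario are all invented for illustration):

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit logger: the AI Act mandates logging of activity for
# high-risk systems but does not prescribe these fields or this file format.
audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_decisions.jsonl"))

def log_ai_decision(model_id: str, model_version: str,
                    inputs: dict, output: str, rationale: str) -> None:
    """Append one machine-readable decision record to the audit trail."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,        # the data the model saw
        "output": output,        # the decision or summary produced
        "rationale": rationale,  # why this outcome was reached
    }
    audit_log.info(json.dumps(record))

# Example: logging a (hypothetical) communications-summary decision.
log_ai_decision(
    model_id="comms-summariser",
    model_version="1.2.0",
    inputs={"message_count": 42, "channel": "internal-chat"},
    output="No escalation required",
    rationale="No flagged terms or anomalous trading references detected",
)
```

Keeping each record self-contained, with inputs, output and rationale in one line of JSON, is one way to give auditors the traceability the Act is driving at.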
