
Bi-Monthly Feb 2024

AI-Driven Deception: Fortifying Fintech’s Defences Against the Next Wave of Financial Frauds

Last year's recognition of ICO-LUX by the Fintech Germany Award spotlighted not just a single startup's success but also heralded a broader shift towards embracing AI in combating fraud across the Fintech landscape. From the East German town of Jena, ICO-LUX's breakthrough in document verification has set a new benchmark, showcasing how AI can transform traditional security measures into dynamic, cutting-edge solutions. This pivotal innovation underscores the Fintech industry's potential to leverage AI, but it also brings to light the complexities inherent in Generative AI. As we explore the impact of AI on combating fraud, it is imperative to consider the dual-edged nature of these technologies.

The Dark Side of Generative AI

In the wrong hands, this powerful technology introduces significant risks for financial institutions and customers. The widespread availability of AI technologies has dramatically escalated the threat of fraud and financial crimes, with SAS, a leading American firm in analytics and AI software, forecasting that generative AI could cause a surge in identity fraud, costing the global banking sector an additional US$2 billion annually.

Statistics from Cifas, a UK-based non-profit dedicated to fraud prevention, highlight growing concerns as well. In 2022, instances of identity fraud surged by approximately 25%, while the use of AI tools to attempt to deceive banking security systems saw a dramatic increase of 84%.

While fraudsters have long tried to bypass security and identification systems for financial gain, the integration of AI into these efforts has introduced a new dimension of complexity and efficacy to their schemes. Criminals can use AI, which excels at rapid, automated tasks, to scrape the internet at record speed. Deepfake videos and realistic digital content can be used in sophisticated phishing scams. AI-driven chatbots can imitate human conversations to entice victims, while voice cloning technologies can bypass voice authentication systems. Additionally, AI's data analysis capabilities enable highly personalised scams, and its ability to pressure-test systems can uncover and exploit financial security vulnerabilities, potentially leading to unauthorised access and substantial data or financial loss.

The Rise of Synthetic Identity Fraud

The emergence of AI-fuelled scams extends beyond conventional identity theft, introducing a more sophisticated threat known as synthetic identity fraud. Fraudsters create synthetic identities by blending real and falsified personal information, such as combining a legitimate tax identification number with a fabricated name and date of birth. This composite identity is then used to open financial accounts. Since part of the data is genuine, it makes detection using conventional fraud monitoring systems challenging.

The process often starts with an application for credit, which may initially be rejected, yet this action creates a credit file in the name of the synthetic identity. Over time, fraudsters carefully cultivate these identities, building a credit history that appears genuine. Once the credit lines are fully established and maximised, the repayments abruptly stop, resulting in significant losses for financial institutions. With the application of Generative AI, fraudsters can easily create thousands of such synthetic identities. Once the fraud has been detected by the financial institution, it is very hard to pursue a person who never existed.
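One telltale of the scheme described above is that a single genuine attribute, such as a real tax identification number, resurfaces under several fabricated names and dates of birth. The sketch below is a deliberately minimal illustration of that idea, not any institution's actual detection logic; the field names and sample records are hypothetical.

```python
from collections import defaultdict

def flag_shared_tax_ids(applications):
    """Group credit applications by tax ID and flag any ID that appears
    with more than one distinct name/date-of-birth combination --
    a crude signal of synthetic identities built around one real number."""
    by_tax_id = defaultdict(set)
    for app in applications:
        by_tax_id[app["tax_id"]].add((app["name"], app["dob"]))
    return {tid for tid, personas in by_tax_id.items() if len(personas) > 1}

# Hypothetical sample data for illustration only.
apps = [
    {"tax_id": "123", "name": "Anna Meier", "dob": "1990-01-01"},
    {"tax_id": "123", "name": "A. Mayer",   "dob": "1985-06-30"},  # same tax ID, different persona
    {"tax_id": "456", "name": "Jan Krause", "dob": "1978-03-12"},
]
print(flag_shared_tax_ids(apps))  # -> {'123'}
```

Real systems would, of course, fold in many more signals (address reuse, device fingerprints, credit-file age), but the shared-attribute grouping above captures the core intuition.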

Deloitte expects synthetic identity fraud to generate at least US$23 billion in losses by 2030 in the US, prompting many banks and Fintechs to develop more advanced security systems to weed out would-be perpetrators. While our research has predominantly uncovered synthetic identity fraud data specific to the US, we anticipate this issue will increasingly affect the EU and Germany in the foreseeable future.

Leveraging AI for Good: Fraud Detection and Prevention

Generative AI, while offering new avenues for fraudsters, also provides a powerful tool for financial institutions to fight back. It's particularly effective in analysing anomalies and suspicious patterns in real-time, aiding in the prevention of fraud related to digital payments, account takeovers, and synthetic identities created by AI tools. Financial compliance officers are already investing in new technologies, including AI, to fortify defences against these sophisticated scams, with a significant percentage acknowledging AI-powered scams as a growing threat.
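As a toy illustration of the real-time anomaly analysis mentioned above, the snippet below flags transaction amounts that deviate sharply from an account's historical mean. This is a stand-in sketch using a simple z-score, not the machine-learning pipelines production systems actually deploy; the threshold and sample amounts are assumptions for demonstration.

```python
import statistics

def zscore_flags(amounts, threshold=3.0):
    """Return the amounts lying more than `threshold` standard
    deviations from the mean of the series -- a minimal stand-in
    for real-time transaction anomaly scoring."""
    mean = statistics.fmean(amounts)
    stdev = statistics.pstdev(amounts)
    if stdev == 0:
        return []
    return [a for a in amounts if abs(a - mean) / stdev > threshold]

# Hypothetical payment history with one outsized transfer.
history = [42.0, 55.0, 48.0, 51.0, 44.0, 4900.0]
print(zscore_flags(history, threshold=2.0))  # -> [4900.0]
```

In practice the baseline would be computed per customer over a rolling window, and the flag would feed a richer risk model rather than block a payment outright.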

One specific solution for Fintechs and financial institutions is to turn towards more sophisticated biometric security systems. These systems leverage both physical and behavioural biometrics to offer multi-layered defence mechanisms against fraudulent activities. Physical biometrics assess unique individual traits like fingerprint patterns, while behavioural biometrics analyse patterns in user interaction, such as typing speed. By integrating these technologies, banks can significantly enhance the accuracy of identity verification, making it exceedingly difficult for fraudsters to mimic real user identities. These advancements not only fortify security measures against emerging threats but also streamline the user experience, reducing friction in customer verification processes.
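To make the behavioural-biometrics idea concrete, the sketch below compares a login session's inter-keystroke timings against a user's enrolled typing profile. It is a simplified illustration under assumed data (the interval values and the comparison metric are hypothetical); real systems model far richer features such as key-hold times, digraph latencies, and mouse dynamics.

```python
import statistics

def typing_deviation(enrolled_intervals, session_intervals):
    """Compare a session's inter-keystroke intervals (in ms) to the
    user's enrolled profile; returns the absolute difference of mean
    cadence as a crude similarity signal (larger = more suspicious)."""
    return abs(statistics.fmean(enrolled_intervals)
               - statistics.fmean(session_intervals))

enrolled = [120, 135, 128, 122, 131]   # hypothetical enrolled profile, ms
genuine  = [118, 130, 126, 125, 129]   # plausibly the same typist
scripted = [35, 33, 36, 34, 35]        # uniformly fast, bot-like input

print(typing_deviation(enrolled, genuine))   # small deviation
print(typing_deviation(enrolled, scripted))  # large deviation
```

A production system would combine such behavioural scores with physical biometrics and device signals, escalating to step-up verification only when the combined risk crosses a threshold, which is how friction is kept low for legitimate users.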

The Data Protection Dilemma

Nevertheless, responding appropriately to AI-abetted fraud requires massive amounts of legitimate data to detect patterns that allow security professionals to flag illicit activity. The data protection dilemma within the finance sector, particularly in Germany, underscores a pivotal challenge: balancing the innovative potential of Generative AI against stringent data protection laws. Germany's adherence to the principles of data minimisation and purpose limitation in data collection, while historically justified, is increasingly seen as a constraint in the age of AI-driven analytics and fraud prevention.

The European Union's Artificial Intelligence Act (AIA) and Germany's active role in shaping it reflect a broader attempt to navigate this complex terrain. The AIA is designed to establish a comprehensive framework for the regulation of AI systems within the EU, including measures aimed at enhancing security and preventing fraudulent activities facilitated by AI technologies. While the AIA can set standards and obligations for AI system providers within the EU to ensure that AI is used safely and ethically, its direct impact on preventing fraud by actors outside the EU is limited. Fraudsters operating beyond the EU's jurisdiction can access and utilise AI technologies without adhering to the AIA's stringent regulations, creating a loophole in which they exploit AI's capabilities for malicious purposes while EU-based financial institutions must comply with the AIA's requirements. The act's success in fostering a safer AI ecosystem may therefore depend on global cooperation and the adoption of similar standards worldwide.

In practical terms, integrating AI into business operations may introduce considerable legal and operational challenges, particularly for specific use cases. Given the stringent requirements of the GDPR and the looming AIA, companies must adopt a tailored strategy for choosing AI providers and ensuring regulatory compliance for each AI application. The evolving nature of AI technology further complicates adherence to the GDPR, challenging companies to balance innovation with transparency and the protection of data subjects' rights.

To capitalise on the opportunities Generative AI offers without falling behind in the international race for technological advancement, a reformulation of these data protection principles is necessary. In essence, the sector's future hinges on navigating this delicate balance, requiring reforms that are both timely and in tune with the rapid advancements in AI technology. The ultimate goal is to ensure that Germany's data protection laws serve as an enabler rather than a barrier to leveraging AI's full potential in the fight against financial fraud.

by Guenther Petelin | Senior Advisor | Berlin Finance Initiative
