Indian banks accelerating AI adoption for financial crime compliance: Report

New Delhi [India], December 9 (ANI): Indian banks are rapidly integrating machine learning models into Financial Crime Compliance (FCC) operations amid rising fraud and regulatory scrutiny, as traditional rule-based systems prove inadequate, KPMG said in a report.

The report highlighted that legacy manual and threshold-based methods are "progressively losing effectiveness" against sophisticated financial crime.

This is prompting financial institutions to shift to AI-driven frameworks for Anti-Money Laundering (AML), fraud detection and customer risk assessment, it said.

Notably, the KPMG report also highlighted that the shift towards AI is being accelerated by regulatory expectations, including RBI's FREE-AI framework and SEBI's guidelines, which call for responsible and explainable AI systems.

It added that financial institutions are moving from pilot implementations to "full-scale machine learning integration" across the customer lifecycle.

The report further cited RBI Innovation Hub's MuleHunter.AI tool, noting that over 15 Indian banks now use it and that one major bank achieved 95% accuracy in detecting mule accounts.

Highlighting the use of AI to tackle fraud globally, the report, citing the World Economic Forum, said the global financial services industry had spent USD 35 billion on AI through 2023, with investment projected to reach USD 97 billion by 2027.

The report highlighted that rule-based FCC systems face high false-positive rates, lack adaptability to emerging laundering typologies, and cannot scale with rising transaction volumes.

In contrast, machine learning models enable real-time monitoring, anomaly detection, behavioural analytics and automated drafting of Suspicious Activity Reports using natural language processing.
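
For illustration only, and not drawn from the KPMG report: the contrast between a fixed-threshold rule and a learned anomaly detector can be sketched in a few lines of Python using scikit-learn's IsolationForest. The transaction features, amounts and rule threshold below are hypothetical, synthetic examples.

# Illustrative sketch only: compares a fixed-threshold rule with a learned
# anomaly detector for transaction monitoring. Features, thresholds and data
# are hypothetical and not taken from the KPMG report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic transactions: [amount_inr, transfers_per_day, new_payee_ratio]
normal = np.column_stack([
    rng.lognormal(mean=8, sigma=1, size=1000),   # typical amounts
    rng.poisson(lam=3, size=1000),               # typical daily frequency
    rng.beta(a=1, b=9, size=1000),               # mostly known payees
])
suspicious = np.column_stack([
    rng.lognormal(mean=11, sigma=0.5, size=20),  # larger amounts
    rng.poisson(lam=40, size=20),                # burst of transfers
    rng.beta(a=8, b=2, size=20),                 # mostly new payees
])
transactions = np.vstack([normal, suspicious])

# 1) Rule-based approach: flag anything above a fixed amount threshold.
#    Misses high-frequency, new-payee behaviour below the threshold.
rule_flags = transactions[:, 0] > 200_000  # hypothetical INR cut-off

# 2) Learned approach: IsolationForest scores each transaction by how
#    anomalous it looks across all features jointly.
model = IsolationForest(contamination=0.02, random_state=0)
ml_flags = model.fit_predict(transactions) == -1  # -1 marks anomalies

print(f"Rule-based flags: {rule_flags.sum()}")
print(f"Isolation Forest flags: {ml_flags.sum()}")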

KPMG also noted increasing regulatory focus on model risk management, emphasising the need for independent validation to address opacity, bias, data quality issues, and vulnerability to adversarial manipulation.

The report warned that AI-driven systems, if not properly stress-tested, could amplify systemic risks. (ANI)

(This content is sourced from a syndicated feed and is published as received. The Tribune assumes no responsibility or liability for its accuracy, completeness, or content.)
