30% of Indian businesses have responsible AI practices: Nasscom report
The report states that 60% of businesses confident in scaling AI responsibly have mature practices in place
Indian businesses are demonstrating strong momentum and steadily advancing on their responsible AI (RAI) journeys, with 30% having established mature RAI practices and 45% actively implementing formal frameworks, according to Nasscom's State of Responsible AI in India 2025 report.
The report, unveiled at the Responsible Intelligence Confluence event in New Delhi, states that about 60% of businesses confident in scaling AI responsibly have mature practices in place. This progress spans enterprise sizes, with large corporations leading at 46%, while SMEs and startups are gaining ground at 20% and 16%, respectively. Sector-wise, BFSI leads at 35% maturity, followed by TMT at 31% and healthcare at 18%, with nearly half of businesses across these industries actively advancing their frameworks.
The report is based on a survey of 574 senior executives from large enterprises, SMEs, and startups involved in the commercial development and/or use of AI in India. It offers a comprehensive view of this transition, mapping how businesses across sectors are progressing on their responsible AI journeys.
Speaking at the event, Sangeeta Gupta, Senior VP & Chief Strategy Officer, Nasscom, said that as AI becomes deeply embedded in critical decisions across finance, healthcare, and public services, responsible AI is no longer optional; it is foundational to building trust, ensuring accountability, and sustaining innovation.
"The real measure of India's AI leadership will not just be in the scale of adoption, but in how responsibly and inclusively these systems are designed and deployed. For businesses, this means moving beyond compliance checkboxes to embedding responsible practices across the entire AI lifecycle," she says.
"With the right investments in governance, talent, and transparent frameworks, India has the opportunity to set global benchmarks for trustworthy AI that serves society at large," she added.
The report noted that companies express the highest confidence in meeting data protection obligations, reflecting the maturity of privacy frameworks, though monitoring-related compliance remains an area requiring further strengthening. Accountability is still largely top-down, with about 48% placing primary responsibility with the C-suite or board, though 26% now locate it with departmental heads.
Governance mechanisms are also strengthening, with AI ethics boards and committees gaining traction, particularly among mature organizations, 65% of which have constituted such bodies, though some businesses remain cautious about their utility and effectiveness, the report noted.
Despite this progress, the report highlighted that organizations continue to navigate significant challenges. On the risk front, hallucinations (56%), privacy violations (36%), lack of explainability (35%), and unintended bias or discrimination (29%) are the most frequently experienced challenges, while a lack of high-quality data (43%), regulatory uncertainty (20%), and a shortage of skilled personnel (15%) are the biggest barriers to effective RAI implementation.
Notably, for large enterprises and startups, regulatory uncertainty remains a significant concern, while SMEs cite high implementation costs as their second biggest challenge, highlighting the diverse nature of obstacles across different business sizes.
As AI capabilities deepen and systems become more autonomous, responsible AI is emerging as a defining factor in which businesses can scale confidently while retaining stakeholder trust. Businesses with higher RAI maturity already report better preparedness for emerging AI technologies, especially agentic AI systems, as per the report.
Nearly half of mature organizations express confidence that their existing frameworks can address these evolving challenges, though industry leaders caution that most businesses may need to update their RAI frameworks to adequately address agentic AI-related risks.