Bank of Russia Maps AI Use Across Finance
The Bank of Russia has published an updated overview of AI use in the country’s financial sector. The overview maps where AI is already in use and how the regulator expects firms to manage the risks. The central bank says financial institutions are actively deploying AI to cut costs, improve customer experience, and strengthen risk controls.
The page also frames the regulator’s approach as risk-based and “technology-neutral,” with support mechanisms ranging from monitoring and policy initiatives to a regulatory sandbox and experimental legal regimes for AI solutions.
Where banks and fintechs actually use AI
Based on a 2025 Bank of Russia survey of 252 financial organizations, the central bank lists the most common AI application areas across finance: customer interactions, identity and authentication, anti-fraud, complaints handling, customer preference detection, marketing, analytics and forecasting, and risk management.
The central bank’s core message is that AI use isn’t limited to chatbots. It points to long-running use cases like credit scoring — where lending decisions are often made with minimal or no human involvement — as proof that AI has been part of finance for years.
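To make the scoring example concrete, here is a minimal, hypothetical sketch of an automated lending decision. The feature names, weights, and thresholds are illustrative assumptions for this article, not the Bank of Russia’s requirements or any bank’s actual model; the point is only that most applications can be decided with no human in the loop, with a narrow borderline band escalated for review.

```python
from dataclasses import dataclass
import math

# Toy logistic scoring model. All names, weights, and thresholds
# below are hypothetical, chosen purely for illustration.
WEIGHTS = {
    "income_to_debt_ratio": 1.2,
    "months_at_current_job": 0.05,
    "prior_defaults": -2.0,
}
BIAS = -1.0

APPROVE_ABOVE = 0.80   # scores above this are auto-approved
DECLINE_BELOW = 0.30   # scores below this are auto-declined
# Scores in between are routed to a human underwriter.

@dataclass
class Application:
    income_to_debt_ratio: float
    months_at_current_job: float
    prior_defaults: float

def score(app: Application) -> float:
    """Probability-like score in (0, 1) from the toy logistic model."""
    z = BIAS + sum(w * getattr(app, name) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def decide(app: Application) -> str:
    """Decide automatically except in the borderline band."""
    s = score(app)
    if s >= APPROVE_ABOVE:
        return "approve"
    if s <= DECLINE_BELOW:
        return "decline"
    return "refer_to_human"

print(decide(Application(income_to_debt_ratio=2.5,
                         months_at_current_job=36,
                         prior_defaults=0)))  # -> "approve"
```

In a setup like this, the "refer_to_human" band is the only place a person appears in the loop, which is exactly why the accountability questions discussed later in this article matter.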
Why institutions deploy it
The Bank of Russia says firms use AI to:
- reduce operating costs,
- improve customer experience,
- increase transparency and automate business processes,
- gain competitive advantage, and
- optimize risk management.
In plain terms: AI is being positioned as an efficiency tool and a risk tool—not only a “new product” feature.
What the regulator is doing about it
The central bank says it is actively supporting AI adoption while keeping oversight in place, including:
- monitoring AI use in financial products and services,
- developing and reviewing regulatory initiatives for AI, and
- enabling experiments via the Bank of Russia’s regulatory sandbox and experimental legal regimes focused on AI.
It also states that it follows Russia’s National Strategy for the Development of Artificial Intelligence through 2030 and aligns its policy stance with that framework.
The risk message: AI doesn’t remove accountability
A key point in the Bank of Russia’s FAQ section: using AI does not reduce a financial institution’s responsibility if client rights are violated. Customers can still file complaints with the central bank or take disputes to court.
The regulator also highlights a practical “human fallback” expectation: it recommends that financial firms allow customers to refuse AI interactions and switch to a human operator. The page says survey results showed over 80% of organizations that use AI on a regular basis already follow this recommendation.
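As a rough illustration of that “human fallback” expectation, the sketch below routes chat messages either to an AI assistant or to a human operator queue, honoring an explicit opt-out. The trigger phrases, session structure, and helper functions are assumptions made for this example; a real deployment would wire the handoff into an actual support queue.

```python
# Minimal sketch of the "human fallback" pattern: the customer can
# opt out of the AI assistant at any point and stay with a person.
OPT_OUT_PHRASES = {"human", "operator", "agent", "person"}

def route_message(text: str, session: dict) -> str:
    """Route one chat message to the AI assistant or to a human
    operator queue, honoring an explicit opt-out for the session."""
    if session.get("handed_off"):
        return enqueue_for_operator(text)
    if any(phrase in text.lower() for phrase in OPT_OUT_PHRASES):
        session["handed_off"] = True  # all later messages go to a human
        return enqueue_for_operator(text)
    return ai_assistant_reply(text)

def enqueue_for_operator(text: str) -> str:
    # A real system would push the conversation into a human
    # support queue; here we just acknowledge the handoff.
    return "Connecting you to a human operator."

def ai_assistant_reply(text: str) -> str:
    # Placeholder for the institution's AI assistant.
    return f"[AI] You said: {text}"

session = {}
print(route_message("What is my card balance?", session))
print(route_message("I want to talk to a human", session))
print(route_message("Thanks", session))  # stays with the operator
```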
Ethics guidance and “best practices” work
To build trust in AI, the Bank of Russia points to its Code of Ethics for AI development and use in the financial market. It also says market participants expressed readiness to work with the regulator on a best-practices compendium to show how the ethics recommendations can be applied in real business processes.
Why it matters for crypto
- AI is becoming a default layer in fraud detection and identity checks—two areas that directly affect crypto on/off-ramps and AML workflows.
- A regulator-backed sandbox and experimental legal regimes can accelerate “AI in finance” pilots, including monitoring, compliance automation, and risk scoring systems relevant to digital assets.
- The “AI doesn’t remove responsibility” stance signals tougher expectations for explainability and dispute resolution in automated financial decisions.
What to watch next
- Whether the Bank of Russia publishes the planned best-practices compendium tied to its AI ethics code.
- Further updates from the regulator’s monitoring work—especially if it starts naming higher-risk AI use cases (credit, fraud, identity).
- Expansion of sandbox or experimental legal regime participation and what categories of AI systems are prioritized.
Source: Bank of Russia