Can artificial intelligence help boost financial inclusion?
For all its potential superpowers, a lack of regulation means it could be a double-edged sword
Today, billions of people still lack access to financial tools that can help them improve their lives and prosper. Without a bank account, they may lack financial records. Without those records, it can be impossible to qualify for loans that could help finance a business, home or education.
The UN views financial inclusion, or access to useful and affordable financial products that meet people’s needs, as an enabler of eight of the 17 Sustainable Development Goals (SDGs), including ending hunger, achieving gender equality and promoting economic growth.
Technology has already played a role in helping make progress on the SDGs, and financial inclusion is a prime example. In Kenya, in just eight years, the M-PESA program for basic mobile money payments lifted nearly 200,000 people, or 2% of the country’s population, out of extreme poverty. Now, many view artificial intelligence as the next logical step to harness the democratization of digital technology and the explosion of data for financial inclusion. According to a FIBR report, its so-called practical superpowers could make financial services more inclusive and equitable in underbanked regions such as Africa.
“AI is definitively a game-changer for financial inclusion,” said Marco Trombetta, IE Professor of Accounting and Management Control. He explained that the technology could help eliminate one of the biggest barriers to financial inclusion — identification. AI-powered systems could use biometric detectors like facial recognition alongside data aggregation tools to confirm who a person is at a distance and without standard documentation.
AI data processing can also be harnessed to offer credit to those who have traditionally been left behind. As fintech entrepreneur Douglas Merrill told the New York Times in 2012: “All data is credit data. We just don’t know how to use it yet.”
Fast forward ten years, and a growing number of companies are using alternative data to offer loans and other financial products like insurance, and to set their terms. This data can come from nearly anywhere — social networks, phone calls, photos of people’s homes and even smartwatch-generated fitness data. The Discovery Vitality program in South Africa, the world’s largest “behavioral platform linked to financial services,” for instance, offers financial rewards for healthy behavior, which is tracked daily from people’s homes, phones, cars and bodies.
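In practice, alternative-data scoring often amounts to a weighted model over non-traditional signals. The sketch below is a minimal, hypothetical illustration of the idea — the feature names, weights and threshold are invented for this example, not drawn from any real lender:

```python
import math

# Hypothetical alternative-data features for an applicant with no credit file.
# Feature names, values and weights are illustrative only.
applicant = {
    "mobile_topups_per_month": 12,    # regular airtime purchases
    "on_time_utility_payments": 0.9,  # fraction of bills paid on time
    "daily_steps_avg": 7000,          # smartwatch-style activity data
}

WEIGHTS = {
    "mobile_topups_per_month": 0.05,
    "on_time_utility_payments": 2.0,
    "daily_steps_avg": 0.0001,
}
BIAS = -1.5

def credit_score(features):
    """Logistic score in [0, 1]: higher means lower estimated default risk."""
    z = BIAS + sum(WEIGHTS[name] * value for name, value in features.items())
    return 1 / (1 + math.exp(-z))

score = credit_score(applicant)
print(f"score = {score:.2f}, approve = {score > 0.5}")
```

A real lender would learn the weights from repayment data rather than hand-pick them, but the mechanism — turning everyday digital traces into a single risk number — is the same.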
As with any digital product, there is a tradeoff between service and privacy. For vulnerable populations, in particular, customer protection and education around both data and financial services become key. Yet the environment around this emerging technology remains something of a regulatory wild west.
“The market for financial services for people at the bottom of the pyramid is potentially very big. This fact may attract all sorts of players. Some are good game-changers, but some others may not be necessarily well-intentioned,” explained Trombetta. “Moreover, many financially excluded people live in non-democratic countries, which may increase the danger of bad use of the data collected.”
At the same time, a study by the Consultative Group to Assist the Poor (CGAP) looked at digital credit in Kenya and Tanzania and discovered cause for concern. A significant share of those taking out digital loans with the ease of a few swipes did not understand the products’ costs or terms. This was a major factor behind half of the users repaying a loan late, and in Tanzania a staggering 31% of users ended up defaulting on one of these loans.
“If customer protection and responsible financial management cannot be guaranteed, then a digital channel may increase the number of clients of the formal financial system, but it can also make their lives worse,” added Trombetta.
Adding to the challenge is the overwhelming opacity of the “black box” of AI algorithms. How can they be explained to users if even their creators hardly understand how the algorithms come to decisions? How can users know why they may not qualify for a product if the terms are not clear? These issues, as well as the importance of using high-quality, context-specific training data and of monitoring and addressing biases in the algorithms, remain largely unresolved by regulatory systems.
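One partial remedy often discussed is pairing or replacing opaque models with inherently interpretable ones. The sketch below uses invented features and weights to show how a simple linear score can be decomposed into per-feature contributions, so a declined applicant can at least see what drove the decision:

```python
# Hypothetical linear credit model; feature names and weights are illustrative.
WEIGHTS = {
    "months_of_payment_history": 0.08,
    "missed_payments": -0.9,
    "income_stability": 1.2,
}
BIAS = -0.5
THRESHOLD = 0.0

def explain_decision(features):
    """Return the score and each feature's contribution, largest impact first."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = BIAS + sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

score, ranked = explain_decision(
    {"months_of_payment_history": 6, "missed_payments": 2, "income_stability": 0.4}
)
print("approved" if score > THRESHOLD else "declined")
for name, contribution in ranked:
    print(f"  {name}: {contribution:+.2f}")
```

A deep model offers no such decomposition out of the box, which is exactly the explainability gap regulators have yet to address.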
But the pitfalls don’t erase the potential of AI in boosting financial inclusion.
Algorithms using natural language processing have been successfully deployed as chatbots or at call centers to reduce costs for financial service providers, and they open up the possibility of serving more people. Likewise, AI has been wielded effectively for financial fraud prevention, which also can drive down the costs of financial services. As PricewaterhouseCoopers described in a recent report, lowering costs to increase access “might be the only way banks can deepen inclusion and make the economics work.”
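At its simplest, the fraud-prevention side of this cost equation rests on anomaly detection: flagging transactions that deviate sharply from a customer’s normal pattern. The sketch below is a bare-bones illustration using a z-score rule over invented transaction data — real systems use far richer features and models:

```python
import statistics

# Illustrative transaction history (amounts in local currency); data is invented.
history = [120, 95, 110, 130, 105, 98, 115, 125, 100, 108]

def is_suspicious(amount, past, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the user's history."""
    mean = statistics.mean(past)
    stdev = statistics.pstdev(past)
    if stdev == 0:
        return amount != mean
    return abs(amount - mean) / stdev > z_threshold

print(is_suspicious(112, history))   # a typical amount for this user
print(is_suspicious(5000, history))  # an extreme outlier
```

Automating even this crude check across millions of accounts is what lets providers cut losses and fees at a scale no human review team could match.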
During the Covid-19 crisis, the presence of digital financial services allowed governments to transfer money to vulnerable people quickly and precisely, Trombetta pointed out. Experiments are now underway to see whether more advanced algorithms could help governments better target those in need.
For instance, South African computer scientist Raesetje Sefala is currently building algorithms that flag poverty hot spots and developing data sets she hopes will direct aid, housing or clinics to the areas that need it most. “Let’s use what is cutting edge and apply it straight away, or as a continent, we will never get out of poverty,” she said.