Modern finance runs on a web of complex technologies that grow more intricate every year. Yet for millions around the world, this digital ecosystem remains out of reach. Lacking access to modern financial tools, they rely on local, manual systems where creditworthiness is often subjective and beyond their control. Their absence from the global financial map doesn’t just limit their opportunities—it slows economic growth for everyone.

In the last century, centralized digital verification systems revolutionized how we defined and managed trust by embedding it in institutional policies. But the 21st century is reshaping that idea. Instead of relying on institutions to enforce trust, decentralized technologies like blockchain are embedding trust directly into their design. While blockchain has often been dismissed as a buzzword or a speculative playground, its real strength lies in its ability to redefine how we share evidence, assign liability, and distribute control.

Decentralized systems such as blockchain-based crowdlending platforms and deep-tier supply chain finance platforms are showing what this future looks like. Both platform archetypes use distributed ledger technology (DLT) to extend fair access to credit and to reimagine the governance structures that support it. By coding trust and accountability into the system’s very architecture, they are helping to build a more inclusive financial world—one where transparency and equity are not policies, but built-in principles.

The SciTech Research Hub at IE University explores breakthroughs in emerging technologies, with a special focus on how they can tackle pressing global challenges such as inclusion and sustainable development. In this context, Professor Álvaro Arenas examines how blockchain-based financial models can expand access and build systems that are not only transparent but also adaptable and resilient at scale. His work highlights how accountability, when designed into the system itself, can drive both innovation and trust—two pillars essential to a more inclusive digital economy.

The research imperative

Traditional financial systems have made efforts to include underserved populations via tools such as microfinance, digital identities and mobile payments. While these tools have made access simpler and faster, they depend heavily on centralized intermediaries—and the trust placed in them. When these intermediaries fail, the system collapses under its own weight.

On the other hand, the blockchain model proposes a distributed architecture where verification, enforcement and liability are shared and programmed into the very structure of financial platforms. But decentralization alone isn’t an infallible solution. Without thoughtful design and governance, distributed systems can become fragmented and unstable. Both platform archetypes have developed new governance models that balance autonomy with oversight. 

Dr. Arenas’s research examines how these two platform archetypes calibrate governance to achieve that balance. His study explores how principles of accountability and inclusion evolve within different blockchain models—especially community-based and regulatory-based ones. Blockchain-based crowdlending platforms are selectively decentralized crowdlending systems where governance is shared by an active, cooperating community; deep-tier supply chain finance platforms, by contrast, are enterprise-driven tokenization platforms for supply-chain applications embedded within a centralized banking sandbox.

Blockchain-based crowdlending platforms: financial inclusion through shared blockchain governance

Blockchain-based crowdlending platforms are collaborative investment platforms that connect small farmers with fair financing and access to international markets, generating both profitability and positive impact. The invisibility of the unbanked in the Global South deprives them of access to traditional credit-scoring systems. These platforms give them visibility by connecting farmer cooperatives directly to investors over a shared blockchain ledger.

A central design choice in many of these platforms is enforcing accountability through economic stake. Participants who hold authority may be required to post a staked reserve (often structured as a percentage of funded value) into a pooled mechanism that functions like a loss-absorbing buffer for potential defaults. This reserve-based design creates “forced accountability” by ensuring that actors who influence decisions also carry measurable downside exposure.
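
As a minimal sketch of this reserve mechanism (the 25% stake ratio and all amounts are invented for illustration and do not come from any specific platform), the "forced accountability" idea can be expressed as a pooled buffer that absorbs defaults before investors do:

```python
from dataclasses import dataclass

@dataclass
class ReservePool:
    """Pooled, loss-absorbing buffer funded by stakeholder deposits."""
    balance: float = 0.0

    def stake(self, funded_value: float, stake_ratio: float) -> float:
        """A participant posts a reserve proportional to the value they help fund."""
        deposit = funded_value * stake_ratio
        self.balance += deposit
        return deposit

    def absorb_default(self, loss: float) -> float:
        """Cover a default from the pool; return any uncovered remainder."""
        covered = min(loss, self.balance)
        self.balance -= covered
        return loss - covered

pool = ReservePool()
pool.stake(funded_value=50_000.0, stake_ratio=0.25)  # originator posts 12,500
uncovered = pool.absorb_default(3_000.0)             # default fully absorbed
print(pool.balance, uncovered)  # 9500.0 0.0
```

The point of the design is visible in the last line: the actor who influenced the funding decision, not the investor, takes the first loss.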

Governance is typically shared: token holders or designated stakeholder groups can vote on onboarding new originators, approving proposals, and adjusting governance rules, with oversight supported by independent audits. Many systems reinforce transparency through recurring deliberation rituals (digital assemblies, periodic reviews, or hybrid physical–digital meetings). Across all of this, the blockchain serves as a recordkeeping and verification layer—keeping key events and commitments tamper-evident, traceable, and accessible to authorized stakeholders.

When well-designed, this model can reduce information asymmetries and improve portfolio performance relative to comparable high-friction markets, while also generating social and environmental benefits by treating borrowers, local validators, and investors as partners in a shared governance ecosystem rather than as disconnected counterparties.

Deep-tier supply chain finance platforms: encoding legitimacy in the supply chain

Deep-tier supply chain finance platforms also aim to provide greater visibility to the underrepresented—but in this case, to deep-tier suppliers who are neglected by the mainstream legacy system. Suppliers, especially in the Global South, face challenges in using their invoices as collateral with the centralized banking system, as they are manually created and therefore deemed unverifiable or inauthentic. This kind of system allows suppliers to convert their invoices into tokenized assets, making them transparent and non-fungible. 

Many deep-tier platforms take a “middle path” by stacking legacy financial infrastructure with distributed ledger technology. Institutional actors—banks, anchor buyers, and sometimes public-sector or regulatory bodies—retain control over rulemaking, approvals, and risk governance, while blockchain provides the shared recordkeeping layer that improves transparency, auditability, and data integrity.

In an ecosystem like this one, each entity plays an equal role in creating an indisputable trail. In a typical cycle, a supplier issues an invoice, which is verified by the buyer, helping the bank evaluate it as programmable collateral. The records stay transparent on the ledger, and the governance layer holds everyone involved accountable. In the event of a default, all information is readily available and indisputable. 
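
The cycle described above can be sketched as a simple data structure. Everything here is illustrative: the actor names, the content-hash token scheme and the status flow are assumptions for the sketch, not a real platform's API:

```python
from dataclasses import dataclass, field
from hashlib import sha256

@dataclass
class TokenizedInvoice:
    """A supplier invoice represented as a unique (non-fungible) ledger record."""
    supplier: str
    buyer: str
    amount: float
    status: str = "issued"  # issued -> verified -> financed
    history: list = field(default_factory=list)

    @property
    def token_id(self) -> str:
        # A content hash makes the record unique and tamper-evident.
        data = f"{self.supplier}|{self.buyer}|{self.amount}".encode()
        return sha256(data).hexdigest()[:16]

    def record(self, actor: str, action: str) -> None:
        """Append an audit-trail entry; on a real ledger this would be immutable."""
        self.history.append((actor, action))

invoice = TokenizedInvoice(supplier="TierThreeSupplier", buyer="AnchorBuyer", amount=12_000.0)
invoice.record("AnchorBuyer", "verified")  # buyer confirms the invoice is genuine
invoice.status = "verified"
invoice.record("Bank", "financed")         # bank lends against the verified token
invoice.status = "financed"
print(invoice.token_id, invoice.history)
```

In the event of a dispute, the `history` list plays the role of the indisputable trail: every state change names the actor responsible for it.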

Through tokenization, these platforms confer legitimacy on supplier invoices, allowing banks to lend based on verifiable data instead of trust. They also show that financial inclusion becomes not only possible but sustainable when policymakers take part in the structural design of these financial ecosystems.

Embedding governance in platform design

A key takeaway from the research is the contrast between the two platform types’ technical foundations. They operate on very different architectures, yet both are guided by a shared principle: governance as the cornerstone of trust. Blockchain crowdlending platforms decentralize decision-making, guided by the idea that local knowledge is essential for lending. Deep-tier supply chain finance platforms, on the other hand, build on centralized policies to maintain clear regulation, which is important for platforms at scale. Both types of platforms succeed because they adapt to the operational context in which they work.

Governance has traditionally been viewed as a set of procedures—a human-managed process for oversight and control. In blockchain systems, that governance is evolving into code. Automated contracts now execute decisions based on predefined parameters, embedding rules directly into the system’s logic. Even in the digital space, however, governance must continue to adapt and evolve in response to the needs of the communities it serves.

Innovation and regulation are often portrayed as opposing forces—one racing ahead, the other trying to catch up. Our findings suggest a different story. In these platforms, innovation and oversight operate in tandem rather than at odds. Blockchain-based crowdlending platforms rely on transparency to replace the need for traditional permissions, while deep-tier supply chain finance platforms simplify permissions by generating verifiable, traceable evidence. Together, they show that progress doesn’t come from bypassing rules but from refining and clarifying them.

In deep-tier supply chain finance platforms, regulatory policies evolve through observation of ecosystem behavior. Similarly, in blockchain-based crowdlending platforms, governance and procedural changes occur through proposals that are voted on by community members and executed by smart contracts. This symbiosis of policy and technology marks a major milestone: regulation becomes a mutually beneficial, sustainable process of boundary-setting for policymakers and innovators alike.
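
A community vote of this kind ultimately reduces to a small tallying rule that a smart contract can execute automatically. The sketch below is a toy version: the quorum and supermajority thresholds, the voter names and the voting weights are all invented for illustration:

```python
def tally_proposal(votes: dict, quorum: float, threshold: float) -> bool:
    """Approve a governance proposal when turnout and support clear set thresholds.

    votes maps voter -> (weight, in_favor); quorum and threshold are fractions.
    """
    total_weight = 100.0  # assumed total voting weight registered on the platform
    cast = sum(weight for weight, _ in votes.values())
    if cast / total_weight < quorum:
        return False  # not enough participation: the proposal fails
    in_favor = sum(weight for weight, yes in votes.values() if yes)
    return in_favor / cast >= threshold  # supermajority of the votes actually cast

votes = {"coop_a": (30.0, True), "coop_b": (20.0, True), "investor_c": (15.0, False)}
print(tally_proposal(votes, quorum=0.5, threshold=0.6))  # True: 65% turnout, ~77% support
```

Because the rule is code, its outcome is reproducible by anyone holding the vote record, which is what lets execution be delegated to the contract rather than to an administrator.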

The future of trust and adaptive governance

The shift from legacy, institution-centric governance toward infrastructure-based trust marks a major milestone. The growing maturity of blockchain-based crowdlending platforms and deep-tier supply chain finance platforms shows that this transition is not confined to credit markets—it offers transferable lessons for digital identity, insurance, and even carbon credit measurement and verification. What these platform archetypes make clear is that decentralization alone is not sufficient. Sustainable progress comes from decentralization embedded in structured governance—with clear decision rights, accountability mechanisms, auditability, and compliance pathways.

For future innovators, the message is clear: pilot projects must move beyond experimentation to design governance that is both automated and adaptive. In the financial sector, that means creating frameworks where rules can evolve predictably without losing integrity. In climate markets, it could mean using traceability as a form of governance to ensure accountability in emission tracking and offset verification. Across sectors, the goal is the same—to embed transparency and coordination so that trust becomes measurable and governance becomes responsive.

Ultimately, these technological advancements show that the best way to use technology is to empower people to take control of the systems they depend on. By designing systems where trust is built into the architecture rather than imposed by policy, we move closer to a future of equitable governance, inclusive opportunity and shared accountability.

Housing for humans: How Bayesian networks can make sense of Europe’s property market

Europe is gripped by a housing crisis. Rapid urban migration and the much-criticized property speculation boom have led to chronic shortages in city centers. Planners and private developers are racing to fill these gaps, but to provide genuine solutions, they must build where the demand is. Given the increasing diversity of European cities and the new work-life patterns birthed by the pandemic, this isn’t as easy as it sounds. 

Bayesian networks provide a way to cut through the confusion. While other mathematical models focus on the relationship between cause and effect, Bayesian networks identify each of the various causes at play and explore their relationships with one another, showing which factors are most important to the overall outcome. Not only that, they work on probabilities rather than certainties, an important caveat for the volatile real estate market.

Dr. Manuele Leonelli, Associate Professor of Statistics at IE University, and Álvaro García Murga, a student of the Dual Degree in Business Administration + Data & Business Analytics, applied the Bayesian lens to analyze real estate data from Spain’s three biggest cities: Madrid, Barcelona and Valencia. The results clearly demonstrate that each city has its own set of house price drivers, meaning that a “must-have” in Madrid is barely even a “nice-to-have” in Valencia.

The results are relevant to any city that’s grappling with a housing shortage and wants to put the right solutions in place. By identifying what people are looking for in their property, we can build cities that truly understand the people who inhabit them.

Putting the data into context

The study came about partly by chance. Through LinkedIn, Dr. Leonelli came across a vast dataset compiled by Idealista, Spain’s biggest property website, including 180,000 geo-referenced housing listings. 

Dr. Leonelli was immediately struck by the size of the dataset; it’s unusual to have massive datasets on this kind of topic available free of charge. There was a personal motivation, too: as a tenant in Madrid, he is accustomed to the idiosyncrasies of the Spanish property market and relished the chance to get a better understanding of the situation.

Spain is one of the fastest-growing economies in Europe and the world’s second-most popular tourist destination, but its biggest cities are struggling to keep up. The Bank of Spain estimates the current housing deficit at around 700,000 properties, and the suffocating demand, exacerbated by the so-called “Airbnb effect,” is driving rents beyond what many ordinary people can pay.

“Having some understanding of the Madrid market and of Barcelona and Valencia meant we could really understand what the data was telling us,” explains Dr. Leonelli. “A model can tell you something, data can tell you something, but it's the researcher who puts it into context.”

How Bayesian networks provide transparency for housing data

At their simplest, Bayesian networks provide a way of managing uncertainty by looking at the traditional cause-and-effect relationship from both directions. They examine individual variables and assess their likelihood of causing a particular outcome, but they also analyze the outcome and infer which factors are most likely to have caused it. Each variable is treated as a probability, and updating one factor will affect the probabilities of all the others. If one candidate cause is found to be absent, the probability of the remaining causes rises to account for the observed outcome.
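
This two-way reasoning boils down to Bayes’ rule, the update step that Bayesian networks apply across a whole graph of variables. The sketch below is a toy single-variable illustration with invented numbers (the terrace prevalence and price likelihoods are assumptions, not figures from the study): given that a listing is high-priced, how likely is it to have a terrace?

```python
def posterior(prior: float, likelihood: float, likelihood_not: float) -> float:
    """Bayes' rule: P(cause | outcome) from a prior and two likelihoods."""
    evidence = likelihood * prior + likelihood_not * (1 - prior)
    return likelihood * prior / evidence

# Invented numbers: 20% of listings have a terrace; 70% of terraced homes
# are high-priced, versus 25% of the rest.
p = posterior(prior=0.20, likelihood=0.70, likelihood_not=0.25)
print(round(p, 3))  # 0.412: a high price roughly doubles the odds of a terrace
```

A full Bayesian network chains many such updates together, so that evidence about any one variable ripples through every factor connected to it.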

Bayesian models have long been used in environmental modeling, and they are commonly applied in medical research to study everything from arthritis to neuroscience. In real estate, however, researchers have typically used other probabilistic mathematical approaches, such as linear regression, which identifies the most important pricing factors by drawing a best-fit line between them on a graph. However, Dr. Leonelli believes that Bayesian networks are ideally suited to the property world, as they illustrate how the ecosystem actually works and may lead to more realistic predictions.

“Traditional econometric models are about measuring the effect of a specific variable that we’re interested in: in this case, prices. Those models are designed to find the effect that a particular input has on the output. Bayesian networks do something a little bit different. They model how the inputs interact with each other.”

Simplicity is another advantage. AI and machine learning may already be able to predict house prices with a certain degree of accuracy, but many models are “black box,” meaning that they don’t explain their calculations. Bayesian networks are scalable and transparent. At a time when governments around the world are being criticized for perceived secrecy, the Bayesian approach can help them justify decisions around taxation and investment. 

“How could a government say, ‘We implemented this policy because an AI model told us to,’ and we have no understanding as to why?” Dr. Leonelli points out. “Housing is something that the public, the government and the private sector are all interested in. We want to create something that supports humans.” 

Crunching the numbers

In building this robust, human-centric model, the Idealista dataset provided a perfect launchpad. Each listing came packed with information, including the property’s latitude, longitude and key structural attributes, such as the number of bedrooms, year of construction and distances to key reference points such as the city center. Using data from Google Maps and Geopy, which converts addresses and landmarks into geographic coordinates, Dr. Leonelli and his team were able to add additional factors like the distance to the closest park, supermarket and other facilities.

They then condensed their vast trove of data into three primary variables: structural factors (such as the age of the building, the constructed area, and the number of rooms and bathrooms), spatial information (distance to essential points nearby) and amenities (such as garden, terrace or swimming pool). While the desirability or social status of an area is impossible to gauge with raw data, the spatial variable considered the number of properties available nearby, which may indirectly indicate popularity.

Perhaps the most notable aspect of the research was bootstrapping, which essentially involves drawing random samples of the dataset over and over again to road-test results and gauge the stability of the estimates without requiring fresh inputs.
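
As a rough sketch of the idea (the price figures, seed and sample size are invented for illustration, not taken from the study), bootstrapping fits a model, here simply a mean, on many resamples drawn with replacement and looks at how stable the resulting estimates are:

```python
import random
import statistics

def bootstrap_means(data, n_resamples=200, seed=42):
    """Draw resamples with replacement and collect one estimate per resample."""
    rng = random.Random(seed)
    return [
        statistics.mean(rng.choices(data, k=len(data)))  # resample, then estimate
        for _ in range(n_resamples)
    ]

prices = [210, 180, 350, 290, 240, 310, 275, 195]  # invented prices, in thousands
means = bootstrap_means(prices)
# A small spread across the 200 resamples suggests the estimate reflects
# a real signal in the data rather than an artifact of one sample.
spread = statistics.stdev(means)
print(len(means))  # 200
```

Features whose estimated effects stay consistent across all the resampled models are the ones the team treated as genuine drivers.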

In all, the team created 200 models from their data and identified the features that were consistent. “With bootstrapping, your conclusions are much more likely, and you are more certain that what you are actually estimating is a signal of a true relationship in the data,” Dr. Leonelli explains.

Matching the data with the reality

These dozens of models provided a number of clear patterns, with each of the three cities displaying its own very specific pricing drivers.

While prices in all three cities were influenced by access to the most affluent corridors, such as Las Ramblas in Barcelona or Valencia’s Blasco Ibañez, important contrasts emerged beyond these alluring enclaves. Prices in Madrid were driven by amenities inside the building, such as terraces, swimming pools and lifts. Barcelona’s prices were most heavily influenced by the type and condition of the property, as well as access to local supermarkets. Valencia’s were dominated by structural fundamentals, notably the age of the building.

Dr. Leonelli explains, “The models really told us that prices are driven by very distinct dynamics in the three cities. In a way, you might expect that, but you would also expect that some things would be universal: amenities, for example. You would think that they’re always important, but what we found was that in different cities, they matter more or less.”

It is easy to speculate on the reasons. Madrid’s average income far outstrips any other city in Spain, and this affluence, combined with its interior location, may explain the pull of desirable amenities such as swimming pools and terraces. Barcelona, on the other hand, is a densely packed city with little space for cars, which may underpin the desire for nearby retail outlets. 

The biggest takeaway right now, Dr. Leonelli says, is the evidence that large differences exist across locations. Analyzing this data allowed him to reaffirm what he himself has experienced as a resident in Spain. And as he notes, when the data matches reality, it’s a good sign that the model is correct. 

Solving the housing puzzle

Dr. Leonelli is at pains to stress that his model cannot predict the evolution of the housing market. Its purpose is to illustrate the fundamental drivers of the housing market to facilitate smarter decisions in the future. In this, he believes his model has provided a good proof of concept.

Given the clarity of the results obtained from his study, Dr. Leonelli feels there is clear potential for the further application of Bayesian statistics to real estate trends. “The model has already proven that we could actually support public decisions [with data]. It would be fantastic to bring this work forward and get policymakers or the private sector involved, because they could also give us more information to make it even better.”

He hopes to take this model further and study other European cities, but he highlights the importance of local knowledge and context in correctly analyzing datasets. Just as Spain’s cities each have unique factors affecting property demand, so will each country in Europe have its own real estate riddle to solve: a puzzle that combines culture, economy and trends, condensed into four walls.
