4 Traps when applying Artificial Intelligence to B2B Lending


To understand artificial intelligence bias and its impact on B2B lending, let's look at some underlying causes.

The crowdsourcing concept called the “wisdom of the crowd” holds that a thousand non-experts, taken together, can make better decisions than the most sophisticated experts in a field. Multiple experiments support the idea. For example, using the online game Foldit, more than 57,000 players helped scientists at the University of Washington solve a long-standing molecular biology problem within three weeks.
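The statistical intuition behind the wisdom of the crowd is that independent, unbiased errors cancel when averaged. A minimal toy simulation (the values here are illustrative, not from any experiment) shows the crowd's average landing far closer to the truth than a typical individual guess:

```python
import random

random.seed(42)

TRUE_VALUE = 100.0  # the quantity being estimated (hypothetical)

# Each non-expert guess is noisy but unbiased: true value plus random error.
guesses = [TRUE_VALUE + random.gauss(0, 30) for _ in range(1000)]

# The crowd's estimate is the simple average of all guesses.
crowd_estimate = sum(guesses) / len(guesses)
crowd_error = abs(crowd_estimate - TRUE_VALUE)

# Average error of a single guesser, for comparison.
avg_individual_error = sum(abs(g - TRUE_VALUE) for g in guesses) / len(guesses)

print(f"Crowd error:          {crowd_error:.2f}")
print(f"Avg individual error: {avg_individual_error:.2f}")
```

The effect depends on the errors being independent and roughly unbiased; if every guesser shares the same bias, averaging a thousand of them just reproduces that bias, which is exactly the failure mode the rest of this article is about.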

Yet humans are subject to biases in their decision-making. Some examples include:

  • Overconfidence: We are too confident in our own abilities.
  • Confirmation bias: We tend to listen to only the information that proves our points.
  • Clustering illusion: We see patterns in random events, such as when the number 7 turns up five times in a row at the craps table and we conclude a pattern is at work.
  • Recency effect: We weigh the latest information more heavily than older data.
  • Ostrich effect: We bury or ignore negative information.
  • Information bias: We seek out more information even when it does not improve the decision; more is not necessarily better.

These can lead to artificial intelligence bias issues in the algorithms we design to try to make us more efficient and effective. There are four areas that we should recognize:

1. Algorithm bias: For example, a lending model optimized for profit margin could steer loans toward certain individuals or businesses. Or in medicine, some patients are more profitable than others, often depending on insurance, so a profit-driven model may conflict with patient health. Bias can also be hidden, or built in by design. Design bias may be intentional, where the goals of an algorithm's designers conflict with societal values or norms.

2. Data bias: Algorithms are only as good as the data they learn from, and bias can be embedded in that data. For example, some organizations are trying to predict dilution in order to finance invoices. That is no small feat, especially when the available data sets typically come from a benign credit environment. (See the story: Post Confirmation Dilution in an Uncertain Credit World.)

3. Interpretation of what the algorithms mean: Algorithms are a black box. Their designers may understand the model's limitations and how to interpret its output, but lenders and relationship managers often do not. This is where you run into what I consider the most important errors that can be made: Type I and Type II errors, also called false positives and false negatives, respectively.

4. Who is responsible for the decisions the model makes? While AI may produce better outcomes, it can also reduce autonomy.
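The Type I/Type II distinction in point 3 maps directly onto lending outcomes: a false positive wrongly flags a good borrower as a likely default (lost business), while a false negative misses a real default (a credit loss). A small sketch with made-up labels shows how the two are counted:

```python
# Hypothetical labels: 1 = borrower defaulted, 0 = borrower repaid.
actual    = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]
# 1 = model flagged the borrower as a likely default.
predicted = [1, 0, 1, 0, 0, 1, 0, 1, 1, 0]

# Tally the four confusion-matrix cells.
tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # Type I error
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # Type II error
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))

# Type I: good borrower declined -> foregone revenue.
# Type II: defaulter approved    -> expected loss.
print(f"False positives (Type I):  {fp}")
print(f"False negatives (Type II): {fn}")
```

Which error matters more is a business judgment, not a modeling one: a lender tuning a threshold trades declined good customers against funded bad ones, which is why the interpretation cannot be left to the black box alone.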

So in building AI applications, it’s important to bear the above points in mind. AI has clear advantages: it can reduce inherent individual biases, and in lending the ability to get smarter as you look at more data sets, and so reduce expected losses, is quite attractive. But one caveat stands out: most models have not been trained on data spanning a full business credit cycle. For the many models built on recent data, the real concern arises when this long benign credit cycle ends.
