[Editor’s Note: This is a guest post by David Snitkof, Orchard Platform Co-founder & Chief Analytics Officer. Orchard is a strategic partner for LendIt Fintech USA 2018 taking place on April 9-11, 2018 in San Francisco.]
Imagine you are a lender with agreements in place to sell a portion of the whole loans you originate each month to three different institutional investors. For argument’s sake, the terms of these agreements are similar, and the eligibility criteria for the loans are roughly the same. Let’s also say that every month you are required to allocate ~$5 million of your total monthly originations across these three investors. How do you devise a method for allocating loans that is fair to each buyer, avoiding even the appearance of selection bias, while also ensuring you don’t breach agreement covenants and keeping the process easy and cost-effective for your team to manage? And how do you assure buyers that the loans they receive are a representative portion of all the loans originated that month that meet their eligibility requirements, and that the best loans haven’t been given to another buyer or, worse, cherry-picked by you and kept for your own balance sheet or fund?
Is Random Selection the Right Approach to Allocation?
Well, like plenty of others before you, you might say, “Okay, I’m just going to count it off like grade school gym class. One, two, three, one, two, three, one, two, three,” assigning each eligible loan from the pool to a buyer in that fashion until there are no loans left. Intuitively, you believe that, at the end of the day, this type of random selection is the best way to make sure your allocations are fair and unbiased and that each buyer receives the same distribution of loans.
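The gym-class count-off described above is what programmers call a round-robin assignment, and it can be sketched in a few lines of Python. The loan and buyer names here are hypothetical placeholders; in practice each loan would carry attributes like interest rate, FICO score, and size.

```python
from itertools import cycle

def round_robin_allocate(loans, buyers):
    """Deal each eligible loan to the next buyer in rotating
    one-two-three order until the pool is exhausted."""
    allocation = {buyer: [] for buyer in buyers}
    for loan, buyer in zip(loans, cycle(buyers)):
        allocation[buyer].append(loan)
    return allocation

# Hypothetical pool of ten loans split across three buyers.
loans = [f"loan_{i}" for i in range(10)]
result = round_robin_allocate(loans, ["buyer_a", "buyer_b", "buyer_c"])
```

Note that the count of loans per buyer comes out nearly even by construction; as the article goes on to argue, it is the distribution of loan characteristics within each buyer’s slice that this scheme does nothing to equalize.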
Unfortunately, relying on intuition often isn’t the best approach to developing a mathematically sound methodology. Any loan attribute you look at across a pool of loans, whether interest rates, loan purposes, FICO scores, loan sizes, or borrower incomes, has its own distribution. In fact, counterintuitively, each of the three buyers will likely end up with a very different distribution at the end of each month, not because anyone is trying to do anything shady, but simply because of the effects of randomness embedded in that particular allocation methodology.
Adjusting for the Effects of Randomness
If you flip a quarter once, there’s a 50 percent chance of it landing on heads. Obviously, that doesn’t mean that if you flip it ten times, you’ll get a perfect 50-50 split between heads and tails. As you increase the number of flips, however, the observed split will get closer and closer to 50-50.
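The coin-flip intuition is easy to check with a quick simulation. This is a minimal sketch: ten flips can land far from 50-50, while a hundred thousand flips reliably land very close to it.

```python
import random

random.seed(42)  # fixed seed so the experiment is repeatable

def heads_fraction(n_flips):
    """Fraction of heads observed in n_flips fair coin flips."""
    return sum(random.random() < 0.5 for _ in range(n_flips)) / n_flips

few = heads_fraction(10)        # small sample: can be far from 0.5
many = heads_fraction(100_000)  # large sample: very close to 0.5
```

The gap between the observed fraction and 0.5 shrinks roughly in proportion to the square root of the number of flips, which is the same effect that makes large loan pools behave more predictably than small ones.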
Of course, that’s an oversimplified example, but given today’s availability of massive computing power at relatively low cost, one way to adjust for the effects of randomness is to take a similar approach: simulate a given allocation methodology many times to generate a range of allocation scenarios. Out of the total number of trials, there will be a worst-case scenario and a best-case scenario. For example, in a scenario test consisting of one thousand trials, one of those thousand will result in the three buyers receiving nearly identical allocations, another will result in wildly different allocations, and, as expected, there will be plenty of variation in between. This is the important thing to be able to quantify: How consistent is the allocation model? How skewed is it versus how representative is it of the intended allocation?
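A scenario test of this kind can be sketched briefly. The sketch below makes several simplifying assumptions that the article leaves open: interest rate stands in for all loan attributes, the pool is a made-up set of 300 loans, and each trial shuffles the pool and deals it round-robin to three buyers. The per-trial “skew” is measured as the gap between the highest and lowest buyer-average rate.

```python
import random
import statistics

random.seed(0)

# Hypothetical pool: 300 loans with assumed interest rates.
rates = [random.uniform(0.05, 0.25) for _ in range(300)]

def allocation_spread(pool, n_buyers=3):
    """Shuffle the pool, deal it round-robin to n_buyers, and return
    the spread (max - min) of the buyers' mean interest rates."""
    shuffled = random.sample(pool, len(pool))
    means = [statistics.mean(shuffled[b::n_buyers]) for b in range(n_buyers)]
    return max(means) - min(means)

# One thousand trials yield a best case, a worst case,
# and plenty of variation in between.
spreads = sorted(allocation_spread(rates) for _ in range(1000))
best_case, worst_case = spreads[0], spreads[-1]
```

Sorting the thousand spreads gives exactly the picture the article describes: the smallest spread is the trial where the three buyers came out nearly identical, the largest is the wildly uneven one, and the shape of the distribution in between quantifies how consistent the methodology really is.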
Quantifying the Results and Putting Them to Use
The allocation of whole loans, whether distributing them among multiple loan buyers or determining which loans to pledge to a particular credit facility, involves non-trivial complexity and correspondingly large variability in outcomes. Now imagine that you have this new ability to quantify the accuracy of your loan allocation methodology. In our latest white paper, Building a Capital Roadmap: The Challenges and Benefits of Creating and Optimizing a Funding Model for Each Stage of a Lender’s Growth, we discuss how, as lenders grow and secure multiple financing options, they need to monitor covenants, concentration limits, and portfolio performance closely, and to have a method for determining the most efficient use of available capital. Beyond using scenario testing to optimize allocations toward a lower overall cost of capital, or other periodic goals, a lender with the ability to quantify the accuracy of allocations can build this new metric into its allocation model, increasing the transparency and quantifiability of the process while also providing an optimization metric that can be used to minimize adverse selection.
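One simple way to turn “representativeness” into a number, sketched below, is to score an allocation by how far any single buyer’s average drifts from the pool average. This is an illustrative metric of my own construction, not the methodology from the white paper, and the pool and allocation values are hypothetical.

```python
import statistics

def representativeness_score(pool, allocation):
    """Worst-case absolute deviation of any buyer's mean loan
    attribute from the pool mean; lower scores mean each buyer's
    slice better mirrors the full pool. A simplified illustration,
    not any lender's actual allocation metric."""
    pool_mean = statistics.mean(pool)
    return max(abs(statistics.mean(loans) - pool_mean)
               for loans in allocation.values())

# Hypothetical interest-rate pool split across three buyers.
pool = [0.08, 0.10, 0.12, 0.09, 0.11, 0.13]
allocation = {"a": [0.08, 0.13], "b": [0.10, 0.11], "c": [0.12, 0.09]}
score = representativeness_score(pool, allocation)
```

A metric like this can serve both roles the article describes: reported to buyers, it makes the fairness of each month’s allocation transparent and auditable; fed into an optimizer, it becomes an objective to minimize, which directly limits the room for adverse selection.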
About the Author
David Snitkof is co-founder and Chief Analytics Officer at Orchard Platform. He has applied analytics and technology to finance, healthcare, travel, and media for over 12 years. At American Express, he worked on risk, product, and marketing analytics for new consumer card products and partnerships and also developed the underwriting criteria used to approve or decline billions of dollars in new credit. At Citigroup, David led a team driving analytics and strategy for the full Small Business credit lifecycle. David was also head of analytics and marketing at Oyster.com, an online travel startup since acquired by TripAdvisor. David excels at developing creative uses for data, communicating technical concepts, and building high-performing teams and products. He graduated from Brown University, where he studied Economics, Cognitive Psychology, & Neuroscience.