# Lending portfolio tutorial
This tutorial builds a lending portfolio model with three entities — borrowers, loans, and payments — and answers four questions a credit analytics team would ask: how many loans does each borrower have, how much has been repaid on their most recent loan, which loan has received the largest repayment, and how should borrowers be classified by risk level. Every query runs against inline DuckDB tables with no external data.
## The data model
The dataset has four tables: a borrower dimension, loan and payment fact tables, and a pre-aggregated OBT (one big table) that stores each borrower's loan IDs as an array.
- `lending.dim_borrowers` — one row per borrower. `borrower_id` is the primary key. `credit_score` is a numeric score, and `membership_tier` is `'gold'`, `'silver'`, or `'bronze'`.
- `lending.fct_loans` — one row per loan contract. `loan_id` is the primary key. `loan_borrower_id` is a foreign key to borrowers. `loan_amount` is the principal and `interest_rate` is the annual rate.
- `lending.fct_payments` — one row per payment made on a loan. `payment_id` is the primary key. `payment_loan_id` is a foreign key to loans. `amount` is the payment value.
- `lending.agg_borrowers_obt` — pre-aggregated: each borrower's `last_loan_id` (most recent by issue date) and `loans` (array of all loan IDs, ordered chronologically).
```sql
CREATE SCHEMA IF NOT EXISTS lending;
DROP TABLE IF EXISTS lending.dim_borrowers;
DROP TABLE IF EXISTS lending.fct_loans;
DROP TABLE IF EXISTS lending.fct_payments;
DROP TABLE IF EXISTS lending.agg_borrowers_obt;

CREATE TABLE lending.dim_borrowers (
  borrower_id BIGINT,
  name VARCHAR,
  credit_score BIGINT,
  membership_tier VARCHAR
);
INSERT INTO lending.dim_borrowers VALUES
  (100, 'Alice', 720, 'gold'),
  (101, 'Bob', 680, 'silver'),
  (102, 'Carol', 750, 'gold'),
  (103, 'Dan', 620, 'bronze'),
  (104, 'Eve', 700, 'silver');

CREATE TABLE lending.fct_loans (
  loan_id BIGINT,
  loan_borrower_id BIGINT,
  loan_amount DECIMAL(10,2),
  interest_rate DECIMAL(5,2),
  issue_date DATE
);
INSERT INTO lending.fct_loans VALUES
  (200, 100, 5000.00, 5.50, DATE '2024-01-15'),
  (201, 100, 12000.00, 4.75, DATE '2024-06-01'),
  (202, 101, 8000.00, 8.25, DATE '2024-03-10'),
  (203, 102, 15000.00, 4.75, DATE '2024-02-20'),
  (204, 102, 3000.00, 6.00, DATE '2024-09-05'),
  (205, 103, 20000.00, 12.00, DATE '2024-04-12'),
  (206, 104, 7000.00, 6.00, DATE '2024-05-30'),
  (207, 104, 10000.00, 5.50, DATE '2024-11-01');

CREATE TABLE lending.fct_payments (
  payment_id BIGINT,
  payment_loan_id BIGINT,
  amount DECIMAL(10,2),
  payment_date DATE
);
INSERT INTO lending.fct_payments VALUES
  (300, 200, 250.00, DATE '2024-02-15'),
  (301, 200, 250.00, DATE '2024-03-15'),
  (302, 201, 600.00, DATE '2024-07-01'),
  (303, 201, 600.00, DATE '2024-08-01'),
  (304, 202, 400.00, DATE '2024-04-10'),
  (305, 203, 750.00, DATE '2024-03-20'),
  (306, 203, 750.00, DATE '2024-04-20'),
  (307, 204, 300.00, DATE '2024-10-05'),
  (308, 205, 500.00, DATE '2024-05-12'),
  (309, 206, 350.00, DATE '2024-06-30'),
  (310, 207, 500.00, DATE '2024-12-01');

CREATE TABLE lending.agg_borrowers_obt (
  borrower_id BIGINT,
  last_loan_id BIGINT,
  loans BIGINT[]
);
INSERT INTO lending.agg_borrowers_obt
SELECT
  loan_borrower_id AS borrower_id,
  MAX_BY(loan_id, issue_date) AS last_loan_id,
  ARRAY_AGG(loan_id ORDER BY issue_date) AS loans
FROM lending.fct_loans
GROUP BY loan_borrower_id;

SELECT CAST(COUNT(*) AS INTEGER) AS Count FROM lending.agg_borrowers_obt;
```

| Count INTEGER |
|---|
| 5 |
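The `MAX_BY` + `ARRAY_AGG` pre-aggregation can be mirrored in plain Python to make the OBT's shape concrete. This is a sketch, not FeatureQL or DuckDB; `loans`, `obt`, and the tuple layout are illustrative names, but the rows are the sample data above.

```python
# Sketch of the OBT build: per borrower, keep the loan issued last
# (MAX_BY equivalent) and all loan IDs in issue order (ARRAY_AGG ... ORDER BY).
loans = [  # (loan_id, borrower_id, issue_date) from lending.fct_loans
    (200, 100, "2024-01-15"), (201, 100, "2024-06-01"),
    (202, 101, "2024-03-10"), (203, 102, "2024-02-20"),
    (204, 102, "2024-09-05"), (205, 103, "2024-04-12"),
    (206, 104, "2024-05-30"), (207, 104, "2024-11-01"),
]

obt = {}
# ISO dates sort lexicographically, so a plain string sort is date order.
for loan_id, borrower_id, issue_date in sorted(loans, key=lambda r: r[2]):
    row = obt.setdefault(borrower_id, {"last_loan_id": None, "loans": []})
    row["last_loan_id"] = loan_id  # last write in date order = most recent loan
    row["loans"].append(loan_id)   # chronological order preserved by the sort

print(obt[100])  # {'last_loan_id': 201, 'loans': [200, 201]}
```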
## Define entities and map external data
Every FeatureQL model starts by declaring entities and their primary key inputs. This lending model has three entities: BORROWERS (IDs in the 100s), LOANS (200s), and PAYMENTS (300s). The #ENTITY type annotation on each INPUT() tells FeatureQL which entity a key belongs to — this is how it validates relationship traversals.
```sql
CREATE OR REPLACE FEATURES IN FM.LENDING AS
SELECT
  BORROWERS := ENTITY(),
  LOANS := ENTITY(),
  PAYMENTS := ENTITY(),
  BORROWER_ID := INPUT(BIGINT#BORROWERS),
  LOAN_ID := INPUT(BIGINT#LOANS),
  PAYMENT_ID := INPUT(BIGINT#PAYMENTS),
;
```

| feature_name VARCHAR | status VARCHAR | message VARCHAR |
|---|---|---|
| FM.LENDING.BORROWERS | CREATED | Feature created as not exists |
| FM.LENDING.LOANS | CREATED | Feature created as not exists |
| FM.LENDING.PAYMENTS | CREATED | Feature created as not exists |
| FM.LENDING.BORROWER_ID | CREATED | Feature created as not exists |
| FM.LENDING.LOAN_ID | CREATED | Feature created as not exists |
| FM.LENDING.PAYMENT_ID | CREATED | Feature created as not exists |
With entities declared, the next step maps table columns to features using EXTERNAL_COLUMNS(). Each mapping declares which column is the lookup key (BIND TO the entity's input), which columns to expose, and which table to read from. The TABLES. prefix groups all raw mappings together, keeping them separate from derived features.
Foreign key columns carry entity annotations too: loan_borrower_id BIGINT#BORROWERS tells FeatureQL that this column references the BORROWERS entity, and payment_loan_id BIGINT#LOANS references LOANS. The OBT's loans ARRAY(BIGINT#LOANS) declares an array of loan entity references — this is what enables the EXTEND(ZIP(...)) pattern later.
```sql
CREATE OR REPLACE FEATURES IN FM.LENDING AS
SELECT
  TABLES.DIM_BORROWERS := EXTERNAL_COLUMNS(
    borrower_id BIGINT#BORROWERS BIND TO BORROWER_ID,
    name VARCHAR,
    credit_score BIGINT,
    membership_tier VARCHAR,
    FROM TABLE(lending.dim_borrowers)
  ),
  TABLES.FCT_LOANS := EXTERNAL_COLUMNS(
    loan_id BIGINT#LOANS BIND TO LOAN_ID,
    loan_borrower_id BIGINT#BORROWERS,
    loan_amount DECIMAL(10,2),
    interest_rate DECIMAL(5,2),
    issue_date DATE,
    FROM TABLE(lending.fct_loans)
  ),
  TABLES.FCT_PAYMENTS := EXTERNAL_COLUMNS(
    payment_id BIGINT#PAYMENTS BIND TO PAYMENT_ID,
    payment_loan_id BIGINT#LOANS,
    amount DECIMAL(10,2),
    payment_date DATE,
    FROM TABLE(lending.fct_payments)
  ),
  TABLES.AGG_BORROWERS_OBT := EXTERNAL_COLUMNS(
    borrower_id BIGINT#BORROWERS BIND TO BORROWER_ID,
    last_loan_id BIGINT#LOANS,
    loans ARRAY(BIGINT#LOANS),
    FROM TABLE(lending.agg_borrowers_obt)
  ),
;
```

| feature_name VARCHAR | status VARCHAR | message VARCHAR |
|---|---|---|
| FM.LENDING.TABLES.DIM_BORROWERS | CREATED | Feature created as not exists |
| FM.LENDING.TABLES.FCT_LOANS | CREATED | Feature created as not exists |
| FM.LENDING.TABLES.FCT_PAYMENTS | CREATED | Feature created as not exists |
| FM.LENDING.TABLES.AGG_BORROWERS_OBT | CREATED | Feature created as not exists |
## How many loans does each borrower have?
Counting loans per borrower is a PK-to-FK aggregation: each borrower (the primary key side) has one or more loans referencing them (the foreign key side). In SQL, you would write a GROUP BY loan_borrower_id subquery and LEFT JOIN it back to the borrower dimension. In FeatureQL, RELATED() replaces that entire pattern.
BORROWER_ID.RELATED(COUNT(1) GROUP BY TABLES.FCT_LOANS[loan_borrower_id]) reads as: "for each BORROWER_ID, count the rows in fct_loans where loan_borrower_id matches." The GROUP BY inside RELATED() names the foreign key column; VIA BORROWER_ID is inferred from the chained syntax. FeatureQL generates the subquery and the join automatically.
IS_MULTI_BORROWER is a pure boolean derived from NUM_LOANS — no additional table access, just a comparison.
```sql
WITH
  BORROWER_NAME := TABLES.DIM_BORROWERS[name],
  CREDIT_SCORE := TABLES.DIM_BORROWERS[credit_score],
  NUM_LOANS := BORROWER_ID.RELATED(COUNT(1) GROUP BY TABLES.FCT_LOANS[loan_borrower_id]),
  IS_MULTI_BORROWER := NUM_LOANS > 1,
SELECT
  BORROWER_ID,
  BORROWER_NAME,
  CREDIT_SCORE,
  NUM_LOANS,
  IS_MULTI_BORROWER,
FROM FM.LENDING
FOR CROSS
  BORROWER_ID := BIND_VALUES(SEQUENCE(100,104)),
  LOAN_ID := BIND_VALUES(SEQUENCE(200,207)),
;
```

| FM.LENDING.BORROWER_ID BIGINT | BORROWER_NAME VARCHAR | CREDIT_SCORE BIGINT | NUM_LOANS BIGINT | IS_MULTI_BORROWER BOOLEAN |
|---|---|---|---|---|
| 100 | Alice | 720 | 2 | TRUE |
| 101 | Bob | 680 | 1 | FALSE |
| 102 | Carol | 750 | 2 | TRUE |
| 103 | Dan | 620 | 1 | FALSE |
| 104 | Eve | 700 | 2 | TRUE |
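Mechanically, the RELATED() count is just a group-and-count over the foreign key column. A plain-Python sketch of the same aggregation on the sample loan rows (variable names are illustrative, not FeatureQL):

```python
# PK-to-FK aggregation behind NUM_LOANS: group the fct_loans rows by their
# borrower foreign key and count. One list entry per loan row in the sample.
from collections import Counter

loan_borrower_ids = [100, 100, 101, 102, 102, 103, 104, 104]

num_loans = Counter(loan_borrower_ids)               # borrower_id -> loan count
is_multi = {b: n > 1 for b, n in num_loans.items()}  # IS_MULTI_BORROWER

print(num_loans[100], is_multi[100])  # 2 True
print(num_loans[101], is_multi[101])  # 1 False
```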
## How much has been repaid on each borrower's most recent loan?
This question crosses two entity boundaries: borrower → loan → payments. The OBT stores each borrower's last_loan_id as a foreign key to the loans entity. To get the total repaid on that loan, we need to sum payment amounts for that specific loan — a FK-to-FK aggregation.
LAST_LOAN_ID.RELATED(SUM(TABLES.FCT_PAYMENTS[amount]) GROUP BY TABLES.FCT_PAYMENTS[payment_loan_id]) reads as: "follow last_loan_id to the loans entity, then aggregate payment amounts grouped by payment_loan_id." The result is a single decimal per borrower — the total amount repaid on their most recent loan.
In SQL, this would require two joins (borrower OBT → payments → group by loan) or a correlated subquery. FeatureQL resolves the multi-hop path from the entity annotations.
```sql
WITH
  BORROWER_NAME := TABLES.DIM_BORROWERS[name],
  LAST_LOAN_ID := TABLES.AGG_BORROWERS_OBT[last_loan_id],
  LAST_LOAN_REPAID := LAST_LOAN_ID.RELATED(
    SUM(TABLES.FCT_PAYMENTS[amount]) GROUP BY TABLES.FCT_PAYMENTS[payment_loan_id]
  ),
SELECT
  BORROWER_ID,
  BORROWER_NAME,
  LAST_LOAN_ID,
  LAST_LOAN_REPAID,
FROM FM.LENDING
FOR CROSS
  BORROWER_ID := BIND_VALUES(SEQUENCE(100,104)),
  PAYMENT_ID := BIND_VALUES(SEQUENCE(300,310)),
;
```

| FM.LENDING.BORROWER_ID BIGINT | BORROWER_NAME VARCHAR | LAST_LOAN_ID BIGINT | LAST_LOAN_REPAID VARCHAR |
|---|---|---|---|
| 100 | Alice | 201 | 1200.00 |
| 101 | Bob | 202 | 400.00 |
| 102 | Carol | 204 | 300.00 |
| 103 | Dan | 205 | 500.00 |
| 104 | Eve | 207 | 500.00 |
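The FK-to-FK hop can be sketched in plain Python: first aggregate payments per loan, then follow each borrower's `last_loan_id` into that aggregate. This is not FeatureQL; the names are illustrative, but the values come from the sample tables above.

```python
# FK-to-FK hop behind LAST_LOAN_REPAID, on the sample data.
last_loan = {100: 201, 101: 202, 102: 204, 103: 205, 104: 207}  # from the OBT
payments = [  # (payment_loan_id, amount) from fct_payments
    (200, 250.00), (200, 250.00), (201, 600.00), (201, 600.00),
    (202, 400.00), (203, 750.00), (203, 750.00), (204, 300.00),
    (205, 500.00), (206, 350.00), (207, 500.00),
]

repaid_by_loan = {}
for loan_id, amount in payments:  # SUM(amount) GROUP BY payment_loan_id
    repaid_by_loan[loan_id] = repaid_by_loan.get(loan_id, 0.0) + amount

# Second hop: borrower -> last loan -> total repaid on that loan.
last_loan_repaid = {b: repaid_by_loan.get(l, 0.0) for b, l in last_loan.items()}
print(last_loan_repaid)
# {100: 1200.0, 101: 400.0, 102: 300.0, 103: 500.0, 104: 500.0}
```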
## Which loan has received the largest repayment?
This question requires working with each borrower's array of loans, enriching each element with repayment data, and then picking the top one. This is the EXTEND() + TRANSFORM() pattern.
EXTEND(ZIP(BORROWER_LOANS AS loan_id) WITH LOAN_REPAID AS loan_repaid1) combines two steps: ZIP() converts the array of loan IDs into an array of rows (each row has a loan_id field), then EXTEND() looks up each loan's total repayment from the LOAN_REPAID feature and adds it as a loan_repaid1 field on every row. The result is ARRAY<ROW<loan_id, loan_repaid1>> — a nested table inside each borrower row.
.TRANSFORM(SELECT loan_id, loan_repaid1 ORDER BY loan_repaid1 DESC LIMIT 1) then runs a mini-query inside that array: sort by repayment descending and keep only the top row. The result is a single-element array containing the loan with the largest total repayment.
In SQL, the equivalent would be an UNNEST + JOIN + ROW_NUMBER() OVER (PARTITION BY borrower_id ORDER BY repaid DESC) window function + re-aggregation. FeatureQL keeps the nested structure intact — no flattening and re-grouping.
```sql
WITH
  BORROWER_NAME := TABLES.DIM_BORROWERS[name],
  BORROWER_LOANS := TABLES.AGG_BORROWERS_OBT[loans],
  LOAN_REPAID := LOAN_ID.RELATED(
    SUM(TABLES.FCT_PAYMENTS[amount]) GROUP BY TABLES.FCT_PAYMENTS[payment_loan_id]
  ),
  LOANS_ENRICHED := EXTEND(
    ZIP(BORROWER_LOANS AS loan_id)
    WITH LOAN_REPAID AS loan_repaid1
  ),
  TOP_REPAYMENT := LOANS_ENRICHED.TRANSFORM(SELECT loan_id, loan_repaid1 ORDER BY loan_repaid1 DESC LIMIT 1),
SELECT
  BORROWER_ID,
  BORROWER_NAME,
  TOP_REPAYMENT,
FROM FM.LENDING
FOR CROSS
  BORROWER_ID := BIND_VALUES(SEQUENCE(100,104)),
  PAYMENT_ID := BIND_VALUES(SEQUENCE(300,310)),
;
```

| FM.LENDING.BORROWER_ID BIGINT | BORROWER_NAME VARCHAR | TOP_REPAYMENT VARCHAR |
|---|---|---|
| 100 | Alice | [{loan_id: 201, loan_repaid1: 1200.00}] |
| 101 | Bob | [{loan_id: 202, loan_repaid1: 400.00}] |
| 102 | Carol | [{loan_id: 203, loan_repaid1: 1500.00}] |
| 103 | Dan | [{loan_id: 205, loan_repaid1: 500.00}] |
| 104 | Eve | [{loan_id: 207, loan_repaid1: 500.00}] |
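The enrich-then-top-1 pattern has a compact plain-Python analogue: build the array of rows per borrower, sort it, and slice. A sketch on the sample values (not FeatureQL; `borrower_loans`, `loan_repaid`, and `top_repayment` are illustrative names):

```python
# EXTEND(ZIP(...)) + TRANSFORM(... ORDER BY ... DESC LIMIT 1) in plain Python.
borrower_loans = {100: [200, 201], 101: [202], 102: [203, 204],
                  103: [205], 104: [206, 207]}          # OBT loans arrays
loan_repaid = {200: 500.0, 201: 1200.0, 202: 400.0, 203: 1500.0,
               204: 300.0, 205: 500.0, 206: 350.0, 207: 500.0}

top_repayment = {}
for borrower_id, loan_ids in borrower_loans.items():
    # EXTEND/ZIP: one row per array element, enriched with loan_repaid1
    enriched = [{"loan_id": l, "loan_repaid1": loan_repaid[l]} for l in loan_ids]
    # TRANSFORM: ORDER BY loan_repaid1 DESC LIMIT 1 -> single-element array
    top_repayment[borrower_id] = sorted(
        enriched, key=lambda r: r["loan_repaid1"], reverse=True)[:1]

print(top_repayment[102])  # [{'loan_id': 203, 'loan_repaid1': 1500.0}]
```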
## Classify borrowers by risk level
Building a classification feature means combining multiple signals into a single label. Here, NUM_LOANS (from RELATED()) and TOTAL_REPAID (from EXTEND() + TRANSFORM() + UNWRAP_ONE()) feed a CASE WHEN expression.
TOTAL_REPAID reuses the same LOANS_ENRICHED array from the previous section but applies a different TRANSFORM: SELECT SUM(loan_repaid1) aggregates all loan repayments into a single-row array, and .UNWRAP_ONE() extracts the scalar value.
The CASE WHEN then classifies: borrowers with 2+ loans and over $500 total repaid are 'low' risk; borrowers with 2+ loans or over $300 total repaid are 'medium'; everyone else is 'high'.
```sql
WITH
  BORROWER_NAME := TABLES.DIM_BORROWERS[name],
  BORROWER_LOANS := TABLES.AGG_BORROWERS_OBT[loans],
  NUM_LOANS := BORROWER_ID.RELATED(COUNT(1) GROUP BY TABLES.FCT_LOANS[loan_borrower_id]),
  LOAN_REPAID := LOAN_ID.RELATED(
    SUM(TABLES.FCT_PAYMENTS[amount]) GROUP BY TABLES.FCT_PAYMENTS[payment_loan_id]
  ),
  LOANS_ENRICHED := EXTEND(
    ZIP(BORROWER_LOANS AS loan_id)
    WITH LOAN_REPAID AS loan_repaid1
  ),
  TOTAL_REPAID := LOANS_ENRICHED.TRANSFORM(SELECT SUM(loan_repaid1)).UNWRAP_ONE(),
  RISK_LEVEL := CASE
    WHEN NUM_LOANS >= 2 AND TOTAL_REPAID > 500.00 THEN 'low'
    WHEN NUM_LOANS >= 2 OR TOTAL_REPAID > 300.00 THEN 'medium'
    ELSE 'high'
  END,
SELECT
  BORROWER_ID,
  BORROWER_NAME,
  RISK_LEVEL,
  NUM_LOANS,
  TOTAL_REPAID,
FROM FM.LENDING
FOR CROSS
  BORROWER_ID := BIND_VALUES(SEQUENCE(100,104)),
  LOAN_ID := BIND_VALUES(SEQUENCE(200,207)),
  PAYMENT_ID := BIND_VALUES(SEQUENCE(300,310)),
;
```

| FM.LENDING.BORROWER_ID BIGINT | BORROWER_NAME VARCHAR | RISK_LEVEL VARCHAR | NUM_LOANS BIGINT | TOTAL_REPAID VARCHAR |
|---|---|---|---|---|
| 100 | Alice | low | 2 | 1700.00 |
| 101 | Bob | medium | 1 | 400.00 |
| 102 | Carol | low | 2 | 1800.00 |
| 103 | Dan | medium | 1 | 500.00 |
| 104 | Eve | low | 2 | 850.00 |
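The CASE WHEN rules translate directly into an if/elif chain. A plain-Python sketch applied to the per-borrower aggregates in the result table above (`risk_level` and `signals` are illustrative names, not part of FeatureQL):

```python
# The CASE WHEN risk rules as a function: 2+ loans AND > $500 repaid -> low;
# 2+ loans OR > $300 repaid -> medium; otherwise high.
def risk_level(num_loans: int, total_repaid: float) -> str:
    if num_loans >= 2 and total_repaid > 500.00:
        return "low"
    if num_loans >= 2 or total_repaid > 300.00:
        return "medium"
    return "high"

signals = {  # borrower_id: (NUM_LOANS, TOTAL_REPAID) from the table above
    100: (2, 1700.0), 101: (1, 400.0), 102: (2, 1800.0),
    103: (1, 500.0), 104: (2, 850.0),
}
levels = {b: risk_level(n, t) for b, (n, t) in signals.items()}
print(levels[103])  # medium: one loan, but more than $300 repaid
```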
## Putting it all together
This single query combines every feature from the previous sections: borrower name and credit score from the dimension table, loan count and multi-borrower flag from RELATED(), last-loan repayment from FK-to-FK RELATED(), top repayment loan from EXTEND() + TRANSFORM(), and risk level from CASE WHEN.
```sql
WITH
  BORROWER_NAME := TABLES.DIM_BORROWERS[name],
  CREDIT_SCORE := TABLES.DIM_BORROWERS[credit_score],
  LAST_LOAN_ID := TABLES.AGG_BORROWERS_OBT[last_loan_id],
  BORROWER_LOANS := TABLES.AGG_BORROWERS_OBT[loans],
  NUM_LOANS := BORROWER_ID.RELATED(COUNT(1) GROUP BY TABLES.FCT_LOANS[loan_borrower_id]),
  IS_MULTI_BORROWER := NUM_LOANS > 1,
  LAST_LOAN_REPAID := LAST_LOAN_ID.RELATED(
    SUM(TABLES.FCT_PAYMENTS[amount]) GROUP BY TABLES.FCT_PAYMENTS[payment_loan_id]
  ),
  LOAN_REPAID := LOAN_ID.RELATED(
    SUM(TABLES.FCT_PAYMENTS[amount]) GROUP BY TABLES.FCT_PAYMENTS[payment_loan_id]
  ),
  LOANS_ENRICHED := EXTEND(
    ZIP(BORROWER_LOANS AS loan_id)
    WITH LOAN_REPAID AS loan_repaid1
  ),
  TOP_REPAYMENT := LOANS_ENRICHED.TRANSFORM(SELECT loan_id, loan_repaid1 ORDER BY loan_repaid1 DESC LIMIT 1),
  TOTAL_REPAID := LOANS_ENRICHED.TRANSFORM(SELECT SUM(loan_repaid1)).UNWRAP_ONE(),
  RISK_LEVEL := CASE
    WHEN NUM_LOANS >= 2 AND TOTAL_REPAID > 500.00 THEN 'low'
    WHEN NUM_LOANS >= 2 OR TOTAL_REPAID > 300.00 THEN 'medium'
    ELSE 'high'
  END,
SELECT
  BORROWER_ID,
  BORROWER_NAME,
  CREDIT_SCORE,
  NUM_LOANS,
  IS_MULTI_BORROWER,
  LAST_LOAN_REPAID,
  TOP_REPAYMENT,
  RISK_LEVEL,
FROM FM.LENDING
FOR CROSS
  BORROWER_ID := BIND_VALUES(SEQUENCE(100,104)),
  LOAN_ID := BIND_VALUES(SEQUENCE(200,207)),
  PAYMENT_ID := BIND_VALUES(SEQUENCE(300,310)),
;
```

| FM.LENDING.BORROWER_ID BIGINT | BORROWER_NAME VARCHAR | CREDIT_SCORE BIGINT | NUM_LOANS BIGINT | IS_MULTI_BORROWER BOOLEAN | LAST_LOAN_REPAID VARCHAR | TOP_REPAYMENT VARCHAR | RISK_LEVEL VARCHAR |
|---|---|---|---|---|---|---|---|
| 100 | Alice | 720 | 2 | TRUE | 1200.00 | [{loan_id: 201, loan_repaid1: 1200.00}] | low |
| 101 | Bob | 680 | 1 | FALSE | 400.00 | [{loan_id: 202, loan_repaid1: 400.00}] | medium |
| 102 | Carol | 750 | 2 | TRUE | 300.00 | [{loan_id: 203, loan_repaid1: 1500.00}] | low |
| 103 | Dan | 620 | 1 | FALSE | 500.00 | [{loan_id: 205, loan_repaid1: 500.00}] | medium |
| 104 | Eve | 700 | 2 | TRUE | 500.00 | [{loan_id: 207, loan_repaid1: 500.00}] | low |
In a production FeatureMesh deployment, the data engineer owns the first two layers: entity declarations (ENTITY(), INPUT()) and data source mappings (EXTERNAL_COLUMNS()). These rarely change and define how the warehouse connects to the feature model. The analyst owns everything above: RELATED() aggregations, EXTEND() enrichments, TRANSFORM() filters, and CASE WHEN classifications. They build and iterate on business logic without touching the underlying table wiring.
## What's next
- RELATED() reference — all join patterns (PK-to-FK, FK-to-PK, FK-to-FK) with syntax details
- EXTEND() reference — enriching arrays with cross-entity data, aliasing, and VIA/BIND TO inference
- Batch analytics — BIND_KEYSET(), EXTERNAL_COLUMNS(), and running analytics across large datasets