Types
Every feature in FeatureQL has a type, and the language does not coerce between type families automatically. If you add a BIGINT to a DOUBLE, you get an error — you must cast explicitly.
This strictness exists for a reason: FeatureQL transpiles to multiple backends (DuckDB, Trino, BigQuery, DataFusion), and each handles implicit coercion differently. What silently widens in one backend might truncate in another. By requiring explicit casts, FeatureQL guarantees that your feature produces the same result everywhere.
In practice, this is less painful than it sounds. Integers and decimals mix freely in expressions (FeatureQL handles the promotion automatically), and most business logic is exactly that: combining decimal features with integer constants or computed values. The compiler only asks you to be explicit when mixing truly different type families, like integers with floating point.
Supported types
| Category | Canonical name | Aliases |
|---|---|---|
| Integer | BIGINT | INT64 |
| | INT | INT32, INTEGER |
| | SMALLINT | INT16 |
| | TINYINT | INT8 |
| Decimal | DECIMAL | |
| Floating Point | FLOAT | FLOAT32 |
| | DOUBLE | FLOAT64 |
| String | VARCHAR | |
| Boolean | BOOLEAN | |
| Temporal | TIMESTAMP | |
| | DATE | |
| | INTERVAL | |
| Document | JSON | |
| Complex | ARRAY | LIST |
| | ROW | STRUCT |
FeatureQL does not support the MAP type. Use an ARRAY of ROWs with an INDEX instead — see Array of Rows for details.
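As a rough sketch of that workaround, key/value pairs can be modeled as rows inside an array (the keys and field values here are illustrative; see Array of Rows for how INDEX applies to such a structure):

```sql
SELECT
    -- Instead of a MAP('a' -> 1, 'b' -> 2), store each pair as a ROW
    KV_PAIRS := ARRAY[ROW('a', 1), ROW('b', 2)]
```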
Type inference
Most of the time you don't need to declare types. FeatureQL infers them from context:
- Literals — `'Hello'` is `VARCHAR`, `1` is `BIGINT`, `1.25` is `DECIMAL(3,2)`, `1.25e0` is `DOUBLE`
- Operators and functions — `1 + 2` produces `BIGINT` because both operands are `BIGINT`
- No implicit coercion across type families — mixing `BIGINT` with `DOUBLE` in arithmetic is an error. Write `1::DOUBLE + 2e0` instead.
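The inference rules above can be checked directly with TYPEOF() — a minimal sketch, with the expected types taken from the literal rules just listed:

```sql
SELECT
    TYPEOF('Hello'),    -- VARCHAR
    TYPEOF(1),          -- BIGINT
    TYPEOF(1.25),       -- DECIMAL(3,2)
    TYPEOF(1.25e0),     -- DOUBLE
    1::DOUBLE + 2e0     -- explicit cast: BIGINT and DOUBLE never mix implicitly
```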
Integer to DECIMAL promotion
Integers promote to DECIMAL automatically when used alongside DECIMAL values in arithmetic, comparisons, or BETWEEN. This applies to both literal integers and computed integer features:
```sql
WITH
    PRICE := 10.25,              -- DECIMAL(4,2)
    QUANTITY := INPUT(BIGINT),   -- a computed integer, not a literal
SELECT
    PRICE + 5,          -- works: literal 5 promotes to DECIMAL
    PRICE + QUANTITY,   -- works: computed BIGINT also promotes to DECIMAL
    PRICE > QUANTITY,   -- works: comparison promotes too
```

When an integer is promoted, FeatureQL treats it as DECIMAL(N, 0) where N depends on the integer type: N=3 for TINYINT, N=5 for SMALLINT, N=10 for INT, N=20 for BIGINT.
```sql
WITH
    PRICE := 10.25
SELECT
    ADDITION := PRICE + 1,
    MULTI_STEP := 10 + PRICE + 5,
    COMPARISON := PRICE > 10,
    ADDITION_TINYINT := PRICE + TINYINT '1',
    ADDITION_TYPE := TYPEOF(ADDITION),
    MULTI_STEP_TYPE := TYPEOF(MULTI_STEP),
    ADDITION_TINYINT_TYPE := TYPEOF(ADDITION_TINYINT),
;
```

| ADDITION VARCHAR | MULTI_STEP VARCHAR | COMPARISON BOOLEAN | ADDITION_TINYINT VARCHAR | ADDITION_TYPE VARCHAR | MULTI_STEP_TYPE VARCHAR | ADDITION_TINYINT_TYPE VARCHAR |
|---|---|---|---|---|---|---|
| 11.25 | 25.25 | true | 11.25 | DECIMAL(23,2) | DECIMAL(24,2) | DECIMAL(6,2) |
This promotion does not apply inside ARRAY or ROW constructors. Elements in a collection must belong to the same type family: "Integers", "Decimals", or "Floating Points". Within a family, different precisions are unified automatically: ARRAY[1.25, 1.1] produces ARRAY(DECIMAL(3,2)), and ARRAY[ROW(2::BIGINT), ROW(INT '2')] produces ARRAY(ROW(field_1 BIGINT)).
@fql-playground(integers_promotion_array_and_row)
The rule to remember: integers and decimals mix freely in expressions (operators, functions, comparisons). They do not mix inside data constructors (ARRAY, ROW). When the compiler asks for a cast inside a constructor, add an explicit ::DECIMAL or write the literal as a decimal (e.g., 5.0 instead of 5).
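A minimal sketch of the constructor rule — the commented-out line is the form the compiler rejects, and the two lines below it are the accepted alternatives from the rule above:

```sql
SELECT
    -- MIXED := ARRAY[10.25, 5],                     -- error: BIGINT inside a DECIMAL array
    AS_DECIMAL_LITERAL := ARRAY[10.25, 5.0],         -- ok: all elements in the Decimal family
    AS_EXPLICIT_CAST := ARRAY[10.25, 5::DECIMAL]     -- ok: explicit cast into the family
```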
When you need explicit types
You only declare types in two places:
- Inputs — `INPUT(BIGINT)` declares a feature that accepts bound values of a specific type
- External source mappings — `EXTERNAL_COLUMNS(...)` need column types because FeatureQL can't infer them from an external system
Everything else is inferred from the expressions you write.
Entity annotations
This is where FeatureQL's type system goes beyond SQL. Entity annotations attach business meaning to a type: BIGINT#CUSTOMERS means "a BIGINT that identifies a customer." This is not a comment or a naming convention. It is part of the type, checked by the compiler, and used by operations like RELATED() and EXTEND() to resolve joins automatically.
Declaring entities
Entities are the core business objects in your data model. You declare them with ENTITY() and link an input to an entity using a type annotation:
```sql
SELECT
    CUSTOMERS := ENTITY(),
    ORDERS := ENTITY(),
    CUSTOMER_ID := INPUT(BIGINT#CUSTOMERS),
    ORDER_ID := INPUT(BIGINT#ORDERS)
```

By convention, entity names use plural forms — CUSTOMERS, not CUSTOMER.
The #CUSTOMERS annotation tells FeatureQL that CUSTOMER_ID is a key for the CUSTOMERS entity. When you later write RELATED(ORDER_CITY VIA LAST_ORDER_ID), FeatureQL checks that LAST_ORDER_ID has a #ORDERS annotation matching the entity of ORDER_CITY's input. If they don't match, you get a compile-time error instead of a silent wrong join.
Entity IDs support BIGINT, VARCHAR, and TIMESTAMP — covering numeric identifiers, UUIDs, and categorical keys.
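For instance, a VARCHAR-keyed entity for UUID-style identifiers might be declared like this (the names here are illustrative):

```sql
SELECT
    CUSTOMERS := ENTITY(),
    CUSTOMER_UUID := INPUT(VARCHAR#CUSTOMERS)   -- e.g. a UUID stored as VARCHAR
```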
Where annotations are declared
In practice, entity annotations are mainly declared once at the data mapping boundary using EXTERNAL_COLUMNS or INLINE_COLUMNS. This is typically done by data engineering teams who define the core entities and their keys. Business users who build features on top of these mappings rarely need to write entity annotations themselves; they just use features that already carry them.
If you are writing a quick exploratory query with inline data, annotations are optional. If you are building features for shared use in the registry, annotations are the mechanism that makes RELATED() and EXTEND() work correctly.
Casting with entity annotations
Entity annotations are semantic boundaries. Changing or removing an annotation could silently break relationship tracking, so FeatureQL requires UNSAFE_CAST() for any operation that modifies the entity annotation. Regular CAST() and :: only work when the entity stays the same.
```sql
WITH
    ENTITY1 := ENTITY(),
    ENTITY2 := ENTITY(),
    ENTITY1_ID := INPUT(BIGINT#ENTITY1),
SELECT
    ENTITY1_ID,
    CAST_REMOVE_ENTITY := UNSAFE_CAST(ENTITY1_ID AS BIGINT),
    CAST_CHANGE_ENTITY := UNSAFE_CAST(ENTITY1_ID AS BIGINT#ENTITY2),
    TYPEOF(ENTITY1_ID) as TYPE0,
    TYPEOF(CAST_REMOVE_ENTITY) as TYPE1,
    TYPEOF(CAST_CHANGE_ENTITY) as TYPE2,
    -- ENTITY1_ID + 1, -- IMPOSSIBLE
    UNSAFE_CAST(UNSAFE_CAST(ENTITY1_ID AS BIGINT) + 1 AS BIGINT#ENTITY1) as NOW_POSSIBLE, -- If you really want to do it
    TYPEOF(NOW_POSSIBLE) as TYPE3,
FOR
    ENTITY1_ID := BIND_VALUES(SEQUENCE(1,3))
;
```

| ENTITY1_ID BIGINT | CAST_REMOVE_ENTITY BIGINT | CAST_CHANGE_ENTITY BIGINT | TYPE0 VARCHAR | TYPE1 VARCHAR | TYPE2 VARCHAR | NOW_POSSIBLE BIGINT | TYPE3 VARCHAR |
|---|---|---|---|---|---|---|---|
| 1 | 1 | 1 | BIGINT#ENTITY1 | BIGINT | BIGINT#ENTITY2 | 2 | BIGINT#ENTITY1 |
| 2 | 2 | 2 | BIGINT#ENTITY1 | BIGINT | BIGINT#ENTITY2 | 3 | BIGINT#ENTITY1 |
| 3 | 3 | 3 | BIGINT#ENTITY1 | BIGINT | BIGINT#ENTITY2 | 4 | BIGINT#ENTITY1 |
Three patterns for entity annotation changes, all requiring UNSAFE_CAST():

- Remove: `UNSAFE_CAST(ENTITY1_ID AS BIGINT)` — strips the annotation
- Change: `UNSAFE_CAST(ENTITY1_ID AS BIGINT#ENTITY2)` — reassigns to a different entity
- Preserve through arithmetic: cast away, compute, cast back — `UNSAFE_CAST(UNSAFE_CAST(ID AS BIGINT) + 1 AS BIGINT#ENTITY1)`
If you find yourself reaching for UNSAFE_CAST() frequently, it likely signals a modeling issue in your entity boundaries rather than a casting problem.
Type casting
Two equivalent syntaxes for converting between types — standard SQL and PostgreSQL shorthand:
```sql
SELECT
    CAST('123' AS BIGINT),                -- Standard SQL casting
    '123'::BIGINT,                        -- Shorthand syntax
    'Amount: ' || 123::VARCHAR || 'EUR'   -- :: casting takes higher precedence
```

| ?_0 BIGINT | ?_1 BIGINT | ?_2 VARCHAR |
|---|---|---|
| 123 | 123 | Amount: 123EUR |
The :: shorthand has higher precedence than most operators, which is why 123::VARCHAR || 'EUR' works without parentheses — the cast happens before the concatenation.
Supported conversions
CAST() and :: support conversions between these type pairs:
| From | To |
|---|---|
| BIGINT | DECIMAL, DOUBLE, VARCHAR |
| DECIMAL | BIGINT, DOUBLE, VARCHAR |
| DOUBLE | BIGINT, DECIMAL, VARCHAR |
| VARCHAR | BIGINT, DECIMAL, DOUBLE, DATE, TIMESTAMP, BITSTRING |
| DATE | TIMESTAMP, VARCHAR |
| TIMESTAMP | DATE, VARCHAR |
| BITSTRING | VARCHAR |
Complex types have structural constraints: arrays can only be cast to other arrays (or JSON), rows to other rows, and arrays of rows to other arrays of rows.
For casts that may fail at runtime, TRY_CAST() returns NULL instead of raising an error — useful when working with external data that may contain invalid values.
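A minimal sketch of the difference — the same cast through both forms, with an input that cannot be parsed:

```sql
SELECT
    CAST('123' AS BIGINT),               -- 123
    TRY_CAST('123' AS BIGINT),           -- 123
    TRY_CAST('not a number' AS BIGINT)   -- NULL instead of a runtime error
```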
Type inspection
TYPEOF() returns the FeatureQL type, while SQLTYPEOF() returns the type in the target backend. These can differ — FeatureQL's BIGINT might be INTEGER in DuckDB or INT64 in BigQuery:
```sql
SELECT
    1 AS FEATURE1,
    TYPEOF(FEATURE1) AS FEATUREQL_TYPE_OF_1,   -- The FeatureQL type
    SQLTYPEOF(FEATURE1) AS SQL_TYPE_OF_1       -- The type in the backend database
```

| FEATURE1 BIGINT | FEATUREQL_TYPE_OF_1 VARCHAR | SQL_TYPE_OF_1 VARCHAR |
|---|---|---|
| 1 | BIGINT | INTEGER |
This distinction helps when debugging: if a query works in one backend but not another, comparing TYPEOF() and SQLTYPEOF() tells you whether the issue is in the FeatureQL layer or the backend translation.
Troubleshooting type errors
When the compiler rejects your query with a type error, this table covers the most common cases:
| Error message | Cause | Fix |
|---|---|---|
| Unexpected parameter types in ADD(BIGINT, DOUBLE) | Mixing integer and floating point in an expression | Cast one side: value::DOUBLE + 2.5e0, or use matching literals: 1e0 + 2.5e0 |
| Array or ROW constructor type mismatch | Mixing BIGINT and DECIMAL inside ARRAY[...] or ROW(...) | Write all elements in the same type: ARRAY[10.25, 5.0] or ARRAY[10.25, 5::DECIMAL] |
| Entity mismatch in RELATED() or EXTEND() | The VIA key has a different entity annotation than the target feature's INPUT | Check that your foreign key and the target entity's primary key share the same #ENTITY annotation |
| UNSAFE_CAST required when casting an annotated type | Regular CAST or :: cannot change or remove entity annotations | Use UNSAFE_CAST(value AS TARGET_TYPE). If you need this often, reconsider your entity model |
For a complete list of error codes and messages, see All user errors.