# Python library
The featuremesh Python package translates FeatureQL to SQL and executes it against your database. It supports DuckDB, Trino, and BigQuery for batch analytics, and connects to the FeatureMesh online service for real-time serving.
## Installation

```shell
pip install featuremesh
```

## Quick start
### Setup

```python
from featuremesh import OfflineClient, Backend
import duckdb

# Create a SQL executor function for DuckDB
def query_duckdb(sql: str):
    """Execute a SQL query and return the results as a DataFrame."""
    conn = duckdb.connect(":memory:")
    result = conn.sql(sql)
    return result.df()

# Get the access token for your project at https://console.featuremesh.com/login (Settings page)
__YOUR_ACCESS_TOKEN__ = "your_access_token"
```

### Offline client (batch analytics)
The offline client translates FeatureQL to SQL and runs it on a local database. You provide a function that takes a SQL string and returns a DataFrame — this is how FeatureMesh connects to your backend:
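Any callable that accepts a SQL string and returns a pandas DataFrame satisfies this contract. As a minimal sketch, here is a hypothetical stand-in executor (not part of featuremesh) that only echoes its input, so the shape of the contract is visible:

```python
import pandas as pd

# Hypothetical stand-in executor: a real one would forward `sql` to
# DuckDB, Trino, or BigQuery and return the result set as a DataFrame.
def fake_executor(sql: str) -> pd.DataFrame:
    # Echo the SQL back to show the contract: str in, DataFrame out.
    return pd.DataFrame({"received_sql": [sql]})

df = fake_executor("SELECT 1")
print(df["received_sql"][0])  # SELECT 1
```

The `query_duckdb` function from Setup follows the same shape, delegating to DuckDB instead: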
```python
# Create an offline client
client_offline = OfflineClient(
    access_token=__YOUR_ACCESS_TOKEN__,  # Can be None if no persistence is needed
    backend=Backend.DUCKDB,
    sql_executor=query_duckdb
)

# Execute a FeatureQL query
result = client_offline.query("""
WITH
    FEATURE1 := INPUT(BIGINT)
SELECT
    FEATURE1 := BIND_VALUES(ARRAY[1, 2, 3]),
    FEATURE2 := FEATURE1 * 2
""")

# Access results
print(result.dataframe)  # Pandas DataFrame
print(result.sql)        # Translated SQL
print(result.success)    # True if query succeeded
```

### Online client (real-time serving)
The online client sends queries to the FeatureMesh serving API, which executes them against your configured real-time data sources (Redis, JDBC, HTTP):
```python
from featuremesh import OnlineClient

# Create an online client
client_online = OnlineClient(access_token=__YOUR_ACCESS_TOKEN__)

# Execute a FeatureQL query
result = client_online.query("""
WITH
    FEATURE1 := INPUT(BIGINT)
SELECT
    FEATURE1 := BIND_VALUES(ARRAY[1, 2, 3]),
    FEATURE2 := FEATURE1 * 2
""")

# Access results
print(result.dataframe)
```

## Jupyter notebook integration
The %%featureql cell magic lets you write FeatureQL directly in notebook cells and see results as DataFrames. Load the extension first:
```python
%load_ext featuremesh
```

Then set a default client so the magic command knows where to send queries:
```python
from featuremesh import set_default, OfflineClient, Backend
import duckdb

# Create SQL executor
def query_duckdb(sql: str):
    return duckdb.sql(sql).df()

# Create and set default client
client = OfflineClient(
    access_token=__YOUR_ACCESS_TOKEN__,
    backend=Backend.DUCKDB,
    sql_executor=query_duckdb
)
set_default("client", client)
```

Now you can write FeatureQL directly in cells:
```python
%%featureql
WITH
    FEATURE1 := INPUT(BIGINT)
SELECT
    FEATURE1 := BIND_VALUES(ARRAY[1, 2, 3]),
    FEATURE2 := FEATURE1 * 2
```

| FEATURE1 | FEATURE2 |
|---|---|
| 1 | 2 |
| 2 | 4 |
| 3 | 6 |
### Magic command options
| Option | What it does |
|---|---|
| `--client CLIENT` | Use a specific client variable from the notebook namespace |
| `--debug` | Enable debug mode for detailed query information |
| `--show-sql` | Print the translated SQL alongside results |
| `--hide-dataframe` | Suppress the DataFrame output |
| `--show-slt` | Print the query in SLT (SQL Logic Test) format |
| `--hook VARIABLE` | Store the complete result object in a notebook variable |
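A result captured with `--hook` can then be used in later cells like the return value of `client.query`. A minimal sketch of the readback, using a stand-in object so the snippet runs outside a notebook (the attribute names mirror the offline example above and are an assumption here):

```python
from types import SimpleNamespace

# Stand-in for what a `%%featureql --hook results` cell would store;
# attribute names mirror the result object shown in the offline example.
results = SimpleNamespace(success=True, sql="SELECT 1 AS FEATURE1", dataframe=None)

# In a later notebook cell:
if results.success:
    print(results.sql)  # inspect the translated SQL
```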
For example, to see the generated SQL and capture results for later use:
```python
%%featureql --client client_duckdb --show-sql --hook results
WITH
    FEATURE1 := INPUT(BIGINT)
SELECT
    FEATURE1 := BIND_VALUES(ARRAY[1, 2, 3]),
    FEATURE2 := FEATURE1 * 2
```

## See also
- Python library guide — Configuration, backend setup, error handling, and result objects
- Getting started — Overview of all entry points
- FeatureQL for the Impatient — Quick tour of the language for SQL users
- Demos Docker container — Full environment with notebooks and sample data