Quickstart
This page focuses on the FeatureStore custom resource, which is the main interface for deploying and configuring Feast after the operator is installed.
TOC
- Prerequisites
- FeatureStore CR Overview
- Common Service Configuration
- Offline Store
- Online Store
- Registry
- UI
- Common Persistence Patterns
- PVC-backed Configuration
- Redis Online Store + SQL Registry
- PostgreSQL Online Store + SQL Registry
- Registry in Object Storage
- Notes on Secrets
- Feature Repository Initialization
- Default Initialization
- Init Template
- Git Repository
- Deploy FeatureStore
- Services and Access
- Use Feast SDK and CLI
- Repository Layout
- Example Feature Definitions
- CLI
- Configure Authorization
- SDK Token Configuration
- Further Reading
Prerequisites
- Feast Operator is installed. See Install Feast.
- For PVC-backed examples, the cluster should have a default StorageClass, or the target storageClassName should be known in advance.
- For external backends instead of PVC-backed file storage, prepare the required services in advance:
- PostgreSQL 16 is recommended. PostgreSQL is used for the SQL registry and optionally for the PostgreSQL online store.
- Redis 6.0 or later is recommended. Redis is used for the Redis online store. Feast supports a single Redis instance, Redis Cluster, and Redis Sentinel.
- Prepare Kubernetes Secrets that contain backend connection parameters before they are referenced from the FeatureStore CR.
For the PostgreSQL and Redis examples in this page, the backend service, database or schema, credentials, and Kubernetes Secrets are assumed to already exist. Connectivity and driver compatibility should still be validated in the target cluster.
FeatureStore CR Overview
The most commonly used fields in a FeatureStore CR are:
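A hedged sketch of the overall CR shape (field names follow the operator's v1alpha1 API and should be verified against the installed CRD; sample and my_project are placeholders):

```yaml
apiVersion: feast.dev/v1alpha1
kind: FeatureStore
metadata:
  name: sample
spec:
  feastProject: my_project   # Feast project name
  feastProjectDir: {}        # optional: how the feature repository is initialized
  services:                  # per-service configuration, shown empty for illustration
    offlineStore: {}
    onlineStore: {}
    registry: {}
    ui: {}
  authz: {}                  # optional Kubernetes authorization settings
```

The sections below cover each of these fields in turn.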
Common Service Configuration
Offline Store
offlineStore controls where historical feature data is stored and whether a remote offline server is exposed.
Common options:
- persistence.file: file-based storage, typically DuckDB on a PVC, suitable for local development and evaluation
- persistence.store: external offline backend such as BigQuery or another supported Feast offline store
- server: {}: creates a remote offline server Service for network access
PVC-backed DuckDB example:
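A minimal sketch, assuming the v1alpha1 field layout and a cluster-default StorageClass (storage size is illustrative):

```yaml
spec:
  services:
    offlineStore:
      persistence:
        file:
          type: duckdb
          pvc:
            create:                # request a new PVC for offline data
              resources:
                requests:
                  storage: 5Gi
```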
Store-backed example:
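A store-backed sketch; the Secret name feast-data-stores is a placeholder, and the backend type can be any supported Feast offline store:

```yaml
spec:
  services:
    offlineStore:
      persistence:
        store:
          type: bigquery           # any supported offline store type
          secretRef:
            name: feast-data-stores  # Secret holding backend connection parameters
```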
Online Store
onlineStore is used for low-latency feature serving.
Common options:
- persistence.file: file-based local storage, usually for development only
- persistence.store: Redis, PostgreSQL, or another supported Feast online store backend
- server: optional extra server settings such as env, envFrom, resources, tls, or volumeMounts
Recommended backend versions for the examples in this page:
- Redis 6.0 or later
- PostgreSQL 16
PVC-backed file example:
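A sketch with a file-based online store on a PVC (path and size are illustrative):

```yaml
spec:
  services:
    onlineStore:
      persistence:
        file:
          path: /feast-data/online_store.db
          pvc:
            create:
              resources:
                requests:
                  storage: 1Gi
```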
Redis-backed example:
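A Redis-backed sketch; the referenced Secret (placeholder name feast-data-stores) must contain the Redis connection parameters, as described under Notes on Secrets:

```yaml
spec:
  services:
    onlineStore:
      persistence:
        store:
          type: redis
          secretRef:
            name: feast-data-stores  # reads the "redis" key by default
```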
PostgreSQL-backed example:
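A PostgreSQL-backed sketch, again assuming a pre-created Secret:

```yaml
spec:
  services:
    onlineStore:
      persistence:
        store:
          type: postgres
          secretRef:
            name: feast-data-stores  # reads the "postgres" key by default
```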
Registry
registry stores Feast metadata. It can be local to the current FeatureStore, or it can reuse a remote registry.
Common options:
- registry.local.persistence.file: file-based registry on PVC or object storage
- registry.local.persistence.store: SQL-backed registry
- registry.local.server: {}: exposes the registry server for remote access
- registry.remote: reuses the registry from another FeatureStore or an external hostname
For SQL-backed registry examples in this page, PostgreSQL 16 is recommended.
PVC-backed registry example:
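A file-based registry sketch on a PVC (file name and size are illustrative):

```yaml
spec:
  services:
    registry:
      local:
        persistence:
          file:
            path: registry.db
            pvc:
              create:
                resources:
                  requests:
                    storage: 1Gi
```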
SQL-backed registry example:
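A SQL-backed registry sketch; the Secret (placeholder name feast-data-stores) must provide the SQL registry parameters:

```yaml
spec:
  services:
    registry:
      local:
        persistence:
          store:
            type: sql
            secretRef:
              name: feast-data-stores  # reads the "sql" key by default
```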
Remote-registry example:
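A remote-registry sketch; the exact reference fields (feastRef vs. hostname) should be checked against the installed CRD, and the names below are placeholders:

```yaml
spec:
  services:
    registry:
      remote:
        feastRef:
          name: sample             # reuse the registry of another FeatureStore
        # alternatively, point at an external registry endpoint:
        # hostname: feast-sample-registry.feast.svc.cluster.local:80
```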
UI
To enable the Feast UI, set:
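```yaml
spec:
  services:
    ui: {}
```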
Without ui: {}, the UI Service is not created.
Common Persistence Patterns
PVC-backed Configuration
This is the simplest pattern for evaluation, local testing, and single-replica deployments:
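A complete sketch of this pattern, assuming a cluster-default StorageClass and the placeholder names used throughout this page:

```yaml
apiVersion: feast.dev/v1alpha1
kind: FeatureStore
metadata:
  name: sample
spec:
  feastProject: my_project
  services:
    offlineStore:
      persistence:
        file:
          type: duckdb             # DuckDB files on a PVC
    onlineStore:
      persistence:
        file:
          path: /feast-data/online_store.db
    registry:
      local:
        persistence:
          file:
            path: registry.db
```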
Redis Online Store + SQL Registry
This is a common pattern for a durable online store and a database-backed registry.
Create the Secret first:
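A sketch of the Secret; hostnames and credentials are placeholders, and each key omits the backend type field, as explained under Notes on Secrets:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: feast-data-stores
stringData:
  redis: |
    connection_string: redis.feast.svc.cluster.local:6379
  sql: |
    path: postgresql+psycopg://feast:feast@postgres.feast.svc.cluster.local:5432/feast
    cache_ttl_seconds: 60
```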
Then reference it from the FeatureStore:
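```yaml
apiVersion: feast.dev/v1alpha1
kind: FeatureStore
metadata:
  name: sample
spec:
  feastProject: my_project
  services:
    onlineStore:
      persistence:
        store:
          type: redis
          secretRef:
            name: feast-data-stores
    registry:
      local:
        persistence:
          store:
            type: sql
            secretRef:
              name: feast-data-stores
```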
PostgreSQL Online Store + SQL Registry
For PostgreSQL as the online store backend, the Secret can include both postgres and sql entries:
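A sketch of such a Secret; the postgres key follows the field style of the Feast PostgreSQL online store configuration, with placeholder values:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: feast-data-stores
stringData:
  postgres: |
    host: postgres.feast.svc.cluster.local
    port: 5432
    database: feast
    db_schema: public
    user: feast
    password: feast
  sql: |
    path: postgresql+psycopg://feast:feast@postgres.feast.svc.cluster.local:5432/feast
```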
Then use:
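```yaml
spec:
  services:
    onlineStore:
      persistence:
        store:
          type: postgres
          secretRef:
            name: feast-data-stores
    registry:
      local:
        persistence:
          store:
            type: sql
            secretRef:
              name: feast-data-stores
```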
Registry in Object Storage
The registry file can also be placed in S3 or GCS:
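A sketch with a placeholder bucket name (for GCS, a gs:// path would be used instead):

```yaml
spec:
  services:
    registry:
      local:
        persistence:
          file:
            path: s3://my-feast-bucket/registry.db
```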
Notes on Secrets
When persistence.store is used, the operator reads backend parameters from a Kubernetes Secret.
Important points:
- by default, the Secret key name matches the configured type
- for type: redis, the operator reads the redis key
- for type: postgres, the operator reads the postgres key
- for type: sql, the operator reads the sql key
- if required, the key can be overridden with secretKeyName
The Secret content should follow the backend configuration style used in feature_store.yaml, but omit the backend type field itself.
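For example, a Redis online store configured in feature_store.yaml as type: redis plus a connection string maps to a Secret redis key that carries only the remaining parameters (hostname is a placeholder):

```yaml
# feature_store.yaml style, for reference:
#   online_store:
#     type: redis
#     connection_string: redis.feast.svc.cluster.local:6379
#
# the Secret key "redis" therefore contains only:
connection_string: redis.feast.svc.cluster.local:6379
```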
Feature Repository Initialization
The operator can initialize the feature repository in three common ways.
Default Initialization
If spec.feastProjectDir is not set, the operator runs a default feast init <project> and creates a repository under:
Init Template
An init template can be used when the operator should bootstrap a specific Feast template:
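A sketch, assuming the operator exposes the template name under feastProjectDir.init (verify the field names against the installed CRD; spark is just an example template):

```yaml
spec:
  feastProjectDir:
    init:
      template: spark   # Feast template to bootstrap from
```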
Git Repository
Git can be used when feature definitions are already maintained in source control:
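A sketch with placeholder repository coordinates:

```yaml
spec:
  feastProjectDir:
    git:
      url: https://github.com/<org>/<repo>.git
      ref: main                      # branch, tag, or commit
      featureRepoPath: feature_repo  # only needed for non-default layouts
```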
If the feature repository is not located in the default feature_repo subdirectory, set featureRepoPath.
Deploy FeatureStore
Apply the PVC-backed example:
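Assuming the PVC-backed manifest above was saved as featurestore-pvc.yaml (a placeholder file name):

```sh
kubectl apply -f featurestore-pvc.yaml
```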
The main readiness field is status.phase. When the deployment is ready, this field becomes Ready.
Check it directly with:
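Assuming the CR is named sample:

```sh
kubectl get featurestore sample -o jsonpath='{.status.phase}'
```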
Services and Access
The operator writes connection information to the FeatureStore status:
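A trimmed sketch of what the status may look like; exact field names and Service hostnames depend on the operator version and namespace:

```yaml
status:
  clientConfigMap: feast-sample-client
  serviceHostnames:
    onlineStore: feast-sample-online.feast.svc.cluster.local:80
    registry: feast-sample-registry.feast.svc.cluster.local:80
  phase: Ready
```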
The operator creates Service names using the pattern feast-<featurestore-name>-<component>. Common examples are:
- feast-<featurestore-name>-online
- feast-<featurestore-name>-offline
- feast-<featurestore-name>-registry
- feast-<featurestore-name>-ui
The generated clientConfigMap contains a client-side feature_store.yaml for the current FeatureStore deployment. The configuration is stored under the feature_store.yaml key and can be loaded directly by Feast clients, including Python applications and in-cluster jobs.
Common access patterns:
- use the generated Service names inside the cluster
- use the generated client ConfigMap as Feast client configuration
- expose the Services through the access method used in the target environment
If network access to the offline server or registry server is required, offlineStore.server: {} or registry.local.server: {} should be set in the CR.
Use Feast SDK and CLI
Feature definitions are typically maintained outside the FeatureStore workload, in a local repository, Git repository, CI job, or another SDK-based workflow. The generated client feature_store.yaml is used to connect the Feast CLI or Python SDK to the deployed services.
When feast apply is run from a client environment, the registry service must be reachable from the Feast client. For the local registry mode used in this page, set:
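```yaml
spec:
  services:
    registry:
      local:
        server: {}
```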
If remote materialization or remote historical retrieval is also required, expose the offline server as well:
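```yaml
spec:
  services:
    offlineStore:
      server: {}
```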
Repository Layout
Prepare a feature repository in the Feast client environment and place the generated client configuration in that repository as feature_store.yaml. The configuration can be read from the ConfigMap named in .status.clientConfigMap.
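One way to extract it, assuming the CR is named sample (the backslash escapes the dot in the ConfigMap data key):

```sh
kubectl get configmap \
  "$(kubectl get featurestore sample -o jsonpath='{.status.clientConfigMap}')" \
  -o jsonpath='{.data.feature_store\.yaml}' > feature_store.yaml
```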
Example repository layout:
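File names below are placeholders matching the example feature definitions later in this page:

```
<feature-repo>/
├── feature_store.yaml        # generated client configuration
├── example_repo.py           # feature definitions
└── data/
    └── driver_stats.parquet  # batch source data
```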
The repository root is the directory that contains feature_store.yaml.
The same repository root is used by both the Feast CLI and the Python SDK. For Python code, load it with:
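```python
from feast import FeatureStore

# Load the client configuration from feature_store.yaml in the repository root.
store = FeatureStore(repo_path=".")
```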
Example Feature Definitions
The following example defines one entity, one file-based batch source, two feature views, one push source, and one feature service. Together, these objects are enough to demonstrate registration, online serving, materialization, and push-based updates in a single repository.
driver_stats_source reads batch data from data/driver_stats.parquet. driver_hourly_stats exposes those fields for normal batch-to-online materialization. driver_stats_push_source and driver_hourly_stats_fresh show how the same entity can also receive fresh values through push. driver_activity_v1 groups the feature view into a reusable serving definition.
Example feature definition file:
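A sketch of such a file, modeled on the standard Feast quickstart objects described above (field values such as the TTL are illustrative):

```python
from datetime import timedelta

from feast import Entity, FeatureService, FeatureView, Field, FileSource, PushSource
from feast.types import Float32, Int64

# Entity keyed on driver_id.
driver = Entity(name="driver", join_keys=["driver_id"])

# File-based batch source read from the repository's data/ directory.
driver_stats_source = FileSource(
    name="driver_stats_source",
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
    created_timestamp_column="created",
)

# Feature view for normal batch-to-online materialization.
driver_hourly_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        Field(name="conv_rate", dtype=Float32),
        Field(name="acc_rate", dtype=Float32),
        Field(name="avg_daily_trips", dtype=Int64),
    ],
    online=True,
    source=driver_stats_source,
)

# Push source so fresh values can be written directly to the online store.
driver_stats_push_source = PushSource(
    name="driver_stats_push_source",
    batch_source=driver_stats_source,
)

# Feature view backed by the push source for the same entity.
driver_hourly_stats_fresh = FeatureView(
    name="driver_hourly_stats_fresh",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        Field(name="conv_rate", dtype=Float32),
        Field(name="acc_rate", dtype=Float32),
        Field(name="avg_daily_trips", dtype=Int64),
    ],
    online=True,
    source=driver_stats_push_source,
)

# Feature service grouping the feature view into a reusable serving definition.
driver_activity_v1 = FeatureService(
    name="driver_activity_v1",
    features=[driver_hourly_stats],
)
```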
CLI
Run feast apply from the root directory of the feature repository. In the example above, the command is run in <feature-repo>, not in the data/ subdirectory.
feast apply reads feature_store.yaml from the repository root and scans Python files in that repository recursively. If feature definitions are stored in subdirectories, the command is still run from the repository root, as long as those files are inside the repository and not excluded by .feastignore.
If the command is run from another working directory, specify the repository explicitly:
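```sh
feast --chdir <feature-repo> apply
```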
The default form is:
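```sh
cd <feature-repo>
feast apply
```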
Configure Authorization
Kubernetes-based authorization in Feast has three parts:
- Declare role names in the FeatureStore CR.
- Define Feast Permission objects in the feature repository using the same role names.
- Bind Kubernetes users or ServiceAccounts to those roles with RoleBinding.
Declaring feast-reader and feast-writer in the FeatureStore CR and binding subjects to those roles is not sufficient on its own. Feast authorization decisions also require matching Permission definitions in the feature repository.
Example FeatureStore fragment:
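```yaml
spec:
  authz:
    kubernetes:
      roles:
        - feast-reader
        - feast-writer
```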
Example Permission definitions in the feature repository:
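A sketch using Feast's role-based permission API; import paths and helper names should be verified against the installed Feast version:

```python
from feast.feast_object import ALL_RESOURCE_TYPES
from feast.permissions.action import ALL_ACTIONS, READ, AuthzedAction
from feast.permissions.permission import Permission
from feast.permissions.policy import RoleBasedPolicy

# Read-only access for subjects bound to the feast-reader role.
reader_permission = Permission(
    name="feast-reader-permission",
    types=ALL_RESOURCE_TYPES,
    policy=RoleBasedPolicy(roles=["feast-reader"]),
    actions=[AuthzedAction.DESCRIBE] + READ,
)

# Full access for subjects bound to the feast-writer role.
writer_permission = Permission(
    name="feast-writer-permission",
    types=ALL_RESOURCE_TYPES,
    policy=RoleBasedPolicy(roles=["feast-writer"]),
    actions=ALL_ACTIONS,
)
```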
After adding or updating Permission objects, run feast apply so the authorization metadata is registered in Feast.
Example RoleBinding configuration:
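A sketch binding a ServiceAccount to the feast-writer role; the ServiceAccount name and namespace are placeholders, and the referenced Role must exist in the same namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: feast-writer-binding
  namespace: feast
subjects:
  - kind: ServiceAccount
    name: feast-writer-sa
    namespace: feast
roleRef:
  kind: Role
  name: feast-writer
  apiGroup: rbac.authorization.k8s.io
```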
Create example ServiceAccount tokens:
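Assuming ServiceAccounts named feast-reader-sa and feast-writer-sa in the feast namespace:

```sh
kubectl create token feast-reader-sa -n feast
kubectl create token feast-writer-sa -n feast
```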
If a custom token lifetime is required, add --duration to the command.
SDK Token Configuration
When authz.kubernetes is enabled, the Feast Python client sends the token as a bearer token automatically. For clients running in the same cluster with a mounted ServiceAccount token, no additional SDK setting is usually required.
For external clients, provide the token in one of these ways:
- set authz_config.user_token in feature_store.yaml
- set the LOCAL_K8S_TOKEN environment variable
Example feature_store.yaml fragment:
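A sketch following the option named above; the exact key layout varies across Feast versions and should be verified against the deployed release:

```yaml
authz_config:
  user_token: <service-account-token>
```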
Example environment variable:
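Assuming the feast-writer-sa ServiceAccount from the RoleBinding example:

```sh
export LOCAL_K8S_TOKEN="$(kubectl create token feast-writer-sa -n feast)"
```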
Further Reading
For end-to-end Feast workflows and broader product usage, continue with the official Feast documentation:
- Concepts: core Feast objects and data model
- Architecture: deployment model, retrieval paths, and serving patterns
- Quickstart: end-to-end local workflow with SDK usage
- Sample use-case tutorials: scenario-based tutorials for common ML workflows
- Running Feast in production: production patterns and operational guidance