🛠 This page is for engineering teams self-hosting their own Lightdash instance. If you want to configure any of these settings, please speak to the Lightdash team or your Lightdash administrators.
This is a reference to all environment variables that can be used to configure a Lightdash deployment.
| Variable | Description |
| --- | --- |
| PGHOST | (Required) Hostname of the postgres server that stores Lightdash data |
| PGPORT | (Required) Port of the postgres server that stores Lightdash data |
| PGUSER | (Required) Username of the postgres user used to access the postgres server |
| PGPASSWORD | (Required) Password for PGUSER |
| PGDATABASE | (Required) Database name inside the postgres server that stores Lightdash data |
| PGCONNECTIONURI | Connection URI for the postgres server in the format postgresql://user:password@host:port/db?params. This is an alternative to providing the previous PG variables. |
| PGMAXCONNECTIONS | Maximum number of connections to the database |
| PGMINCONNECTIONS | Minimum number of connections to the database |
| LIGHTDASH_SECRET | (Required) Secret key used to secure various tokens in Lightdash. This must stay the same between deployments; if the secret changes, you won't have access to Lightdash data. |
| SECURE_COOKIES | Only allows cookies to be stored over an https connection. We use cookies to keep you logged in. Recommended to be set to true in production. (default=false) |
| COOKIES_MAX_AGE_HOURS | How many hours a user session lasts before the user is automatically signed out. For example, if set to 24, the user will be signed out automatically after 24 hours of inactivity. |
| TRUST_PROXY | Tells the Lightdash server that it can trust the X-Forwarded-Proto header it receives in requests. Useful if you use SECURE_COOKIES=true behind an HTTPS-terminating proxy that you trust. (default=false) |
| SITE_URL | Site URL where Lightdash is being hosted. It should include the protocol, e.g. https://lightdash.mycompany.com (default=http://localhost:8080) |
| INTERNAL_LIGHTDASH_HOST | Internal Lightdash host that the headless browser sends requests to when your Lightdash instance is not accessible from the internet. Needs to support https if SECURE_COOKIES=true (default=same as SITE_URL) |
| STATIC_IP | Server static IP so users can add it to their warehouse allow-list. (default=http://localhost:8080) |
| LIGHTDASH_QUERY_MAX_LIMIT | Query max rows limit (default=5000) |
| LIGHTDASH_QUERY_DEFAULT_LIMIT | Default number of rows to return in a query (default=500) |
| LIGHTDASH_QUERY_MAX_PAGE_SIZE | Maximum page size for paginated queries (default=2500) |
| SCHEDULER_ENABLED | Enables/disables the scheduler worker that triggers scheduled deliveries. (default=true) |
| SCHEDULER_CONCURRENCY | How many scheduled delivery jobs can be processed concurrently. (default=3) |
| SCHEDULER_JOB_TIMEOUT | After how many milliseconds a job should time out so the scheduler worker can pick up other jobs. (default=600000, 10 minutes) |
| SCHEDULER_SCREENSHOT_TIMEOUT | Timeout in milliseconds for taking screenshots |
| SCHEDULER_INCLUDE_TASKS | Comma-separated list of scheduler tasks to include |
| SCHEDULER_EXCLUDE_TASKS | Comma-separated list of scheduler tasks to exclude |
| LIGHTDASH_CSV_CELLS_LIMIT | Max cells on CSV file exports (default=100000) |
| LIGHTDASH_CHART_VERSION_HISTORY_DAYS_LIMIT | Configures how far back the chart version history goes, in days (default=3) |
| LIGHTDASH_PIVOT_TABLE_MAX_COLUMN_LIMIT | Configures the maximum number of columns in a pivot table (default=200) |
| GROUPS_ENABLED | Enables/disables groups functionality (default=false) |
| CUSTOM_VISUALIZATIONS_ENABLED | Enables/disables custom chart functionality (default=false) |
| LIGHTDASH_MAX_PAYLOAD | Maximum HTTP request body size (default=5mb) |
| LIGHTDASH_LICENSE_KEY | License key for Lightdash Enterprise Edition. See Enterprise License Keys for details. |
| HEADLESS_BROWSER_HOST | Hostname for the headless browser |
| HEADLESS_BROWSER_PORT | Port for the headless browser (default=3001) |
| ALLOW_MULTIPLE_ORGS | If set to true, new users registering on Lightdash will get their own organization, separated from others (default=false) |
| LIGHTDASH_MODE | Mode for Lightdash (default, demo, pr, etc.) (default=default) |
| DISABLE_PAT | Disables Personal Access Tokens (default=false) |
| PAT_ALLOWED_ORG_ROLES | Comma-separated list of organization roles allowed to use Personal Access Tokens (default=all roles) |
| PAT_MAX_EXPIRATION_TIME_IN_DAYS | Maximum expiration time in days for Personal Access Tokens |
| MAX_DOWNLOADS_AS_CODE | Maximum number of downloads as code (default=100) |
| EXTENDED_USAGE_ANALYTICS | Enables extended usage analytics (default=false) |
| USE_SECURE_BROWSER | Use secure WebSocket connections for the headless browser (default=false) |
| DISABLE_DASHBOARD_COMMENTS | Disables dashboard comments (default=false) |
| ORGANIZATION_WAREHOUSE_CREDENTIALS_ENABLED | Enables organization warehouse settings (default=false) |
| HEADWAY_ENABLED | Enables the Headway changelog widget in the Lightdash menu (default=true) |
| SAVED_METRICS_TREE_ENABLED | Enables Saved Trees in the Metrics Canvas view (default=false) |
Lightdash also accepts all standard postgres environment variables.
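The database and secret variables above are the minimum needed to boot a self-hosted instance. Here is a minimal sketch; every hostname, credential, and secret below is a placeholder:

```shell
# Minimal required configuration for a self-hosted deployment (placeholder values).
export PGHOST="postgres.internal"
export PGPORT="5432"
export PGUSER="lightdash"
export PGPASSWORD="change-me"
export PGDATABASE="lightdash"
# Must stay the same between deployments, or previously issued tokens become unreadable.
export LIGHTDASH_SECRET="a-long-random-string"
# Recommended in production when serving over https behind a trusted proxy.
export SECURE_COOKIES="true"
export TRUST_PROXY="true"
export SITE_URL="https://lightdash.mycompany.com"
```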

Athena

| Variable | Description |
| --- | --- |
| ATHENA_WAREHOUSE_IAM_ROLE_AUTH | Set to true to enable IAM role authentication for Athena warehouse connections. When enabled, users can choose between Access Keys and IAM Role auth in the connection form. IAM Role auth uses the AWS default credential chain (e.g. ECS task role, EC2 instance profile) instead of explicit access keys. (default=false) |

Snowflake

| Variable | Description |
| --- | --- |
| SNOWFLAKE_WAREHOUSE_ERROR_MESSAGE | Custom error message displayed when users encounter Snowflake warehouse access errors. Use {warehouseName} as a placeholder for the actual warehouse name. Example: You don't have access to warehouse {warehouseName}. Please reach out to your admin. |
| SNOWFLAKE_UNAUTHORIZED_ERROR_MESSAGE | Custom error message displayed when users encounter Snowflake authorization errors (e.g., "Object does not exist or not authorized"). Use {snowflakeTable} and {snowflakeSchema} as placeholders. Example: You don't have access to the {snowflakeTable} table. Please go to '{snowflakeSchema}' and request access. |

SMTP

This is a reference to all the SMTP environment variables that can be used to configure a Lightdash email client.
| Variable | Description |
| --- | --- |
| EMAIL_SMTP_HOST | (Required) Hostname of the email server |
| EMAIL_SMTP_PORT | Port of the email server (default=587) |
| EMAIL_SMTP_SECURE | Secure connection (default=true) |
| EMAIL_SMTP_USER | (Required) Auth user |
| EMAIL_SMTP_PASSWORD | Auth password [1] |
| EMAIL_SMTP_ACCESS_TOKEN | Auth access token for OAuth2 authentication [1] |
| EMAIL_SMTP_ALLOW_INVALID_CERT | Allow connection to a TLS server with a self-signed or invalid TLS certificate (default=false) |
| EMAIL_SMTP_SENDER_EMAIL | (Required) The email address that sends emails |
| EMAIL_SMTP_SENDER_NAME | The name of the email address that sends emails (default=Lightdash) |
| EMAIL_SMTP_IMAGE_INLINE_CID | Embed images directly into emails as CID attachments instead of referencing external URLs. Useful for deployments behind firewalls where email clients cannot reach internal image URLs (default=false) |

[1] Either EMAIL_SMTP_PASSWORD or EMAIL_SMTP_ACCESS_TOKEN must be provided.
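A typical password-based SMTP setup looks like the sketch below; the host, user, and password are placeholders:

```shell
# SMTP client configuration using password auth (placeholder values).
export EMAIL_SMTP_HOST="smtp.mycompany.com"
export EMAIL_SMTP_PORT="587"
export EMAIL_SMTP_SECURE="true"
export EMAIL_SMTP_USER="lightdash@mycompany.com"
# Alternatively, set EMAIL_SMTP_ACCESS_TOKEN for OAuth2 instead of a password.
export EMAIL_SMTP_PASSWORD="change-me"
export EMAIL_SMTP_SENDER_EMAIL="lightdash@mycompany.com"
export EMAIL_SMTP_SENDER_NAME="Lightdash"
```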

SSO

These variables enable you to control Single Sign On (SSO) functionality.
| Variable | Description |
| --- | --- |
| AUTH_DISABLE_PASSWORD_AUTHENTICATION | If true, disables signing in with plain passwords (default=false) |
| AUTH_ENABLE_GROUP_SYNC | If true, enables assigning SSO groups to Lightdash groups (default=false) |
| AUTH_ENABLE_OIDC_LINKING | Enables linking a new OIDC identity to an existing user if they already have another OIDC identity with the same email (default=false) |
| AUTH_ENABLE_OIDC_TO_EMAIL_LINKING | Enables linking an OIDC identity to an existing user by email. Required when using SCIM with SSO (default=false) |
| AUTH_GOOGLE_OAUTH2_CLIENT_ID | Required for Google SSO |
| AUTH_GOOGLE_OAUTH2_CLIENT_SECRET | Required for Google SSO |
| AUTH_OKTA_OAUTH_CLIENT_ID | Required for Okta SSO |
| AUTH_OKTA_OAUTH_CLIENT_SECRET | Required for Okta SSO |
| AUTH_OKTA_OAUTH_ISSUER | Required for Okta SSO |
| AUTH_OKTA_DOMAIN | Required for Okta SSO |
| AUTH_OKTA_AUTHORIZATION_SERVER_ID | Optional for Okta SSO with a custom authorization server |
| AUTH_OKTA_EXTRA_SCOPES | Optional extra scopes for Okta SSO (e.g. groups) without a custom authorization server |
| AUTH_ONE_LOGIN_OAUTH_CLIENT_ID | Required for OneLogin SSO |
| AUTH_ONE_LOGIN_OAUTH_CLIENT_SECRET | Required for OneLogin SSO |
| AUTH_ONE_LOGIN_OAUTH_ISSUER | Required for OneLogin SSO |
| AUTH_AZURE_AD_OAUTH_CLIENT_ID | Required for Azure AD |
| AUTH_AZURE_AD_OAUTH_CLIENT_SECRET | Required for Azure AD |
| AUTH_AZURE_AD_OAUTH_TENANT_ID | Required for Azure AD |
| AUTH_AZURE_AD_OIDC_METADATA_ENDPOINT | Optional for Azure AD |
| AUTH_AZURE_AD_X509_CERT_PATH | Optional for Azure AD |
| AUTH_AZURE_AD_X509_CERT | Optional for Azure AD |
| AUTH_AZURE_AD_PRIVATE_KEY_PATH | Optional for Azure AD |
| AUTH_AZURE_AD_PRIVATE_KEY | Optional for Azure AD |
| DATABRICKS_OAUTH_CLIENT_ID | Client ID for Databricks OAuth |
| DATABRICKS_OAUTH_CLIENT_SECRET | Client secret for Databricks OAuth (optional) |
| DATABRICKS_OAUTH_AUTHORIZATION_ENDPOINT | Authorization endpoint URL for Databricks OAuth |
| DATABRICKS_OAUTH_TOKEN_ENDPOINT | Token endpoint URL for Databricks OAuth |
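As one concrete combination, Google SSO needs only the two OAuth2 variables above, plus AUTH_GOOGLE_ENABLED from the Google Cloud Platform section below. A sketch with placeholder credentials:

```shell
# Google SSO sketch (placeholder credentials).
export AUTH_GOOGLE_ENABLED="true"
export AUTH_GOOGLE_OAUTH2_CLIENT_ID="1234567890-example.apps.googleusercontent.com"
export AUTH_GOOGLE_OAUTH2_CLIENT_SECRET="change-me"
# Optional hardening: once SSO works, disable plain password sign-in.
export AUTH_DISABLE_PASSWORD_AUTHENTICATION="true"
```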

S3

These variables allow you to configure S3 Object Storage, which is required to self-host Lightdash.
| Variable | Description |
| --- | --- |
| S3_ENDPOINT | (Required) S3 endpoint for storing results |
| S3_BUCKET | (Required) Name of the S3 bucket for storing files |
| S3_REGION | (Required) Region where the S3 bucket is located |
| S3_ACCESS_KEY | Access key for authenticating with the S3 bucket |
| S3_SECRET_KEY | Secret key for authenticating with the S3 bucket |
| S3_USE_CREDENTIALS_FROM | Configures the credential provider chain for AWS S3 authentication if an access key and secret are not provided. Supports: env (environment variables), token_file (token file credentials), ini (initialization file credentials), ecs (container metadata credentials), ec2 (instance metadata credentials). Multiple values can be specified in order of preference. |
| S3_EXPIRATION_TIME | Expiration time for scheduled delivery files (default=259200, 3 days) |
| S3_FORCE_PATH_STYLE | Force path-style addressing, needed for MinIO setups, e.g. http://your.s3.domain/BUCKET/KEY instead of http://BUCKET.your.s3.domain/KEY (default=false) |
| RESULTS_S3_BUCKET | Name of the S3 bucket used for storing query results (default=S3_BUCKET) |
| RESULTS_S3_REGION | Region where the S3 query storage bucket is located (default=S3_REGION) |
| RESULTS_S3_ACCESS_KEY | Access key for authenticating with the S3 query storage bucket (default=S3_ACCESS_KEY) |
| RESULTS_S3_SECRET_KEY | Secret key for authenticating with the S3 query storage bucket (default=S3_SECRET_KEY) |
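For a self-hosted MinIO backend, the key detail is S3_FORCE_PATH_STYLE; everything else follows the table above. Endpoint, bucket, and keys here are placeholders:

```shell
# S3 object storage backed by MinIO (placeholder values).
export S3_ENDPOINT="https://minio.internal:9000"
export S3_BUCKET="lightdash-files"
export S3_REGION="us-east-1"
export S3_ACCESS_KEY="minio-access-key"
export S3_SECRET_KEY="minio-secret-key"
# MinIO serves buckets as path segments, not subdomains.
export S3_FORCE_PATH_STYLE="true"
```

On AWS with an IAM role instead of static keys, you would drop S3_ACCESS_KEY/S3_SECRET_KEY and set S3_USE_CREDENTIALS_FROM (e.g. to ec2 or ecs).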

Persistent download URLs

When enabled, CSV and dashboard ZIP exports return a stable Lightdash-hosted URL (e.g. https://lightdash.example.com/api/v1/file/{id}) instead of a direct S3 signed URL. Each time this URL is accessed, Lightdash generates a short-lived S3 signed URL and redirects to it — so the underlying URL never goes stale and download links survive IAM credential rotation.
| Variable | Description |
| --- | --- |
| PERSISTENT_DOWNLOAD_URLS_ENABLED | Enables persistent download URLs (default=false) |
| PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS | How long the persistent URL remains accessible (default=259200, 3 days). When persistent URLs are enabled, S3_EXPIRATION_TIME is ignored and each redirect generates a signed URL that expires after 5 minutes. |
| PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS_EMAIL | Override expiration for email download links. Falls back to PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS when not set. |
| PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS_SLACK | Override expiration for Slack download links. Falls back to PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS when not set. |
| PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS_MSTEAMS | Override expiration for MS Teams download links. Falls back to PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS when not set. |
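A sketch that enables persistent URLs with a longer default window and a tighter one for Slack (the durations are arbitrary examples):

```shell
# Persistent download URLs: stable Lightdash-hosted links that redirect
# to short-lived S3 signed URLs on each access.
export PERSISTENT_DOWNLOAD_URLS_ENABLED="true"
export PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS="604800"        # 7 days by default
export PERSISTENT_DOWNLOAD_URL_EXPIRATION_SECONDS_SLACK="86400"   # 1 day for Slack links
```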

Cache

Note that you will need an Enterprise License Key for this functionality.
| Variable | Description |
| --- | --- |
| RESULTS_CACHE_ENABLED | Enables caching for chart results (default=false) |
| AUTOCOMPLETE_CACHE_ENABLED | Enables caching for filter autocomplete results (default=false) |
| CACHE_STALE_TIME_SECONDS | Defines how long cached results remain valid before being considered stale (default=86400, 24h) |

These variables are deprecated; use the RESULTS_S3_* versions instead.

| Variable | Description |
| --- | --- |
| RESULTS_CACHE_S3_BUCKET | Deprecated; use RESULTS_S3_BUCKET (default=S3_BUCKET) |
| RESULTS_CACHE_S3_REGION | Deprecated; use RESULTS_S3_REGION (default=S3_REGION) |
| RESULTS_CACHE_S3_ACCESS_KEY | Deprecated; use RESULTS_S3_ACCESS_KEY (default=S3_ACCESS_KEY) |
| RESULTS_CACHE_S3_SECRET_KEY | Deprecated; use RESULTS_S3_SECRET_KEY (default=S3_SECRET_KEY) |

Logging

| Variable | Description |
| --- | --- |
| LIGHTDASH_LOG_LEVEL | The minimum level of log messages to show: DEBUG, AUDIT, HTTP, INFO, WARN, ERROR (default=INFO) |
| LIGHTDASH_LOG_FORMAT | The format of log messages: PLAIN, PRETTY, JSON (default=pretty) |
| LIGHTDASH_LOG_OUTPUTS | The outputs to send log messages to (default=console) |
| LIGHTDASH_LOG_CONSOLE_LEVEL | The minimum level of log messages to display on the console (default=LIGHTDASH_LOG_LEVEL) |
| LIGHTDASH_LOG_CONSOLE_FORMAT | The format of log messages on the console (default=LIGHTDASH_LOG_FORMAT) |
| LIGHTDASH_LOG_FILE_LEVEL | The minimum level of log messages to write to the log file (default=LIGHTDASH_LOG_LEVEL) |
| LIGHTDASH_LOG_FILE_FORMAT | The format of log messages in the log file (default=LIGHTDASH_LOG_FORMAT) |
| LIGHTDASH_LOG_FILE_PATH | The path to the log file. Requires LIGHTDASH_LOG_OUTPUTS to include file to enable file output. (default=./logs/all.log) |
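A sketch combining pretty console output with JSON file output. The comma-separated form of LIGHTDASH_LOG_OUTPUTS is an assumption based on the other list-valued variables in this reference; the path is a placeholder:

```shell
# Logging: pretty logs on the console, machine-readable JSON in a file.
export LIGHTDASH_LOG_LEVEL="INFO"
export LIGHTDASH_LOG_OUTPUTS="console,file"   # assumed comma-separated list
export LIGHTDASH_LOG_CONSOLE_FORMAT="PRETTY"
export LIGHTDASH_LOG_FILE_FORMAT="JSON"
export LIGHTDASH_LOG_FILE_PATH="/var/log/lightdash/all.log"
```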

Prometheus

| Variable | Description |
| --- | --- |
| LIGHTDASH_PROMETHEUS_ENABLED | Enables/disables the Prometheus metrics endpoint (default=false) |
| LIGHTDASH_PROMETHEUS_PORT | Port for the Prometheus metrics endpoint (default=9090) |
| LIGHTDASH_PROMETHEUS_PATH | Path for the Prometheus metrics endpoint (default=/metrics) |
| LIGHTDASH_PROMETHEUS_PREFIX | Prefix for metric names. |
| LIGHTDASH_GC_DURATION_BUCKETS | Buckets for the duration histogram, in seconds. (default=0.001, 0.01, 0.1, 1, 2, 5) |
| LIGHTDASH_EVENT_LOOP_MONITORING_PRECISION | Precision for event loop monitoring, in milliseconds. Must be greater than zero. (default=10) |
| LIGHTDASH_PROMETHEUS_LABELS | Labels to add to all metrics. Must be valid JSON |
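A sketch that exposes metrics with instance-wide labels; the label names and values are illustrative:

```shell
# Prometheus metrics endpoint, scrapeable at http://<host>:9090/metrics.
export LIGHTDASH_PROMETHEUS_ENABLED="true"
export LIGHTDASH_PROMETHEUS_PORT="9090"
export LIGHTDASH_PROMETHEUS_PATH="/metrics"
# Labels must be valid JSON; these are attached to every metric.
export LIGHTDASH_PROMETHEUS_LABELS='{"service": "lightdash", "env": "prod"}'
# Sanity-check the labels parse as JSON before deploying.
printf '%s' "$LIGHTDASH_PROMETHEUS_LABELS" | python3 -m json.tool > /dev/null
```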

Security

| Variable | Description |
| --- | --- |
| LIGHTDASH_CSP_REPORT_ONLY | Enables Content Security Policy (CSP) report-only mode. Recommended to be set to false in production. (default=true) |
| LIGHTDASH_CSP_ALLOWED_DOMAINS | List of domains that resources are allowed to load from. Values must be separated by commas. |
| LIGHTDASH_CSP_REPORT_URI | URI to send CSP violation reports to. |
| LIGHTDASH_CORS_ENABLED | Enables Cross-Origin Resource Sharing (CORS) (default=false) |
| LIGHTDASH_CORS_ALLOWED_DOMAINS | List of domains that are allowed to make cross-origin requests. Values must be separated by commas. |
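A production-leaning sketch: enforce CSP (not just report) and restrict cross-origin requests to a known app domain. The domains are placeholders:

```shell
# Security headers: enforce CSP and enable CORS for a trusted origin.
export LIGHTDASH_CSP_REPORT_ONLY="false"   # enforce, don't just report, in production
export LIGHTDASH_CSP_ALLOWED_DOMAINS="cdn.mycompany.com,fonts.mycompany.com"
export LIGHTDASH_CORS_ENABLED="true"
export LIGHTDASH_CORS_ALLOWED_DOMAINS="https://app.mycompany.com"
```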

Analytics & Event Tracking

| Variable | Description |
| --- | --- |
| RUDDERSTACK_WRITE_KEY | RudderStack key used to track events (by default Lightdash's key is used) |
| RUDDERSTACK_DATA_PLANE_URL | RudderStack data plane URL to which events are tracked (by default Lightdash's data plane is used) |
| RUDDERSTACK_ANALYTICS_DISABLED | Set to true to disable RudderStack analytics |
| POSTHOG_PROJECT_API_KEY | API key for PostHog (by default Lightdash's key is used) |
| POSTHOG_FE_API_HOST | Hostname for PostHog's front-end API |
| POSTHOG_BE_API_HOST | Hostname for PostHog's back-end API |

AI Analyst

These variables enable you to configure the AI Analyst functionality. Note that you will need an Enterprise Licence Key for this functionality.
| Variable | Description |
| --- | --- |
| AI_COPILOT_ENABLED | Enables/disables AI Analyst functionality (default=false) |
| ASK_AI_BUTTON_ENABLED | Enables the "Ask AI" button in the interface for direct access to AI agents. When disabled, agents can be accessed from the /ai-agents route (default=false) |
| AI_EMBEDDING_ENABLED | Enables AI embedding functionality for verified-answer similarity matching (default=false) |
| AI_DEFAULT_PROVIDER | Default AI provider to use (openai, azure, anthropic, openrouter, bedrock) (default=openai) |
| AI_DEFAULT_EMBEDDING_PROVIDER | Default AI provider for embeddings (openai, bedrock, azure) (default=openai) |
| AI_COPILOT_DEBUG_LOGGING_ENABLED | Enables debug logging for AI Copilot (default=false) |
| AI_COPILOT_TELEMETRY_ENABLED | Enables telemetry for AI Copilot (default=false) |
| AI_COPILOT_REQUIRES_FEATURE_FLAG | Requires a feature flag to use AI Copilot (default=false) |
| AI_COPILOT_MAX_QUERY_LIMIT | Maximum number of rows returned in AI-generated queries (default=500) |
| AI_VERIFIED_ANSWER_SIMILARITY_THRESHOLD | Similarity threshold (0-1) for verified-answer matching (default=0.6) |
The AI Analyst supports multiple providers for flexibility. Choose one of the provider configurations below based on your preferred AI service. OpenAI and Anthropic are the recommended providers as they are the most tested and stable.

Minimum Required Setup

To enable AI Analyst, set AI_COPILOT_ENABLED=true and provide an API key for AI_DEFAULT_PROVIDER (e.g., OPENAI_API_KEY for OpenAI, ANTHROPIC_API_KEY for Anthropic).
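The minimum setup described above, as a sketch with a placeholder API key:

```shell
# Smallest working AI Analyst configuration with OpenAI as the provider.
export AI_COPILOT_ENABLED="true"
export AI_DEFAULT_PROVIDER="openai"
export OPENAI_API_KEY="sk-placeholder"       # placeholder; use your real key
# Optional: pin a specific model instead of relying on the default.
export OPENAI_MODEL_NAME="gpt-5.2"
```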

OpenAI Configuration

| Variable | Description |
| --- | --- |
| OPENAI_API_KEY | (Required when using OpenAI) API key for OpenAI |
| OPENAI_MODEL_NAME | OpenAI model name to use (default=gpt-5.2) |
| OPENAI_EMBEDDING_MODEL | OpenAI embedding model for verified answers (default=text-embedding-3-small) |
| OPENAI_BASE_URL | Optional base URL for an OpenAI-compatible API |
| OPENAI_AVAILABLE_MODELS | Comma-separated list of models available in the model picker (default=all supported models) |
**Using an OpenAI-compatible proxy (e.g. LiteLLM)**

If your organization uses an OpenAI-compatible proxy like LiteLLM, you can connect it to Lightdash by setting AI_DEFAULT_PROVIDER=openai and pointing OPENAI_BASE_URL to your proxy URL. For example:
  • AI_DEFAULT_PROVIDER=openai
  • OPENAI_API_KEY=<your-proxy-api-key>
  • OPENAI_BASE_URL=<your-proxy-url>
  • OPENAI_MODEL_NAME=<model-name-configured-in-your-proxy>
Make sure your proxy has the model names correctly mapped. For example, if you set OPENAI_MODEL_NAME=gpt-4o, your proxy needs to route that model to whatever underlying provider you're using. The same applies to the embedding model. If you want to expose multiple models through the proxy, use OPENAI_AVAILABLE_MODELS with a comma-separated list of model names.

Anthropic Configuration

| Variable | Description |
| --- | --- |
| ANTHROPIC_API_KEY | (Required when using Anthropic) API key for Anthropic |
| ANTHROPIC_MODEL_NAME | Anthropic model name to use (default=claude-sonnet-4-5) |
| ANTHROPIC_AVAILABLE_MODELS | Comma-separated list of models available in the model picker (default=all supported models) |

Azure AI Configuration

| Variable | Description |
| --- | --- |
| AZURE_AI_API_KEY | (Required when using Azure AI) API key for Azure AI |
| AZURE_AI_ENDPOINT | (Required when using Azure AI) Endpoint for Azure AI |
| AZURE_AI_API_VERSION | (Required when using Azure AI) API version for Azure AI |
| AZURE_AI_DEPLOYMENT_NAME | (Required when using Azure AI) Deployment name for Azure AI |
| AZURE_EMBEDDING_DEPLOYMENT_NAME | Deployment name for the Azure embedding model (default=text-embedding-3-small) |
| AZURE_USE_DEPLOYMENT_BASED_URLS | Use deployment-based URLs for Azure OpenAI API calls (default=true) |

OpenRouter Configuration

| Variable | Description |
| --- | --- |
| OPENROUTER_API_KEY | (Required when using OpenRouter) API key for OpenRouter |
| OPENROUTER_MODEL_NAME | OpenRouter model name to use (default=openai/gpt-4.1-2025-04-14) |
| OPENROUTER_SORT_ORDER | Provider sorting method (price, throughput, latency) (default=latency) |
| OPENROUTER_ALLOWED_PROVIDERS | Comma-separated list of allowed providers (anthropic, openai, google) (default=openai) |

AWS Bedrock Configuration

| Variable | Description |
| --- | --- |
| BEDROCK_API_KEY | (Required if not using IAM credentials) API key for Bedrock (alternative to IAM credentials) |
| BEDROCK_ACCESS_KEY_ID | (Required if not using an API key) AWS access key ID for Bedrock |
| BEDROCK_SECRET_ACCESS_KEY | (Required if using an access key ID) AWS secret access key for Bedrock |
| BEDROCK_SESSION_TOKEN | AWS session token (for temporary credentials) |
| BEDROCK_REGION | (Required) AWS region for Bedrock |
| BEDROCK_MODEL_NAME | Bedrock model name to use (default=claude-sonnet-4-5) |
| BEDROCK_EMBEDDING_MODEL | Bedrock embedding model for verified answers (default=cohere.embed-english-v3) |
| BEDROCK_AVAILABLE_MODELS | Comma-separated list of models available in the model picker (default=all supported models) |

Supported Models

- OpenAI: gpt-5.1, gpt-5.2 (default)
- Anthropic: claude-sonnet-4-5, claude-haiku-4-5, claude-sonnet-4
- AWS Bedrock: claude-sonnet-4-5, claude-haiku-4-5, claude-sonnet-4

Exact model snapshots are assigned automatically (e.g., gpt-5.1 → gpt-5.1-2025-11-13).
For Bedrock, a region prefix is also added based on BEDROCK_REGION (e.g., claude-sonnet-4-5 → us.anthropic.claude-sonnet-4-5-20250929-v1:0).

Slack Integration

These variables enable you to configure the Slack integration.
| Variable | Description |
| --- | --- |
| SLACK_SIGNING_SECRET | Required for Slack integration |
| SLACK_CLIENT_ID | Required for Slack integration |
| SLACK_CLIENT_SECRET | Required for Slack integration |
| SLACK_STATE_SECRET | Required for Slack integration (default=slack-state-secret) |
| SLACK_APP_TOKEN | App token for Slack |
| SLACK_PORT | Port for Slack integration (default=4351) |
| SLACK_SOCKET_MODE | Enable socket mode for Slack (default=false) |
| SLACK_CHANNELS_CACHED_TIME | Time in milliseconds to cache Slack channels (default=600000, 10 minutes) |
| SLACK_SUPPORT_URL | URL for Slack support |
| SLACK_MULTI_AGENT_CHANNEL_ENABLED | Enables the multi-agent Slack channel (Beta) feature for the whole instance (default=false) |
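A socket-mode sketch (useful when the instance cannot receive inbound webhooks). All tokens are placeholders; pairing SLACK_APP_TOKEN with socket mode is an assumption based on how Slack's socket-mode apps generally authenticate:

```shell
# Slack integration over socket mode (placeholder credentials).
export SLACK_SIGNING_SECRET="placeholder-signing-secret"
export SLACK_CLIENT_ID="1234567890.0987654321"
export SLACK_CLIENT_SECRET="placeholder-client-secret"
export SLACK_STATE_SECRET="a-random-string"      # don't ship the default value
export SLACK_SOCKET_MODE="true"
export SLACK_APP_TOKEN="xapp-placeholder"        # app-level token, assumed needed for socket mode
```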

GitHub Integration

These variables enable you to configure GitHub integrations.

| Variable | Description |
| --- | --- |
| GITHUB_PRIVATE_KEY | (Required) GitHub private key for GitHub App authentication |
| GITHUB_APP_ID | (Required) GitHub Application ID |
| GITHUB_CLIENT_ID | (Required) GitHub OAuth client ID |
| GITHUB_CLIENT_SECRET | (Required) GitHub OAuth client secret |
| GITHUB_APP_NAME | (Required) Name of the GitHub App |
| GITHUB_REDIRECT_DOMAIN | Domain for GitHub OAuth redirection |

Microsoft Teams Integration

These variables enable you to configure Microsoft Teams integration.
| Variable | Description |
| --- | --- |
| MICROSOFT_TEAMS_ENABLED | Enables Microsoft Teams integration (default=false) |

Google Cloud Platform

These variables enable you to configure Google Cloud Platform integration.
| Variable | Description |
| --- | --- |
| GOOGLE_CLOUD_PROJECT_ID | Google Cloud Platform project ID |
| GOOGLE_DRIVE_API_KEY | Google Drive API key |
| AUTH_GOOGLE_ENABLED | Enables Google authentication (default=false) |
| AUTH_ENABLE_GCLOUD_ADC | Enables Google Cloud Application Default Credentials (default=false) |

Embedding

Note that you will need an Enterprise Licence Key for this functionality.
| Variable | Description |
| --- | --- |
| EMBEDDING_ENABLED | Enables embedding functionality (default=false) |
| EMBED_ALLOW_ALL_DASHBOARDS_BY_DEFAULT | When creating new embeds, allow all dashboards by default (default=false) |
| EMBED_ALLOW_ALL_CHARTS_BY_DEFAULT | When creating new embeds, allow all charts by default (default=false) |
| LIGHTDASH_IFRAME_EMBEDDING_DOMAINS | List of domains that are allowed to embed Lightdash in an iframe. Values must be separated by commas. |

Custom roles

Note that you will need an Enterprise Licence Key for this functionality.
| Variable | Description |
| --- | --- |
| CUSTOM_ROLES_ENABLED | Enables creation of custom organization roles with configurable permission scopes beyond the default Admin, Developer, Editor, and Viewer roles. (default=false) |

Service account

Note that you will need an Enterprise Licence Key for this functionality.
| Variable | Description |
| --- | --- |
| SERVICE_ACCOUNT_ENABLED | Enables service account functionality (default=false) |

SCIM

Note that you will need an Enterprise Licence Key for this functionality.
| Variable | Description |
| --- | --- |
| SCIM_ENABLED | Enables SCIM (System for Cross-domain Identity Management) (default=false) |

Sentry

These variables enable you to configure Sentry for error tracking.
| Variable | Description |
| --- | --- |
| SENTRY_DSN | Sentry DSN for both frontend and backend |
| SENTRY_BE_DSN | Sentry DSN for backend only |
| SENTRY_FE_DSN | Sentry DSN for frontend only |
| SENTRY_BE_SECURITY_REPORT_URI | URI for Sentry backend security reports |
| SENTRY_TRACES_SAMPLE_RATE | Sample rate for Sentry traces (0.0 to 1.0) (default=0.1) |
| SENTRY_PROFILES_SAMPLE_RATE | Sample rate for Sentry profiles (0.0 to 1.0) (default=0.2) |
| SENTRY_ANR_ENABLED | Enables Sentry Application Not Responding (ANR) detection (default=false) |
| SENTRY_ANR_CAPTURE_STACKTRACE | Captures stacktraces for ANR events (default=false) |
| SENTRY_ANR_TIMEOUT | Timeout in milliseconds for ANR detection |

Intercom & Pylon

These variables enable you to configure Intercom and Pylon for customer support and feedback.
| Variable | Description |
| --- | --- |
| INTERCOM_APP_ID | Intercom application ID |
| INTERCOM_APP_BASE | Base URL for the Intercom API (default=https://api-iam.intercom.io) |
| PYLON_APP_ID | Pylon application ID |
| PYLON_IDENTITY_VERIFICATION_SECRET | Secret for verifying Pylon identities |

Kubernetes

These variables enable you to configure Kubernetes integration.
| Variable | Description |
| --- | --- |
| K8S_NODE_NAME | Name of the Kubernetes node |
| K8S_POD_NAME | Name of the Kubernetes pod |
| K8S_POD_NAMESPACE | Namespace of the Kubernetes pod |
| LIGHTDASH_CLOUD_INSTANCE | Identifier for the Lightdash cloud instance |

Organization appearance

These variables allow you to customize the default appearance settings for your Lightdash instance’s organizations. This color palette will be set for all organizations in your instance. You can’t choose another one while these env vars are set.
| Variable | Description |
| --- | --- |
| OVERRIDE_COLOR_PALETTE_NAME | Name of the default color palette |
| OVERRIDE_COLOR_PALETTE_COLORS | Comma-separated list of hex color codes for the default color palette (must be 20 colors) |

Initialize instance

When a new Lightdash instance is created and there are no organizations or projects yet, you can initialize a new organization and project using environment variables to simplify the deployment process.
Initialization is skipped if you already have an organization and project. On subsequent restarts, only the update instance call is made. However, if you have multiple organizations or projects, the update will fail and the instance will not start. To safely restart without issues, remove the variables specified in the Update instance section below.
Initialize instance is only available on Lightdash Enterprise plans. For more information on our plans, visit our pricing page.
Currently we only support Databricks project types and GitHub dbt configuration.
| Variable | Description |
| --- | --- |
| LD_SETUP_ADMIN_NAME | Name of the admin user for initial setup (default=Admin User) |
| LD_SETUP_ADMIN_EMAIL | (Required) Email of the admin user for initial setup |
| LD_SETUP_ORGANIZATION_UUID | UUID of the organization to configure. Use when multiple organizations exist to target a specific one instead of requiring exactly one to exist. |
| LD_SETUP_ORGANIZATION_EMAIL_DOMAIN | Comma-separated list of email domains for organization whitelisting |
| LD_SETUP_ORGANIZATION_DEFAULT_ROLE | Default role for new organization members (default=viewer) |
| LD_SETUP_ORGANIZATION_NAME | (Required) Name of the organization |
| LD_SETUP_ADMIN_API_KEY | (Required) API key for the admin user; must start with the ldpat_ prefix |
| LD_SETUP_API_KEY_EXPIRATION | Number of days until the API key expires (0 for no expiration) (default=30) |
| LD_SETUP_SERVICE_ACCOUNT_TOKEN | (Required) A pre-set token for the service account; must start with the ldsvc_ prefix |
| LD_SETUP_SERVICE_ACCOUNT_EXPIRATION | Number of days until the service account token expires (0 for no expiration) (default=30) |
| LD_SETUP_PROJECT_UUID | UUID of the project to configure. Use when multiple projects exist to target a specific one instead of requiring exactly one to exist. |
| LD_SETUP_PROJECT_NAME | (Required) Name of the project |
| LD_SETUP_PROJECT_CATALOG | Catalog name for the Databricks project |
| LD_SETUP_PROJECT_SCHEMA | (Required) Schema/database name for the project |
| LD_SETUP_PROJECT_HOST | (Required) Hostname for the Databricks server |
| LD_SETUP_PROJECT_HTTP_PATH | (Required) HTTP path for the Databricks connection |
| LD_SETUP_PROJECT_PAT | (Required) Personal access token for Databricks |
| LD_SETUP_START_OF_WEEK | Day to use as start of week (default=SUNDAY) |
| LD_SETUP_PROJECT_COMPUTE | JSON string with Databricks compute configuration, like {"name": "string", "httpPath": "string"} |
| LD_SETUP_DBT_VERSION | Version of dbt to use (e.g. v1.8) (default=latest) |
| LD_SETUP_GITHUB_PAT | (Required) GitHub personal access token |
| LD_SETUP_GITHUB_REPOSITORY | (Required) GitHub repository for the dbt project |
| LD_SETUP_GITHUB_BRANCH | (Required) GitHub branch for the dbt project |
| LD_SETUP_GITHUB_PATH | Subdirectory path within the GitHub repository (default=/) |
To log in as the admin user using SSO, you must also enable the following environment variable:
AUTH_ENABLE_OIDC_TO_EMAIL_LINKING=true
This allows you to link your SSO account with the provided email without using an invitation code.
This email is trusted: any user with an OIDC account using that email will be able to access the admin user.

Update instance

On server start, we check the following variables and update some configuration of the organization or project. If multiple organizations or projects exist, you must set LD_SETUP_ORGANIZATION_UUID and/or LD_SETUP_PROJECT_UUID to target a specific one.
Update instance is only available on Lightdash Enterprise plans. For more information on our plans, visit our pricing page.
| Variable | Description |
| --- | --- |
| LD_SETUP_ADMIN_EMAIL | (Required if LD_SETUP_ADMIN_API_KEY is present) Email of the admin whose personal access token should be updated |
| LD_SETUP_ADMIN_API_KEY | API key for the admin user; must start with the ldpat_ prefix |
| LD_SETUP_ORGANIZATION_UUID | UUID of the organization to update. Required when multiple organizations exist. |
| LD_SETUP_ORGANIZATION_EMAIL_DOMAIN | Comma-separated list of email domains for organization whitelisting |
| LD_SETUP_ORGANIZATION_DEFAULT_ROLE | Default role for new organization members (default=viewer) |
| LD_SETUP_PROJECT_UUID | UUID of the project to update. Required when multiple projects exist. |
| LD_SETUP_PROJECT_HTTP_PATH | HTTP path for the Databricks connection |
| LD_SETUP_PROJECT_PAT | Personal access token for Databricks |
| LD_SETUP_DBT_VERSION | Version of dbt to use (e.g. v1.8) (default=latest) |
| LD_SETUP_GITHUB_PAT | GitHub personal access token |
| LD_SETUP_SERVICE_ACCOUNT_TOKEN | A pre-set token for the service account; must start with the ldsvc_ prefix |

Initialize multiple projects

Set LD_SETUP_PROJECTS to a JSON array to provision multiple projects at once. This is a drop-in replacement for the single-project LD_SETUP_PROJECT_* variables described in Initialize instance — when LD_SETUP_PROJECTS is set, the legacy LD_SETUP_PROJECT_* and LD_SETUP_GITHUB_* variables are ignored. On every boot, Lightdash matches each entry to an existing project by name: new names are created, existing names are updated in place.
Currently we only support Databricks project types and GitHub dbt configuration for multi-project setup.
Project names are the primary key. Renaming an entry creates a new project rather than renaming the existing one — charts and dashboards will stay on the old project.

Required companion variables

The admin, organization, and API key variables from Initialize instance still apply. LD_SETUP_ADMIN_EMAIL is required — without it, LD_SETUP_PROJECTS is ignored.

Schema

LD_SETUP_PROJECTS must be a JSON array. Each entry has the following shape:
| Field | Required | Description |
| --- | --- | --- |
| name | Yes | Project name. Must be non-empty and unique within the array. |
| warehouseConnection | Yes | Object with a valid type and the fields required by that warehouse (see below). |
| dbtConnection | Yes | Object with a valid type and the fields required by that dbt connection (below). |
| dbtVersion | No | Version of dbt to use (e.g. v1.8). Defaults to latest. |
| embed | No | Embed config: { "secret": "...", "allowAllDashboards": true }. |

Databricks warehouse fields

| Field | Required | Description |
| --- | --- | --- |
| type | Yes | Must be "databricks". |
| serverHostName | Yes | Databricks host, no protocol. Example: dbc-xxxx.cloud.databricks.com. |
| httpPath | Yes | SQL warehouse HTTP path, e.g. /sql/1.0/warehouses/abc123. |
| database | Yes | Schema name. (Historical naming: this is the dbt schema, not the Unity Catalog.) |
| catalog | No | Unity Catalog name. |
| authenticationType | No | One of personal_access_token (default), oauth_m2m, oauth_u2m. |
| personalAccessToken | If authenticationType=personal_access_token | Databricks PAT (starts with dapi_). |
| oauthClientId | If authenticationType=oauth_m2m | Databricks Service Principal client ID (a UUID). |
| oauthClientSecret | If authenticationType=oauth_m2m | Databricks Service Principal client secret. |
| compute | No | Array of extra SQL warehouses: [{ "name": "...", "httpPath": "..." }]. |
| startOfWeek | No | Day to use as start of week (default=SUNDAY). |
| dataTimezone | No | Project-level timezone override. |

GitHub dbt connection fields

| Field | Required | Description |
|---|---|---|
| `type` | Yes | Must be `"github"`. |
| `authorization_method` | Yes | Use `"personal_access_token"`. |
| `personal_access_token` | Yes | GitHub personal access token. |
| `repository` | Yes | Repository in `org/repo` format. |
| `branch` | Yes | Branch name to pull the dbt project from. |
| `project_sub_path` | Yes | Subdirectory path within the repo (use `/` for the root). |
| `selector` | No | dbt selector name to limit which models are compiled. |

Example

```bash
export LD_SETUP_ADMIN_EMAIL="admin@example.com"
export LD_SETUP_PROJECTS='[
  {
    "name": "Sample Project Alpha",
    "warehouseConnection": {
      "type": "databricks",
      "serverHostName": "alpha.cloud.databricks.com",
      "httpPath": "/sql/1.0/warehouses/abc123",
      "database": "alpha_db",
      "personalAccessToken": "dapi_alpha_fake_token"
    },
    "dbtConnection": {
      "type": "github",
      "authorization_method": "personal_access_token",
      "personal_access_token": "ghp_fake_alpha_token",
      "repository": "myorg/alpha-repo",
      "branch": "main",
      "project_sub_path": "/"
    }
  },
  {
    "name": "Sample Project Beta",
    "warehouseConnection": {
      "type": "databricks",
      "serverHostName": "beta.cloud.databricks.com",
      "httpPath": "/sql/1.0/warehouses/def456",
      "database": "beta_db",
      "personalAccessToken": "dapi_beta_fake_token"
    },
    "dbtConnection": {
      "type": "github",
      "authorization_method": "personal_access_token",
      "personal_access_token": "ghp_fake_beta_token",
      "repository": "myorg/beta-repo",
      "branch": "main",
      "project_sub_path": "/"
    }
  }
]'
```
Quote the whole value in single quotes in your shell so that $, backticks, and double quotes inside the JSON are not re-interpreted. When injecting via a secret manager or Kubernetes Secret, no escaping is needed — just paste the JSON as-is.

Databricks M2M OAuth example

Use a Databricks Service Principal when you want non-interactive, machine-to-machine authentication instead of a PAT. Lightdash exchanges the client_id + client_secret for an access token automatically on the first compile and refreshes it as needed — no user interaction is required.
```bash
export LD_SETUP_ADMIN_EMAIL="admin@example.com"
export LD_SETUP_PROJECTS='[
  {
    "name": "Sales (Databricks M2M)",
    "warehouseConnection": {
      "type": "databricks",
      "serverHostName": "dbc-xxxx.cloud.databricks.com",
      "httpPath": "/sql/1.0/warehouses/abc123",
      "catalog": "lightdash_prod",
      "database": "sales",
      "authenticationType": "oauth_m2m",
      "oauthClientId": "00000000-0000-0000-0000-000000000000",
      "oauthClientSecret": "dose...secret..."
    },
    "dbtConnection": {
      "type": "github",
      "authorization_method": "personal_access_token",
      "personal_access_token": "ghp_...",
      "repository": "myorg/dbt-sales",
      "branch": "main",
      "project_sub_path": "/"
    }
  }
]'
```
If you already have an M2M Service Principal configured for dbt, the field names are different. Map your dbt profile fields to Lightdash’s warehouseConnection like this:
| profiles.yml (dbt) | LD_SETUP_PROJECTS (Lightdash) |
|---|---|
| `host` | `serverHostName` |
| `http_path` | `httpPath` |
| `catalog` | `catalog` |
| `schema` | `database` |
| `auth_type: oauth` | `authenticationType: "oauth_m2m"` |
| `client_id` | `oauthClientId` |
| `client_secret` | `oauthClientSecret` |
M2M is non-interactive by design — Lightdash uses the OAuth client-credentials grant. No browser popup, no per-user sign-in. The Service Principal needs CAN USE on the SQL warehouse and the appropriate SELECT/USE CATALOG/USE SCHEMA grants on your data.
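If you script this translation, a small helper can apply the mapping above. This is a hypothetical sketch, not part of Lightdash; in particular, reading the PAT from a `token` key is an assumption based on the dbt-databricks profile format.

```python
# Hypothetical helper (not part of Lightdash): translate a Databricks target from
# dbt's profiles.yml (already loaded into a dict) into the warehouseConnection
# shape used by LD_SETUP_PROJECTS, following the mapping table above.
def dbt_target_to_warehouse_connection(target: dict) -> dict:
    conn = {
        "type": "databricks",
        "serverHostName": target["host"],
        "httpPath": target["http_path"],
        "database": target["schema"],  # dbt "schema" maps to Lightdash "database"
    }
    if "catalog" in target:
        conn["catalog"] = target["catalog"]
    if target.get("auth_type") == "oauth":
        conn["authenticationType"] = "oauth_m2m"
        conn["oauthClientId"] = target["client_id"]
        conn["oauthClientSecret"] = target["client_secret"]
    else:
        # Assumes the dbt profile stores the PAT under "token" (dbt-databricks convention)
        conn["personalAccessToken"] = target["token"]
    return conn
```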

Validation

LD_SETUP_PROJECTS is parsed and validated at boot. Lightdash will fail to start with a descriptive error if any of the following are true:
| Error | Cause |
|---|---|
| `Failed to parse LD_SETUP_PROJECTS` | The value is not valid JSON. Check your shell quoting. |
| `Invalid LD_SETUP_PROJECTS: expected array` | The top-level JSON is not an array. |
| `name: Project name cannot be empty` | An entry has a missing or empty `name`. |
| `warehouseConnection: Required` / `dbtConnection: Required` | An entry is missing one of the two connection blocks. |
| `Invalid warehouse type` | `warehouseConnection.type` is not a supported warehouse. |
| `Invalid dbt connection type` | `dbtConnection.type` is not a supported dbt connection. |
| `Duplicate project name "X" in LD_SETUP_PROJECTS` | Two entries share the same `name`. |
| `Multiple projects found with name "X"` | Two pre-existing projects in the database share the same name. Rename one in the UI before redeploying. |
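You can catch most of these mistakes locally before restarting the instance. The sketch below mirrors a subset of the boot-time rules; it is not Lightdash's actual validation code, and it does not check warehouse- or dbt-specific fields.

```python
import json

def validate_projects(raw: str) -> list[str]:
    """Local pre-flight check for an LD_SETUP_PROJECTS value.

    Mirrors a subset of the boot-time validation described above (JSON parse,
    array shape, names, required connection blocks). Returns a list of errors;
    an empty list means this subset of checks passed.
    """
    errors: list[str] = []
    try:
        projects = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"Failed to parse LD_SETUP_PROJECTS: {exc}"]
    if not isinstance(projects, list):
        return ["Invalid LD_SETUP_PROJECTS: expected array"]
    seen: set[str] = set()
    for i, project in enumerate(projects):
        name = (project.get("name") or "").strip()
        if not name:
            errors.append(f"entry {i}: name: Project name cannot be empty")
        elif name in seen:
            errors.append(f'Duplicate project name "{name}" in LD_SETUP_PROJECTS')
        seen.add(name)
        for block in ("warehouseConnection", "dbtConnection"):
            if block not in project:
                errors.append(f"entry {i}: {block}: Required")
    return errors
```

For example, run it against the value you are about to deploy: `validate_projects(os.environ["LD_SETUP_PROJECTS"])`.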
Credentials in LD_SETUP_PROJECTS are the source of truth on every boot. If an admin rotates a PAT or OAuth secret in the UI, the next restart will overwrite it with the value from this variable. Keep the variable in sync with your secret manager, or omit a project from the array once you no longer want it managed via env.

Migrating from LD_SETUP_PROJECT_*

LD_SETUP_PROJECTS replaces the single-project LD_SETUP_PROJECT_NAME, LD_SETUP_PROJECT_HOST, LD_SETUP_PROJECT_HTTP_PATH, LD_SETUP_PROJECT_PAT, LD_SETUP_PROJECT_SCHEMA, LD_SETUP_PROJECT_CATALOG, LD_SETUP_PROJECT_COMPUTE, LD_SETUP_GITHUB_*, and LD_SETUP_DBT_VERSION variables. When both are set, LD_SETUP_PROJECTS takes precedence and the legacy variables are ignored. To migrate:
  1. Move your existing project config into a JSON object (use the existing project’s exact name so it is updated in place rather than recreated).
  2. Set LD_SETUP_PROJECTS to a single-element array containing that object.
  3. Remove the legacy LD_SETUP_PROJECT_* and LD_SETUP_GITHUB_* variables from your deployment.
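A one-off script can mechanize steps 1 and 2. This is a hypothetical sketch: the specific `LD_SETUP_GITHUB_*` names used here (`LD_SETUP_GITHUB_PAT`, `LD_SETUP_GITHUB_REPOSITORY`, `LD_SETUP_GITHUB_BRANCH`) are placeholders, so substitute the variable names your deployment actually sets.

```python
import json

# Hypothetical migration helper: build the LD_SETUP_PROJECTS value from the
# legacy single-project variables. The LD_SETUP_GITHUB_* keys below are
# placeholder names -- substitute the ones from your deployment.
def legacy_to_projects(env: dict) -> str:
    entry = {
        "name": env["LD_SETUP_PROJECT_NAME"],  # keep the exact name so the project is updated in place
        "warehouseConnection": {
            "type": "databricks",
            "serverHostName": env["LD_SETUP_PROJECT_HOST"],
            "httpPath": env["LD_SETUP_PROJECT_HTTP_PATH"],
            "database": env["LD_SETUP_PROJECT_SCHEMA"],
            "personalAccessToken": env["LD_SETUP_PROJECT_PAT"],
        },
        "dbtConnection": {
            "type": "github",
            "authorization_method": "personal_access_token",
            "personal_access_token": env["LD_SETUP_GITHUB_PAT"],  # placeholder name
            "repository": env["LD_SETUP_GITHUB_REPOSITORY"],      # placeholder name
            "branch": env["LD_SETUP_GITHUB_BRANCH"],              # placeholder name
            "project_sub_path": "/",
        },
    }
    if env.get("LD_SETUP_PROJECT_CATALOG"):
        entry["warehouseConnection"]["catalog"] = env["LD_SETUP_PROJECT_CATALOG"]
    if env.get("LD_SETUP_DBT_VERSION"):
        entry["dbtVersion"] = env["LD_SETUP_DBT_VERSION"]
    return json.dumps([entry])  # a single-element array, per step 2
```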