dbt Cloud API

dbt Cloud is a hosted platform for dbt (data build tool), which transforms raw data in warehouses using SQL-based models. The dbt Cloud REST API provides programmatic control over the full analytics engineering lifecycle: trigger job runs, poll run status, retrieve run artifacts (compiled SQL, manifest.json, run_results.json), manage projects, environments, and connections, and administer users and permissions. It is the operational layer for scheduling and orchestrating dbt transformations on top of warehouses like Snowflake, BigQuery, Redshift, and Databricks.

Evaluated Mar 06, 2026
Tags: dbt, data-transformation, sql, analytics-engineering, data-pipeline, elt, cloud
⚙ Agent Friendliness
56
/ 100
Can an agent use this?
🔒 Security
80
/ 100
Is it safe for agents?
⚡ Reliability
76
/ 100
Does it work consistently?

Score Breakdown

⚙ Agent Friendliness

MCP Quality
--
Documentation
80
Error Messages
74
Auth Simplicity
82
Rate Limits
60

🔒 Security

TLS Enforcement
100
Auth Strength
78
Scope Granularity
60
Dep. Hygiene
80
Secret Handling
80

Service account tokens are the correct pattern for agent use, since they avoid tying credentials to a user's personal account. However, token scoping is coarse (role-based rather than endpoint-based), so an agent token typically has broader access than necessary. There is no support for token expiry or rotation via the API. TLS is enforced on all endpoints.

⚡ Reliability

Uptime/SLA
80
Version Stability
78
Breaking Changes
75
Error Recovery
72

Best When

An agent needs to trigger and monitor dbt transformation jobs as part of an ELT pipeline, or when integrating dbt metadata (lineage, test results) into a broader data orchestration or observability workflow.

Avoid When

You need streaming/real-time transformation, you're using dbt Core exclusively without a Cloud account, or you need to query the transformed data itself rather than manage the transformation process.

Use Cases

  • Triggering dbt job runs from CI/CD pipelines or orchestrators (Airflow, Prefect, Dagster) after upstream data loads complete
  • Polling job run status and retrieving run artifacts (run_results.json) for downstream quality checks or alerting
  • Programmatically creating and managing dbt Cloud projects, environments, and warehouse connections for multi-tenant setups
  • Building metadata pipelines that consume dbt artifacts (manifest.json, catalog.json) for data lineage and documentation systems
  • Automating environment promotion: trigger runs in dev, staging, and prod environments with different variable overrides
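
As a rough sketch of the trigger step, the v2 run endpoint takes the account ID in the path and a `cause` in the request body. The IDs and token below are placeholders, and the `{"data": {"id": ...}}` response shape is an assumption based on the v2 envelope; verify both against the current docs.

```python
import json
import urllib.request

API_BASE = "https://cloud.getdbt.com/api/v2"

def run_endpoint(account_id: int, job_id: int) -> str:
    # Account ID is a required path parameter on nearly every endpoint.
    return f"{API_BASE}/accounts/{account_id}/jobs/{job_id}/run/"

def trigger_job_run(account_id: int, job_id: int, token: str,
                    cause: str = "API-triggered run") -> int:
    """POST to the run endpoint; returns the new run ID to poll."""
    req = urllib.request.Request(
        run_endpoint(account_id, job_id),
        data=json.dumps({"cause": cause}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]["id"]
```

Because the run ID comes back immediately while the run itself is asynchronous, the caller still has to poll for completion (see the gotchas below for terminal statuses).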

Not For

  • Teams using open-source dbt Core without a Cloud account — dbt Core has no hosted API; orchestration must be done via CLI or third-party schedulers
  • Real-time data transformation — dbt runs are batch SQL jobs, not streaming processors
  • Non-SQL transformations — dbt is SQL-first; Python models exist but are limited compared to dedicated Python data tools
  • Querying or reading transformed data directly — dbt Cloud API manages runs and metadata, not data retrieval from the warehouse

Interface

REST API
Yes
GraphQL
No
gRPC
No
MCP Server
No
SDK
Yes
Webhooks
Yes

Authentication

Methods: bearer_token api_key
OAuth: No Scopes: No

Personal API tokens and service account tokens both use Bearer auth via the Authorization header. Service account tokens are preferred for CI/CD and agent use because they are not tied to a specific user's account. Token permissions inherit from the service account's role in the dbt Cloud account. There is no fine-grained token scoping; access is coarse-grained by role (Account Admin, Developer, Analyst, etc.).
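
A minimal sketch of the header construction, assuming the Bearer scheme described above (the token value is a placeholder):

```python
def auth_headers(token: str) -> dict:
    # Personal and service account tokens use the same Bearer scheme;
    # only where the token was minted differs.
    return {
        "Authorization": f"Bearer {token}",
        "Accept": "application/json",
    }

# Attach to whatever HTTP client you use; the token shown is fake.
headers = auth_headers("dbtc_xxx")
```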

Pricing

Model: freemium
Free tier: Yes
Requires CC: No

Developer plan is free and provides API access, making it viable for agent development and testing. Team and Enterprise plans unlock advanced scheduling, SSO, and SLA guarantees. Annual contracts available with discounts.

Agent Metadata

Pagination
offset
Idempotent
No
Retry Guidance
Not documented
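
Since list endpoints use offset pagination, a small generator can hide the offset bookkeeping. The `{"data": [...]}` response shape and the short-page termination condition are assumptions; the fetch callable is injected so the sketch stays client-agnostic:

```python
def paginate(fetch, limit: int = 100):
    """Yield items from an offset-paginated list endpoint.

    `fetch(offset, limit)` is any callable returning the decoded JSON
    body, assumed to be shaped like {"data": [...]}.
    """
    offset = 0
    while True:
        page = fetch(offset, limit).get("data", [])
        yield from page
        if len(page) < limit:  # short page => no more results
            break
        offset += limit
```

Injecting the fetcher also makes the loop easy to exercise against canned responses before pointing it at the live API.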

Known Gotchas

  • Job runs are asynchronous — triggering a run returns a run ID immediately; agents must poll GET /runs/{run_id}/ until status is 'Success', 'Error', or 'Cancelled'. No webhook-based completion notification without additional setup.
  • Run status is numeric (1=Queued, 2=Starting, 3=Running, 10=Success, 20=Error, 30=Cancelled) — agents must map these integers; the API does not return string status in the primary field.
  • Account ID is required as a path parameter on nearly every endpoint (/api/v2/accounts/{account_id}/...) — agents must store and inject this at initialization.
  • Concurrent run limits vary by plan tier — if the account's concurrency limit is reached, new run triggers are queued or rejected; agents must handle queued state without assuming the run failed.
  • Artifacts (run_results.json, manifest.json) are only available after a run completes — polling for artifacts before completion returns 404; agents must wait for terminal run status before fetching artifacts.
  • Environment-scoped API tokens are not supported — a single service account token has access to all projects in the account; agents should use dedicated service accounts with minimal role assignment.
  • The v2 API and the older v2 beta API have overlapping but inconsistent endpoint structures — always verify endpoint path against the current docs as examples online may reference deprecated paths.
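
The first two gotchas combine into a polling loop: map the numeric statuses and wait for a terminal one. This is a sketch, not the official client; `get_status` is a stand-in for a `GET /runs/{run_id}/` call that reads the numeric `status` field.

```python
import time

# Numeric run statuses returned by the API (see gotchas above).
RUN_STATUS = {1: "Queued", 2: "Starting", 3: "Running",
              10: "Success", 20: "Error", 30: "Cancelled"}
TERMINAL = {10, 20, 30}

def is_terminal(status_code: int) -> bool:
    return status_code in TERMINAL

def wait_for_run(get_status, poll_interval: float = 15.0,
                 timeout: float = 3600.0) -> str:
    """Poll `get_status()` until the run reaches a terminal state.

    A Queued status (1) is normal when the plan's concurrency limit is
    hit, so it is treated as in-progress, not as a failure.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        code = get_status()
        if is_terminal(code):
            return RUN_STATUS[code]
        time.sleep(poll_interval)
    raise TimeoutError("run did not reach a terminal state")
```

Only after `wait_for_run` returns (with "Success") is it safe to fetch artifacts such as run_results.json; earlier requests return 404.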



Scores are editorial opinions as of 2026-03-06.
