
The Metabase API implements rate limiting to ensure fair usage and maintain system stability. Understanding these limits helps you build reliable integrations.

Rate limit overview

Rate limits vary based on the type of endpoint and your Metabase instance configuration. Generally:
  • Query endpoints (/api/dataset, /api/card/*/query) have stricter limits due to database resource usage
  • Metadata endpoints (listing resources) have more generous limits
  • Write operations (creating/updating resources) have moderate limits
Rate limits are configurable by instance administrators. Contact your Metabase admin for specific limits on your instance.

Rate limit headers

API responses include headers that indicate your current rate limit status:
  • X-Rate-Limit-Limit (integer): Maximum number of requests allowed in the time window
  • X-Rate-Limit-Remaining (integer): Number of requests remaining in the current window
  • X-Rate-Limit-Reset (integer): Unix timestamp when the rate limit window resets

Example response headers

HTTP/1.1 200 OK
X-Rate-Limit-Limit: 1000
X-Rate-Limit-Remaining: 987
X-Rate-Limit-Reset: 1710504600

Rate limit errors

When you exceed the rate limit, the API returns a 429 Too Many Requests status code.

Error response

{
  "message": "Rate limit exceeded. Please try again later.",
  "error": true,
  "retry_after": 60
}
  • retry_after (integer): Seconds to wait before retrying the request

Best practices

Implement exponential backoff

When you receive a rate limit error, implement exponential backoff to retry requests:
async function makeRequestWithBackoff(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Prefer the server-provided Retry-After header; fall back to
      // exponential backoff (1s, 2s, 4s, ...)
      const retryAfter =
        parseInt(response.headers.get('Retry-After'), 10) || Math.pow(2, i);
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}

Monitor rate limit headers

Check rate limit headers proactively to avoid hitting limits:
const response = await fetch(url, options);
const remaining = parseInt(response.headers.get('X-Rate-Limit-Remaining'), 10);
const limit = parseInt(response.headers.get('X-Rate-Limit-Limit'), 10);

// Slow down once fewer than 10% of requests remain in the window
if (remaining < limit * 0.1) {
  console.warn('Approaching rate limit');
  // Implement throttling
}
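One way to fill in that throttling step is a small sliding-window limiter on the client side. A sketch; the class name and API are illustrative, not part of any Metabase SDK:

```javascript
// Allows at most `maxRequests` calls per `windowMs` sliding window.
class RequestThrottle {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.timestamps = [];
  }

  // Returns true if a request may be sent now, false if the caller
  // should wait. `now` is injectable for testing.
  tryAcquire(now = Date.now()) {
    // Drop timestamps that have fallen out of the window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxRequests) return false;
    this.timestamps.push(now);
    return true;
  }
}
```

Before each API call, check `tryAcquire()` and sleep briefly when it returns false.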

Batch requests efficiently

Instead of making multiple individual requests, batch operations when possible. Avoid this pattern:
// Anti-pattern: one request per card burns through the rate limit
for (const cardId of cardIds) {
  const card = await fetch(`/api/card/${cardId}`);
  // Process card
}
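Where the data allows it, a single listing request can replace that loop, using GET /api/card/ (one of the metadata endpoints listed later on this page) plus client-side filtering. A sketch; the fetchFn parameter is only there so the function can be exercised without a live server:

```javascript
// Fetch all cards once, then pick out the ones you need by id.
async function getCardsById(cardIds, fetchFn = fetch) {
  const response = await fetchFn('/api/card/');
  const allCards = await response.json();
  const wanted = new Set(cardIds);
  return allCards.filter(card => wanted.has(card.id));
}
```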

Cache responses

Cache API responses that don’t change frequently:
const cache = new Map();
const CACHE_TTL = 5 * 60 * 1000; // 5 minutes

async function getCachedData(url, options) {
  const cached = cache.get(url);
  
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  
  const response = await fetch(url, options);
  const data = await response.json();
  
  cache.set(url, { data, timestamp: Date.now() });
  return data;
}

Use webhooks instead of polling

For monitoring changes, use webhooks instead of repeatedly polling endpoints:
Polling endpoints frequently (e.g., every second) will quickly exhaust your rate limit. Use webhooks or longer intervals when polling is necessary.
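If you do have to poll, a long fixed interval keeps usage predictable. A sketch; the five-minute cadence and the callback shape are assumptions, not Metabase defaults:

```javascript
const POLL_INTERVAL_MS = 5 * 60 * 1000; // 5 minutes, not every second

// Polls `url` on a fixed interval and passes each JSON response to
// `onUpdate`. Returns the interval handle so the caller can stop it.
function startPolling(url, onUpdate) {
  return setInterval(async () => {
    const response = await fetch(url);
    if (response.ok) onUpdate(await response.json());
  }, POLL_INTERVAL_MS);
}

// Stop with clearInterval(handle) when no longer needed.
```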

Optimize query endpoints

Query execution endpoints consume more server resources than other endpoints, so optimize them first:
  • Use query caching when appropriate
  • Limit result set sizes with filters
  • Execute queries during off-peak hours for large datasets
  • Use the dataset endpoints efficiently
  • Consider using saved questions instead of ad-hoc queries

Endpoint-specific limits

Query execution

Query endpoints have stricter limits:
  • POST /api/dataset/ - Execute ad-hoc queries
  • POST /api/card/{id}/query - Execute saved question queries
  • POST /api/dashboard/{dashboard-id}/dashcard/{dashcard-id}/card/{card-id}/query - Execute dashboard card queries
Consider using query result caching or downloading results periodically rather than executing the same query repeatedly.
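For example, running a saved question via POST /api/card/{id}/query (listed above) lets the server reuse a cached result where caching is enabled, instead of resending the equivalent ad-hoc query to POST /api/dataset. A sketch; the session-token handling is an assumption about your auth setup, and fetchFn exists only to make the sketch testable:

```javascript
// Execute a saved question by id rather than an ad-hoc query.
async function runSavedQuestion(cardId, sessionToken, fetchFn = fetch) {
  const response = await fetchFn(`/api/card/${cardId}/query`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'X-Metabase-Session': sessionToken,
    },
  });
  if (!response.ok) {
    throw new Error(`Query failed: ${response.status}`);
  }
  return response.json();
}
```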

Public endpoints

Public sharing endpoints have separate rate limits to prevent abuse:
  • GET /api/public/card/{uuid}/query
  • GET /api/public/dashboard/{uuid}

Metadata endpoints

Metadata browsing typically has more generous limits:
  • GET /api/database/
  • GET /api/collection/
  • GET /api/card/

Monitoring and troubleshooting

Track your API usage

Implement logging to monitor your API usage patterns:
function logRateLimitStatus(response) {
  console.log({
    timestamp: new Date().toISOString(),
    limit: response.headers.get('X-Rate-Limit-Limit'),
    remaining: response.headers.get('X-Rate-Limit-Remaining'),
    // Convert the Unix timestamp (seconds) into a Date
    reset: new Date(parseInt(response.headers.get('X-Rate-Limit-Reset'), 10) * 1000)
  });
}

Identify rate limit bottlenecks

If you’re consistently hitting rate limits:
  1. Review which endpoints you’re calling most frequently
  2. Identify opportunities for caching
  3. Batch requests where possible
  4. Optimize query complexity
  5. Consider upgrading your instance resources
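Step 1 above can start as simply as a per-endpoint call counter. A minimal sketch; the names are illustrative:

```javascript
// Count calls per endpoint so the heaviest ones stand out.
const usageCounts = new Map();

function recordCall(endpoint) {
  usageCounts.set(endpoint, (usageCounts.get(endpoint) || 0) + 1);
}

// Returns the n most frequently called endpoints as [endpoint, count]
// pairs, sorted by call count descending.
function topEndpoints(n = 5) {
  return [...usageCounts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, n);
}
```

Call `recordCall()` wherever you issue a request, then inspect `topEndpoints()` to find caching and batching candidates.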

Contact your administrator

For higher rate limits:
  • Discuss your use case with your Metabase administrator
  • Request rate limit adjustments for specific endpoints
  • Consider upgrading to a plan with higher limits (for Cloud customers)
Metabase Cloud plans may have different rate limits than self-hosted instances. Check your plan details or contact support for specific information.