Rate limits protect the API from abuse and ensure stable performance for all users.
Rate Limit Tiers
Free Tier
100 requests/hour
- Per API key
- Sliding window
- Burst allowance: 20/min
Professional Tier
1,000 requests/hour
- Per API key
- Sliding window
- Burst allowance: 100/min
Enterprise Tier
10,000 requests/hour
- Per API key
- Sliding window
- Burst allowance: 500/min
Custom Tier
Negotiated limits
- Dedicated infrastructure
- SLA guarantees
- Custom burst limits
- Contact sales for details
How Rate Limits Work
Sliding Window
Rate limits use a sliding window algorithm:
- Tracks requests in the last hour
- Updates continuously
- Prevents gaming the system
Example: at 10:30 AM, the limit counts every request you made since 9:30 AM; as older requests fall outside the window, capacity frees up continuously.
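Conceptually, the check behaves like this minimal sketch (a simplification for illustration, not the server's actual implementation):

const WINDOW_MS = 60 * 60 * 1000
const requestTimes = []

function allowRequest(limit) {
  const now = Date.now()
  // Drop timestamps that have fallen out of the one-hour window
  while (requestTimes.length > 0 && now - requestTimes[0] > WINDOW_MS) {
    requestTimes.shift()
  }
  if (requestTimes.length >= limit) {
    return false // rate limited
  }
  requestTimes.push(now)
  return true
}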
Burst Protection
Short-term burst limits prevent sudden spikes:
- Free: 20 requests per minute
- Professional: 100 requests per minute
- Enterprise: 500 requests per minute
Checking Your Usage
Response Headers
Every API response includes rate limit info:
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 847
X-RateLimit-Reset: 1642175400
- Limit: Total requests allowed per hour
- Remaining: Requests left in current window
- Reset: Unix timestamp when limit resets
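If you call the API with fetch, you can read these headers directly from each response. A minimal sketch; the base URL, path, and bearer auth header are placeholders, not the documented endpoint:

const response = await fetch('https://api.example.com/courses', {
  headers: { Authorization: `Bearer ${API_KEY}` } // placeholder URL and auth scheme
})

// Read the rate limit headers from the response
const limit = Number(response.headers.get('X-RateLimit-Limit'))
const remaining = Number(response.headers.get('X-RateLimit-Remaining'))
const resetAt = new Date(Number(response.headers.get('X-RateLimit-Reset')) * 1000)

console.log(`${remaining} of ${limit} requests left; window resets at ${resetAt.toISOString()}`)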
Monitoring Dashboard
View detailed usage statistics:
- Go to Settings → API Usage
- See real-time usage graphs
- View historical data
- Set up usage alerts
Rate Limit Exceeded
Response
When you exceed the limit:
{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Please try again later.",
    "details": {
      "limit": 1000,
      "reset": 1642175400
    }
  }
}
Status Code: 429 Too Many Requests
Retry-After Header
Retry-After: 120
The value is the number of seconds to wait before retrying.
Optimization Strategies
1. Implement Caching
Cache API responses to reduce requests:
const cache = new Map()

async function getCourse(id) {
  if (cache.has(id)) {
    return cache.get(id)
  }

  const course = await api.get(`/courses/${id}`)
  cache.set(id, course)

  // Expire after 5 minutes
  setTimeout(() => cache.delete(id), 5 * 60 * 1000)

  return course
}
2. Batch Requests
Combine multiple operations:
Instead of:
for (const id of lessonIds) {
  await api.get(`/lessons/${id}`) // 10 requests
}
Do this:
// Get lessons with modules (1 request)
const course = await api.get(`/courses/${courseId}`)
const lessons = course.modules.flatMap(m => m.lessons)
3. Use Webhooks
Instead of polling for changes, use webhooks:
Polling (inefficient):
setInterval(async () => {
  const course = await api.get('/courses/123') // Every minute
}, 60000)
Webhooks (efficient):
// Register webhook once
// Receive updates automatically
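A minimal sketch of the webhook pattern; the /webhooks registration endpoint, the course.updated event name, and the Express receiver are illustrative assumptions, not a documented API:

// Register the webhook once (hypothetical endpoint and event name)
await api.post('/webhooks', {
  url: 'https://example.com/hooks/courseforge',
  events: ['course.updated']
})

// Receive updates automatically instead of polling
const express = require('express')
const app = express()
app.use(express.json())

app.post('/hooks/courseforge', (req, res) => {
  console.log('Course event received:', req.body)
  res.sendStatus(200)
})

app.listen(3000)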
4. Implement Retry Logic
Handle rate limits gracefully:
// Helper used in these examples: wait for the given number of milliseconds
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms))

async function apiRequest(url, options = {}) {
  const maxRetries = 3
  let retries = 0

  while (retries < maxRetries) {
    const response = await fetch(url, options)

    if (response.status === 429) {
      // Honor the Retry-After header when present; otherwise wait one minute
      const retryAfter = response.headers.get('Retry-After')
      const delay = retryAfter ? parseInt(retryAfter, 10) * 1000 : 60000
      console.log(`Rate limited. Retrying in ${delay}ms`)
      await sleep(delay)
      retries++
      continue
    }

    return response
  }

  throw new Error('Max retries exceeded')
}
5. Spread Requests
Distribute requests over time:
// Uses the sleep helper defined in the retry example above
async function processItems(items) {
  for (const item of items) {
    await processItem(item)
    await sleep(100) // 100ms between requests
  }
}
Rate Limit Best Practices
Monitor Usage
- Check headers on every response
- Set up alerts at 80% usage (see the sketch after this list)
- Track usage patterns
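A minimal sketch of an 80% alert built on the rate limit headers shown earlier; the notify function is a placeholder for whatever alerting mechanism you use:

function checkUsage(response) {
  const limit = Number(response.headers.get('X-RateLimit-Limit'))
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'))
  const used = limit - remaining

  // Alert once usage crosses 80% of the hourly limit
  if (used / limit >= 0.8) {
    notify(`API usage at ${Math.round((used / limit) * 100)}% of the hourly limit`) // placeholder alert
  }
}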
Plan Capacity
- Estimate requests needed
- Upgrade tier if consistently hitting limits
- Consider multiple API keys for different services
Handle Errors
- Implement exponential backoff (see the sketch after this list)
- Queue requests when near limit
- Inform users of delays
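A minimal exponential backoff sketch, reusing the sleep helper from the retry example above; apiCall stands for whatever function performs your request:

// Retry with exponentially growing delays: 1s, 2s, 4s, 8s, ...
async function withBackoff(apiCall, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await apiCall()
    if (response.status !== 429) {
      return response
    }
    const delay = 1000 * 2 ** attempt + Math.random() * 250 // jitter avoids synchronized retries
    await sleep(delay)
  }
  throw new Error('Max retries exceeded')
}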
Request Only What You Need
- Use query parameters to filter (see the sketch after this list)
- Paginate large datasets
- Cache frequently accessed data
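As a sketch of filtering and paginating instead of fetching everything at once; the status, page, and limit parameters and the has_more flag are illustrative assumptions, not documented query parameters:

// Fetch only published courses, 50 per page (parameter names are assumptions)
async function getPublishedCourses() {
  const results = []
  let page = 1
  while (true) {
    const batch = await api.get(`/courses?status=published&page=${page}&limit=50`)
    results.push(...batch.data)
    if (!batch.has_more) break // hypothetical pagination flag
    page++
  }
  return results
}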
Increasing Limits
Need higher limits?
Upgrade Plan
- Professional: 1,000/hour
- Enterprise: 10,000/hour
- View plans
Custom Limits
Contact sales for:
- Higher limits
- Dedicated infrastructure
- SLA guarantees
- Custom billing
Email: sales@courseforge.com