Originally published on SociaVault Blog
You’ve built a scraper. It works perfectly for 10 minutes. Then crashes with a 429 error.
Rate limit exceeded. Blocked for 15 minutes.
You restart. Same thing. Your data pipeline is broken. Users are waiting.
I’ve hit every rate limit imaginable. Instagram blocked me for a day. Twitter cut me off mid-scrape. TikTok throttled me to nothing.
Now I handle millions of API requests daily without issues. Let me show you how.
The Problem: Why APIs Have Limits
APIs protect their infrastructure with limits:
- Per second: 10 requests/sec
- Per minute: 100 requests/min
- Per hour: 5,000 requests/hr
- Per day: 50,000 requests/day
Hit any limit → blocked temporarily, usually with an HTTP 429 response (see the sketch below).
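Before reaching for anything clever, it's worth at least detecting the block and logging what the API tells you. Here's a minimal sketch using axios; the retry-after header is common but not universal, so treat it as an assumption and check your provider's docs:
const axios = require('axios');
// Minimal 429 detection. The 'retry-after' header is an assumption:
// many APIs send it, others use X-RateLimit-Reset or nothing at all.
async function fetchOrReportLimit(url, headers) {
  try {
    const response = await axios.get(url, { headers });
    return response.data;
  } catch (error) {
    if (error.response?.status === 429) {
      const retryAfter = error.response.headers['retry-after']; // usually seconds
      console.warn(`Rate limited. API suggests waiting ${retryAfter ?? 'unknown'}s`);
    }
    throw error;
  }
}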
Solution 1: Exponential Backoff
When rate limited, wait before retrying. But wait longer each time.
const axios = require('axios');

async function requestWithBackoff(url, headers, maxRetries = 5) {
  let retries = 0;
  let delay = 1000; // Start with 1 second
  while (retries < maxRetries) {
    try {
      const response = await axios.get(url, { headers });
      return response.data;
    } catch (error) {
      if (error.response?.status === 429) {
        retries++;
        if (retries >= maxRetries) {
          throw new Error('Max retries exceeded');
        }
        console.log(`Rate limited. Waiting ${delay}ms (retry ${retries}/${maxRetries})`);
        await new Promise(resolve => setTimeout(resolve, delay));
        delay *= 2; // Exponential: 1s, 2s, 4s, 8s, 16s
      } else {
        throw error;
      }
    }
  }
}
Why this works: First retry after 1s. Second after 2s. Third after 4s. Gives the API time to recover.
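One refinement worth layering on top (my addition, not part of the snippet above): add a little random jitter to each delay, so a fleet of workers that got limited at the same moment doesn't retry in lockstep and trip the limit again together.
// Jittered backoff: same exponential schedule, plus 0-250ms of noise.
// The jitter range is an arbitrary assumption; tune it for your API.
function jitteredDelay(baseDelay) {
  return baseDelay + Math.floor(Math.random() * 250);
}
// Inside the catch block above, wait like this instead:
// await new Promise(resolve => setTimeout(resolve, jitteredDelay(delay)));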
Solution 2: Request Queue with Rate Limiting
Instead of bursts, queue requests and send at controlled rate:
class RateLimitedQueue {
  constructor(requestsPerSecond = 5) {
    this.queue = [];
    this.requestsPerSecond = requestsPerSecond;
    this.interval = 1000 / requestsPerSecond;
    this.processing = false;
  }

  add(requestFunction) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFunction, resolve, reject });
      if (!this.processing) {
        this.process();
      }
    });
  }

  async process() {
    this.processing = true;
    while (this.queue.length > 0) {
      const item = this.queue.shift();
      try {
        const result = await item.requestFunction();
        item.resolve(result);
      } catch (error) {
        item.reject(error);
      }
      // Wait before next request
      if (this.queue.length > 0) {
        await new Promise(resolve => setTimeout(resolve, this.interval));
      }
    }
    this.processing = false;
  }
}

// Usage
const queue = new RateLimitedQueue(5); // 5 requests per second
await queue.add(async () => {
  return await fetchData('https://api.example.com/data');
});
Why this works: Guarantees you never exceed the rate limit. Requests are spaced evenly. No bursts.
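To see the pacing in action, you can push a whole batch through the queue and collect the results with Promise.all. A hypothetical example (the URLs are placeholders):
// 20 profile URLs, never sent faster than 5 per second.
const urls = Array.from({ length: 20 }, (_, i) => `https://api.example.com/profiles/${i}`);
const results = await Promise.all(
  urls.map(url => queue.add(() => fetch(url).then(res => res.json())))
);
console.log(`Fetched ${results.length} profiles without a single burst`);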
Solution 3: Token Bucket Algorithm
More sophisticated rate limiting for variable loads:
class TokenBucket {
  constructor(capacity, refillRate) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.refillRate = refillRate; // tokens per second
    this.lastRefill = Date.now();
  }

  refill() {
    const now = Date.now();
    const timePassed = (now - this.lastRefill) / 1000;
    const tokensToAdd = timePassed * this.refillRate;
    this.tokens = Math.min(this.capacity, this.tokens + tokensToAdd);
    this.lastRefill = now;
  }

  async consume(tokens = 1) {
    this.refill();
    if (this.tokens >= tokens) {
      this.tokens -= tokens;
      return true;
    }
    // Wait for tokens to refill
    const tokensNeeded = tokens - this.tokens;
    const waitTime = (tokensNeeded / this.refillRate) * 1000;
    await new Promise(resolve => setTimeout(resolve, waitTime));
    this.refill();
    this.tokens -= tokens;
    return true;
  }
}

// Usage
const bucket = new TokenBucket(100, 10); // 100 max, refill 10/sec

async function makeRequest(url) {
  await bucket.consume(1);
  return await fetch(url);
}
Why this works: Allows bursts when tokens are available, then smooths traffic out over time. More flexible than a fixed rate.
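A quick way to feel the difference from the fixed-rate queue, using the bucket configured above (the URL is a placeholder): the first ~100 requests go out instantly, then the bucket throttles you to roughly 10 per second.
// Hypothetical demo of burst-then-throttle behavior.
async function scrapeProfiles(ids) {
  for (const id of ids) {
    await bucket.consume(1); // instant while tokens last, then ~10/sec
    const response = await fetch(`https://api.example.com/profiles/${id}`);
    console.log(id, response.status);
  }
}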
Solution 4: Distributed Rate Limiting with Redis
When multiple servers make requests, coordinate with Redis:
const Redis = require('ioredis');
const redis = new Redis();

class DistributedRateLimiter {
  constructor(key, limit, windowSeconds) {
    this.key = key;
    this.limit = limit;
    this.windowSeconds = windowSeconds;
  }

  async checkLimit() {
    const now = Date.now();
    const windowStart = now - (this.windowSeconds * 1000);
    // Remove old requests
    await redis.zremrangebyscore(this.key, 0, windowStart);
    // Count requests in window
    const requestCount = await redis.zcard(this.key);
    if (requestCount < this.limit) {
      await redis.zadd(this.key, now, `${now}-${Math.random()}`);
      await redis.expire(this.key, this.windowSeconds * 2);
      return true;
    }
    return false;
  }

  async waitForSlot() {
    while (true) {
      if (await this.checkLimit()) {
        return true;
      }
      await new Promise(resolve => setTimeout(resolve, 100));
    }
  }
}

// Usage across multiple servers
const limiter = new DistributedRateLimiter('api:limit', 100, 60);

async function makeDistributedRequest(url) {
  await limiter.waitForSlot();
  return await fetch(url);
}
Why this works: All servers share the same Redis counter, so total requests never exceed the limit. Critical for scaling.
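One caveat (my note, not from the original post): the count and the insert are two separate Redis calls, so under heavy concurrency two servers can occasionally both pass the check in the same instant. If you need a hard guarantee, the same logic can run as one Lua script, which Redis executes atomically. A sketch using ioredis's eval:
// Atomic variant of checkLimit(): the trim, count, and insert happen
// inside one Lua script, so no two servers can slip past the limit together.
const LIMIT_SCRIPT = `
  local key    = KEYS[1]
  local now    = tonumber(ARGV[1])
  local window = tonumber(ARGV[2])
  local limit  = tonumber(ARGV[3])
  redis.call('ZREMRANGEBYSCORE', key, 0, now - window)
  if redis.call('ZCARD', key) < limit then
    redis.call('ZADD', key, now, ARGV[4])
    redis.call('EXPIRE', key, math.ceil(window / 1000) * 2)
    return 1
  end
  return 0
`;

async function checkLimitAtomic(key, limit, windowSeconds) {
  const allowed = await redis.eval(
    LIMIT_SCRIPT,
    1,                               // number of KEYS
    key,
    Date.now(),                      // now (ms)
    windowSeconds * 1000,            // window length (ms)
    limit,
    `${Date.now()}-${Math.random()}` // unique member for this request
  );
  return allowed === 1;
}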
Real Results
Before these rate limiting strategies:
- Hit limit every 2 minutes
- Blocked 15 minutes each time
- Could scrape 100 profiles/hour
- Constant errors
After implementing these:
- Never hit limits
- 5,000 profiles/hour
- Zero errors
- Smooth operation
Read the Full Guide
This is a condensed version. The full guide includes:
- Smart caching strategies
- Multiple API key rotation
- Complete production-ready code
- Python implementations
- Monitoring and alerting
Read the complete guide on SociaVault →
Building social media data tools? SociaVault handles rate limiting infrastructure for you. Focus on your app, not API limits.
Discussion
What rate limiting strategies have worked for you? Share in the comments! 👇