# Rate limits
oneapi.finance enforces two limits in parallel:
- A monthly quota of total successful requests, measured per customer.
- A per-minute burst limit, enforced as a GCRA token bucket per API key.
Exceeding either limit returns a `429` with error code `rate_limit`.
## Plan caps
| Plan | Monthly quota | Per-minute burst | Price |
|---|---|---|---|
| Free | 1,000 | 8 | €0 |
| Indie | 100,000 | 60 | €19 |
| Pro | 1,000,000 | 240 | €99 |
| Business | 10,000,000 | 1,200 | €499 |
The Indie tier is the headline offering. If you need more than the Indie cap, the Pro tier is a 10× step up at roughly 5× the price. Above Pro, you should expect to talk to us about your traffic shape so we can size capacity sensibly.
## How the burst limit works
The per-minute limit is implemented as a GCRA token bucket. In practice this means:
- Tokens refill at a constant rate equal to your per-minute cap divided by 60. At 60 requests per minute, you regenerate one token per second.
- The bucket has a small burst capacity above the steady rate. A fresh bucket on the Indie plan can absorb a burst of about 10 requests before it starts throttling, then settles to one per second.
- Each accepted request consumes exactly one token, regardless of endpoint, cache hit, or response size. Batched `/v1/quote?symbols=A,B,C` calls count as a single token, which is one of the cheapest ways to widen your effective throughput.
This shape is friendly to bursty workloads (page loads, scheduled refreshes) and hostile to runaway loops. If you need higher sustained throughput, upgrade the plan or batch your requests.
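The refill-and-burst behaviour described above can be sketched as a plain token bucket. This is an illustrative model only, using the Indie numbers (60/minute steady rate, burst capacity of about 10); the server-side GCRA implementation differs in detail, but the observable behaviour matches:

```python
class TokenBucket:
    """Toy model of the per-minute limiter, for intuition only."""

    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec      # tokens regenerated per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # a fresh bucket starts full
        self.last = 0.0               # timestamp of the last check

    def allow(self, now: float) -> bool:
        # Refill at the steady rate, capped at the burst capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1          # each accepted request costs one token
            return True
        return False


bucket = TokenBucket(rate_per_sec=1.0, capacity=10)

# A fresh bucket absorbs an instantaneous burst of 10 requests,
# then throttles until tokens regenerate.
burst = [bucket.allow(0.0) for _ in range(12)]
print(burst.count(True))  # 10 accepted, 2 throttled
```

One second later the bucket has regenerated a single token, so exactly one more request gets through: the "settles to one per second" behaviour from the list above.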
## Response headers
Every API response includes these headers:
| Header | Meaning |
|---|---|
| `X-RateLimit-Limit-Minute` | Per-minute burst cap for this key. |
| `X-RateLimit-Remaining-Minute` | Tokens remaining in the per-minute bucket at the moment of this response. |
| `X-RateLimit-Reset-Minute` | Unix epoch seconds when the bucket will be fully refilled. |
| `X-RateLimit-Limit-Month` | Monthly quota for the customer (across all keys). |
| `X-RateLimit-Remaining-Month` | Calls remaining this calendar month (UTC). |
| `X-RateLimit-Reset-Month` | Unix epoch seconds at the next month rollover (00:00 UTC on the 1st). |
| `Retry-After` | On 429 only. Seconds the client should wait before retrying. |
Use `X-RateLimit-Remaining-Minute` to back off proactively rather than waiting for a 429.
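As a minimal sketch of that proactive approach: compute a pause from the headers in the table above before firing the next request. The threshold of 5 remaining tokens here is an arbitrary illustration, not an API recommendation.

```python
def headroom_sleep(headers, now, min_remaining=5):
    """Seconds to pause before the next request, given response headers."""
    remaining = int(headers.get("X-RateLimit-Remaining-Minute", min_remaining + 1))
    if remaining > min_remaining:
        return 0.0  # plenty of headroom: no need to wait
    # Running low: wait until the bucket is scheduled to be full again.
    reset = float(headers.get("X-RateLimit-Reset-Minute", now))
    return max(0.0, reset - now)


# With 30 tokens left there is no need to pause:
print(headroom_sleep({"X-RateLimit-Remaining-Minute": "30"}, now=100.0))  # 0.0
```

Wire it between requests with something like `time.sleep(headroom_sleep(dict(r.headers), time.time()))`.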
## A 429 response
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 4
X-RateLimit-Limit-Minute: 60
X-RateLimit-Remaining-Minute: 0
X-RateLimit-Reset-Minute: 1735689664

{
  "code": "rate_limit",
  "message": "Per-minute rate limit exceeded for key oa_live_abcd1234.",
  "status": 429,
  "details": { "scope": "minute", "retry_after_seconds": 4 }
}
```

`details.scope` is one of:

- `"minute"` — burst limit. Retry after `Retry-After` seconds.
- `"month"` — monthly quota. Retry after the next billing period start. Upgrade the plan if this happens regularly.
## Handling 429 in client code
The recommended pattern is exponential backoff with jitter, capped at one minute:
```python
import random
import time

import httpx


def fetch_quote(symbol: str, api_key: str, max_attempts: int = 5):
    backoff = 1.0
    for attempt in range(max_attempts):
        r = httpx.get(
            "https://api.oneapi.finance/v1/quote",
            params={"symbol": symbol},
            headers={"Authorization": f"Bearer {api_key}"},
            timeout=10.0,
        )
        if r.status_code != 429:
            r.raise_for_status()
            return r.json()["quote"]
        # Prefer the server's Retry-After; fall back to our own backoff.
        retry_after = float(r.headers.get("Retry-After", backoff))
        sleep_for = min(retry_after, 60) + random.uniform(0, 0.5)
        time.sleep(sleep_for)
        backoff = min(backoff * 2, 60)
    raise RuntimeError(f"Gave up on {symbol} after {max_attempts} attempts")
```

The same pattern in JavaScript:

```js
async function fetchQuote(symbol, apiKey, maxAttempts = 5) {
  let backoff = 1000;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const r = await fetch(
      `https://api.oneapi.finance/v1/quote?symbol=${symbol}`,
      { headers: { Authorization: `Bearer ${apiKey}` } },
    );
    if (r.status !== 429) {
      if (!r.ok) throw new Error(`HTTP ${r.status}`);
      return (await r.json()).quote;
    }
    // Prefer the server's Retry-After; fall back to our own backoff.
    const retryAfterMs =
      (Number(r.headers.get("Retry-After")) || backoff / 1000) * 1000;
    const sleepMs = Math.min(retryAfterMs, 60_000) + Math.random() * 500;
    await new Promise((resolve) => setTimeout(resolve, sleepMs));
    backoff = Math.min(backoff * 2, 60_000);
  }
  throw new Error(`Gave up on ${symbol} after ${maxAttempts} attempts`);
}
```

## What's next
- Errors — error envelope and other status codes.
- Caching recipe — keep popular symbols out of the bucket entirely.
- Batching recipe — collapse N requests into one.