Client-side caching
oneapi.finance already runs a stale-while-revalidate cache server-side; that
is why you can hit /v1/quote?symbol=AAPL 1000 times a day and get answers
instantly. But you can do better by caching on your side too.
This recipe lays out TTLs that match the underlying refresh cadence of each endpoint and shows reference implementations.
Recommended TTLs
| Endpoint | Recommended client TTL | Why |
|---|---|---|
| /v1/quote (during market hours) | 60 seconds | Underlying delay is ~15 minutes; sub-minute polling is wasted. |
| /v1/quote (after-hours) | 1 hour | Closing print does not move until the next session opens. |
| /v1/time_series (intraday) | 5 minutes | Most consumers refresh charts at this cadence. |
| /v1/time_series (daily/weekly/monthly) | 4 hours during market hours, 24 hours otherwise | EOD bars finalize after close. |
| /v1/statistics | 6 hours | Most fields refresh daily or quarterly. |
| /v1/profile | 7 days | Effectively static. |
| /v1/dividends | 24 hours | New events on the order of weeks. |
| /v1/splits | 7 days | New events on the order of months. |
| /v1/symbol_search | 24 hours | Stable per query. |
| /v1/fx/time_series (current) | 5 minutes | Intraday FX moves but slowly; 5 min is fine for portfolio totals. |
| /v1/fx/time_series (history) | 7 days | Closed bars do not change. |
These are starting points. Your application’s tolerance for staleness is the real input — a real-time-feeling watchlist might want 30-second quote TTLs even though 60 is “objectively” enough.
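The table above can be encoded as a small lookup so TTLs live in one place instead of being scattered across call sites. This is a sketch under assumptions: the key names and the market-hours flag below are illustrative, not part of the oneapi.finance API.

```python
# Client TTLs in seconds, keyed by (hypothetical) endpoint name and
# market phase. Values mirror the recommendations table above.
TTLS = {
    "quote": {"open": 60, "closed": 3600},
    "time_series_intraday": {"open": 300, "closed": 300},
    "time_series_eod": {"open": 4 * 3600, "closed": 24 * 3600},
    "statistics": {"open": 6 * 3600, "closed": 6 * 3600},
    "profile": {"open": 7 * 86400, "closed": 7 * 86400},
    "dividends": {"open": 86400, "closed": 86400},
    "splits": {"open": 7 * 86400, "closed": 7 * 86400},
    "symbol_search": {"open": 86400, "closed": 86400},
    "fx_time_series_current": {"open": 300, "closed": 300},
    "fx_time_series_history": {"open": 7 * 86400, "closed": 7 * 86400},
}


def ttl_for(endpoint: str, market_open: bool) -> int:
    """Return the recommended client TTL for an endpoint."""
    return TTLS[endpoint]["open" if market_open else "closed"]
```

Tightening a TTL then becomes a one-line change, e.g. dropping `quote` to 30 seconds for a watchlist that should feel real-time.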
Choosing a cache layer
| Scale | Suggested cache |
|---|---|
| Single-machine CLI / cron job | SQLite |
| One-server web app | In-memory LRU + Redis fallback |
| Multi-instance app | Redis or Memcached |
| Edge-deployed app (Vercel, Cloudflare Workers) | KV or Workers Cache + per-region in-memory |
The goal is to avoid stampedes: when a popular symbol falls out of cache, ten clients should not simultaneously request it from us.
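For the single-process rows of that table, an in-memory layer can be as small as an LRU dict with per-entry expiry. A minimal sketch (the `TTLCache` name is ours; it is not thread-safe and does not by itself prevent stampedes, so pair it with the single-flight lock shown later):

```python
import time
from collections import OrderedDict


class TTLCache:
    """In-memory LRU cache with per-entry TTLs (single-process sketch)."""

    def __init__(self, maxsize: int = 1024):
        self.maxsize = maxsize
        self._data: OrderedDict[str, tuple[float, object]] = OrderedDict()

    def get(self, key: str):
        entry = self._data.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if expires_at < time.time():
            del self._data[key]        # expired: drop and report a miss
            return None
        self._data.move_to_end(key)    # mark as most recently used
        return value

    def set(self, key: str, value, ttl: int) -> None:
        self._data[key] = (time.time() + ttl, value)
        self._data.move_to_end(key)
        if len(self._data) > self.maxsize:
            self._data.popitem(last=False)  # evict least recently used
```

Swapping this for Redis later only changes `get`/`set` internals, not call sites.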
SQLite reference (Python)
```python
import json
import os
import sqlite3
import time
from typing import Callable

import httpx

API_KEY = os.environ["ONEAPI_KEY"]


class SqliteCache:
    def __init__(self, path: str = "oneapi_cache.sqlite"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            """
            CREATE TABLE IF NOT EXISTS cache (
                key TEXT PRIMARY KEY,
                value TEXT NOT NULL,
                expires_at INTEGER NOT NULL
            )
            """
        )

    def get_or_fetch(self, key: str, ttl: int, fetch: Callable[[], dict]) -> dict:
        row = self.db.execute(
            "SELECT value, expires_at FROM cache WHERE key = ?", (key,)
        ).fetchone()
        if row and row[1] > time.time():
            return json.loads(row[0])
        fresh = fetch()
        self.db.execute(
            "INSERT OR REPLACE INTO cache VALUES (?, ?, ?)",
            (key, json.dumps(fresh), int(time.time()) + ttl),
        )
        self.db.commit()
        return fresh


# Usage:
cache = SqliteCache()


def cached_quote(symbol: str) -> dict:
    return cache.get_or_fetch(
        f"quote:{symbol}",
        ttl=60,
        fetch=lambda: httpx.get(
            "https://api.oneapi.finance/v1/quote",
            params={"symbol": symbol},
            headers={"Authorization": f"Bearer {API_KEY}"},
        ).json(),
    )
```
Redis reference (Node)
```js
import { createClient } from "redis";

const redis = createClient();
await redis.connect();

async function getOrFetch(key, ttlSec, fetcher) {
  const cached = await redis.get(key);
  if (cached) return JSON.parse(cached);
  const fresh = await fetcher();
  await redis.set(key, JSON.stringify(fresh), { EX: ttlSec });
  return fresh;
}

const quote = await getOrFetch("oneapi:quote:AAPL", 60, async () => {
  const r = await fetch("https://api.oneapi.finance/v1/quote?symbol=AAPL", {
    headers: { Authorization: `Bearer ${process.env.ONEAPI_KEY}` },
  });
  return r.json();
});
```
Stampede protection
A naive cache lets every concurrent miss trigger an upstream call. Add a single-flight lock:
```python
import threading


class SingleFlight:
    def __init__(self):
        self._locks: dict[str, threading.Lock] = {}
        self._guard = threading.Lock()

    def lock_for(self, key: str) -> threading.Lock:
        with self._guard:
            if key not in self._locks:
                self._locks[key] = threading.Lock()
            return self._locks[key]


flight = SingleFlight()


def cached_quote(symbol: str) -> dict:
    key = f"quote:{symbol}"
    cached = cache.peek(key)
    if cached:
        return cached
    with flight.lock_for(key):
        # Re-check after acquiring the lock: another thread may have
        # refreshed the entry while we were waiting.
        cached = cache.peek(key)
        if cached:
            return cached
        fresh = upstream_fetch(symbol)
        cache.set(key, fresh, ttl=60)
        return fresh
```
Stale-while-revalidate
When freshness is “soon enough” but you want zero perceived latency, return the cached value and trigger a background refresh:
```js
async function swr(key, ttlSec, staleSec, fetcher) {
  const wrapped = await redis.hGetAll(key);
  const value = wrapped.value ? JSON.parse(wrapped.value) : null;
  const fetchedAt = Number(wrapped.fetchedAt || 0);
  const age = Date.now() / 1000 - fetchedAt;

  if (value && age < ttlSec) return value;

  const refresh = (async () => {
    const fresh = await fetcher();
    await redis.hSet(key, {
      value: JSON.stringify(fresh),
      fetchedAt: String(Math.floor(Date.now() / 1000)),
    });
    return fresh;
  })();

  if (value && age < staleSec) {
    refresh.catch(() => {}); // fire-and-forget
    return value;
  }
  return refresh;
}
```
See also
- Rate limits
- Batching recipe — also reduces calls.
- /v1/usage — see your cache_hit_rate for each endpoint.