# Upstash & QStash Cleanup Guide

## Diagnosis Summary

### Excessive Redis Reads Issue (CRITICAL)
- **Current**: 13,480,415 reads vs 31,875 writes (423:1 ratio)
- **Root Cause**: Frontend Next.js API routes (`src/lib/tenant/tenant-extractor.ts`) call the PostgreSQL database 2-5 times per request with NO Redis caching
- **Impact**: 812 API routes are affected, causing massive database load
- **Fix Required**: Add Redis caching to the `getTenantFromRequest()` function

See `REDIS_READS_DIAGNOSIS.md` for the complete analysis.
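The required fix is a cache-aside lookup: check Redis before hitting PostgreSQL, and populate the cache on a miss. The real implementation belongs in the TypeScript tenant extractor, but the pattern can be sketched in Python. All names below (`TTLCache`, `get_tenant`, `load_from_db`) are illustrative, not taken from the codebase:

```python
import time

class TTLCache:
    """Minimal in-process TTL cache; Upstash Redis (SETEX) plays the same role in production."""
    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry and entry[0] > time.time():
            return entry[1]
        return None  # missing or expired

    def set(self, key, value):
        self.store[key] = (time.time() + self.ttl, value)

def get_tenant(hostname, cache, load_from_db):
    """Cache-aside: serve from cache when possible, otherwise query once and cache the result."""
    tenant = cache.get(f"tenant:{hostname}")
    if tenant is not None:
        return tenant
    tenant = load_from_db(hostname)  # the expensive PostgreSQL lookup
    cache.set(f"tenant:{hostname}", tenant)
    return tenant
```

With a 5-minute TTL, repeated requests for the same hostname hit the database once per window instead of 2-5 times per request.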
## Cleanup Instructions

### Option 1: SSH into Fly.io and Run Cleanup Script (Recommended)
```bash
# 1. SSH into the app
fly ssh console -a atom-saas

# 2. Once inside, run Python cleanup commands
python3 << 'EOF'
import os
import requests

# Clean Redis
redis_url = os.getenv("UPSTASH_REDIS_REST_URL")
redis_token = os.getenv("UPSTASH_REDIS_REST_TOKEN")
if redis_url and redis_token:
    print("Cleaning Redis...")
    headers = {"Authorization": f"Bearer {redis_token}"}
    r = requests.get(f"{redis_url}/dbsize", headers=headers, timeout=5)
    if r.status_code == 200:
        print(f"Current keys: {r.json().get('result', 0)}")
    r = requests.post(f"{redis_url}/flushdb", headers=headers, timeout=5)
    print(f"Redis cleared: {r.status_code == 200}")

# Clean QStash DLQ
qstash_url = os.getenv("QSTASH_URL", "https://qstash-us-east-1.upstash.io")
qstash_token = os.getenv("QSTASH_TOKEN")
if qstash_token:
    print("Cleaning QStash DLQ...")
    headers = {"Authorization": f"Bearer {qstash_token}", "Content-Type": "application/json"}
    r = requests.delete(f"{qstash_url}/v2/dlq", headers=headers, timeout=10)
    if r.status_code in [200, 204]:
        data = r.json() if r.text else {}
        print(f"Deleted {data.get('deleted', 0)} DLQ messages")

    # Check schedules
    r = requests.get(f"{qstash_url}/v2/schedules", headers=headers, timeout=5)
    if r.status_code == 200:
        schedules = r.json()
        if schedules:
            print(f"Found {len(schedules)} active schedules")

print("Done!")
EOF
```

### Option 2: Use Existing Cleanup Scripts (After Deploy)
The cleanup script has been created at `backend-saas/scripts/clean_all_upstash_and_qstash.py`.

To use it:

- Deploy the script to Fly.io:

  ```bash
  cd backend-saas
  fly deploy -a atom-saas
  ```

- Run it via SSH:

  ```bash
  fly ssh console -a atom-saas -C "python3 /app/scripts/clean_all_upstash_and_qstash.py"
  ```

### Option 3: Manual REST API Calls
If you have the Upstash and QStash credentials locally:

```bash
# Get credentials from Fly.io
fly secrets list -a atom-saas

# Then use curl to clean (replace placeholders with actual values;
# UPSTASH_REDIS_REST_URL already includes the https:// scheme)

# Clean Redis
curl -X POST "<UPSTASH_REDIS_REST_URL>/flushdb" \
  -H "Authorization: Bearer <UPSTASH_REDIS_REST_TOKEN>"

# Clean QStash DLQ
curl -X DELETE "https://qstash-us-east-1.upstash.io/v2/dlq" \
  -H "Authorization: Bearer <QSTASH_TOKEN>"

# List QStash schedules
curl -X GET "https://qstash-us-east-1.upstash.io/v2/schedules" \
  -H "Authorization: Bearer <QSTASH_TOKEN>"
```

## What Gets Cleaned
### Redis (Upstash)

- **All keys** - Complete cache flush (`FLUSHDB`)
- **Impact**: Will cause cache misses until data is repopulated
- **Recovery**: Automatic - the cache will rebuild as requests come in

### QStash

- **DLQ (Dead Letter Queue)** - All failed messages
- **Schedules** - Optional (not deleted by default; deletion can be added)
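If schedule deletion does need to be added, it can be sketched as a list-then-delete loop over the QStash v2 REST API. The endpoint paths (`/v2/schedules`, `/v2/schedules/{scheduleId}`) and the `scheduleId` field are taken from the QStash docs as I understand them; verify against the current API reference before running, and keep `dry_run=True` on the first pass:

```python
import requests

def delete_all_schedules(qstash_url, token, dry_run=True):
    """List QStash schedules and optionally delete each one. Returns the schedule IDs seen."""
    headers = {"Authorization": f"Bearer {token}"}
    r = requests.get(f"{qstash_url}/v2/schedules", headers=headers, timeout=10)
    r.raise_for_status()
    seen = []
    for schedule in r.json():
        sid = schedule.get("scheduleId")
        if dry_run:
            print(f"Would delete schedule {sid} -> {schedule.get('destination')}")
        else:
            requests.delete(f"{qstash_url}/v2/schedules/{sid}", headers=headers, timeout=10)
            print(f"Deleted schedule {sid}")
        seen.append(sid)
    return seen
```

Run it once with `dry_run=True` to review what would be removed, then again with `dry_run=False` to actually delete.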
## Post-Cleanup Verification
```bash
# Check Redis key count
fly ssh console -a atom-saas -C "python3 -c \"
import os, requests
url = os.getenv('UPSTASH_REDIS_REST_URL')
token = os.getenv('UPSTASH_REDIS_REST_TOKEN')
r = requests.get(f'{url}/dbsize', headers={'Authorization': f'Bearer {token}'})
print(f'Redis keys: {r.json().get(\\\"result\\\", 0)}')
\""

# Check QStash DLQ
fly ssh console -a atom-saas -C "python3 -c \"
import os, requests
url = os.getenv('QSTASH_URL')
token = os.getenv('QSTASH_TOKEN')
r = requests.get(f'{url}/v2/dlq', headers={'Authorization': f'Bearer {token}'})
print(f'DLQ messages: {len(r.json())}')
\""
```

## Next Steps
- **Clean up Redis/QStash** (use Option 1 above - easiest)
- **Fix excessive reads** - Implement tenant caching in frontend (see REDIS_READS_DIAGNOSIS.md)
- **Monitor metrics** - Check Upstash dashboard for read count reduction
- **Set up alerts** - Configure alerts for abnormal read patterns
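The alerting step can start as a simple watchdog before wiring up dashboard alerts. The Upstash console remains the authoritative source for read counts; this sketch only samples the key count via the REST `/dbsize` endpoint (the one already used above) and flags abnormal growth. The function names and threshold are illustrative:

```python
import time
import requests

def sample_dbsize(url, token):
    """Return the current number of keys in the Upstash Redis database."""
    r = requests.get(f"{url}/dbsize",
                     headers={"Authorization": f"Bearer {token}"}, timeout=5)
    r.raise_for_status()
    return int(r.json().get("result", 0))

def watch(url, token, threshold, interval=60, samples=5):
    """Poll the key count and print an alert when growth exceeds `threshold` keys per interval."""
    prev = sample_dbsize(url, token)
    for _ in range(samples):
        time.sleep(interval)
        cur = sample_dbsize(url, token)
        if cur - prev > threshold:
            print(f"ALERT: key count grew by {cur - prev} in {interval}s (now {cur})")
        prev = cur
```

For the read-count regression itself, check the Upstash dashboard: after the tenant caching fix, the 423:1 read:write ratio should drop sharply.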
---
**Generated**: 2026-04-08
**Status**: Ready for cleanup
**Est. Time**: 5 minutes