
Alerts & Notifications

Set up alert rules to get notified via Discord or email when critical metrics change. Alerts run automatically on a schedule and compare current values against a configurable baseline window.

| Type | Trigger Condition | Use Case |
| --- | --- | --- |
| Crash Spike | Error count exceeds X% of baseline | Detect crash loops after updates |
| Revenue Drop | Daily revenue falls below X% of baseline | Catch payment issues early |
| DAU Drop | Active users drop below X% of estimated average | Spot retention problems |
| Retention Drop | Day-7 retention of recent cohorts below X% of baseline | Catch onboarding regressions |
| Log Error Spike | Log errors + warnings exceed X% of baseline | Surface remote logging anomalies |
To create an alert rule:

  1. Go to Configuration > Alerts
  2. Click New Alert
  3. Configure:
    • Name: Descriptive name (e.g. “Post-Update Crash Watch”)
    • Type: Select alert type
    • Threshold: Multiplier or percentage (e.g. 2.0 = trigger when 2x baseline)
    • Comparison Window: Baseline period (24h, 72h, 1 week, 1 month)
    • Email: Notification email address
    • Discord Webhook: Optional Discord webhook URL
  4. Click Save
Threshold semantics depend on the alert type:

  • Spike types (Crash Spike, Log Error Spike): Trigger when current value >= baseline * threshold. A threshold of 2.0 means “trigger at 200% of normal”.
  • Drop types (Revenue, DAU, Retention): Trigger when current value < baseline * threshold. A threshold of 0.7 means “trigger when below 70% of normal”.
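Both conditions are plain arithmetic comparisons. The sketch below checks each against made-up baseline and current values (all numbers here are illustrative, not real data):

```shell
#!/bin/sh
# Illustrative threshold checks; all numbers are made up for the example.
baseline=120    # e.g. average daily error count over the comparison window
current=260     # e.g. error count in the current period

# Spike rule: trigger when current >= baseline * threshold (threshold 2.0)
awk -v c="$current" -v b="$baseline" -v t=2.0 \
  'BEGIN { if (c >= b * t) print "SPIKE TRIGGERED"; else print "spike ok" }'

# Drop rule: trigger when current < baseline * threshold (threshold 0.7)
awk -v c="$current" -v b="$baseline" -v t=0.7 \
  'BEGIN { if (c < b * t) print "DROP TRIGGERED"; else print "drop ok" }'
```

With these numbers the spike rule fires (260 >= 120 × 2.0 = 240), while the drop rule stays quiet (260 is not below 120 × 0.7 = 84).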
To connect a Discord webhook:

  1. In Discord: Server Settings > Integrations > Webhooks > New Webhook
  2. Copy the webhook URL (starts with https://discord.com/api/webhooks/...)
  3. Paste into the alert rule’s Discord Webhook field
  4. Use Test Webhook to verify the connection
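If you want to exercise the webhook outside the dashboard, Discord's webhook endpoint accepts a plain JSON POST with a `content` field. A minimal sketch (the message text is arbitrary):

```shell
#!/bin/sh
# Minimal Discord webhook payload; "content" is Discord's field for plain messages.
payload='{"content":"QuestData test alert"}'
echo "$payload"

# To actually send it, substitute your real webhook URL from step 2:
# curl -H "Content-Type: application/json" -d "$payload" \
#   "https://discord.com/api/webhooks/..."
```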

Alerts appear as rich embeds with color-coded types:

| Type | Embed Color |
| --- | --- |
| Crash Spike | Red |
| Log Error Spike | Dark Red |
| Revenue Drop | Orange |
| DAU Drop | Blue |
| Retention Drop | Purple |
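Discord embeds encode color as a decimal RGB integer. A mapping like the table above could be sketched as below; the specific integers are assumptions chosen to match the color names (they are common Discord palette values), not the product's actual constants:

```shell
#!/bin/sh
# Map alert type to a Discord embed color (decimal RGB).
# The integers are illustrative assumptions matching the table's color names.
embed_color() {
  case "$1" in
    crash_spike)     echo 15158332 ;;  # red      (0xE74C3C)
    log_error_spike) echo 10038562 ;;  # dark red (0x992D22)
    revenue_drop)    echo 15105570 ;;  # orange   (0xE67E22)
    dau_drop)        echo 3447003  ;;  # blue     (0x3498DB)
    retention_drop)  echo 10181046 ;;  # purple   (0x9B59B6)
    *)               echo 0        ;;  # fallback: black
  esac
}

embed_color crash_spike   # prints 15158332
```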

The History tab shows all triggered alerts with:

  • Alert Name and type
  • Change % from baseline
  • Delivery Status (email sent, webhook sent, errors)
  • Triggered At timestamp

Alerts have a 1-hour cooldown per rule to prevent notification spam. After triggering, the same rule won’t fire again for at least 60 minutes.
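The cooldown reduces to a timestamp comparison. A sketch with fixed example times (Unix epoch seconds, chosen arbitrarily for illustration):

```shell
#!/bin/sh
# Cooldown sketch with fixed example timestamps (epoch seconds).
cooldown=3600                 # 1 hour, per the rule above
last_triggered=1700000000     # when this rule last fired
now=1700001800                # 30 minutes later

if [ $(( now - last_triggered )) -lt "$cooldown" ]; then
  echo "suppressed"           # still inside the cooldown window
else
  echo "fire"
fi
```

Here only 1800 seconds have elapsed, so the alert is suppressed; at 3600 seconds or more it would fire again.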

# List alert rules for a game
curl "https://api.questdata.io/v1/alerts?game_id=GAME_ID" \
  -H "Authorization: Bearer YOUR_JWT"

# Create an alert rule
curl -X POST "https://api.questdata.io/v1/alerts" \
  -H "Authorization: Bearer YOUR_JWT" \
  -H "Content-Type: application/json" \
  -d '{
    "game_id": "GAME_ID",
    "name": "Log Error Watch",
    "type": "log_error_spike",
    "threshold": 2.0,
    "comparison_window_hours": 168,
    "email": "dev@example.com",
    "discord_webhook_url": "https://discord.com/api/webhooks/..."
  }'

# Get alert history
curl "https://api.questdata.io/v1/alerts/history?game_id=GAME_ID&limit=50" \
  -H "Authorization: Bearer YOUR_JWT"

# Test Discord webhook
curl -X POST "https://api.questdata.io/v1/alerts/RULE_ID/test-webhook" \
  -H "Authorization: Bearer YOUR_JWT"
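Once you have saved a history response, a quick scan for failed deliveries can be done with standard tools. The JSON field names in this sketch are assumptions for illustration; check the actual `/v1/alerts/history` response for the real schema:

```shell
#!/bin/sh
# Sketch: scan a saved alert-history response for failed webhook deliveries.
# Field names ("name", "delivery_status") are assumed, not confirmed by the API docs.
cat > history.json <<'EOF'
[
  {"name": "Log Error Watch", "delivery_status": "webhook_failed"},
  {"name": "Post-Update Crash Watch", "delivery_status": "email_sent"}
]
EOF

grep '"delivery_status": "webhook_failed"' history.json
```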
Best practices:

  1. Start with Crash Spike + Log Error Spike — these catch the most urgent issues
  2. Use a 1-week comparison window (168h) for stable baselines
  3. Set threshold to 2.0-3.0 for spikes — lower values cause false positives
  4. Set threshold to 0.5-0.7 for drops — catch significant declines without noise
  5. Connect Discord for instant visibility — email has delivery delays
  6. Test your webhook before relying on it for production alerts