Recurring Maintenance Events
Use this lesson to design production-grade recurring events — maintenance jobs that clean data, generate summaries, and keep your database healthy, with proper idempotency, observability, and safety guards.
Concept Overview
What Are Recurring Maintenance Events?
Recurring maintenance events are scheduled SQL jobs that run at regular intervals to keep your database healthy. Common maintenance tasks include:
- Purging expired data — sessions, temp files, old logs
- Aggregating metrics — daily revenue rollups, monthly summaries
- Rebuilding indexes — optimize table performance
- Checking data integrity — find and flag orphan records
The Three Pillars of Good Maintenance Events
| Pillar | Why It Matters |
|---|---|
| Idempotent | Running the event twice produces the same result — no double-processing |
| Observable | You can verify the event ran, what it did, and how long it took |
| Bounded | The event processes a limited batch, not the entire table — preventing long locks |
Basic Syntax & Rules
Maintenance Event Template
DELIMITER //
CREATE EVENT evt_maintenance_name
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 2 HOUR -- 2 AM tomorrow
COMMENT 'Description of what this event does'
DO
BEGIN
DECLARE v_affected INT DEFAULT 0;
-- Bounded operation
DELETE FROM target_table
WHERE condition
LIMIT 10000;
SET v_affected = ROW_COUNT();
-- Observability: log what happened
INSERT INTO maintenance_log (event_name, rows_affected, executed_at)
VALUES ('evt_maintenance_name', v_affected, NOW());
END //
DELIMITER ;
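None of this fires unless the MySQL event scheduler is enabled (see the Common Mistakes table below). A quick check before creating any event:

```sql
-- The scheduler must be ON or events never run.
-- Note: SET GLOBAL does not survive a server restart unless
-- event_scheduler = ON is also set in the server config file.
SET GLOBAL event_scheduler = ON;

-- Verify it is running:
SHOW VARIABLES LIKE 'event_scheduler';
```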
Maintenance Log Table
Create a dedicated log table for tracking all maintenance events:
CREATE TABLE maintenance_log (
log_id BIGINT AUTO_INCREMENT PRIMARY KEY,
event_name VARCHAR(100) NOT NULL,
rows_affected INT NOT NULL DEFAULT 0,
duration_ms INT,
notes TEXT,
executed_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX idx_event (event_name, executed_at)
);
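The duration_ms column can be populated by capturing a timestamp at the top of the event body. A minimal sketch of the pattern (the event name evt_timed_example is illustrative; NOW(3) and TIMESTAMPDIFF are standard MySQL):

```sql
DELIMITER //
CREATE EVENT evt_timed_example
ON SCHEDULE EVERY 1 DAY
COMMENT 'Illustrative: records how long each run took'
DO
BEGIN
DECLARE v_started DATETIME(3) DEFAULT NOW(3); -- millisecond precision
DECLARE v_affected INT DEFAULT 0;
-- ... bounded maintenance work here, then: SET v_affected = ROW_COUNT(); ...
INSERT INTO maintenance_log (event_name, rows_affected, duration_ms)
VALUES ('evt_timed_example', v_affected,
TIMESTAMPDIFF(MICROSECOND, v_started, NOW(3)) DIV 1000);
END //
DELIMITER ;
```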
Step-by-Step Examples
Example 1: Session Cleanup with Logging
Step 1: Create the maintenance log table (if it doesn't exist)
CREATE TABLE IF NOT EXISTS maintenance_log (
log_id BIGINT AUTO_INCREMENT PRIMARY KEY,
event_name VARCHAR(100) NOT NULL,
rows_affected INT NOT NULL DEFAULT 0,
executed_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP,
INDEX idx_event (event_name, executed_at)
);
Step 2: Create the cleanup event
DELIMITER //
CREATE EVENT evt_cleanup_sessions
ON SCHEDULE EVERY 1 HOUR
STARTS CURRENT_TIMESTAMP
COMMENT 'Remove expired sessions hourly, batch of 10000'
DO
BEGIN
DECLARE v_deleted INT DEFAULT 0;
DELETE FROM sessions
WHERE expires_at < NOW()
LIMIT 10000;
SET v_deleted = ROW_COUNT();
INSERT INTO maintenance_log (event_name, rows_affected)
VALUES ('evt_cleanup_sessions', v_deleted);
END //
DELIMITER ;
Step 3: Verify it's running
SELECT event_name, rows_affected, executed_at
FROM maintenance_log
WHERE event_name = 'evt_cleanup_sessions'
ORDER BY executed_at DESC
LIMIT 5;
| event_name | rows_affected | executed_at |
|---|---|---|
| evt_cleanup_sessions | 342 | 2026-02-10 15:00:00 |
| evt_cleanup_sessions | 1205 | 2026-02-10 14:00:00 |
Example 2: Daily Revenue Rollup (Idempotent)
Scenario: Aggregate daily revenue into a summary table for fast dashboard queries.
Key: The event must be idempotent — running it twice for the same day should produce the same result, not double the numbers.
Step 1: Create the summary table
CREATE TABLE daily_revenue_summary (
summary_date DATE PRIMARY KEY,
total_orders INT NOT NULL DEFAULT 0,
total_revenue DECIMAL(14,2) NOT NULL DEFAULT 0,
avg_order DECIMAL(10,2) NOT NULL DEFAULT 0,
updated_at DATETIME NOT NULL DEFAULT CURRENT_TIMESTAMP
ON UPDATE CURRENT_TIMESTAMP
);
Step 2: Create the idempotent rollup event
DELIMITER //
CREATE EVENT evt_daily_revenue_rollup
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 1 HOUR -- 1 AM
COMMENT 'Aggregate yesterday revenue into summary table (idempotent)'
DO
BEGIN
DECLARE v_yesterday DATE DEFAULT CURDATE() - INTERVAL 1 DAY;
DECLARE v_rows INT DEFAULT 0;
-- REPLACE = idempotent: if the row exists, it's replaced; if not, inserted
REPLACE INTO daily_revenue_summary
(summary_date, total_orders, total_revenue, avg_order)
SELECT
v_yesterday AS summary_date,
COUNT(*) AS total_orders,
COALESCE(SUM(total_amount), 0) AS total_revenue,
COALESCE(AVG(total_amount), 0) AS avg_order
FROM orders
WHERE order_date >= v_yesterday -- range predicate keeps an index on order_date usable
AND order_date < v_yesterday + INTERVAL 1 DAY
AND status = 'paid';
-- Capture ROW_COUNT() immediately, before another statement overwrites it
SET v_rows = ROW_COUNT();
INSERT INTO maintenance_log (event_name, rows_affected)
VALUES ('evt_daily_revenue_rollup', v_rows);
END //
DELIMITER ;
Why REPLACE INTO? If the event runs twice (e.g., after a restart), it overwrites yesterday's summary with the same data instead of doubling it. That's idempotency.
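Note that REPLACE physically deletes the old row and inserts a new one (which fires DELETE triggers, if any). An equally idempotent alternative that updates in place is INSERT ... ON DUPLICATE KEY UPDATE; a sketch against the same summary table:

```sql
INSERT INTO daily_revenue_summary
(summary_date, total_orders, total_revenue, avg_order)
SELECT CURDATE() - INTERVAL 1 DAY,
COUNT(*),
COALESCE(SUM(total_amount), 0),
COALESCE(AVG(total_amount), 0)
FROM orders
WHERE order_date >= CURDATE() - INTERVAL 1 DAY
AND order_date < CURDATE()
AND status = 'paid'
ON DUPLICATE KEY UPDATE
total_orders = VALUES(total_orders),
total_revenue = VALUES(total_revenue),
avg_order = VALUES(avg_order);
```

The VALUES() form still works everywhere; MySQL 8.0.20+ prefers the row-alias syntax (`... AS new ON DUPLICATE KEY UPDATE total_orders = new.total_orders`).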
Example 3: Audit Log Retention
Scenario: Keep audit logs for 90 days, then purge. Process in batches to avoid long locks.
DELIMITER //
CREATE EVENT evt_purge_audit_logs
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 3 HOUR -- 3 AM
COMMENT 'Purge audit logs older than 90 days, 50000 rows per run'
DO
BEGIN
DECLARE v_total_deleted INT DEFAULT 0;
DECLARE v_batch_deleted INT DEFAULT 1;
purge_loop: WHILE v_batch_deleted > 0 AND v_total_deleted < 50000 DO
DELETE FROM audit_logs
WHERE created_at < NOW() - INTERVAL 90 DAY
LIMIT 5000;
SET v_batch_deleted = ROW_COUNT();
SET v_total_deleted = v_total_deleted + v_batch_deleted;
END WHILE;
INSERT INTO maintenance_log (event_name, rows_affected)
VALUES ('evt_purge_audit_logs', v_total_deleted);
END //
DELIMITER ;
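An event body cannot be invoked by hand, so a common pattern is to put the logic in a stored procedure and have the event CALL it; you can then test the procedure manually before the schedule ever fires. A sketch with illustrative names (sp_example_purge, evt_example_purge):

```sql
DELIMITER //
CREATE PROCEDURE sp_example_purge()
BEGIN
DECLARE v_deleted INT DEFAULT 0;
DELETE FROM audit_logs
WHERE created_at < NOW() - INTERVAL 90 DAY
LIMIT 5000;
SET v_deleted = ROW_COUNT();
INSERT INTO maintenance_log (event_name, rows_affected)
VALUES ('sp_example_purge', v_deleted);
END //

CREATE EVENT evt_example_purge
ON SCHEDULE EVERY 1 DAY
STARTS CURRENT_DATE + INTERVAL 1 DAY + INTERVAL 3 HOUR
DO CALL sp_example_purge() //
DELIMITER ;

-- Test the logic by hand before scheduling:
CALL sp_example_purge();
```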
Example 4: Monitoring Your Events
-- Check all events and their status
SHOW EVENTS\G
-- Check maintenance log for recent runs
SELECT
event_name,
COUNT(*) AS total_runs,
MAX(executed_at) AS last_run,
AVG(rows_affected) AS avg_rows_per_run
FROM maintenance_log
WHERE executed_at >= NOW() - INTERVAL 7 DAY
GROUP BY event_name
ORDER BY last_run DESC;
-- Find events that haven't run recently (may be broken)
SELECT event_name
FROM maintenance_log
GROUP BY event_name
HAVING MAX(executed_at) < NOW() - INTERVAL 1 DAY;
Practical Use Cases
1. Session and Token Cleanup
Hourly deletion of expired sessions, OAuth tokens, and password reset links.
2. Metrics Aggregation
Daily/weekly rollup of orders, revenue, signups into summary tables for fast dashboard queries.
3. Data Retention Compliance
GDPR/compliance-driven purging of personal data older than the retention period.
4. Index and Table Optimization
Weekly OPTIMIZE TABLE on frequently updated tables to reclaim space and rebuild indexes.
5. Orphan Record Detection
Daily check for orphan records (child rows with no parent) and flagging them for review.
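The orphan-record check above can be sketched as an anti-join — a LEFT JOIN that keeps only child rows whose parent is missing. Table and column names here (order_items, orders, flagged_orphans) are illustrative:

```sql
-- Flag child rows whose parent row no longer exists
INSERT INTO flagged_orphans (source_table, orphan_id, detected_at)
SELECT 'order_items', oi.item_id, NOW()
FROM order_items oi
LEFT JOIN orders o ON o.order_id = oi.order_id
WHERE o.order_id IS NULL
LIMIT 10000; -- bounded, like every other maintenance batch
```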
Common Mistakes & Troubleshooting
| Mistake | What Happens | How to Fix |
|---|---|---|
| Non-idempotent aggregation using INSERT | Re-running doubles the summary numbers | Use REPLACE INTO or INSERT ... ON DUPLICATE KEY UPDATE |
| Unbounded DELETE (no LIMIT) | Table locked for minutes, blocking all writes | Always use LIMIT and loop through batches |
| No logging | Can't tell if events are running or what they're doing | Insert into maintenance_log at the end of every event |
| Running heavy events during peak hours | Competes with production traffic | Schedule events during off-peak hours (e.g., 2–5 AM) |
| Not checking scheduler status after restart | Events silently stop running | Monitor with SHOW VARIABLES LIKE 'event_scheduler' |
| No safety cap on batch loops | Infinite loop if deletion keeps matching new rows | Add v_total_deleted < MAX_LIMIT guard |
Best Practices
- Make events idempotent — use REPLACE INTO or INSERT ... ON DUPLICATE KEY UPDATE for aggregations
- Batch everything — LIMIT on DELETEs, loop with a safety cap
- Log every execution — timestamp, rows affected, event name
- Schedule off-peak — avoid competing with production queries
- Monitor regularly — check maintenance_log for missing or failed runs
- Document each event — use COMMENT in the event definition
- Test in staging first — run the event body's statements by hand before scheduling
Hands-On Practice
Exercise 1: Simple Cleanup Event (Easy)
Create an event evt_cleanup_temp that runs every 6 hours and deletes rows from temp_uploads where created_at < NOW() - INTERVAL 2 DAY. Log the results.
Exercise 2: Idempotent Summary (Medium)
Create an event evt_weekly_signup_summary that:
- Runs every Monday at 1 AM
- Counts new signups from the previous week (Monday–Sunday)
- Inserts/replaces into a weekly_signups summary table
- Is idempotent (safe to re-run)
Exercise 3: Monitor Your Events (Advanced)
Write a monitoring query that shows:
- All registered events and their schedules
- Their last execution time from maintenance_log
- Whether they're overdue (last run > expected interval)
- Average rows affected per run
Connection to Other Concepts
| Related Concept | How It Connects |
|---|---|
| Event Scheduler Basics | This lesson builds on the fundamentals with production patterns |
| Stored Procedures | Events can CALL procedures for complex multi-step logic |
| Date Arithmetic | NOW() - INTERVAL 90 DAY is the core of retention queries |
| Audit Logging | Maintenance events manage audit log retention |
| Transactions | Batch deletes can be wrapped in transactions for safety |
What to Learn Next
- 16. Transactions and Concurrency — safely handle concurrent data access
Visual Learning Diagram
flowchart TD
A["Event fires on schedule"] --> B["Check condition"]
B --> C["Execute bounded batch"]
C --> D{"More rows to process?"}
D -->|Yes, under safety cap| C
D -->|No, or cap reached| E["Log results to maintenance_log"]
E --> F["Event sleeps until next schedule"]
F --> A
classDef safe fill:#28a745,stroke:#fff,color:#fff
class E safe
Quick Reference
-- Idempotent aggregation
REPLACE INTO summary (summary_date, total)
SELECT CURDATE() - INTERVAL 1 DAY, COUNT(*)
FROM orders
WHERE created_at >= CURDATE() - INTERVAL 1 DAY
AND created_at < CURDATE();
-- Bounded batch delete
DELETE FROM logs WHERE created_at < NOW() - INTERVAL 90 DAY LIMIT 5000;
-- Log execution
INSERT INTO maintenance_log (event_name, rows_affected)
VALUES ('evt_name', ROW_COUNT());
-- Monitor events
SELECT * FROM maintenance_log ORDER BY executed_at DESC LIMIT 10;
SHOW EVENTS\G