WordPress 6.9 Database Optimization: Speed Up MySQL with PHP 8.3
Optimize WordPress 6.9 database for maximum speed. Query optimization, indexing, cleanup, and PHP 8.3 database features for large-scale WordPress sites. 50+ expert techniques for sub-second queries.
Blogs Team
Database Performance Experts • 2026 Edition
Database Analysis & Performance Benchmarking (Tips 1-5)
Analyze Database Size and Structure
-- Get database size
SELECT
table_schema AS 'Database',
ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)',
ROUND(SUM(data_length) / 1024 / 1024, 2) AS 'Data (MB)',
ROUND(SUM(index_length) / 1024 / 1024, 2) AS 'Index (MB)',
COUNT(*) AS 'Tables'
FROM information_schema.tables
WHERE table_schema = 'wordpress'
GROUP BY table_schema;
-- Get table sizes
SELECT
table_name,
ROUND(((data_length + index_length) / 1024 / 1024), 2) AS 'Total (MB)',
ROUND((data_length / 1024 / 1024), 2) AS 'Data (MB)',
ROUND((index_length / 1024 / 1024), 2) AS 'Index (MB)',
table_rows AS 'Rows'
FROM information_schema.tables
WHERE table_schema = 'wordpress'
ORDER BY (data_length + index_length) DESC;
Enable MySQL Slow Query Log
# my.cnf - Enable slow query log
slow_query_log = 1
slow_query_log_file = /var/log/mysql/slow-queries.log
long_query_time = 2
log_queries_not_using_indexes = 1
log_slow_admin_statements = 1

# Analyze slow queries
mysqldumpslow /var/log/mysql/slow-queries.log
# Or use pt-query-digest
pt-query-digest /var/log/mysql/slow-queries.log
Use EXPLAIN to Analyze Queries
-- Analyze a WordPress query
EXPLAIN SELECT *
FROM wp_posts
WHERE post_type = 'post'
AND post_status = 'publish'
ORDER BY post_date DESC
LIMIT 10;

-- Look for:
-- type: ALL (full table scan - bad)
-- possible_keys: available indexes
-- key: actual index used
-- rows: rows examined
-- Extra: Using filesort (bad), Using where
Monitor Database Performance Metrics
-- Current database status
SHOW GLOBAL STATUS LIKE 'Questions';
SHOW GLOBAL STATUS LIKE 'Slow_queries';
SHOW GLOBAL STATUS LIKE 'Innodb_rows_read';
SHOW ENGINE INNODB STATUS\G

-- Process list
SHOW FULL PROCESSLIST;

-- Performance schema queries
SELECT *
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;
Use Query Monitor Plugin
Install the Query Monitor plugin to analyze database queries in real time.
- Shows all queries per page load
- Highlights slow queries
- Displays query callers
- Shows duplicate queries
- Analyzes query execution time
Database Cleanup & Maintenance (Tips 6-10)
Remove Post Revisions
-- Delete all but the 5 newest revisions per post
-- (ROW_NUMBER requires MySQL 8.0+; back up first)
DELETE a,b,c
FROM wp_posts a
LEFT JOIN wp_term_relationships b ON (a.ID = b.object_id)
LEFT JOIN wp_postmeta c ON (a.ID = c.post_id)
WHERE a.post_type = 'revision'
AND a.ID IN (
SELECT ID FROM (
SELECT ID,
ROW_NUMBER() OVER (PARTITION BY post_parent ORDER BY post_date DESC) as rn
FROM wp_posts
WHERE post_type = 'revision'
) t WHERE t.rn > 5
);
-- Or limit revisions in wp-config.php
define('WP_POST_REVISIONS', 5);
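The windowed `rn > 5` filter above can be hard to reason about before running a destructive DELETE. As a sanity check, the same keep-newest-5-per-parent rule can be sketched in plain Python (the revision tuples here are hypothetical, not real WordPress data):

```python
from collections import defaultdict

def revisions_to_delete(revisions, keep=5):
    """revisions: list of (id, post_parent, post_date) tuples.
    Returns IDs of all but the `keep` newest revisions per parent,
    mirroring the ROW_NUMBER() ... rn > 5 window filter."""
    by_parent = defaultdict(list)
    for rid, parent, date in revisions:
        by_parent[parent].append((date, rid))
    doomed = []
    for rows in by_parent.values():
        rows.sort(reverse=True)  # newest first, like ORDER BY post_date DESC
        doomed.extend(rid for _, rid in rows[keep:])
    return sorted(doomed)

revs = [(i, 1, f"2026-01-{i:02d}") for i in range(1, 8)]  # 7 revisions of post 1
print(revisions_to_delete(revs))  # the two oldest: [1, 2]
```

Running this against a dump of revision IDs first is a cheap way to confirm the SQL will delete exactly what you expect.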
Clean Up Spam Comments
-- Delete spam comments
DELETE FROM wp_comments WHERE comment_approved = 'spam';

-- Delete orphaned comment meta (parent comment no longer exists)
DELETE cm FROM wp_commentmeta cm
LEFT JOIN wp_comments c ON cm.comment_id = c.comment_ID
WHERE c.comment_ID IS NULL;

-- Keep only last 30 days of trashed comments
DELETE FROM wp_comments
WHERE comment_approved = 'trash'
AND comment_date < DATE_SUB(NOW(), INTERVAL 30 DAY);
Delete Orphaned Post Meta
-- Remove postmeta without parent posts
DELETE pm FROM wp_postmeta pm
LEFT JOIN wp_posts p ON pm.post_id = p.ID
WHERE p.ID IS NULL;
-- Remove specific meta keys from deleted plugins
DELETE FROM wp_postmeta
WHERE meta_key IN (
'_bad_plugin_meta',
'_deactivated_plugin_data'
);
Clean Transients
-- Delete expired transient timeouts
-- (backslashes escape the LIKE wildcard meaning of _)
DELETE FROM wp_options
WHERE option_name LIKE '\_transient\_timeout\_%'
AND option_value < UNIX_TIMESTAMP();

-- Delete transient values whose timeout row is gone.
-- The subquery must be wrapped in a derived table: MySQL cannot
-- otherwise reference the DELETE target table in a subquery.
DELETE FROM wp_options
WHERE option_name LIKE '\_transient\_%'
AND option_name NOT LIKE '\_transient\_timeout\_%'
AND CONCAT('_transient_timeout_', SUBSTRING(option_name, 12)) NOT IN (
    SELECT option_name FROM (
        SELECT option_name FROM wp_options
        WHERE option_name LIKE '\_transient\_timeout\_%'
    ) t
);

-- Delete all transients, including site transients (only if safe)
DELETE FROM wp_options
WHERE option_name LIKE '%\_transient\_%';
Optimize Tables
# MySQL table optimization (shell)
mysqlcheck -o wordpress -u root -p

-- Optimize specific tables (SQL)
OPTIMIZE TABLE wp_posts;
OPTIMIZE TABLE wp_postmeta;
OPTIMIZE TABLE wp_comments;
OPTIMIZE TABLE wp_options;
OPTIMIZE TABLE wp_term_relationships;

-- For InnoDB, OPTIMIZE TABLE maps to a table rebuild; this is equivalent:
ALTER TABLE wp_posts ENGINE=InnoDB;
Advanced Indexing Strategies (Tips 11-15)
Analyze Missing Indexes
-- Find missing indexes
SELECT
t.TABLE_NAME,
t.COLUMN_NAME,
t.DATA_TYPE,
COUNT(*) as usage_count
FROM information_schema.columns t
WHERE t.TABLE_SCHEMA = 'wordpress'
AND t.COLUMN_NAME IN ('post_date', 'post_status', 'post_type', 'post_author')
AND NOT EXISTS (
SELECT 1 FROM information_schema.statistics s
WHERE s.TABLE_SCHEMA = t.TABLE_SCHEMA
AND s.TABLE_NAME = t.TABLE_NAME
AND s.COLUMN_NAME = t.COLUMN_NAME
)
GROUP BY t.TABLE_NAME, t.COLUMN_NAME;
Add Composite Indexes for Common Queries
-- For post queries by type and status
CREATE INDEX idx_post_type_status_date
ON wp_posts(post_type, post_status, post_date DESC);

-- For meta queries
CREATE INDEX idx_postmeta_post_meta
ON wp_postmeta(post_id, meta_key(50));

-- For comment queries
CREATE INDEX idx_comments_post_approved_date
ON wp_comments(comment_post_ID, comment_approved, comment_date_gmt DESC);

-- wp_term_relationships already has PRIMARY KEY (object_id, term_taxonomy_id),
-- so no extra composite index is needed there.
Use Partial Indexes for Large Tables
-- MySQL has no true partial indexes; emulate them with functional
-- indexes (MySQL 8.0.13+) on deterministic expressions.

-- Index a single meta key (CAST is needed because meta_value is LONGTEXT,
-- which cannot be indexed without a length)
CREATE INDEX idx_featured_thumb
ON wp_postmeta((CAST(CASE WHEN meta_key = '_thumbnail_id' THEN meta_value END AS CHAR(20))));

-- Note: expressions using NOW() are non-deterministic and cannot be
-- indexed, so "recent posts only" needs a generated column maintained
-- by application code instead.

-- Virtual column index (MySQL 5.7+)
ALTER TABLE wp_postmeta
ADD COLUMN meta_key_prefix VARCHAR(20)
GENERATED ALWAYS AS (LEFT(meta_key, 20)) STORED,
ADD INDEX idx_meta_key_prefix (meta_key_prefix);
Monitor Index Usage
-- Check index usage
SELECT
t.TABLE_NAME,
s.INDEX_NAME,
s.CARDINALITY,
s.SEQ_IN_INDEX,
t.TABLE_ROWS,
ROUND(s.CARDINALITY / t.TABLE_ROWS * 100, 2) as selectivity_percent
FROM information_schema.STATISTICS s
JOIN information_schema.TABLES t
ON s.TABLE_SCHEMA = t.TABLE_SCHEMA
AND s.TABLE_NAME = t.TABLE_NAME
WHERE s.TABLE_SCHEMA = 'wordpress'
AND t.TABLE_ROWS > 1000
ORDER BY selectivity_percent ASC;
-- Find unused indexes (run over weeks)
SELECT * FROM sys.schema_unused_indexes
WHERE object_schema = 'wordpress';
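The selectivity_percent column above is just cardinality divided by row count. A minimal helper makes the interpretation concrete (the function name and thresholds are illustrative, not a MySQL API):

```python
def selectivity(cardinality, table_rows):
    """Index selectivity as computed in the query above:
    distinct values / total rows * 100. Values near 100 mean the
    index narrows scans well; single-digit values (e.g. a status
    column with 3 distinct values over 1M rows) rarely help alone."""
    return round(cardinality / table_rows * 100, 2)

print(selectivity(980_000, 1_000_000))  # 98.0 -> highly selective
print(selectivity(3, 1_000_000))        # 0.0  -> nearly useless by itself
```

Low-selectivity columns can still be useful as the leading part of a composite index when queries always filter on them together with a more selective column.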
Full-Text Indexes for Search
-- Add full-text index for search
ALTER TABLE wp_posts
ADD FULLTEXT idx_search (post_title, post_content);
-- Use in queries
SELECT ID, post_title,
MATCH(post_title, post_content) AGAINST('search terms') as relevance
FROM wp_posts
WHERE MATCH(post_title, post_content) AGAINST('search terms' IN NATURAL LANGUAGE MODE)
AND post_status = 'publish'
ORDER BY relevance DESC
LIMIT 20;
Query Optimization Techniques (Tips 16-20)
Optimize WP_Query
// Bad - loads all fields
$query = new WP_Query([
'posts_per_page' => 10,
]);
// Good - only needed fields
$query = new WP_Query([
'posts_per_page' => 10,
'fields' => 'ids', // Only get IDs
'no_found_rows' => true, // Skip pagination count
'update_post_meta_cache' => false, // Skip meta cache
'update_post_term_cache' => false, // Skip term cache
]);
// Even better - custom SQL for complex queries
global $wpdb;
$results = $wpdb->get_results("
SELECT ID, post_title, post_date
FROM {$wpdb->posts}
WHERE post_status = 'publish'
AND post_type = 'post'
ORDER BY post_date DESC
LIMIT 10
");
Use prepare() for SQL Queries
// Safe and optimized
global $wpdb;
$post_type = 'post';
$status = 'publish';
$limit = 10;
$results = $wpdb->get_results(
$wpdb->prepare(
"SELECT ID, post_title
FROM {$wpdb->posts}
WHERE post_type = %s
AND post_status = %s
ORDER BY post_date DESC
LIMIT %d",
$post_type,
$status,
$limit
)
);
Optimize Meta Queries
// Bad - multiple meta queries
$query = new WP_Query([
'meta_query' => [
['key' => 'price', 'value' => 100, 'compare' => '>'],
['key' => 'stock', 'value' => 0, 'compare' => '>'],
]
]);
// Better - use EXISTS for better performance
$query = new WP_Query([
'meta_query' => [
'relation' => 'AND',
['key' => 'price', 'value' => 100, 'type' => 'NUMERIC', 'compare' => '>'],
['key' => 'stock', 'compare' => 'EXISTS'],
]
]);
// Best - custom SQL with JOINs
global $wpdb;
$results = $wpdb->get_results("
SELECT p.ID, p.post_title
FROM {$wpdb->posts} p
INNER JOIN {$wpdb->postmeta} pm1 ON p.ID = pm1.post_id AND pm1.meta_key = 'price'
INNER JOIN {$wpdb->postmeta} pm2 ON p.ID = pm2.post_id AND pm2.meta_key = 'stock'
WHERE p.post_type = 'product'
AND p.post_status = 'publish'
AND pm1.meta_value + 0 > 100
AND pm2.meta_value + 0 > 0
");
Use EXISTS Instead of IN for Subqueries
-- Often slower on older MySQL - IN with subquery
-- (MySQL 8.0's semijoin optimizer narrows the gap)
SELECT ID, post_title
FROM wp_posts
WHERE ID IN (
SELECT post_id
FROM wp_postmeta
WHERE meta_key = 'featured'
AND meta_value = 'yes'
);
-- Fast - EXISTS
SELECT p.ID, p.post_title
FROM wp_posts p
WHERE EXISTS (
SELECT 1
FROM wp_postmeta pm
WHERE pm.post_id = p.ID
AND pm.meta_key = 'featured'
AND pm.meta_value = 'yes'
);
Batch Process Large Operations
// Bad - updates all at once
$posts = get_posts(['posts_per_page' => -1]);
foreach ($posts as $post) {
update_post_meta($post->ID, 'processed', true);
}
// Good - batch processing
$batch_size = 100;
$offset = 0;
while (true) {
$posts = get_posts([
'posts_per_page' => $batch_size,
'offset' => $offset,
'fields' => 'ids',
]);
if (empty($posts)) break;
global $wpdb;
// Single query for batch update
$wpdb->query(
$wpdb->prepare(
"INSERT INTO {$wpdb->postmeta} (post_id, meta_key, meta_value)
VALUES " . implode(',', array_fill(0, count($posts), '(%d, %s, %s)')) . "
ON DUPLICATE KEY UPDATE meta_value = VALUES(meta_value)",
array_merge(...array_map(function($id) {
return [$id, 'processed', '1'];
}, $posts))
)
);
$offset += $batch_size;
// Sleep to avoid overloading
if (function_exists('usleep')) {
usleep(500000); // 0.5 seconds
}
}
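The trickiest line in the batch above is the `implode(array_fill(...))` that builds one multi-row VALUES clause with a flat parameter list. The same construction is easier to see in Python (`bulk_upsert_sql` and its defaults are hypothetical names for this sketch):

```python
def bulk_upsert_sql(table, ids, meta_key="processed", meta_value="1"):
    """Build a multi-row INSERT ... ON DUPLICATE KEY UPDATE statement
    plus its flat parameter list, mirroring the implode/array_fill trick."""
    placeholders = ", ".join(["(%s, %s, %s)"] * len(ids))
    sql = (f"INSERT INTO {table} (post_id, meta_key, meta_value) "
           f"VALUES {placeholders} "
           f"ON DUPLICATE KEY UPDATE meta_value = VALUES(meta_value)")
    params = []
    for post_id in ids:
        params.extend([post_id, meta_key, meta_value])
    return sql, params

sql, params = bulk_upsert_sql("wp_postmeta", [10, 11, 12])
print(sql.count("(%s, %s, %s)"))  # 3 rows folded into one statement
print(params)  # [10, 'processed', '1', 11, 'processed', '1', ...]
```

One statement per batch instead of one per row is usually the single biggest win in bulk meta updates.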
MySQL 8.0 Server Tuning (Tips 21-25)
Optimize InnoDB Settings
# my.cnf - InnoDB optimization
[mysqld]
# Buffer pool size (50-70% of RAM)
innodb_buffer_pool_size = 4G

# Log file size
innodb_log_file_size = 1G
innodb_log_buffer_size = 64M

# Flush settings for performance
innodb_flush_log_at_trx_commit = 2
innodb_flush_method = O_DIRECT

# Thread concurrency
innodb_thread_concurrency = 0
innodb_read_io_threads = 64
innodb_write_io_threads = 64

# Change buffer
innodb_change_buffering = all

# Use multiple buffer pool instances
innodb_buffer_pool_instances = 8
Replace the Query Cache (Removed in MySQL 8.0)
# MySQL 8.0 removed the query cache - use these instead

# 1. Increase table cache
table_open_cache = 4000
table_definition_cache = 2000

# 2. Enable performance schema
performance_schema = ON

# 3. Use key buffer for MyISAM (if any)
key_buffer_size = 256M

# 4. Thread cache
thread_cache_size = 100

# 5. Temporary table size
tmp_table_size = 64M
max_heap_table_size = 64M
Connection and Thread Settings
# Connection pool
max_connections = 500
max_user_connections = 450

# Timeouts
wait_timeout = 300
interactive_timeout = 300
net_read_timeout = 30
net_write_timeout = 30

# Backlog
back_log = 500

# Sort and join buffers (allocated per connection - keep modest)
sort_buffer_size = 4M
join_buffer_size = 4M
read_rnd_buffer_size = 2M
read_buffer_size = 2M
Use MySQL 8.0 Features
-- Resource groups are available (and on) by default in MySQL 8.0;
-- create a dedicated group for WordPress connections
CREATE RESOURCE GROUP wordpress
    TYPE = USER
    VCPU = 0-3
    THREAD_PRIORITY = 10;

-- Use invisible indexes for testing
ALTER TABLE wp_posts ALTER INDEX idx_post_date INVISIBLE;

-- Use descending indexes
CREATE INDEX idx_post_date_desc ON wp_posts(post_date DESC);

-- Use hash join (MySQL 8.0.18+; enabled by default in later releases)
SET optimizer_switch = 'hash_join=on';
Monitor Memory Usage
-- Check buffer pool usage
SHOW ENGINE INNODB STATUS\G
-- Memory allocation
SELECT
variable_name,
variable_value
FROM performance_schema.global_status
WHERE variable_name LIKE 'Innodb_buffer_pool_%';
-- Calculate optimal buffer pool size
SELECT
CEILING(SUM(data_length + index_length) / 1024 / 1024 / 1024) as 'Data Size (GB)',
CEILING(SUM(data_length + index_length) / 1024 / 1024 / 1024 * 1.2) as 'Recommended Buffer Pool (GB)'
FROM information_schema.tables
WHERE table_schema = 'wordpress';
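The "recommended buffer pool" column above is plain arithmetic: dataset size plus roughly 20% headroom, rounded up. The same rule as a tiny helper (name and headroom factor are assumptions from the query above, not MySQL tooling):

```python
import math

def recommended_buffer_pool_gb(data_plus_index_bytes, headroom=1.2):
    """Mirror the SQL above: data + index size with ~20% headroom,
    rounded up to whole GB. In practice also cap the result at
    roughly 50-70% of server RAM."""
    gb = data_plus_index_bytes / 1024 ** 3
    return math.ceil(gb * headroom)

print(recommended_buffer_pool_gb(3.4 * 1024 ** 3))  # 5 (GB)
```

If the recommended value exceeds available RAM, it is the dataset that needs trimming (cleanup, archiving), not the buffer pool that needs inflating.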
PHP 8.2/8.3 Database Features & Optimizations (Tips 26-30)
Use mysqli_execute_query() for Prepared Statements
// PHP 8.2+ - simplified prepared statements via execute_query()
$db = new mysqli('localhost', 'user', 'password', 'wordpress');
// Old way
$stmt = $db->prepare("SELECT * FROM wp_posts WHERE post_status = ?");
$stmt->bind_param('s', $status);
$stmt->execute();
$result = $stmt->get_result();
// PHP 8.2+ - One-liner
$result = $db->execute_query(
    "SELECT * FROM wp_posts WHERE post_status = ?",
    ['publish']
);

// execute_query() takes only the SQL string and a parameter array;
// parameter types are inferred automatically
$result = $db->execute_query(
    "SELECT ID, post_title, post_date
    FROM wp_posts
    WHERE post_type = ?
    AND post_status = ?
    LIMIT ?",
    ['post', 'publish', 10]
);
JSON Validation and Optimization
// PHP 8.3 - JSON validation
function store_json_data($data) {
// Validate JSON before storing
if (!json_validate($data)) {
throw new InvalidArgumentException('Invalid JSON');
}
global $wpdb;
return $wpdb->insert('custom_table', [
'data' => $data,
'created_at' => current_time('mysql')
]);
}
// Use MySQL JSON functions with PHP 8.3
$result = $wpdb->get_results("
SELECT * FROM wp_postmeta
WHERE JSON_VALID(meta_value)
AND JSON_EXTRACT(meta_value, '$.price') > 100
");
Randomizer Extension for Database Sampling
// PHP 8.2+ - Randomizer from the Random extension
use Random\Randomizer;
$randomizer = new Randomizer();
// Naive - ORDER BY RAND() forces a full-table sort
$random_posts = get_posts([
    'posts_per_page' => 100,
    'orderby' => 'rand', // Slow on large tables
]);
// Better: Use random IDs with indexed query
$random_ids = $wpdb->get_col("
SELECT ID
FROM wp_posts
WHERE post_status = 'publish'
ORDER BY RAND()
LIMIT 100
"); // Still slow
// Best: Use random offset with indexed sort
$total_posts = wp_count_posts()->publish;
$random_offset = $randomizer->getInt(0, max(0, $total_posts - 100));
$posts = $wpdb->get_results($wpdb->prepare("
SELECT ID, post_title
FROM wp_posts
WHERE post_status = 'publish'
ORDER BY ID
LIMIT %d, 100
", $random_offset));
JIT Compilation for Database Operations
# php.ini - Optimize JIT for database-heavy code paths
opcache.jit = tracing
opcache.jit_buffer_size = 256M
opcache.enable_cli = 1
// PHP code - heavy database operations benefit from JIT
class DatabaseBatchProcessor {
public function processLargeDataset(callable $callback, int $batchSize = 1000) {
$offset = 0;
while (true) {
// This loop with many iterations benefits from JIT
$results = $this->fetchBatch($offset, $batchSize);
if (empty($results)) break;
foreach ($results as $row) {
// Heavy processing here - JIT optimized
$callback($row);
}
$offset += $batchSize;
}
}
}
Readonly Properties for Database Models
// PHP 8.2+ - readonly classes (readonly properties since PHP 8.1)
readonly class PostModel {
public function __construct(
public int $id,
public string $title,
public string $content,
public DateTimeImmutable $date
) {}
public static function fromDatabase(array $row): self {
return new self(
id: (int)$row['ID'],
title: $row['post_title'],
content: $row['post_content'],
date: new DateTimeImmutable($row['post_date'])
);
}
}
// Usage - immutable objects are cache-friendly
$posts = array_map(
[PostModel::class, 'fromDatabase'],
$wpdb->get_results("SELECT * FROM wp_posts LIMIT 10", ARRAY_A)
);
Query Caching Strategies (Tips 31-35)
Use WordPress Transients for Query Caching
// Cache expensive queries with transients
function get_cached_popular_posts($days = 30, $limit = 10) {
$cache_key = 'popular_posts_' . md5($days . $limit);
$cached = get_transient($cache_key);
if ($cached !== false) {
return $cached;
}
global $wpdb;
$results = $wpdb->get_results($wpdb->prepare("
SELECT p.ID, p.post_title, COUNT(c.comment_ID) as comment_count
FROM {$wpdb->posts} p
INNER JOIN {$wpdb->comments} c ON p.ID = c.comment_post_ID
WHERE p.post_status = 'publish'
AND p.post_type = 'post'
AND c.comment_approved = 1
AND c.comment_date > DATE_SUB(NOW(), INTERVAL %d DAY)
GROUP BY p.ID
ORDER BY comment_count DESC
LIMIT %d
", $days, $limit));
// Cache for 1 hour
set_transient($cache_key, $results, HOUR_IN_SECONDS);
return $results;
}
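The transient pattern above is plain cache-aside with a TTL: return the cached value if it exists and has not expired, otherwise recompute and store. Stripped of WordPress, the semantics look like this in-process sketch (`TransientCache` is illustrative, not a WordPress API):

```python
import time

class TransientCache:
    """Minimal in-process sketch of WordPress's transient semantics:
    get_transient() returns False on a miss or after expiry,
    set_transient() stores a value with a TTL in seconds."""
    def __init__(self):
        self._store = {}

    def set_transient(self, key, value, ttl):
        self._store[key] = (value, time.monotonic() + ttl)

    def get_transient(self, key):
        hit = self._store.get(key)
        if hit is None:
            return False
        value, expires = hit
        if time.monotonic() >= expires:
            del self._store[key]  # lazy expiry, like _transient_timeout_* rows
            return False
        return value

cache = TransientCache()
cache.set_transient("popular_posts", [1, 2, 3], ttl=3600)
print(cache.get_transient("popular_posts"))  # [1, 2, 3]
print(cache.get_transient("missing"))        # False
```

The False-on-miss convention is why the PHP above checks `$cached !== false` rather than truthiness: an empty result set is a legitimate cached value.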
Implement Redis Object Cache
// wp-config.php
define('WP_REDIS_HOST', '127.0.0.1');
define('WP_REDIS_PORT', 6379);
define('WP_REDIS_DATABASE', 0);
define('WP_REDIS_TIMEOUT', 1);
define('WP_REDIS_READ_TIMEOUT', 1);
// Custom Redis cache for database queries
class RedisQueryCache {
private $redis;
private $prefix = 'db_cache:';
public function __construct() {
$this->redis = new Redis();
$this->redis->connect('127.0.0.1', 6379);
}
public function getOrSet($key, $callback, $ttl = 3600) {
$cached = $this->redis->get($this->prefix . $key);
if ($cached !== false) {
return unserialize($cached);
}
$result = $callback();
$this->redis->setex(
$this->prefix . $key,
$ttl,
serialize($result)
);
return $result;
}
public function invalidate($pattern) {
$keys = $this->redis->keys($this->prefix . $pattern);
if (!empty($keys)) {
$this->redis->del($keys);
}
}
}
Use MySQL Query Cache Alternatives
# ProxySQL query caching (run on the ProxySQL admin interface)
admin> INSERT INTO mysql_query_rules
(rule_id, active, match_pattern, cache_ttl, apply)
VALUES (1, 1, '^SELECT.*wp_posts.*', 10000, 1);

# Or use the InnoDB memcached plugin
INSTALL PLUGIN daemon_memcached SONAME 'libmemcached.so';

# In WordPress, avoid SQL_CALC_FOUND_ROWS where possible -
# it is deprecated as of MySQL 8.0.17; prefer a separate COUNT(*)
$wpdb->get_results("
SELECT SQL_CALC_FOUND_ROWS ID, post_title
FROM {$wpdb->posts}
WHERE post_status = 'publish'
LIMIT 10
");
$total = $wpdb->get_var("SELECT FOUND_ROWS()");
Cache Pagination Counts
// Cache total post counts per category
function get_cached_category_count($category_id) {
$cache_key = 'category_count_' . $category_id;
$count = wp_cache_get($cache_key, 'categories');
if ($count === false) {
global $wpdb;
$count = $wpdb->get_var($wpdb->prepare("
SELECT COUNT(*)
FROM {$wpdb->posts} p
INNER JOIN {$wpdb->term_relationships} tr ON p.ID = tr.object_id
INNER JOIN {$wpdb->term_taxonomy} tt ON tr.term_taxonomy_id = tt.term_taxonomy_id
WHERE p.post_status = 'publish'
AND p.post_type = 'post'
AND tt.term_id = %d
", $category_id));
wp_cache_set($cache_key, $count, 'categories', HOUR_IN_SECONDS);
}
return $count;
}
Materialized Views for Complex Queries
-- Create summary table for complex reports
CREATE TABLE wp_post_summary (
post_id INT PRIMARY KEY,
comment_count INT,
meta_count INT,
category_names TEXT,
last_updated TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);
-- Refresh via the event scheduler (make sure it is enabled)
SET GLOBAL event_scheduler = ON;
DELIMITER $$
CREATE EVENT refresh_post_summary
ON SCHEDULE EVERY 1 HOUR
DO
BEGIN
TRUNCATE wp_post_summary;
INSERT INTO wp_post_summary (post_id, comment_count, meta_count)
SELECT
p.ID,
COUNT(DISTINCT c.comment_ID) as comment_count,
COUNT(DISTINCT pm.meta_id) as meta_count
FROM wp_posts p
LEFT JOIN wp_comments c ON p.ID = c.comment_post_ID AND c.comment_approved = 1
LEFT JOIN wp_postmeta pm ON p.ID = pm.post_id
WHERE p.post_status = 'publish'
GROUP BY p.ID;
END$$
DELIMITER ;
-- Query becomes lightning fast
SELECT * FROM wp_post_summary WHERE comment_count > 10;
Large Scale WordPress Database Optimization (Tips 36-40)
Implement Database Sharding
// Horizontal partitioning by site ID (multisite)
define('DB_SHARD_BY_SITE', true);
function get_shard_connection($site_id) {
$shard_map = [
1 => 'db1_host',
2 => 'db2_host',
3 => 'db3_host',
];
$shard = $shard_map[$site_id] ?? 'db_default';
static $connections = [];
if (!isset($connections[$shard])) {
$connections[$shard] = new mysqli(
DB_HOST_SHARD[$shard],
DB_USER_SHARD[$shard],
DB_PASS_SHARD[$shard],
DB_NAME_SHARD[$shard]
);
}
return $connections[$shard];
}
// Query specific shard
function get_posts_from_shard($site_id, $args) {
$db = get_shard_connection($site_id);
return $db->query("SELECT * FROM wp_posts WHERE ...");
}
Archive Old Data
-- Create archive table
CREATE TABLE wp_posts_archive LIKE wp_posts;
-- Move old posts (over 2 years)
INSERT INTO wp_posts_archive
SELECT * FROM wp_posts
WHERE post_date < DATE_SUB(NOW(), INTERVAL 2 YEAR)
AND post_status NOT IN ('publish', 'future');
DELETE FROM wp_posts
WHERE post_date < DATE_SUB(NOW(), INTERVAL 2 YEAR)
AND post_status NOT IN ('publish', 'future');
-- Partition by date (MySQL 8.0). The partition key must be part of
-- every unique key, so extend the primary key first:
-- ALTER TABLE wp_posts DROP PRIMARY KEY, ADD PRIMARY KEY (ID, post_date);
ALTER TABLE wp_posts
PARTITION BY RANGE (YEAR(post_date)) (
PARTITION p2020 VALUES LESS THAN (2021),
PARTITION p2021 VALUES LESS THAN (2022),
PARTITION p2022 VALUES LESS THAN (2023),
PARTITION p2023 VALUES LESS THAN (2024),
PARTITION p2024 VALUES LESS THAN (2025),
PARTITION p2025 VALUES LESS THAN (2026),
PARTITION future VALUES LESS THAN MAXVALUE
);
Use Read Replicas
// WordPress with read replicas
define('DB_HOST_WRITE', 'master.db');
define('DB_HOST_READ', 'replica.db');
class DatabaseCluster {
private $write;
private $read;
public function query($sql, $is_write = false) {
if ($is_write) {
return $this->write->query($sql);
}
// Round-robin read replicas
static $replica_index = 0;
$replicas = ['replica1', 'replica2', 'replica3'];
$current = $replicas[$replica_index++ % count($replicas)];
return $this->connect($current)->query($sql);
}
}
// Force write on specific operations
function save_post($data) {
global $db_cluster;
return $db_cluster->query(
"INSERT INTO wp_posts ...",
true // force write
);
}
Denormalize for Performance
-- Add computed columns
ALTER TABLE wp_posts
ADD COLUMN comment_count_cache INT DEFAULT 0;
-- Update via triggers or hooks
// Comment hooks pass the comment ID, so resolve the parent post first
function update_comment_count_cache($comment_id) {
global $wpdb;
$comment = get_comment($comment_id);
if (!$comment) {
return;
}
$post_id = (int) $comment->comment_post_ID;
$count = $wpdb->get_var($wpdb->prepare("
SELECT COUNT(*) FROM {$wpdb->comments}
WHERE comment_post_ID = %d
AND comment_approved = 1
", $post_id));
$wpdb->update(
$wpdb->posts,
['comment_count_cache' => $count],
['ID' => $post_id]
);
}
add_action('wp_insert_comment', 'update_comment_count_cache');
add_action('wp_set_comment_status', 'update_comment_count_cache');
-- Query becomes fast
SELECT * FROM wp_posts WHERE comment_count_cache > 10;
Use Elasticsearch for Search
// Offload search to Elasticsearch
class ElasticsearchIntegration {
private $client;
public function __construct() {
$this->client = Elasticsearch\ClientBuilder::create()
->setHosts(['localhost:9200'])
->build();
}
public function indexPost($post) {
$params = [
'index' => 'wordpress',
'id' => $post->ID,
'body' => [
'title' => $post->post_title,
'content' => $post->post_content,
'date' => $post->post_date,
'categories' => wp_get_post_categories($post->ID),
'tags' => wp_get_post_tags($post->ID, ['fields' => 'names'])
]
];
return $this->client->index($params);
}
public function search($query) {
$params = [
'index' => 'wordpress',
'body' => [
'query' => [
'multi_match' => [
'query' => $query,
'fields' => ['title^3', 'content']
]
]
]
];
return $this->client->search($params);
}
}
Database Monitoring & Alerts (Tips 41-45)
Set Up Performance Schema Monitoring
-- Enable performance schema
UPDATE performance_schema.setup_consumers
SET ENABLED = 'YES';
-- Top queries by execution time
SELECT
DIGEST_TEXT,
COUNT_STAR,
SUM_TIMER_WAIT/1000000000000 as total_time_sec,
AVG_TIMER_WAIT/1000000000000 as avg_time_sec
FROM performance_schema.events_statements_summary_by_digest
WHERE DIGEST_TEXT NOT LIKE '%performance_schema%'
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 20;
-- Table access statistics
SELECT
OBJECT_SCHEMA,
OBJECT_NAME,
COUNT_READ,
COUNT_WRITE,
SUM_TIMER_FETCH/1000000000000 as fetch_time
FROM performance_schema.table_io_waits_summary_by_table
WHERE OBJECT_SCHEMA = 'wordpress'
ORDER BY COUNT_READ DESC;
Create Custom Alerts
#!/bin/bash
# Database monitoring script

# Alert on slow queries (-N suppresses the column header)
SLOW_QUERIES=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Slow_queries'" | awk '{print $2}')
if [ "$SLOW_QUERIES" -gt 100 ]; then
curl -X POST https://alerts.example.com \
-H "Content-Type: application/json" \
-d "{\"alert\": \"High slow queries: $SLOW_QUERIES\"}"
fi

# Alert on connection spikes
CONNECTIONS=$(mysql -N -e "SHOW STATUS LIKE 'Threads_connected'" | awk '{print $2}')
MAX_CONNECTIONS=$(mysql -N -e "SHOW VARIABLES LIKE 'max_connections'" | awk '{print $2}')
if [ "$CONNECTIONS" -gt $((MAX_CONNECTIONS * 80 / 100)) ]; then
# Send Slack alert
curl -X POST https://slack.com/api/chat.postMessage \
-H "Authorization: Bearer TOKEN" \
-d "channel=#alerts" \
-d "text=WARNING: Database connections at $CONNECTIONS/$MAX_CONNECTIONS"
fi
Monitor Replication Lag
-- On the replica (MySQL 8.0.22+; use SHOW SLAVE STATUS on older versions)
SHOW REPLICA STATUS\G

-- Alert if lag > 60 seconds (grep matches both Seconds_Behind_Source
-- and the older Seconds_Behind_Master)
mysql -e "SHOW REPLICA STATUS\G" | grep Seconds_Behind | awk -F: '
{
if ($2 > 60) {
system("curl -X POST https://alerts.example.com -d \"Replication lag: "$2" seconds\"")
}
}'
Track Database Growth
#!/bin/bash
# Daily database size tracking
DB_SIZE=$(mysql -e "
SELECT
ROUND(SUM(data_length + index_length) / 1024 / 1024 / 1024, 2) as size_gb
FROM information_schema.tables
WHERE table_schema = 'wordpress'
" | tail -1)
# Store in monitoring system
curl -X POST https://monitor.example.com/metrics \
-d "database_size_gb=$DB_SIZE"
# Predict growth
cat >> /var/log/db_growth.log << EOF
$(date +%Y-%m-%d) $DB_SIZE GB
EOF
# Alert if growth > 10% per week
python3 - << EOF
import numpy as np
# Column 1 holds the size ("YYYY-MM-DD <size> GB")
sizes = np.loadtxt('/var/log/db_growth.log', usecols=1)
if sizes.size > 7:
    growth = np.diff(sizes) / sizes[:-1] * 100
    if np.mean(growth[-7:]) > 10:
        print("WARNING: Database growing >10% weekly")
EOF
Visualize with Grafana
# Prometheus MySQL exporter
docker run -d \
  --name mysqld-exporter \
  -e DATA_SOURCE_NAME="user:password@(localhost:3306)/" \
  -p 9104:9104 \
  prom/mysqld-exporter

# Grafana dashboard queries
# Query rate
rate(mysql_global_status_queries[5m])

# Connection usage
mysql_global_status_threads_connected / mysql_global_variables_max_connections

# InnoDB buffer pool usage (%)
mysql_global_status_innodb_buffer_pool_pages_data
  / mysql_global_status_innodb_buffer_pool_pages_total * 100

# Slow query rate
rate(mysql_global_status_slow_queries[5m])
Advanced Database Techniques (Tips 46-50+)
Use Window Functions
-- Get posts with running total of comments
SELECT
p.ID,
p.post_title,
COUNT(c.comment_ID) as comment_count,
SUM(COUNT(c.comment_ID)) OVER (
ORDER BY p.post_date
ROWS UNBOUNDED PRECEDING
) as running_total
FROM wp_posts p
LEFT JOIN wp_comments c ON p.ID = c.comment_post_ID
WHERE p.post_status = 'publish'
GROUP BY p.ID, p.post_title
ORDER BY p.post_date;
-- Rank posts by comments
-- (backticks needed: RANK is a reserved word in MySQL 8.0)
SELECT
p.ID,
p.post_title,
COUNT(c.comment_ID) as comment_count,
RANK() OVER (ORDER BY COUNT(c.comment_ID) DESC) as `rank`
FROM wp_posts p
LEFT JOIN wp_comments c ON p.ID = c.comment_post_ID
WHERE p.post_status = 'publish'
GROUP BY p.ID, p.post_title
HAVING comment_count > 0
ORDER BY `rank`
LIMIT 10;
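A running total over `ROWS UNBOUNDED PRECEDING` is exactly a prefix sum. If the window-function form is unfamiliar, this tiny example shows what the engine computes (the comment counts are made up for illustration):

```python
from itertools import accumulate

# Comment counts per post, already in post_date order; running_total
# mirrors SUM(...) OVER (ORDER BY post_date ROWS UNBOUNDED PRECEDING)
comment_counts = [4, 0, 7, 2]
running_total = list(accumulate(comment_counts))
print(running_total)  # [4, 4, 11, 13]
```

Each output element is the sum of the current row and every row before it in the window's ordering, which is why the SQL result grows monotonically down the date-sorted rows.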
Common Table Expressions (CTE)
-- Find posts with related content
WITH popular_posts AS (
SELECT ID, post_title, post_date
FROM wp_posts
WHERE comment_count > 100
AND post_status = 'publish'
),
recent_posts AS (
SELECT ID, post_title, post_date
FROM wp_posts
WHERE post_date > DATE_SUB(NOW(), INTERVAL 7 DAY)
AND post_status = 'publish'
)
-- MySQL has no FULL OUTER JOIN; emulate it with a UNION
SELECT
pp.ID,
pp.post_title,
CASE
WHEN rp.ID IS NOT NULL THEN 'Popular & Recent'
ELSE 'Popular Only'
END as category
FROM popular_posts pp
LEFT JOIN recent_posts rp ON pp.ID = rp.ID
UNION
SELECT rp.ID, rp.post_title, 'Recent Only'
FROM recent_posts rp
LEFT JOIN popular_posts pp ON pp.ID = rp.ID
WHERE pp.ID IS NULL
ORDER BY ID DESC;
-- Recursive CTE for hierarchical categories
-- (columns must be qualified: term_id exists in both tables)
WITH RECURSIVE category_tree AS (
SELECT t.term_id, t.name, tt.parent, 0 as level
FROM wp_terms t
JOIN wp_term_taxonomy tt ON t.term_id = tt.term_id
WHERE tt.parent = 0
UNION ALL
SELECT t.term_id, t.name, tt.parent, ct.level + 1
FROM category_tree ct
JOIN wp_term_taxonomy tt ON ct.term_id = tt.parent
JOIN wp_terms t ON tt.term_id = t.term_id
)
SELECT * FROM category_tree
ORDER BY level, name;
JSON Data Type Optimization
-- Create table with JSON column
CREATE TABLE wp_product_attributes (
id INT AUTO_INCREMENT PRIMARY KEY,
post_id INT,
attributes JSON,
INDEX idx_attributes ((CAST(attributes->'$.price' AS DECIMAL(10,2)))),
INDEX idx_attributes_color ((CAST(attributes->'$.color' AS CHAR(50))))
);
-- Insert JSON data
INSERT INTO wp_product_attributes (post_id, attributes) VALUES (
123,
JSON_OBJECT(
'price', 99.99,
'color', 'red',
'size', 'XL',
'in_stock', true,
'tags', JSON_ARRAY('sale', 'featured')
)
);
-- Query JSON data
SELECT * FROM wp_product_attributes
WHERE JSON_EXTRACT(attributes, '$.price') > 50
AND JSON_EXTRACT(attributes, '$.color') = 'red';
-- Update JSON
UPDATE wp_product_attributes
SET attributes = JSON_SET(attributes, '$.price', 79.99)
WHERE post_id = 123;
-- Use JSON_TABLE for relational queries
-- (join the attributes table first; JSON_TABLE then reads its column)
SELECT p.post_title, a.*
FROM wp_posts p
JOIN wp_product_attributes pa ON pa.post_id = p.ID
CROSS JOIN JSON_TABLE(
pa.attributes,
'$' COLUMNS(
price DECIMAL(10,2) PATH '$.price',
color VARCHAR(50) PATH '$.color',
in_stock BOOLEAN PATH '$.in_stock'
)
) a
WHERE p.post_type = 'product';
Spatial Data for Geolocation
-- Add spatial columns
ALTER TABLE wp_posts
ADD COLUMN location POINT,
ADD SPATIAL INDEX idx_location (location);
-- Store coordinates. POINT takes longitude first, then latitude,
-- which is the order ST_Distance_Sphere expects for SRID 0 points.
UPDATE wp_posts
SET location = POINT(-74.0060, 40.7128)
WHERE ID = 123;

-- Find nearby posts (within 5 km)
SELECT
ID,
post_title,
ST_Distance_Sphere(location, POINT(-74.0060, 40.7128)) as distance_meters
FROM wp_posts
WHERE ST_Distance_Sphere(location, POINT(-74.0060, 40.7128)) < 5000
AND post_status = 'publish'
ORDER BY distance_meters;

-- Find posts in a bounding box (same longitude-first order)
SELECT ID, post_title
FROM wp_posts
WHERE MBRContains(
ST_GeomFromText('POLYGON((-74.1 40.7, -73.9 40.7, -73.9 40.8, -74.1 40.8, -74.1 40.7))'),
location
);
Automated Query Rewriting
-- MySQL proxy for query rewriting
-- Example with ProxySQL
INSERT INTO mysql_query_rules
(rule_id, active, match_pattern, replace_pattern, apply)
VALUES (
1, 1,
'SELECT \* FROM wp_posts WHERE ID = (\d+)',
'SELECT * FROM wp_posts_cache WHERE post_id = \1',
1
);
-- Or use MySQL's Rewriter plugin, installed by sourcing the bundled
-- install_rewriter.sql script that ships with the server
INSERT INTO query_rewrite.rewrite_rules
(pattern, replacement, pattern_database)
VALUES (
'SELECT * FROM wp_posts ORDER BY RAND() LIMIT ?',
'SELECT * FROM wp_posts JOIN (SELECT CEIL(RAND() * (SELECT MAX(ID) FROM wp_posts)) AS id) AS r WHERE ID >= r.id ORDER BY ID LIMIT ?',
'wordpress'
);
CALL query_rewrite.flush_rewrite_rules();
-- PHP query interception
class DatabaseInterceptor {
// Naive pattern map - a production rewriter must splice the JOIN in
// before the WHERE clause rather than substitute text in place
private $patterns = [
'/ORDER BY RAND\(\)/' => function($query) {
return str_replace(
'ORDER BY RAND()',
'JOIN (SELECT CEIL(RAND() * (SELECT MAX(ID) FROM wp_posts)) AS id) AS r ON ID >= r.id ORDER BY ID',
$query
);
}
];
public function query($sql) {
foreach ($this->patterns as $pattern => $replacement) {
if (preg_match($pattern, $sql)) {
$sql = $replacement($sql);
break;
}
}
return $this->execute($sql);
}
}
Bonus: 5 More Database Optimization Tips
- 51. Use pt-online-schema-change for zero-downtime schema changes
- 52. Implement table partitioning by date for logs and historical data
- 53. Use MySQL Enterprise Monitor or Percona Monitoring and Management
- 54. Consider Vitess or TiDB for horizontal scaling beyond 1TB
- 55. Implement database auditing with MariaDB Audit Plugin
Database Optimization Checklist
- Analyze table sizes
- Remove post revisions
- Clean spam comments
- Delete orphaned meta
- Clean transients
- Add missing indexes
- Optimize slow queries
- Configure InnoDB
- Set up query caching (object cache / ProxySQL)
- Monitor performance
- Schedule regular maintenance
- Archive old data
- Set up alerts
- Use read replicas
- Implement sharding
Database Optimization FAQ
How often should I optimize my database?
Weekly for active sites, monthly for static sites. Monitor growth to adjust.
What's the biggest database performance killer?
Missing indexes on frequently queried columns, especially in wp_postmeta.
Is MySQL 8.0 faster than 5.7 for WordPress?
Generally yes: 8.0 adds hash joins, CTEs, window functions, functional indexes, and better JSON support. The actual speedup depends on your queries and configuration, so benchmark your own workload.
Should I use MyISAM or InnoDB?
Always InnoDB. MyISAM is legacy: it lacks row-level locking, transactions, and reliable crash recovery, which makes it unsafe for production.