After digging into the core processing logic, I traced the problem mainly to /e/action/ListInfo.php.
Open /e/action/ListInfo.php and locate:
$query="select ".ReturnSqlListF($mid)." from {$dbtbpre}ecms_".$tbname.ReturnYhAndSql($yhadd,$add,1);
and replace it with:
// Combined count query: SQL_CALC_FOUND_ROWS makes MySQL tally every matching
// row while LIMIT 1 keeps the returned result set tiny
$totalquery = "SELECT SQL_CALC_FOUND_ROWS id FROM {$dbtbpre}ecms_".$tbname.ReturnYhAndSql($yhadd,$add,1)." LIMIT 1";
$sql = $empire->query($totalquery);
$total_result = $empire->query("SELECT FOUND_ROWS() AS total");
$num = $empire->fetch($total_result)['total']; // array dereferencing needs PHP >= 5.4
// Trim the column list (roughly 30% less data transferred) and add a
// server-side timestamp; the alias must be backquoted because
// CURRENT_TIME is a reserved word in MySQL
$query = "SELECT id,classid,title,titleurl,titlepic,
    UNIX_TIMESTAMP() AS `current_time`
    FROM {$dbtbpre}ecms_".$tbname.ReturnYhAndSql($yhadd,$add,1);
// Keyset (seek) pagination: far cheaper than LIMIT/OFFSET on deep pages
$page_size = (int)$line;
$last_id = isset($_GET['last_id']) ? (int)$_GET['last_id'] : 0;
if($page > 0 && $last_id > 0) {
    // ReturnYhAndSql() has already emitted the WHERE clause, so append the
    // cursor condition with AND rather than a second WHERE
    $query .= " AND id > {$last_id} ORDER BY id ASC LIMIT {$page_size}";
} else {
    $query .= " ORDER BY id DESC LIMIT {$page_size}";
}
$sql = $empire->query($query); // run the list query ($totalquery above only supplied the count)
// Pre-compute template replacements (avoids redoing them inside the loop)
$base_replacements = [
    '[!--newsnav--]' => $url,
    '[!--page.stats--]' => '',
    '[!--show.page--]' => $listpage,
    '[!--news.url--]' => $public_r['newsurl'] // quote the key: bare newsurl is an undefined constant
];
$listtemp = str_replace(array_keys($base_replacements), array_values($base_replacements), $listtemp);
// Buffer the rows, then release the MySQL result set right away
$result_set = [];
while($r = $empire->fetch($sql)) {
    $result_set[] = $r;
}
$empire->free_result($sql); // free the query result immediately
// Render the template in batches
$chunk_size = 500; // chunking guards against memory spikes on huge lists
foreach(array_chunk($result_set, $chunk_size) as $chunk) {
    $buffer = '';
    foreach($chunk as $index => $r) {
        // use the pre-built replacement map
        $replace_data = [
            '{title}' => htmlspecialchars($r['title']),
            '{time}' => date("Y-m-d H:i:s", $r['current_time'])
        ];
        $buffer .= str_replace(array_keys($replace_data), array_values($replace_data), $listtext);
    }
    echo $buffer;
    flush(); // push each batch of output to the client
}
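One caveat on the counting trick above: MySQL deprecated SQL_CALC_FOUND_ROWS and FOUND_ROWS() as of 8.0.17. On newer servers the same total can be obtained with a plain COUNT(*) over the identical WHERE clause, roughly as follows (phome_ecms_news and the checked condition are illustrative placeholders, not the actual generated SQL):

```sql
-- replaces the SQL_CALC_FOUND_ROWS / FOUND_ROWS() pair on MySQL >= 8.0.17
SELECT COUNT(*) AS total
FROM phome_ecms_news          -- placeholder for {$dbtbpre}ecms_ . $tbname
WHERE checked = 1;            -- placeholder for the clause ReturnYhAndSql() builds
```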
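To make the pagination change concrete, here is a sketch of the two strategies side by side (phome_ecms_news is again a stand-in table name, and the row counts are hypothetical):

```sql
-- OFFSET pagination: the server still reads and discards the first 90000 rows
SELECT id, title FROM phome_ecms_news ORDER BY id DESC LIMIT 30 OFFSET 90000;

-- Keyset pagination: seeks straight to the cursor through the PRIMARY KEY,
-- so a deep page costs about the same as page 1
SELECT id, title FROM phome_ecms_news WHERE id > 123456 ORDER BY id ASC LIMIT 30;
```

The trade-off is that keyset pagination only supports next/previous navigation along the sort key, which is why the patch keeps the classic ORDER BY id DESC branch for the first page.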
P.S. The code above serves as the before/after comparison of the key changes, with inline comments; I'd be glad to discuss CMS performance tuning for large datasets with everyone here. I also recommend enabling the opcache extension (in the BT Panel / aaPanel you can install it directly under the corresponding PHP version; on other stacks enable it yourself). Suggested settings:
opcache.enable=1
opcache.memory_consumption=128
opcache.max_accelerated_files=10000
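A few companion directives are often set alongside the three above (these are common-practice suggestions of mine, not from the original recommendation; defaults vary by PHP version, so verify against your php.ini):

```ini
opcache.interned_strings_buffer=16  ; memory for deduplicated strings, in MB
opcache.validate_timestamps=1       ; set to 0 in production and reload PHP on deploy
opcache.revalidate_freq=60          ; seconds between file-change checks
```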