Carry の Blog
2023-01-15

Redis slow query alarm script (original)

redis_alarm.py

#!/usr/bin/env python3
# -*- coding: UTF-8 -*-
import redis
import sys
import time
import os
import configparser
import requests
import json


def push_telegram(msg):
    request_header = {"content-type": "application/json; charset=UTF-8", "Authorization": "xxxxxxxxxxxx"}
    push_url = "xxxxxxxxxxxxxxxxxxxxxx"

    push_data = {"targetname": "codm_xxxxxxxxxxxx", "text": msg}
    push_data = json.dumps(push_data)
    requests.post(url=push_url, headers=request_header, data=push_data, verify=False)


names = {}
hosts = ["192.169.11.5", "192.169.11.6", "192.169.11.23"]

ports = {"192.169.11.5": [8001, 9001, 9002, 8002], "192.169.11.6": [8001, 8002, 9001, 9002], "192.169.11.23": [8001, 8002, 9001, 9002]}
# Absolute path of the directory containing this script
cur_path = os.path.dirname(os.path.realpath(__file__))
config_path = os.path.join(cur_path, "config.conf")
conf = configparser.ConfigParser()
conf.read(config_path)

for host in hosts:
    for port in ports[host]:
        names["max_" + host + "_" + str(port)] = conf.get("slowlog", "max_" + host + "_" + str(port))


print(names)
while True:
    start_time = time.perf_counter()
    for host in hosts:
        tmpMax = 0
        for port in ports[host]:
            pool = redis.ConnectionPool(host=host, port=port, password="xxxxxxxxxxxx", decode_responses=False)
            r = redis.Redis(connection_pool=pool)
            tList = r.slowlog_get(num=500)
            tmpMax = int(names["max_" + host + "_" + str(port)])
            while len(tList) > 0:
                slowContent = tList.pop()
                if slowContent["id"] > tmpMax:
                    errmsg = "redis集群主机:{}端口:{}发生慢查询\n执行时间:{}\n执行耗时:{}ms\n执行语句:{}\n客户端ip:{}".format(
                        host, port, time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(slowContent["start_time"])), slowContent["duration"] / 1000, slowContent["command"], slowContent["client_address"]
                    )
                    names["max_" + host + "_" + str(port)] = slowContent["id"]
                    if (slowContent["duration"] / 1000) >= 20:
                        try:
                            push_telegram(errmsg)
                        except Exception as e:
                            print("An exception occurred:", e)

                    with open("check_rediscluster.log", "a+") as file_obj:
                        file_obj.write(errmsg)
                    tmpMax = int(slowContent["id"])
            r.close()
            conf.set("slowlog", "max_" + host + "_" + str(port), str(tmpMax))
            # Persist the new high-water mark to the same file that was read at startup
            with open(config_path, "w") as f:
                conf.write(f)
            tmpMax = 0
    end_time = time.perf_counter()
    print("Calculation takes {} seconds".format(end_time - start_time))
    time.sleep(600)
    print("sleep over")


The config.conf file records, per instance, the slowlog ID of the most recent slow statement that has already been reported; on the next pass the script only alerts on entries with a higher ID. Every counter starts at 1.

[slowlog]
max_192.169.11.5_8001 = 1
max_192.169.11.6_8002 = 1
max_192.169.11.5_9001 = 1
max_192.169.11.5_9002 = 1
max_192.169.11.23_9002 = 1
max_192.169.11.23_9001 = 1
max_192.169.11.23_8002 = 1
max_192.169.11.6_9001 = 1
max_192.169.11.6_8001 = 1
max_192.169.11.23_8001 = 1
max_192.169.11.6_9002 = 1
max_192.169.11.5_8002 = 1
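
The [slowlog] section must contain one max_<host>_<port> key for every instance in the hosts/ports dicts above, otherwise conf.get() raises configparser.NoOptionError at startup. A throwaway sketch (reusing the same hosts/ports values) to generate the initial file:

# Sketch: write an initial config.conf with every counter set to 1.
import configparser

hosts = ["192.169.11.5", "192.169.11.6", "192.169.11.23"]
ports = {"192.169.11.5": [8001, 9001, 9002, 8002], "192.169.11.6": [8001, 8002, 9001, 9002], "192.169.11.23": [8001, 8002, 9001, 9002]}

conf = configparser.ConfigParser()
conf.add_section("slowlog")
for host in hosts:
    for port in ports[host]:
        conf.set("slowlog", "max_{}_{}".format(host, port), "1")

with open("config.conf", "w") as f:
    conf.write(f)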


The script depends on the redis Python package.

The version pulled in by the default pip3 on the machine is too old, so install redis-4.5.5 from this release:

wget https://files.pythonhosted.org/packages/53/30/128c5599bc3fa61488866be0228326b3e486be34480126f70e572043adf8/redis-4.5.5.tar.gz

tar zxvf redis-4.5.5.tar.gz
cd redis-4.5.5/
python3 setup.py install
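
If the machine can reach PyPI directly, pinning the version with pip should give the same result (not part of the original steps):

pip3 install redis==4.5.5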

Usage

python3  redis_alarm.py

Starting with systemctl

[Unit]
Description=redisalarm
After=network.target

[Service]
Type=simple
PIDFile=/var/run/redisalarm.pid
WorkingDirectory=/data/script/
ExecStart=python3 /data/script/redis_alarm.py
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
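
Note that older systemd versions require an absolute path in ExecStart, so /usr/bin/python3 (or wherever python3 actually lives) may be needed instead of the bare python3. Assuming the unit above is saved as /etc/systemd/system/redisalarm.service (the original does not state the file name), the service can be enabled and started with:

systemctl daemon-reload
systemctl enable --now redisalarm
systemctl status redisalarm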
Last updated: 4/24/2025
