
A First Try at MySQL Sharding with dble

Date: 2024-04-16 04:16:54


Contents

- Install MySQL
- Configure MySQL
- Install ZooKeeper
- Install HAProxy
- Install DBLE
- Download and install
- Configure DBLE
- Globally unique sequences
- Log cleanup configuration
- dble cluster configuration
- MySQL-HA
- Enable the DBLE parameters
- Configure MHA
- Start MHA
- Start DBLE
- sysbench benchmark
- DBLE usage guidelines
- Smart ER tables
- schema guidelines

Install MySQL

The MySQL installation itself is omitted; install it yourself. MySQL version: 5.7.26.

MySQL layout: each of 10.216.91.118, 10.216.91.119 and 10.216.91.120 runs two instances, 3311 and 3312; the 3311 instances form one replication group and the 3312 instances form another (the original topology diagram is omitted).

Configure MySQL

- Enable GTID
- Configure enhanced semi-synchronous replication
- Create databases db1, db3, db5 on the 3311 instance
- Create databases db2, db4, db6 on the 3312 instance
- Create the test user; the privileges dble needs to connect to the backend databases:

SELECT,INSERT,UPDATE,DELETE,FILE,CREATE,DROP,ALTER,LOCK TABLES,ALTER ROUTINE,CREATE ROUTINE,EXECUTE,INDEX,SUPER,SHOW DATABASES,REPLICATION CLIENT,REFERENCES
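A minimal sketch of creating that user with these privileges (the host mask and password are placeholder assumptions; adjust them to your environment, and run it on the master of each replication group so the user exists on every backend):

CREATE USER 'test'@'10.216.91.%' IDENTIFIED BY 'test_password';  -- placeholder host mask and password
GRANT SELECT, INSERT, UPDATE, DELETE, FILE, CREATE, DROP, ALTER, LOCK TABLES,
      ALTER ROUTINE, CREATE ROUTINE, EXECUTE, INDEX, SUPER, SHOW DATABASES,
      REPLICATION CLIENT, REFERENCES
ON *.* TO 'test'@'10.216.91.%';
FLUSH PRIVILEGES;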

Install ZooKeeper

Download

sudo wget https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/zookeeper-3.4.14/zookeeper-3.4.14.tar.gz

Extract the archive to /data/service/zookeeper.

Create the data directories

mkdir -p /data/service_data/zookeeper/{data,logs}

Configuration file

cd /data/service/zookeeper
cp zoo_sample.cfg zoo.cfg

vim zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/service_data/zookeeper/data
# the port at which the clients will connect
clientPort=2881
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.216.91.117:2882:3882
server.2=10.216.91.118:2882:3882
server.3=10.216.91.119:2882:3882

Note that three ports are involved:

- clientPort, used by client connections (2881 here)
- the port followers use to exchange information with the leader (2882)
- the port used for leader election (3882)

Configure the node id

echo 1 > /data/service_data/zookeeper/data/myid

Note: the myid file lives in the dataDir directory, and each node must have a different myid.

Start

cd /data/service/zookeeper/bin
sh zkServer.sh start

Check the running status

sh zkServer.sh status

Connect with the client

sh zkCli.sh -server 10.216.91.117:2881

Install HAProxy

The installation steps are omitted.

The configuration file is as follows:

global
    log 127.0.0.1 local0
    maxconn 4096
    chroot /usr/local/haproxy
    user haproxy
    group haproxy
    daemon
    pidfile /usr/local/haproxy/logs/haproxy.pid

defaults
    log global
    mode tcp
    retries 3
    option redispatch
    log 127.0.0.1 local0
    maxconn 2000
    timeout connect 3s
    timeout client 3600s
    timeout server 3600s

listen stats
    bind 0.0.0.0:8000
    mode http
    stats enable
    stats uri /admin
    stats refresh 5s
    stats show-node
    stats show-legends
    stats hide-version
    stats auth admin:admin
    stats realm Haproxy\ Manager

frontend dble-write
    bind *:8066
    mode tcp
    default_backend dble-write-back

backend dble-write-back
    mode tcp
    balance leastconn
    option tcplog
    server dble1 10.216.91.117:8066 check port 8066 inter 5000 rise 3 fall 3
    server dble2 10.216.91.118:8066 check port 8066 inter 5000 rise 3 fall 3

Install DBLE

Download and install

Download the release package, then extract and install it:

tar -xvf actiontech-dble-2.20.04.0.tar.gz -C /data/apps/
cd /data/apps/dble/conf
cp rule_template.xml rule.xml
cp schema_template.xml schema.xml
cp server_template.xml server.xml

Configure DBLE

schema.xml

<!DOCTYPE dble:schema SYSTEM "schema.dtd">
<dble:schema xmlns:dble="http://dble.cloud/" version="3.0">
    <schema name="testdb">
        <table name="tb_enum_sharding" dataNode="dn1,dn2" rule="rule_enum" cacheKey="ID"/>
        <table name="tb_range_sharding" dataNode="dn1,dn2,dn3" rule="rule_range"/>
        <table name="t1" dataNode="dn1,dn2" rule="rule_id_hash" incrementColumn="id"/>
        <table name="sptb_jz_user" dataNode="dn1,dn2" rule="rule_id_hash"/>
        <table name="jz_user" dataNode="dn1,dn2" rule="rule_id_hash"/>
        <table name="sptb_jz_apply" dataNode="dn1,dn2" rule="rule_userid_hash"/>
        <table name="jz_apply" dataNode="dn1,dn2" rule="rule_userid_hash"/>
        <table name="sptb_jz_user2" dataNode="dn1,dn2" rule="rule_common_hash">
            <childTable name="sptb_jz_apply2" joinKey="user_id" parentKey="id"/>
        </table>
        <table name="tb_global1" dataNode="dn1,dn2" type="global"/>
        <table name="tb_single" dataNode="dn6"/>
    </schema>
    <schema name="testdb1">
        <table name="jz_user_tmp" dataNode="dn1,dn2" rule="rule_id_hash" cacheKey="id"/>
    </schema>
    <schema name="testdb2" dataNode="dn5"/>
    <schema name="tmp" dataNode="dn3"/>
    <schema name="dtle" dataNode="dn_dtle"/>
    <schema name="sbtest">
        <table name="sbtest1" dataNode="dn$1-6" rule="hash_sysbench" cacheKey="id"/>
    </schema>
    <dataNode name="dn1" dataHost="dataHost1" database="db_1"/>
    <dataNode name="dn2" dataHost="dataHost2" database="db_2"/>
    <dataNode name="dn3" dataHost="dataHost1" database="db_3"/>
    <dataNode name="dn4" dataHost="dataHost2" database="db_4"/>
    <dataNode name="dn5" dataHost="dataHost1" database="db_5"/>
    <dataNode name="dn6" dataHost="dataHost2" database="db_6"/>
    <dataNode name="dn_dtle" dataHost="dataHost1" database="db_dtle"/>
    <dataHost balance="0" maxCon="1000" minCon="10" name="dataHost1" slaveThreshold="100">
        <heartbeat>show slave status</heartbeat>
        <writeHost host="10_216_91_119" url="10.216.91.119:3311" password="doumi1.q" user="test" disabled="false" id="hostS1" weight="0">
            <readHost host="10_216_91_120" url="10.216.91.120:3311" password="doumi1.q" user="test" disabled="false" id="hostS1" weight="0"/>
            <readHost host="10_216_91_118" url="10.216.91.118:3311" password="doumi1.q" user="test" disabled="false" id="hostM1" weight="0"/>
        </writeHost>
    </dataHost>
    <dataHost balance="0" maxCon="1000" minCon="10" name="dataHost2" slaveThreshold="100">
        <heartbeat>show slave status</heartbeat>
        <writeHost host="10_216_91_118" url="10.216.91.118:3312" password="doumi1.q" user="test" disabled="false" id="hostM2" weight="0">
            <readHost host="10_216_91_120" url="10.216.91.120:3312" password="doumi1.q" user="test" disabled="false" id="hostS2" weight="0"/>
            <readHost host="10_216_91_119" url="10.216.91.119:3312" password="doumi1.q" user="test" disabled="false" id="hostS2" weight="0"/>
        </writeHost>
    </dataHost>
</dble:schema>

rule.xml

<!DOCTYPE dble:rule SYSTEM "rule.dtd">
<dble:rule xmlns:dble="http://dble.cloud/" version="3.0">
    <tableRule name="rule_enum">
        <rule>
            <columns>code</columns>
            <algorithm>func_enum</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_range">
        <rule>
            <columns>id</columns>
            <algorithm>func_range</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_common_hash">
        <rule>
            <columns>id</columns>
            <algorithm>func_common_hash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_id_hash">
        <rule>
            <columns>id</columns>
            <algorithm>hash2_function</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_userid_hash">
        <rule>
            <columns>user_id</columns>
            <algorithm>hash2_function</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_common_hash2">
        <rule>
            <columns>id2</columns>
            <algorithm>func_common_hash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_uneven_hash">
        <rule>
            <columns>id</columns>
            <algorithm>func_uneven_hash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_mod">
        <rule>
            <columns>id</columns>
            <algorithm>func_mod</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_jumpHash">
        <rule>
            <columns>code</columns>
            <algorithm>func_jumpHash</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_hashString">
        <rule>
            <columns>code</columns>
            <algorithm>func_hashString</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_date">
        <rule>
            <columns>create_date</columns>
            <algorithm>func_date</algorithm>
        </rule>
    </tableRule>
    <tableRule name="rule_pattern">
        <rule>
            <columns>id</columns>
            <algorithm>func_pattern</algorithm>
        </rule>
    </tableRule>
    <tableRule name="hash_sysbench">
        <rule>
            <columns>id</columns>
            <algorithm>func_hash_sysbench</algorithm>
        </rule>
    </tableRule>
    <function name="func_enum" class="Enum">
        <property name="mapFile">partition-enum.txt</property>
        <property name="defaultNode">0</property>
        <property name="type">0</property>
    </function>
    <function name="func_range" class="NumberRange">
        <property name="mapFile">partition-number-range.txt</property>
        <property name="defaultNode">0</property>
    </function>
    <function name="func_common_hash" class="Hash">
        <property name="partitionCount">2</property>
        <property name="partitionLength">512</property>
    </function>
    <function name="hash2_function" class="Hash">
        <property name="partitionCount">2</property>
        <property name="partitionLength">512</property>
    </function>
    <function name="func_uneven_hash" class="Hash">
        <property name="partitionCount">2,1</property>
        <property name="partitionLength">256,512</property>
    </function>
    <function name="func_mod" class="Hash">
        <property name="partitionCount">4</property>
        <property name="partitionLength">1</property>
    </function>
    <function name="func_jumpHash" class="jumpStringHash">
        <property name="partitionCount">2</property>
        <property name="hashSlice">0:2</property>
    </function>
    <function name="func_hashString" class="StringHash">
        <property name="partitionCount">4</property>
        <property name="partitionLength">256</property>
        <property name="hashSlice">0:2</property>
    </function>
    <function name="func_date" class="Date">
        <property name="dateFormat">yyyy-MM-dd</property>
        <property name="sBeginDate">-01-01</property>
        <property name="sEndDate">-01-31</property>
        <property name="sPartionDay">10</property>
        <property name="defaultNode">0</property>
    </function>
    <function name="func_pattern" class="PatternRange">
        <property name="mapFile">partition-pattern.txt</property>
        <property name="patternValue">1024</property>
        <property name="defaultNode">0</property>
    </function>
    <function name="func_hash_sysbench" class="Hash">
        <property name="partitionCount">6</property>
        <property name="partitionLength">1</property>
    </function>
</dble:rule>

server.xml

<!DOCTYPE dble:server SYSTEM "server.dtd">
<dble:server xmlns:dble="http://dble.cloud/" version="3.0">
    <system>
        <property name="processors">4</property>
        <property name="backendProcessors">4</property>
        <property name="processorExecutor">8</property>
        <property name="backendProcessorExecutor">6</property>
        <property name="sequenceHandlerType">2</property>
        <property name="serverBacklog">2048</property>
        <property name="useThreadUsageStat">1</property>
        <property name="showBinlogStatusTimeout">10000</property>
        <property name="charset">utf8mb4</property>
        <property name="maxPacketSize">4194304</property>
        <property name="txIsolation">2</property>
        <property name="autocommit">1</property>
        <property name="checkTableConsistency">0</property>
        <property name="checkTableConsistencyPeriod">60000</property>
        <property name="dataNodeIdleCheckPeriod">300000</property>
        <property name="dataNodeHeartbeatPeriod">10000</property>
        <property name="processorCheckPeriod">1000</property>
        <property name="sqlExecuteTimeout">300</property>
        <property name="idleTimeout">1800000</property>
        <property name="recordTxn">0</property>
        <property name="xaSessionCheckPeriod">1000</property>
        <property name="xaLogCleanPeriod">1000</property>
        <property name="useJoinStrategy">true</property>
        <property name="nestLoopConnSize">4</property>
        <property name="nestLoopRowsSize">2000</property>
        <property name="otherMemSize">4</property>
        <property name="orderMemSize">4</property>
        <property name="joinMemSize">4</property>
        <property name="bufferPoolChunkSize">4096</property>
        <property name="bufferPoolPageNumber">512</property>
        <property name="bufferPoolPageSize">2097152</property>
        <property name="useSqlStat">1</property>
        <property name="enableSlowLog">1</property>
        <property name="bufferUsagePercent">80</property>
        <property name="clearBigSQLResultSetMapMs">600000</property>
        <property name="sqlRecordCount">10</property>
        <property name="maxResultSet">524288</property>
        <property name="flushSlowLogPeriod">1</property>
        <property name="flushSlowLogSize">1000</property>
        <property name="sqlSlowTime">100</property>
        <property name="enableFlowControl">true</property>
        <property name="flowControlStartThreshold">4096</property>
        <property name="flowControlStopThreshold">256</property>
        <property name="useOuterHa">true</property>
    </system>
    <firewall>
        <blacklist check="true">
            <property name="renameTableAllow">true</property>
        </blacklist>
    </firewall>
    <user name="man1">
        <property name="password">654321</property>
        <property name="manager">true</property>
    </user>
    <user name="root">
        <property name="password">123456</property>
        <property name="schemas">testdb,testdb2,tmp,dtle,testdb1</property>
    </user>
    <user name="test">
        <property name="password">111111</property>
        <property name="schemas">sbtest</property>
    </user>
    <user name="user">
        <property name="password">AqEkFEuIFAX6g2TJQnp4cJ2r7Yc0Z4/KBsZqKhT8qSz18Aj91e8lxO49BKQElC6OFfW4c38pCYa8QGFTub7pnw==</property>
        <property name="usingDecrypt">1</property>
        <property name="schemas">testdb</property>
        <property name="readOnly">true</property>
        <property name="maxCon">100</property>
    </user>
</dble:server>

Globally unique sequences

dble provides four ways to generate globally unique IDs:

- MySQL offset-step
- distributed MySQL offset-step
- timestamp
- distributed timestamp

We use the timestamp method; the configuration file is as follows:

sequence_time_conf.properties:

#sequence depend on TIME
WORKID=01
DATAACENTERID=01
START_TIME=-01-01 00:00:00

Note:

- WORKID must be an integer in [0, 31]
- DATAACENTERID must be an integer in [0, 31]
- the WORKID/DATAACENTERID combination must make this dble instance unique within the dble cluster
- columns populated by a global sequence must be BIGINT
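To illustrate, t1 in schema.xml above is declared with incrementColumn="id", so with sequenceHandlerType=2 (the timestamp handler) an INSERT that omits id gets a generated 64-bit value. A sketch, assuming a hypothetical name column on t1:

-- via the dble service port (8066), schema testdb
INSERT INTO t1 (name) VALUES ('demo');   -- dble fills id from the timestamp sequence
SELECT id FROM t1 WHERE name = 'demo';   -- the generated value needs a BIGINT column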

Log cleanup configuration

Modify the following section of log4j2.xml:

<DefaultRolloverStrategy max="100">
    <Delete basePath="logs" maxDepth="2">
        <IfFileName glob="*/dble-*.log.gz">
            <IfLastModified age="3d">
                <IfAny>
                    <IfAccumulatedFileSize exceeds="1GB"/>
                    <IfAccumulatedFileCount exceeds="10"/>
                </IfAny>
            </IfLastModified>
        </IfFileName>
    </Delete>
</DefaultRolloverStrategy>

dble cluster configuration

myid.properties is configured as follows:

#set false if not use cluster ucore/zk
cluster=zk
#client info
ipAddress=10.216.91.117:2881,10.216.91.118:2881,10.216.91.119:2881
port=5700
#cluster namespace, please use the same one in one cluster
clusterId=dble-cluster-1
#it must be different for every node in cluster
myid=bw-sys-k8s-v02
serverID=server_02
clusterHa=true

The clusterHa = true parameter:

- When this parameter is enabled, a dble instance in cluster mode synchronizes the status of its dataSources across the cluster.
- The setting has no effect while the useOuterHa parameter in server.xml is false.
- When useOuterHa is true but clusterHa is not, dble can still execute all of the high-availability switching commands, but it degrades to standalone behavior, and the status of the dble nodes in the cluster must be synchronized manually.
- Note: changing this parameter requires restarting the dble service.

MySQL-HA

Enable the DBLE parameters

Parameter enabling external HA

In the system block of server.xml:

<property name="useOuterHa">true</property>

myid.properties

Set the parameter clusterHa = true.

Configure MHA

The MHA installation steps are omitted.

The configuration file is as follows:

[server default]
manager_log=/etc/masterha/dble1/manager.log
manager_workdir=/etc/masterha/dble1
master_ip_failover_script="/etc/masterha/dble1/master_ip_failover"
master_ip_online_change_script="/etc/masterha/dble1/master_ip_online_change"
password="wxasd.q"
ping_interval=1
repl_password="wxasd.q"
repl_user=dmrepl
report_script="/etc/masterha/dble1/send_report"
secondary_check_script=/bin/masterha_secondary_check -s 10.216.91.120 --user=root --master_host=bw-sys-k8s-v03 --master_ip=10.216.91.118 --master_port=3311
ssh_user=root
user=root

[server1]
candidate_master=1
hostname=10.216.91.118
master_binlog_dir=/data/mysql/3311_mysql/binlogdir
port=3311

[server2]
candidate_master=1
hostname=10.216.91.119
master_binlog_dir=/data/mysql/3311_mysql/binlogdir
port=3311
ignore_fail=1

[server3]
candidate_master=1
hostname=10.216.91.120
master_binlog_dir=/data/mysql/3311_mysql/binlogdir
port=3311
ignore_fail=1

master_ip_failover

#!/usr/bin/env perl

# Copyright (C) DeNA Co.,Ltd.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA

## Note: This is a sample script and is not complete. Modify the script based on your environment.

use strict;
use warnings FATAL => 'all';

use Getopt::Long;
use MHA::DBHelper;

my (
  $command,        $ssh_user,         $orig_master_host,
  $orig_master_ip, $orig_master_port, $new_master_host,
  $new_master_ip,  $new_master_port,  $new_master_user,
  $new_master_password
);
GetOptions(
  'command=s'             => \$command,
  'ssh_user=s'            => \$ssh_user,
  'orig_master_host=s'    => \$orig_master_host,
  'orig_master_ip=s'      => \$orig_master_ip,
  'orig_master_port=i'    => \$orig_master_port,
  'new_master_host=s'     => \$new_master_host,
  'new_master_ip=s'       => \$new_master_ip,
  'new_master_port=i'     => \$new_master_port,
  'new_master_user=s'     => \$new_master_user,
  'new_master_password=s' => \$new_master_password,
);

exit &main();

sub main {
  if ( $command eq "stop" || $command eq "stopssh" ) {

    # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
    # If you manage master ip address at global catalog database,
    # invalidate orig_master_ip here.
    my $exit_code = 1;
    eval {
      # Call the dble disable command so the failed node is no longer writable
      $orig_master_host =~ tr/./_/;
      system "mysql -P9066 -u man1 -p654321 -h 10.216.91.117 -e \"dataHost \@\@disable name = 'dataHost1' node='" . $orig_master_host . "'\"";

      # updating global catalog, etc
      $exit_code = 0;
    };
    if ($@) {
      warn "Got Error: $@\n";
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "start" ) {

    # all arguments are passed.
    # If you manage master ip address at global catalog database,
    # activate new_master_ip here.
    # You can also grant write access (create user, set read_only=0, etc) here.
    my $exit_code = 10;
    eval {
      my $new_master_handler = new MHA::DBHelper();

      # args: hostname, port, user, password, raise_error_or_not
      $new_master_handler->connect( $new_master_ip, $new_master_port,
        $new_master_user, $new_master_password, 1 );

      ## Set read_only=0 on the new master
      $new_master_handler->disable_log_bin_local();
      print "Set read_only=0 on the new master.\n";
      $new_master_handler->disable_read_only();

      ## Creating an app user on the new master
      print "Creating app user on the new master..\n";
      $new_master_handler->enable_log_bin_local();
      $new_master_handler->disconnect();

      ## try to switch the dataHost master into new master
      ## Call the dble dataHost switch command to promote new_master_host
      $new_master_host =~ tr/./_/;
      system "mysql -P9066 -u man1 -p654321 -h 10.216.91.117 -e \"dataHost \@\@switch name = 'dataHost1' master='" . $new_master_host . "'\"";

      ## Update master ip on the catalog database, etc
      $exit_code = 0;
    };
    if ($@) {
      warn $@;

      # If you want to continue failover, exit 10.
      exit $exit_code;
    }
    exit $exit_code;
  }
  elsif ( $command eq "status" ) {

    # do nothing
    exit 0;
  }
  else {
    &usage();
    exit 1;
  }
}

sub usage {
  print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}

Note:

In writeHost host="10_216_91_118", the host value must join the IP octets with underscores, because the master_ip_failover script runs $orig_master_host =~ tr/./_/;, which converts the dots in the IP address to underscores.
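The same disable/switch steps can also be issued by hand against the dble management port, using the two commands embedded in the script above (the node names here are illustrative):

-- connect first with: mysql -P9066 -u man1 -p654321 -h 10.216.91.117
dataHost @@disable name = 'dataHost1' node = '10_216_91_119';   -- stop writes to the failed master
dataHost @@switch name = 'dataHost1' master = '10_216_91_118';  -- promote the new master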

Start MHA

nohup /bin/masterha_manager --conf=/etc/masterha/dble1/f \
    --ignore_fail_on_start --ignore_last_failover \
    > /etc/masterha/dble1/mha_manager.log 2>&1 &

Start DBLE

The DBLE cluster uses ZooKeeper; during startup, bring the dble nodes up one at a time.

cd /data/apps/dble/bin
sh dble start
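Once a node is up, backend connectivity can be sanity-checked from the management port; a sketch, assuming the show @@datasource management command is available in this dble version (it lists the configured data sources and their states):

-- mysql -P9066 -u man1 -p654321 -h 10.216.91.117
show @@datasource;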

sysbench benchmark

Clean up old data

/usr/share/sysbench/oltp_read_write.lua \
    --mysql-db=sbtest \
    --mysql-host=10.216.91.117 --mysql-port=8066 \
    --mysql-user=test --mysql-password=111111 \
    --auto_inc=off --tables=1 --table-size=100000 \
    --threads=4 --time=30 --report-interval=1 --max-requests=0 \
    --percentile=95 --db-ps-mode=disable --skip-trx=on cleanup

Prepare data

/usr/share/sysbench/oltp_read_write.lua \
    --mysql-db=sbtest --mysql-host=10.216.91.117 --mysql-port=8066 \
    --mysql-user=test --mysql-password=111111 --auto_inc=off --tables=1 \
    --table-size=100000 --threads=4 --time=30 --report-interval=1 --max-requests=0 \
    --percentile=95 --db-ps-mode=disable --skip-trx=on prepare

Run the benchmark

/usr/share/sysbench/oltp_read_write.lua \
    --mysql-db=sbtest --mysql-host=10.216.91.117 --mysql-port=8066 \
    --mysql-user=test --mysql-password=111111 --auto_inc=off --tables=1 \
    --table-size=100000 --threads=4 --time=30 --report-interval=1 --max-requests=0 \
    --percentile=95 --db-ps-mode=disable --skip-trx=on run

DBLE usage guidelines

Smart ER tables

Conditions for a smart ER relationship:

- the ER tables must use the same dataNode list
- the ER tables' sharding rules may differ
- the ER tables' sharding algorithms must be identical

Example:

- ER pair 1: sptb_jz_user2, sptb_jz_apply2
- ER pair 2: sptb_jz_user, sptb_jz_apply

Pair 1 is a traditional ER pair (declared via childTable); pair 2 is a smart ER pair.

Check the execution plans:

explain select t1.id,t2.user_id from sptb_jz_user t1 inner join sptb_jz_apply t2 on t1.id=t2.user_id;

+-----------------+---------------+-----------------------------------------------------------------------------------------------------------------------------+
| DATA_NODE       | TYPE          | SQL/REF                                                                                                                     |
+-----------------+---------------+-----------------------------------------------------------------------------------------------------------------------------+
| dn1_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from `sptb_jz_user` `t1` join `sptb_jz_apply` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| dn2_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from `sptb_jz_user` `t1` join `sptb_jz_apply` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| merge_1         | MERGE         | dn1_0; dn2_0                                                                                                                |
| shuffle_field_1 | SHUFFLE_FIELD | merge_1                                                                                                                     |
+-----------------+---------------+-----------------------------------------------------------------------------------------------------------------------------+
4 rows in set (0.01 sec)

explain select t1.id,t2.user_id from sptb_jz_user2 t1 inner join sptb_jz_apply2 t2 on t1.id=t2.user_id;

+-----------------+---------------+-------------------------------------------------------------------------------------------------------------------------------+
| DATA_NODE       | TYPE          | SQL/REF                                                                                                                       |
+-----------------+---------------+-------------------------------------------------------------------------------------------------------------------------------+
| dn1_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from `sptb_jz_user2` `t1` join `sptb_jz_apply2` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| dn2_0           | BASE SQL      | select `t1`.`id`,`t2`.`user_id` from `sptb_jz_user2` `t1` join `sptb_jz_apply2` `t2` on `t1`.`id` = `t2`.`user_id` where 1=1  |
| merge_1         | MERGE         | dn1_0; dn2_0                                                                                                                  |
| shuffle_field_1 | SHUFFLE_FIELD | merge_1                                                                                                                       |
+-----------------+---------------+-------------------------------------------------------------------------------------------------------------------------------+
4 rows in set (0.00 sec)

schema guidelines

- each logical schema must map to its own dataNodes
- logical tables in different logical schemas may share a name, but they must map to different physical tables; that is, identically named logical tables must use different dataNodes
- the backend database behind each dataNode must be distinct
- a physical table must not be configured for more than one logical table
