This article is excerpted from the abcdocker ops blog.
In production environments the volume of data to be stored is usually large, so distributed storage is commonly chosen. It mainly addresses the following problems:
- Massive data storage
- High data availability (redundant backups)
- High read/write performance and load balancing
- Support for multiple platforms and languages
- High concurrency
Comparison of common distributed storage systems
FastDFS components and principles
Introduction to FastDFS
FastDFS is an open-source, lightweight distributed file system written in C. It runs on Linux, FreeBSD, AIX and other Unix-like systems, solves the problems of storing large amounts of data and handling heavy read/write load, and is well suited to small files between 4 KB and 500 MB, such as image sites, short-video sites, documents, and app download sites. It is used by UC, JD.com, Alipay, Xunlei, Kugou and others; UC, for example, builds its network drive, advertising, and app download services on FastDFS. Unlike system-level distributed file systems, FastDFS, like MogileFS, HDFS, and TFS, is an application-level distributed file storage service.
FastDFS architecture
A FastDFS deployment has three roles: the tracker server, the storage server, and the client.
tracker server: the tracking service. It is mainly responsible for scheduling and load balancing, and manages all storage servers and groups. After startup, each storage server connects to the tracker, reports which group it belongs to, and maintains a periodic heartbeat; from these heartbeats the tracker builds a mapping of group --> [storage server list]. The metadata managed by the tracker is very small and is kept entirely in memory. All of it is generated from the information reported by the storage servers, so the tracker itself never needs to persist any data. Trackers are peers of one another, which makes scaling them out very easy: simply add more tracker servers. Every tracker receives all storage heartbeats and builds the metadata used to serve read and write requests. Compared with master-slave architectures, the advantage is that there is no single point of failure and the tracker never becomes a bottleneck, because the actual file data is transferred directly between the client and an available storage server.
storage server: the storage service. It provides capacity and redundancy. Storage servers are organized into groups; each group can contain multiple storage servers whose data are mirrors of one another, and a group's usable capacity is that of its smallest member, so the storage servers within a group should use identical configurations. Organizing storage by group makes it easy to isolate applications, balance load, and customize the number of replicas per group.
Drawbacks: a group's capacity is limited by the capacity of a single machine, and when a machine in a group fails, recovery can only rely on resynchronizing the data from the other machines in the group (replace the disk, remount it, and restart fdfs_storaged).
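A minimal sketch of that recovery path, assuming the replacement disk is /dev/sdb1 and the /data/fastdfs_data data path used later in this article (the device name and the systemd unit name are assumptions, not from the original):

# stop the storage daemon, mount the replacement disk on the data path, then restart;
# fdfs_storaged will resynchronize the missing data from the other members of the group
systemctl stop storage                      # or: /usr/bin/fdfs_storaged /etc/fdfs/storage.conf stop
mount /dev/sdb1 /data/fastdfs_data          # assumed device name
systemctl start storage                     # or: /usr/bin/fdfs_storaged /etc/fdfs/storage.conf start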
Group storage policies
- round robin
- load balance (upload to the group with the most free space)
- specify group (upload to a designated group)
Within a group, storage relies on the local file system. A storage server can be configured with multiple data directories; the disks are not put into a RAID array but are mounted separately on several directories, and those directories are then configured as the storage data paths.
When a storage server accepts a write request, it picks one of its data directories according to the configured rule. To avoid too many files piling up in a single directory, on first startup the storage server creates two levels of subdirectories in each data path, 256 per level, 65,536 in total; each newly written file is routed to one of these subdirectories by hash and stored there directly as a local file.
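As a rough illustration of the resulting layout, assuming the store_path0=/data/fastdfs_data used later in this article (directory names are hexadecimal, 00 through FF):

ls /data/fastdfs_data/data
# 00  01  02  ...  FE  FF        <- 256 first-level subdirectories
ls /data/fastdfs_data/data/00
# 00  01  02  ...  FE  FF        <- 256 second-level subdirectories under each of them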
FastDFS workflow
Upload
FastDFS provides basic file access interfaces such as upload, download, append, and delete.
Choosing a tracker server
The trackers in a cluster are peers, so the client can pick any tracker when uploading a file.
Choosing a storage group
When the tracker receives an upload request, it assigns a group in which the file will be stored. The currently supported group selection rules are (a parameter note follows the list):
1. Round robin (all groups take turns)
2. Specified group (uploads always go to one designated group)
3. Load balance (the group with more free space is preferred)
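For reference, these rules correspond to the store_lookup parameter in tracker.conf (the full configuration appears later in this article and uses 0, i.e. round robin); the store_group value below is only an example:

store_lookup = 0          # 0: round robin, 1: specified group, 2: load balance (max free space)
store_group = group2      # only consulted when store_lookup = 1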
Choosing a storage server
Once the group has been selected, the tracker picks a storage server within it for the client. The currently supported server selection rules are (a parameter note follows the list):
1. Round robin (all servers take turns) - the default
2. Sorted by IP address, the first server is chosen (the one with the smallest IP)
3. Sorted by priority (the upload priority is set on the storage server with the upload_priority parameter)
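These rules correspond to the store_server parameter in tracker.conf (the configuration later in this article uses 0):

store_server = 0          # 0: round robin, 1: first server ordered by IP address, 2: first server ordered by priority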
Choosing a storage path (disk or mount point)
Once a storage server has been assigned, the client sends the write request to it, and the storage server picks a data directory for the file. The currently supported path selection rules are (a parameter note follows the list):
1. Round robin - the default
2. Load balance: the path with the most free space is chosen
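These rules correspond to the store_path parameter in tracker.conf (0 in the configuration later in this article):

store_path = 0            # 0: round robin, 2: load balance (path with the most free space)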
Choosing a download server
The currently supported rules are (a parameter note follows the list):
1. Round robin: any storage server that holds the file can serve the download
2. Download from the source storage server (the one the file was originally uploaded to)
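These rules correspond to the download_server parameter in tracker.conf (0 in the configuration later in this article):

download_server = 0       # 0: round robin, 1: the source storage server the file was uploaded to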
Generating the file_id
After the storage directory has been chosen, the storage server generates a file_id: a Base64-encoded string containing the storage server's IP address, the file creation time, the file size, a CRC32 checksum of the file, and a random number. Each store path contains 256*256 two-level subdirectories; the storage server hashes the file_id twice to route the file into one of them and stores it there, using the file_id as the file name. The final file path is composed of the group name, the virtual disk path, the two-level data directory, and the file_id:
group1/M00/02/44/wkgDRe348wAAAAGKYJK42378.sh
Group name: the name of the group the file was stored in, returned by the storage server after a successful upload; the client must save it itself.
Virtual disk path: the virtual path configured on the storage server, corresponding to the store_path* parameters.
Two-level data directory: the two-level directories that the storage server creates under each virtual disk path to hold data files.
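A breakdown of the example path above, following the structure just described:

# group1                        -> group name, returned by the storage server and saved by the client
# M00                           -> virtual disk path, mapped to store_path0 (M01 would map to store_path1, and so on)
# 02/44                         -> the two-level data directories chosen by hashing the file_id
# wkgDRe348wAAAAGKYJK42378.sh   -> file name derived from the Base64-encoded file_id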
Synchronization mechanism
1. Data synchronization when a new tracker server is added
Because every storage server is configured with all tracker servers, communication between storage and tracker is always initiated by the storage server, which starts one thread per tracker server. During this communication, if a storage server finds that the list of storage servers in its group returned by a tracker is shorter than the list it knows locally, it pushes the missing storage servers to that tracker. This mechanism keeps the trackers equal peers with consistent data.
2. Data synchronization when a storage server is added to a group
Whenever a storage server is added or its state changes, the tracker pushes the group's storage server list to every storage server in that group. Taking a newly added storage server as an example: the new server connects to the tracker, the tracker notices the new member and returns the full list of storage servers in the group to it, and also pushes the updated list to the other storage servers in the group.
3. Data synchronization between storage servers within a group
Storage servers within a group are peers; uploads, deletions, and other operations can be performed on any storage server in the group. File synchronization only happens between storage servers of the same group and uses a push model: the source server pushes to the target servers.
A. Synchronization only happens between storage servers in the same group.
B. Only source data is synchronized; backup data is not synchronized again.
C. Exception: when a new storage server joins, one existing server synchronizes all of its data, both source and backup, to the new server.
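For reference, the push synchronization is driven by binlog and mark files under the storage base_path; a quick look, assuming the /data/fastdfs_data base_path configured later in this article (exact file names can vary between FastDFS versions):

ls /data/fastdfs_data/data/sync/
# binlog.index  binlog.000  192.168.31.104_23000.mark
# binlog.NNN            -> operation log the source server replays when pushing to its peers
# <peer-ip>_<port>.mark -> per-peer progress marker (offset into the binlog already pushed)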
The 7 states of a storage server
The command fdfs_monitor /etc/fdfs/client.conf shows the current state of each storage server in the ip_addr field.
- INIT: initializing; the source server for synchronizing existing data has not yet been assigned
- WAIT_SYNC: waiting for synchronization; the source server has been assigned
- SYNCING: synchronizing
- DELETED: removed from the group
- OFFLINE: offline
- ONLINE: online, but not yet able to serve requests
- ACTIVE: online and serving requests
State transitions when storage server A is added to a group:
1. Storage server A connects to the tracker server, which sets A's state to INIT.
2. Storage server A asks the tracker for the source server for catch-up synchronization and the synchronization cut-off time (the current time). If A is the only storage server in the group, or no files have been uploaded yet, the tracker tells the new machine that no synchronization is needed and sets its state to ONLINE. If no machine in the group is in the ACTIVE state, the tracker returns an error and the new machine retries. Otherwise the tracker sets A's state to WAIT_SYNC.
3. Suppose storage server B is assigned as the synchronization source with a given cut-off time. B synchronizes all data created before the cut-off time to A and asks the tracker to set A's state to SYNCING; once the cut-off time is reached, B switches from catch-up synchronization to normal incremental binlog synchronization.
4. When B has synchronized all data to A and there is temporarily nothing left to synchronize, B asks the tracker to set A's state to ONLINE.
5. When storage server A sends its next heartbeat to the tracker, the tracker changes its state to ACTIVE; from then on it is kept up to date via incremental (binlog) synchronization.
Note: during the whole source synchronization, the source machine runs a single sync thread that pushes data to the new machine, so throughput is at most the IO of one disk and cannot be parallelized. Because source synchronization only finishes when no more binlog entries can be fetched, on a busy system with a constant stream of new writes the source synchronization may never complete.
Download
The client sends the download request to a tracker and must include the file name. The tracker parses the group, size, creation time, and other information from the file name, then selects a storage server to serve the read. Because file synchronization within a group is asynchronous, the file may not yet have been synchronized to the other storage servers, or may arrive with a delay; this can be solved with the fastdfs-nginx-module.
About file deduplication
FastDFS itself cannot deduplicate files that are uploaded repeatedly, but FastDHT can. FastDHT is a high-performance distributed hash (key-value) store; it relies on Berkeley DB as its storage backend and also depends on libfastcommon.
Our setup does not need deduplication at the moment; if you do, it is worth taking a look at FastDHT yourself.
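If deduplication is needed later, the relevant switches already exist in the storage.conf used further down in this article. A hedged sketch of what enabling FastDFS + FastDHT deduplication would roughly look like (the fdht_servers.conf include is an assumption based on the FastDHT documentation, not something configured in this article):

# in /etc/fdfs/storage.conf (this article keeps check_file_duplicate = 0, i.e. disabled)
check_file_duplicate = 1                  # look up the file signature in FastDHT before storing
file_signature_method = hash
key_namespace = FastDFS                   # key namespace used in FastDHT
keep_alive = 1                            # keep persistent connections to the FastDHT servers
#include /etc/fdfs/fdht_servers.conf      # assumed: file listing the FastDHT server addresses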
Installing a FastDFS cluster
Environment architecture
Environment description
# Two nginx proxies plus keepalived would give high availability; since machines are limited here, a single nginx proxy is used
nginx       192.168.31.100   nginx

# tracker nodes (trackers do not store data themselves)
tracker01   192.168.31.101   FastDFS, libfastcommon, nginx, ngx_cache_purge
tracker02   192.168.31.102   FastDFS, libfastcommon, nginx, ngx_cache_purge

# storage nodes
[group1]
storage01   192.168.31.103   FastDFS, libfastcommon, nginx, fastdfs-nginx-module
storage02   192.168.31.104   FastDFS, libfastcommon, nginx, fastdfs-nginx-module
[group2]
storage03   192.168.31.105   FastDFS, libfastcommon, nginx, fastdfs-nginx-module
storage04   192.168.31.106   FastDFS, libfastcommon, nginx, fastdfs-nginx-module
1. All servers need nginx installed; here it is only used for access (downloads) and has nothing to do with uploads.
2. The nginx on the tracker nodes provides an HTTP reverse proxy, load balancing, and caching.
3. Every storage node runs nginx with the FastDFS extension module to serve HTTP downloads of the files it stores; only when a file cannot be found on the current storage node does it redirect or proxy the request to the source storage node.
Installation on all nodes
Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat
iptables -P FORWARD ACCEPT
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
Configure the yum repositories
yum install -y wget
wget -O /etc/yum.repos.d/CentOS-Base.repo /repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo /repo/epel-7.repo
yum clean all
yum makecache
Note: every node except the nginx node (192.168.31.100) needs fastdfs and nginx installed.
Install the dependency packages (resolves 99% of dependency problems)
yum -y install gcc gcc-c++ make autoconf libtool-ltdl-devel gd-devel freetype-devel libxml2-devel libjpeg-devel libpng-devel openssh-clients openssl-devel curl-devel bison patch libmcrypt-devel libmhash-devel ncurses-devel binutils compat-libstdc++-33 elfutils-libelf elfutils-libelf-devel glibc glibc-common glibc-devel libgcj libtiff pam-devel libicu libicu-devel gettext-devel libaio-devel libaio libgcc libstdc++ libstdc++-devel unixODBC unixODBC-devel numactl-devel glibc-headers sudo bzip2 mlocate flex lrzsz sysstat lsof setuptool system-config-network-tui system-config-firewall-tui ntsysv ntp pv lz4 dos2unix unix2dos rsync dstat iotop innotop mytop telnet iftop expect cmake nc gnuplot screen xorg-x11-utils xorg-x11-xinit rdate bc expat-devel compat-expat1 tcpdump sysstat man nmap curl lrzsz elinks finger bind-utils traceroute mtr ntpdate zip unzip vim wget net-tools
Download the packages (all nodes except the nginx node)
mkdir /root/tools/
cd /root/tools
wget /download/nginx-1.18.0.tar.gz
wget /happyfish100/libfastcommon/archive/V1.0.43.tar.gz
wget /happyfish100/fastdfs/archive/V6.06.tar.gz
wget /happyfish100/fastdfs-nginx-module/archive/V1.22.tar.gz

# To keep this article usable, the packages have also been mirrored at the addresses below
mkdir /root/tools/
cd /root/tools
wget /fdfs/v6.6/nginx-1.18.0.tar.gz
wget /fdfs/v6.6/V1.0.43.tar.gz
wget /fdfs/v6.6/V6.06.tar.gz
wget /fdfs/v6.6/V1.22.tar.gz

# Extract
cd /root/tools
tar xf nginx-1.18.0.tar.gz
tar xf V1.0.43.tar.gz
tar xf V1.22.tar.gz
tar xf V6.06.tar.gz
Install libfastcommon (all nodes except the nginx node)
cd /root/tools/libfastcommon-1.0.43
./make.sh
./make.sh install
Install FastDFS (all nodes except the nginx node)
cd /root/tools/fastdfs-6.06/
./make.sh
./make.sh install
Copy the configuration files (tracker01 and tracker02 nodes)
[root@tracker01 fastdfs-6.06]# cp /etc/fdfs/tracker.conf.sample /etc/fdfs/tracker.conf    # tracker configuration
[root@01 fastdfs-6.06]# cp /etc/fdfs/client.conf.sample /etc/fdfs/client.conf             # client configuration (used for testing)
[root@01 fastdfs-6.06]# cp /root/tools/fastdfs-6.06/conf/http.conf /etc/fdfs/             # nginx configuration
[root@01 fastdfs-6.06]# cp /root/tools/fastdfs-6.06/conf/mime.types /etc/fdfs/            # nginx configuration
Configure the tracker01 node
You can configure a single node first and only start the remaining nodes once it works.
Create the tracker data and log directory (run on the tracker nodes)
mkdir /data/tracker/ -p
Edit the configuration file (run on tracker01)
cat >/etc/fdfs/tracker.conf <<EOF
disabled = false
bind_addr =
port = 22122
connect_timeout = 5
network_timeout = 60
base_path = /data/tracker
max_connections = 1024
accept_threads = 1
work_threads = 4
min_buff_size = 8KB
max_buff_size = 128KB
store_lookup = 0
store_server = 0
store_path = 0
download_server = 0
reserved_storage_space = 20%
log_level = info
run_by_group=
run_by_user =
allow_hosts = *
sync_log_buff_interval = 1
check_active_interval = 120
thread_stack_size = 256KB
storage_ip_changed_auto_adjust = true
storage_sync_file_max_delay = 86400
storage_sync_file_max_time = 300
use_trunk_file = false
slot_min_size = 256
slot_max_size = 1MB
trunk_alloc_alignment_size = 256
trunk_free_space_merge = true
delete_unused_trunk_files = false
trunk_file_size = 64MB
trunk_create_file_advance = false
trunk_create_file_time_base = 02:00
trunk_create_file_interval = 86400
trunk_create_file_space_threshold = 20G
trunk_init_check_occupying = false
trunk_init_reload_from_binlog = false
trunk_compress_binlog_min_interval = 86400
trunk_compress_binlog_interval = 86400
trunk_compress_binlog_time_base = 03:00
trunk_binlog_max_backups = 7
use_storage_id = false
storage_ids_filename = storage_ids.conf
id_type_in_filename = id
store_slave_file_use_link = false
rotate_error_log = false
error_log_rotate_time = 00:00
compress_old_error_log = false
compress_error_log_days_before = 7
rotate_error_log_size = 0
log_file_keep_days = 0
use_connection_pool = true
connection_pool_max_idle_time = 3600
http.server_port = 8080
http.check_alive_interval = 30
http.check_alive_type = tcp
http.check_alive_uri = /status.html
EOF
Start the tracker
[root@01 fastdfs-6.06]# /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
Configure the tracker02 node
Copy the configuration file
scp -r /etc/fdfs/tracker.conf root@192.168.31.102:/etc/fdfs/
ssh root@192.168.31.102 mkdir /data/tracker/ -p
Start the tracker on tracker02
[root@02 fastdfs-6.06]# /usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
Check the startup status
netstat -lntup | grep 22122
tcp    0    0    0.0.0.0:22122    0.0.0.0:*    LISTEN    108126/fdfs_tracker
If the startup fails, check the tracker log:
tail -f /data/tracker/logs/trackerd.log
Next, create a systemd unit for the tracker:
cat > /usr/lib/systemd/system/tracker.service <<EOF
[Unit]
Description=The FastDFS File server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf start
ExecStop=/usr/bin/fdfs_trackerd /etc/fdfs/tracker.conf stop

[Install]
WantedBy=multi-user.target
EOF
$ systemctl daemon-reload
$ systemctl start tracker
$ systemctl enable tracker
$ systemctl status tracker
# The tracker started manually earlier must be killed first
Configure the storage01 and storage02 nodes
storage01 and storage02 use the same configuration; likewise storage03 and storage04 share the same configuration.
storage01 and storage02 belong to group1.
Create the storage data directory
mkdir /data/fastdfs_data -p
Edit the configuration file
cat >/etc/fdfs/storage.conf<<EOF
disabled = false
group_name = group1
bind_addr =
client_bind = true
port = 23000
connect_timeout = 5
network_timeout = 60
heart_beat_interval = 30
stat_report_interval = 60
base_path = /data/fastdfs_data
max_connections = 1024
buff_size = 256KB
accept_threads = 1
work_threads = 4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec = 50
sync_interval = 0
sync_start_time = 00:00
sync_end_time = 23:59
write_mark_file_freq = 500
disk_recovery_threads = 3
store_path_count = 1
store_path0 = /data/fastdfs_data
subdir_count_per_path = 256
tracker_server = 192.168.31.101:22122
tracker_server = 192.168.31.102:22122
log_level = info
run_by_group =
run_by_user =
allow_hosts = *
file_distribute_path_mode = 0
file_distribute_rotate_count = 100
fsync_after_written_bytes = 0
sync_log_buff_interval = 1
sync_binlog_buff_interval = 1
sync_stat_file_interval = 300
thread_stack_size = 512KB
upload_priority = 10
if_alias_prefix =
check_file_duplicate = 0
file_signature_method = hash
key_namespace = FastDFS
keep_alive = 0
use_access_log = false
rotate_access_log = false
access_log_rotate_time = 00:00
compress_old_access_log = false
compress_access_log_days_before = 7
rotate_error_log = false
error_log_rotate_time = 00:00
compress_old_error_log = false
compress_error_log_days_before = 7
rotate_access_log_size = 0
rotate_error_log_size = 0
log_file_keep_days = 0
file_sync_skip_invalid_record = false
use_connection_pool = true
connection_pool_max_idle_time = 3600
compress_binlog = true
compress_binlog_time = 01:30
check_store_path_mark = true
http.domain_name =
http.server_port = 80
EOF
# Note: adjust the tracker_server addresses for your environment, one line per tracker node.
# Using localhost on a single-node setup is not recommended.
Configure the systemd unit
cat >/usr/lib/systemd/system/storage.service <<EOF
[Unit]
Description=The FastDFS File server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/bin/fdfs_storaged /etc/fdfs/storage.conf start
ExecStop=/usr/bin/fdfs_storaged /etc/fdfs/storage.conf stop

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start storage
systemctl status storage
systemctl enable storage
Check the startup status
netstat -lntup|grep 23000
If the startup fails, check the log under the configured base_path:
tail -f /data/fastdfs_data/logs/storaged.log
Configure the storage03 and storage04 nodes
storage03 and storage04 belong to group2.
The procedure is the same as above; only the parts that differ are shown here.
Run the following on both storage03 and storage04:
# Create the storage data directory
mkdir /data/fastdfs_data -p

# Edit the configuration file
cat >/etc/fdfs/storage.conf<<EOF
disabled = false
group_name = group2
bind_addr =
client_bind = true
port = 23000
connect_timeout = 5
network_timeout = 60
heart_beat_interval = 30
stat_report_interval = 60
base_path = /data/fastdfs_data
max_connections = 1024
buff_size = 256KB
accept_threads = 1
work_threads = 4
disk_rw_separated = true
disk_reader_threads = 1
disk_writer_threads = 1
sync_wait_msec = 50
sync_interval = 0
sync_start_time = 00:00
sync_end_time = 23:59
write_mark_file_freq = 500
disk_recovery_threads = 3
store_path_count = 1
store_path0 = /data/fastdfs_data
subdir_count_per_path = 256
tracker_server = 192.168.31.101:22122
tracker_server = 192.168.31.102:22122
log_level = info
run_by_group =
run_by_user =
allow_hosts = *
file_distribute_path_mode = 0
file_distribute_rotate_count = 100
fsync_after_written_bytes = 0
sync_log_buff_interval = 1
sync_binlog_buff_interval = 1
sync_stat_file_interval = 300
thread_stack_size = 512KB
upload_priority = 10
if_alias_prefix =
check_file_duplicate = 0
file_signature_method = hash
key_namespace = FastDFS
keep_alive = 0
use_access_log = false
rotate_access_log = false
access_log_rotate_time = 00:00
compress_old_access_log = false
compress_access_log_days_before = 7
rotate_error_log = false
error_log_rotate_time = 00:00
compress_old_error_log = false
compress_error_log_days_before = 7
rotate_access_log_size = 0
rotate_error_log_size = 0
log_file_keep_days = 0
file_sync_skip_invalid_record = false
use_connection_pool = true
connection_pool_max_idle_time = 3600
compress_binlog = true
compress_binlog_time = 01:30
check_store_path_mark = true
http.domain_name =
http.server_port = 80
EOF

# Configure the systemd unit
cat >/usr/lib/systemd/system/storage.service <<EOF
[Unit]
Description=The FastDFS File server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/bin/fdfs_storaged /etc/fdfs/storage.conf start
ExecStop=/usr/bin/fdfs_storaged /etc/fdfs/storage.conf stop

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start storage
systemctl status storage
systemctl enable storage

# Check the startup status
netstat -lntup | grep 23000
If systemctl fails to start the service, start it manually with the commands below and check the log. Startup takes roughly 10 seconds.
# storage start, stop, and restart commands
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf start
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf stop
/usr/bin/fdfs_storaged /etc/fdfs/storage.conf restart
Once all storage nodes are up, check that the cluster information can be retrieved. A newly created cluster converges slowly; wait until the state of every node is ACTIVE.
# The command can be run on any storage node; the output should look like this
[root@storage01 fdfs]# fdfs_monitor /etc/fdfs/storage.conf list
[-07-03 01:15:25] DEBUG - base_path=/data/fastdfs_data, connect_timeout=5, network_timeout=60, tracker_server_count=2, anti_steal_token=0, anti_steal_secret_key length=0, use_connection_pool=1, g_connection_pool_max_idle_time=3600s, use_storage_id=0, storage server id count: 0
server_count=2, server_index=1
tracker server is 192.168.31.102:22122    # the tracker server that handled this command
group count: 2                            # number of groups

Group 1:                                  # group1 information
group name = group1
disk total space = 17,394 MB
disk free space = 13,758 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 80
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

Storage 1:                                # storage node 1 information
id = 192.168.31.103
ip_addr = 192.168.31.103 ACTIVE           # storage node state
http domain =
version = 6.06                            # fdfs version
join time = -07-03 01:08:29               # time of joining the cluster
up time = -07-03 01:08:29
total storage = 17,394 MB
free storage = 14,098 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 80
current_write_path = 0
source storage id = 192.168.31.104
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
............... (output omitted) ...............
last_heart_beat_time = -07-03 01:15:18
last_source_update = 1970-01-01 08:00:00
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp = 1970-01-01 08:00:00

Storage 2:                                # storage node 2 information
id = 192.168.31.104                       # storage node IP
ip_addr = 192.168.31.104 ACTIVE           # storage node state
http domain =
version = 6.06                            # storage node version
join time = -07-03 01:08:26               # join time
up time = -07-03 01:08:26
total storage = 17,394 MB
free storage = 13,758 MB
upload priority = 10
store_path_count = 1
............... (output omitted) ...............
last_heart_beat_time = -07-03 01:15:17
last_source_update = 1970-01-01 08:00:00
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp = 1970-01-01 08:00:00

Group 2:                                  # group2 information
group name = group2
disk total space = 17,394 MB
disk free space = 15,538 MB
trunk free space = 0 MB
storage server count = 2
active server count = 2
storage server port = 23000
storage HTTP port = 80
store path count = 1
subdir count per path = 256
current write server index = 0
current trunk file id = 0

Storage 1:                                # storage node 1 information
id = 192.168.31.105
ip_addr = 192.168.31.105 ACTIVE
http domain =
version = 6.06
join time = -07-03 01:13:42
up time = -07-03 01:13:42
total storage = 17,394 MB
free storage = 15,538 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000                      # storage port
storage_http_port = 80
current_write_path = 0
source storage id =
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
............... (output omitted) ...............
last_heart_beat_time = -07-03 01:15:22
last_source_update = 1970-01-01 08:00:00
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp = 1970-01-01 08:00:00

Storage 2:
id = 192.168.31.106
ip_addr = 192.168.31.106 ACTIVE
http domain =
version = 6.06
join time = -07-03 01:14:05
up time = -07-03 01:14:05
total storage = 17,394 MB
free storage = 15,538 MB
upload priority = 10
store_path_count = 1
subdir_count_per_path = 256
storage_port = 23000
storage_http_port = 80
current_write_path = 0
source storage id = 192.168.31.105
if_trunk_server = 0
connection.alloc_count = 256
connection.current_count = 1
connection.max_count = 1
total_upload_count = 0
............... (output omitted) ...............
total_file_write_count = 0
success_file_write_count = 0
last_heart_beat_time = -07-03 01:15:10
last_source_update = 1970-01-01 08:00:00
last_sync_update = 1970-01-01 08:00:00
last_synced_timestamp = 1970-01-01 08:00:00
Configure the client
Here the client is configured on the tracker01 node (other nodes can skip this; client.conf is optional).
mkdir -p /data/fdfs_client/logs    # log directory
cat >/etc/fdfs/client.conf <<EOF
connect_timeout = 5
network_timeout = 60
base_path = /data/fdfs_client/logs
tracker_server = 192.168.31.101:22122
tracker_server = 192.168.31.102:22122
log_level = info
use_connection_pool = false
connection_pool_max_idle_time = 3600
load_fdfs_parameters_from_tracker = false
use_storage_id = false
storage_ids_filename = storage_ids.conf
http.tracker_server_port = 80
EOF
# Adjust the tracker_server addresses for your environment
Upload a file as a test; here the file is init.yaml.
[root@01 ~]# echo "test" > init.yaml
[root@01 ~]# fdfs_upload_file /etc/fdfs/client.conf init.yaml
group2/M00/00/00/wKgfaV7-GG2AQcpMAAAABTu5NcY98.yaml
Install nginx on the storage nodes
mod_fastdfs.conf is configured as follows on every storage node:
cat >/etc/fdfs/mod_fastdfs.conf <<EOF
connect_timeout=2
network_timeout=30
base_path=/tmp
load_fdfs_parameters_from_tracker=true
storage_sync_file_max_delay = 86400
use_storage_id = false
storage_ids_filename = storage_ids.conf
tracker_server=192.168.31.101:22122
tracker_server=192.168.31.102:22122
storage_server_port=23000
url_have_group_name = true
store_path_count=1
log_level=info
log_filename=
response_mode=proxy
if_alias_prefix=
flv_support = true
flv_extension = flv
group_count = 2
#include http.conf
[group1]
group_name=group1
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs_data
[group2]
group_name=group2
storage_server_port=23000
store_path_count=1
store_path0=/data/fastdfs_data
EOF
Copy the required files (on all storage nodes)
cp /root/tools/fastdfs-6.06/conf/http.conf /etc/fdfs/
cp /root/tools/fastdfs-6.06/conf/mime.types /etc/fdfs/
Install the nginx dependencies
yum install -y gcc gcc-c++ glibc pcre pcre-devel openssl openssl-devel zlib zlib-devel lua-devel libxml2 libxml2-devel libxslt-devel perl-ExtUtils-Embed GeoIP GeoIP-devel GeoIP-data gd-devel
Create the nginx user
useradd -s /sbin/nologin nginx -M
Build nginx
cd /root/tools/nginx-1.18.0
./configure --prefix=/usr/local/nginx-1.18 --with-http_ssl_module --user=nginx --group=nginx --with-http_sub_module --add-module=/root/tools/fastdfs-nginx-module-1.22/src
make && make install
ln -s /usr/local/nginx-1.18 /usr/local/nginx
Edit the nginx configuration
cat > /usr/local/nginx/conf/nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server {
        listen 8888;
        server_name localhost;
        location ~/group[0-9]/M00 {
            root /data/fastdfs_data;
            ngx_fastdfs_module;
        }
    }
}
EOF
Start nginx
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx
On a storage node, every request on port 8888 whose path contains a group name is handed to the ngx_fastdfs_module plugin for processing.
Next, upload two images by hand as a test:
[root@tracker01 ~]# fdfs_upload_file /etc/fdfs/client.conf abcdocker.png
group1/M00/00/00/wKgfZ17-JkKAYX-SAABc0HR4eEs313.png
[root@tracker01 ~]# fdfs_upload_file /etc/fdfs/client.conf i4t.jpg
group2/M00/00/00/wKgfal7-JySABmgLAABdMoE-LPo504.jpg
The tracker is currently configured for round robin, so uploads alternate between group1 and group2; the relevant parameters are described in the upload section earlier in this article.
Now the files can be accessed through a browser. Different groups can serve different projects: a FastDFS cluster may contain many groups, but each machine can run only one storage instance.
http://<storage01-node>:8888/group1/M00/00/00/wKgfZ17-JkKAYX-SAABc0HR4eEs313.png
http://<storage01-node>:8888/group2/M00/00/00/wKgfal7-JySABmgLAABdMoE-LPo504.jpg
In my testing, even after uploading an image to group1 it could also be fetched directly through a group2 node, although the file never appears in group2's storage directory. The reason is shown below.
# nginx access log
192.168.31.174 - - [03/Jul/:03:03:31 +0800] "GET /group2/M00/00/00/wKgfaF7-JyKAYde2AABdMoE-LPo633.jpg HTTP/1.1" 304 0 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:77.0) Gecko/0101 Firefox/77.0"
The reason the other storage nodes can also serve the file is that the fastdfs-nginx-module in nginx redirects (or proxies) the request to the source server that actually holds the file.
Appendix: common FastDFS commands and parameters
# Check the cluster status
fdfs_monitor /etc/fdfs/storage.conf
# Upload
fdfs_upload_file /etc/fdfs/client.conf abcdocker.png
# Download
fdfs_download_file /etc/fdfs/client.conf group1/M00/00/00/wKgfZ17-JkKAYX-SAABc0HR4eEs313.png
# Show file attributes
fdfs_file_info /etc/fdfs/client.conf group1/M00/00/00/wKgfZ17-JkKAYX-SAABc0HR4eEs313.png
# Delete a file
fdfs_delete_file /etc/fdfs/client.conf group1/M00/00/00/wKgfZ17-JkKAYX-SAABc0HR4eEs313.png
# Remove a storage server from a group
/usr/local/bin/fdfs_monitor /etc/fdfs/storage.conf delete group2 192.168.31.105
Configuring tracker high availability
The nginx installed on the tracker nodes provides an HTTP reverse proxy, load balancing, and caching for file access.
Install it on both tracker01 and tracker02.
# Download the nginx packages
mkdir /root/tools -p
cd /root/tools
wget /ngx_cache_purge-2.3.tar.gz
wget /fdfs/v6.6/nginx-1.18.0.tar.gz
tar xf ngx_cache_purge-2.3.tar.gz
tar xf nginx-1.18.0.tar.gz

# Create the nginx user
useradd -s /sbin/nologin -M nginx

# Build nginx
cd /root/tools/nginx-1.18.0
./configure --prefix=/usr/local/nginx-1.18 --with-http_ssl_module --user=nginx --group=nginx --with-http_sub_module --add-module=/root/tools/ngx_cache_purge-2.3
make && make install
ln -s /usr/local/nginx-1.18 /usr/local/nginx
With nginx installed, configure nginx.conf next.
The nginx on the tracker nodes does not have to listen on port 80; port 80 is used here as an example.
mkdir /data/nginx_cache -p
$ vim /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    server_names_hash_bucket_size 128;
    client_header_buffer_size 32k;
    large_client_header_buffers 4 32k;
    client_max_body_size 300m;
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_connect_timeout 90;
    proxy_send_timeout 90;
    proxy_read_timeout 90;
    proxy_buffer_size 16k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
    proxy_cache_path /data/nginx_cache keys_zone=http-cache:100m;
    upstream fdfs_group1 {
        server 192.168.31.103:8888 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.31.104:8888 weight=1 max_fails=2 fail_timeout=30s;
    }
    upstream fdfs_group2 {
        server 192.168.31.105:8888 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.31.106:8888 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 80;
        server_name localhost;
        location /group1/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group1;
            expires 30d;
        }
        location /group2/M00 {
            proxy_next_upstream http_502 http_504 error timeout invalid_header;
            proxy_cache http-cache;
            proxy_cache_valid 200 304 12h;
            proxy_cache_key $uri$is_args$args;
            proxy_pass http://fdfs_group2;
            expires 30d;
        }
    }
}
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx
At this point, the files should be reachable through both the tracker01 and the tracker02 node.
http://192.168.31.101/group1/M00/00/00/wKgfaF7-JyKAYde2AABdMoE-LPo633.jpg
http://192.168.31.101/group2/M00/00/00/wKgfaF7-JyKAYde2AABdMoE-LPo633.jpg
http://192.168.31.102/group2/M00/00/00/wKgfaF7-JyKAYde2AABdMoE-LPo633.jpg
http://192.168.31.102/group2/M00/00/00/wKgfaF7-JyKAYde2AABdMoE-LPo633.jpg
Installing the nginx proxy
With the steps above, files can already be accessed through the storage and tracker nodes, but to get a single entry point and make the trackers highly available we put another nginx in front of the trackers as a proxy.
# nginx is installed the same way as above; only nginx.conf is changed here.
# The proxy does not need the cache module, so a plain build is sufficient.
$ vim /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
    worker_connections 1024;
}
http {
    include mime.types;
    default_type application/octet-stream;
    sendfile on;
    keepalive_timeout 65;
    upstream fastdfs_tracker {
        server 192.168.31.101:80 weight=1 max_fails=2 fail_timeout=30s;
        server 192.168.31.102:80 weight=1 max_fails=2 fail_timeout=30s;
    }
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://fastdfs_tracker/;
        }
    }
}
Finally, test from the tracker01 node whether everything can be reached through the nginx proxy.
[root@tracker01 ~]# fdfs_upload_file /etc/fdfs/client.conf i4t.jpg
group1/M00/00/00/wKgfZ17-Oa2AMuRqAABdMoE-LPo686.jpg
[root@tracker01 ~]# fdfs_upload_file /etc/fdfs/client.conf i4t.jpg
group2/M00/00/00/wKgfaV7-Oa6ANXGLAABdMoE-LPo066.jpg

# Check access through the proxy
[root@tracker01 ~]# curl 192.168.31.100/group1/M00/00/00/wKgfZ17-Oa2AMuRqAABdMoE-LPo686.jpg -I
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Thu, 02 Jul 19:49:49 GMT
Content-Type: image/jpeg
Content-Length: 23858
Connection: keep-alive
Last-Modified: Thu, 02 Jul 19:46:53 GMT
Expires: Sat, 01 Aug 19:49:49 GMT
Cache-Control: max-age=2592000
Accept-Ranges: bytes

[root@tracker01 ~]# curl 192.168.31.100/group2/M00/00/00/wKgfaV7-Oa6ANXGLAABdMoE-LPo066.jpg -I
HTTP/1.1 200 OK
Server: nginx/1.18.0
Date: Thu, 02 Jul 19:50:17 GMT
Content-Type: image/jpeg
Content-Length: 23858
Connection: keep-alive
Last-Modified: Thu, 02 Jul 19:46:54 GMT
Expires: Sat, 01 Aug 19:50:17 GMT
Cache-Control: max-age=2592000
Accept-Ranges: bytes
Original article: /4758.html