Docker (Part 6): Docker Swarm


Official docs

Cluster

Four servers are needed for this walkthrough.
Install Docker on all four machines.

Setting up the cluster

1. Initialize the manager node: docker swarm init --advertise-addr [your own IP address]

[root@VM-0-7-centos ~]# docker swarm init --advertise-addr 172.27.0.7
Swarm initialized: current node (jw8laeer26282r3lacsug28a1) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-1mamvn85xnbos4m9oj4la2e4zkod12u7p7f1brasnc5el2fymf-6nmszqzvzn11napjgr4qdhcg7 172.27.0.7:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

Initialize a node: docker swarm init
2. Get the join tokens

# Get a join token
docker swarm join-token manager # for a new manager
docker swarm join-token worker  # for a new worker

3. Join nodes
On each machine that should join, run the docker swarm join command printed by the join-token command above, i.e. docker swarm join --token <token> <manager-ip>:2377.
4. Leave the swarm
docker swarm leave [OPTIONS]

docker swarm leave

The Raft protocol

Two managers, two workers: if one node dies, can the others still be used?
Raft: the cluster is usable only while a majority of manager nodes is alive. To tolerate a failure, more than one manager must survive, so a cluster needs at least 3 managers!
Experiment:
1. Stop Docker on machine 1 — a simulated crash! With only two managers, the surviving manager loses its majority and cannot be used either!

2. The remaining nodes can still leave the swarm.
3. Workers only run tasks; management operations go through manager nodes. Here, 3 machines were set up as managers.

In short: with 3 manager nodes the cluster stays available as long as more than 1 manager survives!
Raft: only while the majority of managers is alive can the cluster be used — that is what high availability means here!
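The majority rule can be sketched numerically: with N managers, Raft needs a quorum of ⌊N/2⌋+1 alive, so the cluster tolerates ⌊(N-1)/2⌋ manager failures. A quick arithmetic sketch (not part of Docker itself):

```python
def quorum(n_managers: int) -> int:
    """Smallest majority of managers Raft needs to keep the swarm writable."""
    return n_managers // 2 + 1

def fault_tolerance(n_managers: int) -> int:
    """How many managers may fail while a majority still survives."""
    return (n_managers - 1) // 2

for n in (1, 2, 3, 5, 7):
    print(f"{n} managers: quorum={quorum(n)}, tolerates {fault_tolerance(n)} failure(s)")
```

Note that 2 managers tolerate 0 failures — no safer than 1 — which is why at least 3 managers are recommended.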

Docker Service

Cluster: swarm — docker service
Container => service!
Container => service => replicas!
A redis service => 10 replicas! (10 redis containers running at once)
Hands-on: create a service, scale it dynamically, update it dynamically.

docker service --help

Usage:	docker service COMMAND

Manage services

Commands:
  create      Create a new service
  inspect     Display detailed information on one or more services
  logs        Fetch the logs of a service or task
  ls          List services
  ps          List the tasks of one or more services
  rm          Remove one or more services
  rollback    Revert changes to a service's configuration
  scale       Scale one or multiple replicated services
  update      Update a service

Gray release — canary deployment!

docker service create -p 8080:8080 --name my-tomcat tomcat
docker run starts a single container — no scaling in or out.
docker service creates a service — it supports scaling and rolling updates!
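A rolling-update sketch using the my-tomcat service above (the target image tag is illustrative; the flags are standard docker service update options):

```shell
# Roll the service to a newer image one task at a time,
# pausing 10s between tasks; roll back automatically on failure.
docker service update \
  --image tomcat:9.0 \
  --update-parallelism 1 \
  --update-delay 10s \
  --update-failure-action rollback \
  my-tomcat

# Manually revert to the previous service spec if needed.
docker service rollback my-tomcat
```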

Checking a service's REPLICAS

[root@VM-0-12-centos ~]# docker service ls # list all services
[root@VM-0-12-centos ~]# docker service ps my-tomcat
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
k9fcid3zpcjy        my-tomcat.1         tomcat:latest       VM-0-13-centos      Running             Running 4 minutes ago                       
[root@VM-0-12-centos ~]# docker service ps my-nginx
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE           ERROR               PORTS
pbnh86jz54qy        my-nginx.1          nginx:latest        VM-0-12-centos      Running             Running 9 minutes ago                       

Dynamic scaling

[root@VM-0-7-centos ~]# docker service update --replicas 5 my-tomcat 
my-tomcat
overall progress: 5 out of 5 tasks 
1/5: running   
2/5: running   
3/5: running   
4/5: running   
5/5: running   
verify: Service converged 

[root@VM-0-7-centos ~]# docker service scale my-tomcat=3
my-tomcat scaled to 3
overall progress: 3 out of 3 tasks 
1/3: running   
2/3: running   
3/3: running   
verify: Service converged 

Tasks are distributed dynamically across the cluster

A service can be reached from any node in the cluster, and it can run multiple replicas that are scaled dynamically for high availability!
Elastic scaling!

Removing a service

docker service rm my-nginx  

Concept summary

Swarm
Cluster management and orchestration. Docker can initialize a swarm cluster that other nodes then join. (managers and workers)
Node
A single Docker host. Multiple nodes form the cluster. (manager or worker)
Service
A task definition that can run on manager or worker nodes. The core object — it is what users access!
Task
A container running the service's command; the unit of work.

The logic never changes:
command -> manager -> API -> scheduler -> worker node (creates and maintains the Task containers)

Replicated services vs. global services

Choosing how a service is scheduled:

--mode string
Service mode (replicated or global) (default "replicated")
docker service create --mode replicated --name mytom tomcat:7 # the default
docker service create --mode global --name haha alpine ping baidu.com
# When is global mode useful? Log collection:
each node runs its own log collector that filters locally and forwards everything to a central log server — likewise for per-node monitoring of status and performance.

Extension — the network mode "PublishMode": "ingress"

Swarm:
Overlay:
ingress: a special overlay network with built-in load balancing (IPVS with a virtual IP)!
Although Docker runs on 4 separate machines, they effectively share one network: the ingress network, a special overlay network.
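Because of this ingress routing mesh, a port published with -p answers on every node in the swarm, not only on the nodes running a task. A quick check against the node IPs used in this article (assumes the my-tomcat service from earlier is still running):

```shell
# my-tomcat publishes 8080; every node IP should respond,
# even one where no my-tomcat task is scheduled.
curl -s -o /dev/null -w "%{http_code}\n" http://172.27.0.7:8080
curl -s -o /dev/null -w "%{http_code}\n" http://172.27.0.12:8080
curl -s -o /dev/null -w "%{http_code}\n" http://172.27.0.13:8080
```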

[root@VM-0-7-centos ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
2d788dc9dbc6        bridge              bridge              local
5a5ab4346035        docker_gwbridge     bridge              local
1982820176d1        host                host                local
ji61n7tj94a7        ingress             overlay             swarm
9b6f4429e433        none                null                local
[root@VM-0-7-centos ~]# docker network inspect ji61n7tj94a7  
[
    {
        "Name": "ingress",
        "Id": "ji61n7tj94a7tqy1m1wa4ob0z",
        "Created": "2020-10-19T16:52:16.838436339+08:00",
        "Scope": "swarm",
        "Driver": "overlay",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "10.0.0.0/24",
                    "Gateway": "10.0.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": true,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "cc16928459d1ad42ac79b70d674f28fe64e12390459c2ad41d58ca31760c9613": {
                "Name": "my-tomcat.2.1bzaep0xakmrvpervblsnse6x",
                "EndpointID": "aa7fe3c9238ee92c80467bde4ab39fc60a8e296106976bebeb6b4ab19767955b",
                "MacAddress": "02:42:0a:00:00:13",
                "IPv4Address": "10.0.0.19/24",
                "IPv6Address": ""
            },
            "ingress-sbox": {
                "Name": "ingress-endpoint",
                "EndpointID": "233a43a946287ad5bd6e7b99bf1e647d4a43ff98737fa444a29eaa46e967befa",
                "MacAddress": "02:42:0a:00:00:02",
                "IPv4Address": "10.0.0.2/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.driver.overlay.vxlanid_list": "4096"
        },
        "Labels": {},
        "Peers": [
            {
                "Name": "374cf87a5fe7",
                "IP": "172.27.0.7"
            },
            {
                "Name": "b0b1f2a7d2ad",
                "IP": "172.27.0.12"
            },
            {
                "Name": "6f3d97ed8ed2",
                "IP": "172.27.0.13"
            }
        ]
    }
]

One network spanning the whole cluster!

Docker Stack

docker-compose deploys a project on a single host!
Docker Stack deploys it across the cluster!

# Single host
docker-compose -f wordpress.yaml up -d
# Cluster
docker stack deploy -c wordpress.yaml wordpress
# The compose file (the deploy keys are honored only by docker stack deploy)
version: '3.4'
services:
  mongo:
    image: mongo
    restart: always
    networks:
      - mongo_network
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 2
  mongo-express:
    image: mongo-express
    restart: always
    networks:
      - mongo_network
    ports:
      - target: 8081
        published: 80
        protocol: tcp
        mode: ingress
    environment:
      ME_CONFIG_MONGODB_SERVER: mongo
      ME_CONFIG_MONGODB_PORT: 27017
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
networks:
  mongo_network:
    external: true
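After deploying, the stack can be inspected and torn down with the standard stack subcommands (the stack name wordpress matches the deploy command above):

```shell
docker stack ls                   # all stacks in the swarm
docker stack services wordpress   # services belonging to this stack
docker stack ps wordpress         # tasks (containers) of this stack
docker stack rm wordpress         # remove the whole stack
```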

Docker Secret

Security! For storing passwords and certificates!

[root@VM-0-7-centos ~]# docker secret --help

Usage:	docker secret COMMAND

Manage Docker secrets

Commands:
  create      Create a secret from a file or STDIN as content
  inspect     Display detailed information on one or more secrets
  ls          List secrets
  rm          Remove one or more secrets

Run 'docker secret COMMAND --help' for more information on a command.
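A minimal sketch of creating and consuming a secret. The secret name is made up, and the _FILE environment convention is a feature of the official mysql image, not of Docker itself:

```shell
# Create a secret from stdin (requires swarm mode).
echo "s3cret" | docker secret create db_root_password -

# Services see their secrets as files under /run/secrets/<name>.
docker service create --name db \
  --secret db_root_password \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/db_root_password \
  mysql:5.7
```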

Docker Config

Distribute non-sensitive configuration files to services.

[root@VM-0-7-centos ~]# docker config --help

Usage:	docker config COMMAND

Manage Docker configs

Commands:
  create      Create a config from a file or STDIN
  inspect     Display detailed information on one or more configs
  ls          List configs
  rm          Remove one or more configs
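Configs work like secrets but for non-sensitive files. A sketch — the local file default.conf and the mount target are illustrative:

```shell
# Create a config from a local file.
docker config create nginx_conf ./default.conf

# Mount it into a service at an explicit path.
docker service create --name web -p 80:80 \
  --config source=nginx_conf,target=/etc/nginx/conf.d/default.conf \
  nginx
```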


3. Cross-host communication

1. Create an overlay network

On swarm-manager:

[root@swarm-manager ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ae19b054c5a9        bridge              bridge              local
1e329e08d287        docker_gwbridge     bridge              local
34963f83928c        host                host                local
nlzfjhwvxn0s        ingress             overlay             swarm
a79f72191b90        none                null                local

[root@swarm-manager ~]# docker network create --driver overlay --subnet 172.70.1.0/24 --opt encrypted my-swarm-network
j7pqlzjl2kg1cfrxotai6gyq1

[root@swarm-manager ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
ae19b054c5a9        bridge              bridge              local
1e329e08d287        docker_gwbridge     bridge              local
34963f83928c        host                host                local
nlzfjhwvxn0s        ingress             overlay             swarm
j7pqlzjl2kg1        my-swarm-network    overlay             swarm
a79f72191b90        none                null                local

Check from a worker node whether the overlay network created on the manager is visible:

[root@docker02 ~]# docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
8caad6e42ca4        bridge              bridge              local
3f910d13065f        docker_gwbridge     bridge              local
34963f83928c        host                host                local
nlzfjhwvxn0s        ingress             overlay             swarm
a79f72191b90        none                null                local

The worker cannot see the new overlay network yet: swarm extends an overlay network to a worker only once a task attached to that network is scheduled there.

2. Create a Service attached to the new overlay network

[root@swarm-manager ~]# docker service create --replicas 3 --name my-ovs --network my-swarm-network cirros
g4dz7deo73rsus6us8vevkdts
overall progress: 3 out of 3 tasks
1/3: running   [==================================================>]
2/3: running   [==================================================>]
3/3: running   [==================================================>]
verify: Service converged

1) The task started on the manager node

[root@swarm-manager ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
2d4d266b77b7        cirros:latest       "/sbin/init"        27 seconds ago      Up 25 seconds                           my-ovs.3.w03awm4w1obxhyjz6587mgsgy

[root@swarm-manager ~]# docker exec -it 2d /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:46:01:07
          inet addr:172.70.1.7  Bcast:172.70.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1424  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:03
          inet addr:172.18.0.3  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:80 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:984 (984.0 B)  TX bytes:5664 (5.5 KiB)

2) The task on worker node1

[root@node1 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
44ede4813ba8        cirros:latest       "/sbin/init"        About a minute ago  Up About a minute                       my-ovs.1.wt5eio81fh3wol77fe5193atn

[root@node1 ~]# docker exec -it 44 /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:46:01:08
          inet addr:172.70.1.8  Bcast:172.70.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1424  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:03
          inet addr:172.18.0.3  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23 errors:0 dropped:0 overruns:0 frame:0
          TX packets:97 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1740 (1.6 KiB)  TX bytes:6983 (6.8 KiB)

3) The task on worker node2

[root@docker02 ~]# docker ps -a
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
1885b73a4871        cirros:latest       "/sbin/init"        About a minute ago  Up About a minute                       my-ovs.2.kii95woaije6mu255s3myc2mq

[root@docker02 ~]# docker exec -it 18 /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:46:01:06
          inet addr:172.70.1.6  Bcast:172.70.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1424  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:03
          inet addr:172.18.0.3  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:23 errors:0 dropped:0 overruns:0 frame:0
          TX packets:97 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1740 (1.6 KiB)  TX bytes:6983 (6.8 KiB)

3. Check that the network is reachable across hosts

Enter the container on swarm-manager:

[root@swarm-manager ~]# docker exec -it 2d /bin/sh
/ # ifconfig
eth0      Link encap:Ethernet  HWaddr 02:42:AC:46:01:07
          inet addr:172.70.1.7  Bcast:172.70.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1424  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

eth1      Link encap:Ethernet  HWaddr 02:42:AC:12:00:03
          inet addr:172.18.0.3  Bcast:172.18.255.255  Mask:255.255.0.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:16 errors:0 dropped:0 overruns:0 frame:0
          TX packets:80 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:984 (984.0 B)  TX bytes:5664 (5.5 KiB)

1) Ping the Internet

/ # ping -c 2 www.baidu.com
PING www.baidu.com (61.135.169.125): 56 data bytes
64 bytes from 61.135.169.125: seq=0 ttl=56 time=3.850 ms
64 bytes from 61.135.169.125: seq=1 ttl=56 time=6.953 ms
--- www.baidu.com ping statistics ---

2) Ping the host machines

/ # ping -c 2 192.168.0.10
PING 192.168.0.10 (192.168.0.10): 56 data bytes
64 bytes from 192.168.0.10: seq=0 ttl=63 time=0.468 ms
64 bytes from 192.168.0.10: seq=1 ttl=63 time=0.238 ms
--- 192.168.0.10 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.238/0.353/0.468 ms

/ # ping -c 2 192.168.0.20
PING 192.168.0.20 (192.168.0.20): 56 data bytes
64 bytes from 192.168.0.20: seq=0 ttl=63 time=0.408 ms
64 bytes from 192.168.0.20: seq=1 ttl=63 time=0.462 ms
--- 192.168.0.20 ping statistics ---

/ # ping -c 2 192.168.0.30
PING 192.168.0.30 (192.168.0.30): 56 data bytes
64 bytes from 192.168.0.30: seq=0 ttl=64 time=0.084 ms
64 bytes from 192.168.0.30: seq=1 ttl=64 time=0.101 ms
--- 192.168.0.30 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.084/0.092/0.101 ms

3) Ping the other containers

The container on node2 (172.70.1.6):

/ # ping -c 2 172.70.1.6
PING 172.70.1.6 (172.70.1.6): 56 data bytes
64 bytes from 172.70.1.6: seq=0 ttl=64 time=3.604 ms
64 bytes from 172.70.1.6: seq=1 ttl=64 time=1.015 ms
--- 172.70.1.6 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 1.015/2.309/3.604 ms

The container on node1 (172.70.1.8):

/ # ping -c 2 172.70.1.8
PING 172.70.1.8 (172.70.1.8): 56 data bytes
64 bytes from 172.70.1.8: seq=0 ttl=64 time=1.087 ms
64 bytes from 172.70.1.8: seq=1 ttl=64 time=0.984 ms
--- 172.70.1.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.984/1.035/1.087 ms

4) From worker node1

The container on node2:

/ # ping -c 2 172.70.1.6
PING 172.70.1.6 (172.70.1.6): 56 data bytes
64 bytes from 172.70.1.6: seq=0 ttl=64 time=1.451 ms
64 bytes from 172.70.1.6: seq=1 ttl=64 time=0.973 ms
--- 172.70.1.6 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.973/1.212/1.451 ms

The container on swarm-manager:

/ # ping -c 2 172.70.1.7
PING 172.70.1.7 (172.70.1.7): 56 data bytes
64 bytes from 172.70.1.7: seq=0 ttl=64 time=0.650 ms
64 bytes from 172.70.1.7: seq=1 ttl=64 time=0.693 ms
--- 172.70.1.7 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.650/0.671/0.693 ms

5) From worker node2

The container on swarm-manager:

/ # ping -c 2 172.70.1.7
PING 172.70.1.7 (172.70.1.7): 56 data bytes
64 bytes from 172.70.1.7: seq=0 ttl=64 time=3.170 ms
64 bytes from 172.70.1.7: seq=1 ttl=64 time=0.934 ms
--- 172.70.1.7 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.934/2.052/3.170 ms

The container on node1:

/ # ping -c 2 172.70.1.8
PING 172.70.1.8 (172.70.1.8): 56 data bytes
64 bytes from 172.70.1.8: seq=0 ttl=64 time=0.615 ms
64 bytes from 172.70.1.8: seq=1 ttl=64 time=0.642 ms
--- 172.70.1.8 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.615/0.628/0.642 ms

Copyright notice: this article originally comes from CSDN and follows the CC 4.0 BY-SA license; please keep the original source link and this notice when reproducing it.
Original link: https://blog.csdn.net/zk86547462/article/details/109163373