Deploying Common Middleware with Docker

2023-06-13 09:15:31

Installing Docker

Method 1: One-line install

curl -fsSL https://get.docker.com | bash -s docker --mirror Aliyun
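
When the script finishes, you can sanity-check the installation (a quick check, assuming a systemd-based distribution):

# Print client and daemon versions
docker version
# Confirm the service is running
systemctl status docker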

Method 2: Standard install

1. Install yum-utils, which provides the yum-config-manager tool:

yum install -y yum-utils

2. Configure a domestic (Aliyun) mirror for the Docker repository:

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

3. Install Docker Engine (Community) and containerd:

yum install docker-ce docker-ce-cli containerd.io

Method 3: Offline install from static binaries

1. Check the system's kernel version, then download the matching build from the official site:

uname -r

Official download page: https://download.docker.com/linux/static/stable/

Extract the downloaded archive and move the binaries into /usr/bin:

# Extract the archive
tar -zxvf docker-20.10.x.tgz
# Move the extracted binaries into /usr/bin
mv docker/* /usr/bin/

2. Add a systemd unit by editing Docker's service file with vim /usr/lib/systemd/system/docker.service:

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

3. Reload systemd and restart Docker:

systemctl daemon-reload
systemctl restart docker
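
Whichever method you used, a quick way to confirm the daemon is healthy (a minimal check; hello-world is pulled from Docker Hub):

# Should print server information without errors
docker info
# Run a disposable test container
docker run --rm hello-world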

Basic commands

Start the Docker service and enable it at boot:

systemctl start docker
systemctl enable docker

# List images
docker images
# Remove an image
docker rmi <image_id>
# List containers
docker container ls
# Start/restart a container
docker start/restart <container_id>
# Remove a container
docker rm <container_id>
# Show a container's mounts
docker inspect <container_id> | grep "Mounts" -A 20

Running containers

Use -p to map specific ports; to expose all of a container's ports (sharing the host's network), use --net=host.
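
For example, the two styles look like this (an illustrative sketch using the nginx image as a stand-in; the container names are arbitrary):

# Map only host port 8080 to container port 80
docker run --name=web1 -p 8080:80 -d nginx
# Share the host's network stack: every port the container listens on is a host port
docker run --name=web2 --net=host -d nginx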

Tomcat 8

Upload the app's root directory to the server, then map it into the container under /usr/local/tomcat/webapps/:

docker run --name={app_name} --net=host \
  -v /root/tomcat/webapps/{app_name}:/usr/local/tomcat/webapps/{app_name} \
  -d tomcat:8.5.38-jre8
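
To confirm the app deployed, you can follow the container logs; with --net=host, Tomcat's default port 8080 is exposed directly on the host (the URL below assumes the default port is unchanged):

# Watch Tomcat unpack and start the webapp
docker logs -f {app_name}
curl http://localhost:8080/{app_name}/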

MySQL

docker run --name=mysql -p 3306:3306 \
  -v /root/mysql/conf:/etc/mysql/conf.d \
  -v /root/mysql/logs:/logs \
  -v /root/mysql/data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD='password' \
  -d mysql:5.7 \
  mysqld --innodb-buffer-pool-size=80M \
         --character-set-server=utf8mb4 \
         --collation-server=utf8mb4_unicode_ci \
         --default-time-zone=+8:00 \
         --lower-case-table-names=1
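
A quick way to verify the instance and the server flags above (the mysql client ships inside the image):

# Log in with the root password set via MYSQL_ROOT_PASSWORD
docker exec -it mysql mysql -uroot -p'password' -e "SHOW VARIABLES LIKE 'character_set_server';"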

Redis

docker run --name=redis --net=host \
  -d redis:latest \
  redis-server --requirepass "password"
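
To check that the password took effect (redis-cli ships in the image; PONG means success):

docker exec -it redis redis-cli -a "password" ping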

ActiveMQ

# 8161: web console port; 61616: broker port
docker run --name=activemq -p 8161:8161 -p 61616:61616 \
  -v /root/activemq/data:/data/activemq \
  -v /root/activemq/log:/var/log/activemq \
  -e ACTIVEMQ_ADMIN_LOGIN=admin \
  -e ACTIVEMQ_ADMIN_PASSWORD={password} \
  -d webcenter/activemq:latest
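
The web console should then answer on port 8161 (a minimal check; admin/{password} are the credentials set above):

curl -u admin:{password} http://localhost:8161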

MongoDB

docker run --name=mongodb -p 27017:27017 \
  -v /root/mongodb/data:/data/db \
  -d mongo:4.0.6 --auth

Enter the container and create the superadmin account:

docker exec -it mongodb mongo admin
db.createUser({ user: 'admin', pwd: 'password', roles: [ { role: "userAdminAnyDatabase", db: "admin" } ] });

Grant access on other databases:

db.auth("admin","password"); // authenticate as the superadmin
use yourdatabase;
db.createUser({user:'user',pwd:'password',roles:[{role:'dbOwner',db:'yourdatabase'}]});
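
You can then verify the new account from outside the shell (a minimal check against the mongo 4.x client):

# Authenticate against yourdatabase with the newly created user
docker exec -it mongodb mongo yourdatabase -u user -p password --eval "db.stats()"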

ElasticSearch

docker run --name=es --net=host \
  -v /root/es/data:/usr/share/elasticsearch/data \
  -v /root/es/logs:/usr/share/elasticsearch/logs \
  -e "discovery.type=single-node" \
  -d elasticsearch:7.3.2

Setting an access password

Enter the container:

docker exec -it es /bin/bash

Edit the config file elasticsearch.yml and add the following:

xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true

Restart the container, then run the following to set the passwords:

bin/elasticsearch-setup-passwords interactive
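
Once passwords are set, requests require credentials (an example probe with the built-in elastic user; substitute the password you chose):

# Returns cluster info with auth, 401 without
curl -u elastic:your_password http://localhost:9200/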

Installing the IK analysis plugin

Run the following inside the container, then restart it. Make sure the plugin version matches your Elasticsearch version:

bin/elasticsearch-plugin install https://github.com/medcl/elasticsearch-analysis-ik/releases/download/v7.3.2/elasticsearch-analysis-ik-7.3.2.zip
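
After the restart, a quick smoke test of the analyzer (add -u elastic:password if security is enabled):

# Tokenize a Chinese sentence with the ik_smart analyzer
curl -X POST -H 'Content-Type: application/json' \
  'http://localhost:9200/_analyze' \
  -d '{"analyzer":"ik_smart","text":"中华人民共和国"}'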

Kibana

Use the same image version as Elasticsearch:

docker pull kibana:7.3.2

Create the config file /root/elk/kibana.yml on the host:

server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://172.17.0.1:9200" ]  # must be reachable from inside the container
elasticsearch.username: "kibana"  # the kibana account and password configured in ES
elasticsearch.password: "password"
xpack.monitoring.ui.container.elasticsearch.enabled: true

Run:

docker run --name=kibana --net=host \
  -v /root/elk/kibana.yml:/usr/share/kibana/config/kibana.yml \
  --log-driver json-file --log-opt max-size=100m --log-opt max-file=2 \
  --restart=always \
  -d kibana:7.3.2
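
Kibana takes a little while to start; you can follow its logs and then probe the default port 5601 (a minimal check):

docker logs -f kibana
# The status API responds once Kibana is ready
curl http://localhost:5601/api/status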

Nacos

Prepare the config file /root/nacos/conf/application.properties in advance:

docker run --name=nacos -p 8848:8848 -p 9848:9848 -p 9849:9849 \
  -v /root/nacos/logs/:/home/nacos/logs \
  -v /root/nacos/conf/application.properties:/home/nacos/conf/application.properties \
  -e MODE=standalone \
  -e JVM_XMS=512m -e JVM_XMX=512m -e JVM_XMN=256m \
  -d nacos/nacos-server
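
Once up, the console should be reachable at /nacos on port 8848 (the default console credentials are nacos/nacos unless you enable auth):

# Returns the console page when the server is ready
curl http://localhost:8848/nacos/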

Sample application.properties:

#
# Copyright 1999-2021 Alibaba Group Holding Ltd.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#      http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#

#*************** Spring Boot Related Configurations ***************#
### Default web context path:
server.servlet.contextPath=/nacos
### Default web server port:
server.port=8848

#*************** Network Related Configurations ***************#
### If prefer hostname over ip for Nacos server addresses in cluster.conf:
# nacos.inetutils.prefer-hostname-over-ip=false
### Specify local server's IP:
# nacos.inetutils.ip-address=

#*************** Config Module Related Configurations ***************#
### If use MySQL as datasource:
# spring.datasource.platform=mysql
### Count of DB:
# db.num=1
### Connect URL of DB:
# db.url.0=jdbc:mysql://127.0.0.1:3306/nacos?characterEncoding=utf8&connectTimeout=1000&socketTimeout=3000&autoReconnect=true&useUnicode=true&useSSL=false&serverTimezone=UTC
# db.user.0=nacos
# db.password.0=nacos
### Connection pool configuration: hikariCP
db.pool.config.connectionTimeout=30000
db.pool.config.validationTimeout=10000
db.pool.config.maximumPoolSize=20
db.pool.config.minimumIdle=2

#*************** Naming Module Related Configurations ***************#
### Data dispatch task execution period in milliseconds: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.delayMs
# nacos.naming.distro.taskDispatchPeriod=200
### Data count of batch sync task: Will removed on v2.1.X. Deprecated
# nacos.naming.distro.batchSyncKeyCount=1000
### Retry delay in milliseconds if sync task failed: Will removed on v2.1.X, replace with nacos.core.protocol.distro.data.sync.retryDelayMs
# nacos.naming.distro.syncRetryDelay=5000
### If enable data warmup. If set to false, the server would accept request without local data preparation:
# nacos.naming.data.warmup=true
### If enable the instance auto expiration, kind like of health check of instance:
# nacos.naming.expireInstance=true
### will be removed and replaced by `nacos.naming.clean` properties
nacos.naming.empty-service.auto-clean=true
nacos.naming.empty-service.clean.initial-delay-ms=50000
nacos.naming.empty-service.clean.period-time-ms=30000
### Add in 2.0.0
### The interval to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.interval=60000
### The expired time to clean empty service, unit: milliseconds.
# nacos.naming.clean.empty-service.expired-time=60000
### The interval to clean expired metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.interval=5000
### The expired time to clean metadata, unit: milliseconds.
# nacos.naming.clean.expired-metadata.expired-time=60000
### The delay time before push task to execute from service changed, unit: milliseconds.
# nacos.naming.push.pushTaskDelay=500
### The timeout for push task execute, unit: milliseconds.
# nacos.naming.push.pushTaskTimeout=5000
### The delay time for retrying failed push task, unit: milliseconds.
# nacos.naming.push.pushTaskRetryDelay=1000
### Since 2.0.3
### The expired time for inactive client, unit: milliseconds.
# nacos.naming.client.expired.time=180000

#*************** CMDB Module Related Configurations ***************#
### The interval to dump external CMDB in seconds:
# nacos.cmdb.dumpTaskInterval=3600
### The interval of polling data change event in seconds:
# nacos.cmdb.eventTaskInterval=10
### The interval of loading labels in seconds:
# nacos.cmdb.labelTaskInterval=300
### If turn on data loading task:
# nacos.cmdb.loadDataAtStart=false

#*************** Metrics Related Configurations ***************#
### Metrics for prometheus
#management.endpoints.web.exposure.include=*
### Metrics for elastic search
management.metrics.export.elastic.enabled=false
#management.metrics.export.elastic.host=http://localhost:9200
### Metrics for influx
management.metrics.export.influx.enabled=false
#management.metrics.export.influx.db=springboot
#management.metrics.export.influx.uri=http://localhost:8086
#management.metrics.export.influx.auto-create-db=true
#management.metrics.export.influx.consistency=one
#management.metrics.export.influx.compressed=true

#*************** Access Log Related Configurations ***************#
### If turn on the access log:
server.tomcat.accesslog.enabled=true
### The access log pattern:
server.tomcat.accesslog.pattern=%h %l %u %t "%r" %s %b %D %{User-Agent}i %{Request-Source}i
### The directory of access log:
server.tomcat.basedir=

#*************** Access Control Related Configurations ***************#
### If enable spring security, this option is deprecated in 1.2.0:
#spring.security.enabled=false
### The ignore urls of auth, is deprecated in 1.2.0:
nacos.security.ignore.urls=/,/error,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-ui/public/**,/v1/auth/**,/v1/console/health/**,/actuator/**,/v1/console/server/**
### The auth system to use, currently only 'nacos' and 'ldap' is supported:
nacos.core.auth.system.type=nacos
### If turn on auth system:
nacos.core.auth.enabled=false
### worked when nacos.core.auth.system.type=ldap,{0} is Placeholder,replace login username
# nacos.core.auth.ldap.url=ldap://localhost:389
# nacos.core.auth.ldap.userdn=cn={0},ou=user,dc=company,dc=com
### The token expiration in seconds:
nacos.core.auth.default.token.expire.seconds=18000
### The default token:
nacos.core.auth.default.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
### Turn on/off caching of auth information. By turning on this switch, the update of auth information would have a 15 seconds delay.
nacos.core.auth.caching.enabled=true
### Since 1.4.1, Turn on/off white auth for user-agent: nacos-server, only for upgrade from old version.
nacos.core.auth.enable.userAgentAuthWhite=false
### Since 1.4.1, worked when nacos.core.auth.enabled=true and nacos.core.auth.enable.userAgentAuthWhite=false.
### The two properties is the white list for auth and used by identity the request from other server.
nacos.core.auth.server.identity.key=serverIdentity
nacos.core.auth.server.identity.value=security

#*************** Istio Related Configurations ***************#
### If turn on the MCP server:
nacos.istio.mcp.server.enabled=false

#*************** Core Related Configurations ***************#
### set the WorkerID manually
# nacos.core.snowflake.worker-id=
### Member-MetaData
# nacos.core.member.meta.site=
# nacos.core.member.meta.adweight=
# nacos.core.member.meta.weight=
### MemberLookup
### Addressing pattern category, If set, the priority is highest
# nacos.core.member.lookup.type=[file,address-server]
## Set the cluster list with a configuration file or command-line argument
# nacos.member.list=192.168.16.101:8847?raft_port=8807,192.168.16.101?raft_port=8808,192.168.16.101:8849?raft_port=8809
## for AddressServerMemberLookup
# Maximum number of retries to query the address server upon initialization
# nacos.core.address-server.retry=5
## Server domain name address of [address-server] mode
# address.server.domain=jmenv.tbsite.net
## Server port of [address-server] mode
# address.server.port=8080
## Request address of [address-server] mode
# address.server.url=/nacos/serverlist

#*************** JRaft Related Configurations ***************#
### Sets the Raft cluster election timeout, default value is 5 second
# nacos.core.protocol.raft.data.election_timeout_ms=5000
### Sets the amount of time the Raft snapshot will execute periodically, default is 30 minute
# nacos.core.protocol.raft.data.snapshot_interval_secs=30
### raft internal worker threads
# nacos.core.protocol.raft.data.core_thread_num=8
### Number of threads required for raft business request processing
# nacos.core.protocol.raft.data.cli_service_thread_num=4
### raft linear read strategy. Safe linear reads are used by default, that is, the Leader tenure is confirmed by heartbeat
# nacos.core.protocol.raft.data.read_index_type=ReadOnlySafe
### rpc request timeout, default 5 seconds
# nacos.core.protocol.raft.data.rpc_request_timeout_ms=5000

#*************** Distro Related Configurations ***************#
### Distro data sync delay time, when sync task delayed, task will be merged for same data key. Default 1 second.
# nacos.core.protocol.distro.data.sync.delayMs=1000
### Distro data sync timeout for one sync data, default 3 seconds.
# nacos.core.protocol.distro.data.sync.timeoutMs=3000
### Distro data sync retry delay time when sync data failed or timeout, same behavior with delayMs, default 3 seconds.
# nacos.core.protocol.distro.data.sync.retryDelayMs=3000
### Distro data verify interval time, verify synced data whether expired for a interval. Default 5 seconds.
# nacos.core.protocol.distro.data.verify.intervalMs=5000
### Distro data verify timeout for one verify, default 3 seconds.
# nacos.core.protocol.distro.data.verify.timeoutMs=3000
### Distro data load retry delay when load snapshot data failed, default 30 seconds.
# nacos.core.protocol.distro.data.load.retryDelayMs=30000

Nginx

docker run --name=nginx --net=host \
  -v /root/nginx/html:/usr/share/nginx/html \
  -v /root/nginx/conf/nginx.conf:/etc/nginx/nginx.conf \
  -v /root/nginx/logs:/var/log/nginx \
  -v /root/nginx/conf.d:/etc/nginx/conf.d \
  -d nginx

Note: when -v is specified, the host path shadows the corresponding container path. If you want custom configuration, put your config files into nginx.conf and conf.d before starting; otherwise remove those two -v options so the image's default configuration is used.
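
For example, a minimal custom site can be seeded before the first start (a sketch; it assumes you have also placed a valid nginx.conf at /root/nginx/conf/nginx.conf):

# Create a simple virtual host on the host side
cat > /root/nginx/conf.d/default.conf <<'EOF'
server {
    listen 80;
    server_name localhost;
    location / {
        root /usr/share/nginx/html;
        index index.html;
    }
}
EOF
echo 'hello from nginx' > /root/nginx/html/index.html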

Gitlab

Pull the GitLab image:

docker pull gitlab/gitlab-ce:latest

Run the container:

docker run --name=gitlab -p 9980:80 -p 9922:22 \
  -v /opt/docker/gitlab/config:/etc/gitlab \
  -v /opt/docker/gitlab/logs:/var/log/gitlab \
  -v /opt/docker/gitlab/data:/var/opt/gitlab \
  -d gitlab/gitlab-ce

Note: the container uses port 22 internally, so do not use --net=host here; it would conflict with the host's own port 22.

Modifying the configuration

Enter the container:

docker exec -it gitlab /bin/bash

Edit gitlab.rb and set the GitLab access address; use the host machine's IP here:

vi /etc/gitlab/gitlab.rb

# Add the following:
# GitLab access address; a domain name also works. Without an explicit port it defaults to 80.
external_url 'http://192.168.124.194'
# SSH host IP
gitlab_rails['gitlab_ssh_host'] = '192.168.124.194'
# SSH port
gitlab_rails['gitlab_shell_ssh_port'] = 9922

# Exit vi, then apply the configuration:
gitlab-ctl reconfigure

Change the access port:

vi /opt/gitlab/embedded/service/gitlab-rails/config/gitlab.yml

Find the following section and change the default port 80 to the custom one:

...
  gitlab:
    host: 192.168.124.194
    port: 80 # change this to 9980
    https: false
...

Restart GitLab, then exit the container:

gitlab-ctl restart
exit

The access URL is: http://192.168.124.194:9980/

The first login uses the root account; view its initial password with:

docker exec -it gitlab cat /etc/gitlab/initial_root_password

Resetting a password

A user's password can be set directly from the GitLab Rails console:

# Enter the container
docker exec -it gitlab /bin/bash

# Open the Rails console
gitlab-rails console -e production

# Look up the user with id 1 (the superadmin)
user = User.where(id: 1).first
# Change the password to abc123456
user.password = 'abc123456'
# Save
user.save!
# Exit
exit

Miscellaneous

Set a container's timezone to China Standard Time:

docker cp /usr/share/zoneinfo/Asia/Shanghai {container_id}:/etc/localtime
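
You can confirm the change took effect (a quick check; the output should show CST, UTC+8):

docker exec {container_id} date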

FAQ

Docker install error on CentOS

If installing Docker fails with a container-selinux dependency error similar to the following:

Error: Package: containerd.io-1.2.13-3.2.el7.x86_64 (docker-ce-stable)
       Requires: container-selinux >= 2:2.74
Error: Package: 3:docker-ce-19.03.12-3.el7.x86_64 (docker-ce-stable)
       Requires: container-selinux >= 2:2.74
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest

This error means container-selinux is missing or too old. The usual yum repos do not carry the package, so install the EPEL repository first, install container-selinux from it, and then install docker-ce:

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
yum install epel-release
yum makecache
yum install container-selinux