Configure a Docker container to connect to the MySQL service on the host

Add the following entry to docker-compose.yml (the host-gateway value requires Docker 20.10 or later):

extra_hosts:
  - "host.docker.internal:host-gateway"


So the service definition becomes:
services:
  app:
    image: ...
    container_name: ...
    ...
    extra_hosts:
      - "host.docker.internal:host-gateway"


At the same time, MySQL needs to listen on Docker's bridge interface:


# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq state UP group default qlen 1000
    link/ether 00:0c:ff:cc:af:af brd ff:ff:ff:ff:ff:ff
    altname enp0s18
    altname ens18
    inet 172.16.212.135/24 brd 172.16.212.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80:20c:29ff:3::bca/64 scope global
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 76:1f:8c:eb:62:f8 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
25: br-c5516318dfee: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 7e:e9:0e:61:6e:cb brd ff:ff:ff:ff:ff:ff
    inet 172.18.0.1/16 brd 172.18.255.255 scope global br-c5516318dfee
       valid_lft forever preferred_lft forever
    inet6 fe80::7ce9:eff:fe61:6ecb/64 scope link
       valid_lft forever preferred_lft forever


Here you can see that the docker0 interface has the address 172.17.0.1, which is the address host.docker.internal resolves to inside the container.
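
To double-check, you can resolve the name from inside a running container (a quick sanity check; "app" is the service name from the compose file above, and this assumes the image ships getent):

docker compose exec app getent hosts host.docker.internal
# expected output:
# 172.17.0.1      host.docker.internal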


Open the MySQL configuration file, located at:
/etc/mysql/mysql.conf.d/mysqld.cnf


Change
bind-address            = 127.0.0.1
to
bind-address            = 127.0.0.1,172.17.0.1
(listing multiple comma-separated addresses in bind-address requires MySQL 8.0.13 or later), then restart MySQL:
systemctl restart mysql
and the configuration is complete.
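
As a quick connectivity test from inside the container (a sketch: "appuser" is a placeholder, the image must have a mysql client, and the MySQL account must be allowed to connect from the Docker subnet, e.g. 'appuser'@'172.17.%'):

docker compose exec app mysql -h host.docker.internal -P 3306 -u appuser -p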

Edits to a host file bind-mounted into a Docker container do not show up inside the container

A configuration file was mounted into a container with the docker -v flag.
After editing it on the host, the file seen inside the container was unchanged.

The reason is that -v mounts a file (or directory) by inode; after editing the file with vi the inode changes, but the container still holds the old inode.

Reference: https://blog.csdn.net/biao0309/article/details/105186106
Editors like vim do not save a file in place; they use a backup-and-replace strategy:
editing creates a new file, and on save the backup replaces the original, at which point the file's inode changes.
The file at the original inode is never actually modified, so the file inside the container does not change.
When the container is restarted, the new inode is mounted.
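
Two common workarounds, if you want edits to show up without recreating the container: mount the parent directory instead of the single file, or tell vim to overwrite the file in place so the inode is preserved:

# Option 1: mount the directory; name lookups inside a mounted
# directory always resolve to the current inode
docker run -v /data/www/rgs/conf:/usr/local/etc/php-fpm.d ...

# Option 2: make vim write in place instead of rename-and-replace
# (run inside vim, or put it in ~/.vimrc)
:set backupcopy=yes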

# Test: check that the config file's current inode is 4068031
root@server:/data/www/rgs/conf# stat www.conf 
  File: www.conf
  Size: 810       	Blocks: 8          IO Block: 4096   regular file
Device: fd00h/64768d	Inode: 4068031     Links: 1
Access: (0644/-rw-r--r--)  Uid: (   33/www-data)   Gid: (   33/www-data)
Access: 2023-10-07 05:50:30.326634010 +0000
Modify: 2023-10-07 05:50:30.326634010 +0000
Change: 2023-10-07 05:50:30.326634010 +0000
 Birth: -

# Change anything; adding a blank line is enough.
root@server:/data/www/rgs/conf# vi www.conf 

# Check again: the inode has changed to 4068032
root@server:/data/www/rgs/conf# stat www.conf 
  File: www.conf
  Size: 811       	Blocks: 8          IO Block: 4096   regular file
Device: fd00h/64768d	Inode: 4068032     Links: 1
Access: (0644/-rw-r--r--)  Uid: (   33/www-data)   Gid: (   33/www-data)
Access: 2023-10-07 05:58:27.874527154 +0000
Modify: 2023-10-07 05:58:27.874527154 +0000
Change: 2023-10-07 05:58:27.874527154 +0000
 Birth: -


# Inside the container it is still 4068031
root@404b0ceb1aea:/var/www/html# cd /usr/local/etc/php-fpm.d/
root@404b0ceb1aea:/usr/local/etc/php-fpm.d# stat www.conf
  File: www.conf
  Size: 810       	Blocks: 8          IO Block: 4096   regular file
Device: fd00h/64768d	Inode: 4068031     Links: 0
Access: (0644/-rw-r--r--)  Uid: (   33/www-data)   Gid: (   33/www-data)
Access: 2023-10-07 05:58:22.458398126 +0000
Modify: 2023-10-07 05:50:30.326634010 +0000
Change: 2023-10-07 05:58:27.874527154 +0000
 Birth: 2023-10-07 05:50:30.326634010 +0000
root@404b0ceb1aea:/usr/local/etc/php-fpm.d# 

docker push to a private Harbor registry fails with "unknown blob"

An image was built locally and pushed to the private registry.
Running docker push against the private Harbor registry failed with an unknown blob error:

[root@node1 ~]# docker push reg.xxx.com/prod/filebeat-java:0426
The push refers to repository [reg.xxx.com/prod/filebeat-java]
e52576dc1f49: Pushing [==================================================>]  3.584kB
ffdb94571df7: Pushing [==================================================>]  372.7kB
47fc804728ff: Pushing [>                                                  ]  545.3kB/80.14MB
dab4e68f20a2: Pushing [==================================================>]  6.144kB
6435966ee18f: Pushing [==================================================>]  4.096kB
b1c057a951b8: Waiting 
df654d36e69e: Waiting 
77b174a6a187: Waiting 
unknown blob

Searching online suggested commenting out the Host request header set in the nginx reverse proxy:
#proxy_set_header Host $host;
I did not dig into the exact cause; presumably the registry uses the Host header when building the blob-upload URLs, so a Host value rewritten by the proxy (here with a nonstandard port in play) breaks the multi-step upload and yields unknown blob.
PS: my Harbor runs on a changed port, with nginx reverse-proxying it (see the next section).

Changing Harbor's default port 80

When I set up the private registry before, the HTTPS certificate was configured directly in Harbor, but that occupies ports 80 and 443, which makes adding other virtual hosts on the same machine awkward. Here Harbor is moved to other ports instead,
and nginx on the physical host reverse-proxies Harbor and terminates HTTPS.

Steps to modify Harbor:
1. Configuration file

[root@node1 harbor]# vim harbor.cfg 
ui_url_protocol = http   # plain http here, instead of changing it to https as before

2. Compose file

[root@node1 harbor]# vim docker-compose.yml 
  proxy:
    image: goharbor/nginx-photon:v1.7.4
    container_name: nginx
    restart: always
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
      - NET_BIND_SERVICE
    volumes:
      - ./common/config/nginx:/etc/nginx:z
    networks:
      - harbor
    dns_search: .
    ports:
      - 7080:80  # 80 changed
      - 3443:443 # 443 changed
      - 4443:4443

3. Registry template configuration file

[root@node1 harbor]# vim common/templates/registry/config.yml 
auth:
  token:
    issuer: harbor-token-issuer
    realm: $public_url:7080/service/token
    rootcertbundle: /etc/registry/root.crt
    service: harbor-registry

Change realm: $public_url/service/token to realm: $public_url:7080/service/token.
Without this change, docker login fails with:
Error response from daemon: Get https://reg.xxx.com/v2/: unauthorized: authentication required

# Install Harbor

[root@node1 harbor]# sh install.sh 

# On the physical host, configure nginx to reverse-proxy port 7080 and add HTTPS.
location / {
	proxy_pass http://127.0.0.1:7080;
}
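
For reference, a minimal sketch of the surrounding server block (certificate paths are placeholders; client_max_body_size 0 keeps nginx from rejecting large image layers, and the Host header is deliberately not set, per the unknown blob section above):

server {
	listen 443 ssl;
	server_name reg.xxx.com;

	ssl_certificate     /etc/nginx/ssl/reg.xxx.com.crt;  # placeholder
	ssl_certificate_key /etc/nginx/ssl/reg.xxx.com.key;  # placeholder

	# do not cap request body size; image layers can be large
	client_max_body_size 0;

	location / {
		# note: no proxy_set_header Host here, see the unknown blob section
		proxy_pass http://127.0.0.1:7080;
	}
}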

Then docker login reg.xxx.com works as usual.

fluentd receives Docker logs and forwards them to Kafka (elasticsearch + fluentd + kafka + logstash + kibana)

OS: CentOS 7
IP: 192.168.10.74

Component versions:
logstash 6.6.2
elasticsearch 6.5.4
kibana 6.5.4
fluentd 1.3.2
kafka 2.12-2.3.0
All components are installed on this machine.

Goal:
Docker containers (application logs emitted as JSON) send their logs straight to fluentd via the fluentd log driver.
fluentd produces the received logs into a Kafka message queue.
logstash consumes the logs from Kafka, processes them, and outputs them to elasticsearch for searching.
kibana for display.

# Kafka message queue
Setup reference: https://www.rootop.org/pages/4508.html
Skipped here.
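
The topic used below can be created up front (a sketch, assuming a single-node Kafka on this host; kafka-topics.sh ships with the Kafka 2.12-2.3.0 tarball):

bin/kafka-topics.sh --create --bootstrap-server 192.168.10.74:9092 \
  --replication-factor 1 --partitions 1 --topic test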

# Start a fluentd container

[root@localhost]# docker pull fluentd
[root@localhost]# docker run -dit --name fluentd -p 24224:24224 -p 24224:24224/udp docker.io/fluent/fluentd

# Enter the fluentd container and configure forwarding to Kafka
Installing the kafka plugin in the fluentd docker image; official docs: https://docs.fluentd.org/output/kafka
1. Install the plugin

# fluent-gem install fluent-plugin-kafka

2. Edit the configuration file

# vi /fluentd/etc/fluent.conf

<source>
  @type  forward
  @id    input1
  @label @mainstream
  port  24224
</source>

<filter **>
  @type stdout
</filter>

<label @mainstream>
  <match docker.**>
    @type file
    @id   output_docker1
    path         /fluentd/log/docker.*.log
    symlink_path /fluentd/log/docker.log
    append       true
    time_slice_format %Y%m%d
    time_slice_wait   1m
    time_format       %Y%m%dT%H%M%S%z
  </match>

  <match **>
    @type kafka2

    # list of seed brokers; multiple comma-separated addresses may be given, e.g. host1:9092,host2:9092
    brokers 192.168.10.74:9092
    use_event_time true

    # buffer settings
    <buffer topic>
      @type file
      # the path below may need to be created manually and made writable; I simply chmod 777'd it
      path /var/log/td-agent/buffer/td
      flush_interval 3s
    </buffer>

    # data type settings
    <format>
      @type json
    </format>

    # record field whose value selects the Kafka topic
    topic_key test
    # fallback topic used when that field is absent
    default_topic test
    get_kafka_client_log true
    # producer settings
    required_acks -1
    compression_codec gzip
  </match>
</label>

Save, exit, and restart the container.
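
Note that a plugin installed inside a running container is lost if the container is ever recreated; one way to make it permanent is a small custom image (a sketch, untested):

FROM fluent/fluentd
USER root
RUN fluent-gem install fluent-plugin-kafka
COPY fluent.conf /fluentd/etc/fluent.conf
USER fluent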

# Send container logs to fluentd via the log driver

[root@localhost]# docker run -dit --name name-api-2 -v /home/dockermount/api:/mnt --publish-all --log-driver=fluentd --log-opt fluentd-address=192.168.10.74:24224 --log-opt fluentd-async-connect java8t

Official Docker docs for the fluentd log driver: https://docs.docker.com/config/containers/logging/fluentd/
fluentd-async-connect # this option keeps the container from exiting when fluentd cannot be reached. From the docs:
"Docker connects to Fluentd in the background. Messages are buffered until the connection is established. Defaults to false."
"If container cannot connect to the Fluentd daemon, the container stops immediately unless the fluentd-async-connect option is used."
So far, container logs can be written to Kafka.
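
You can verify by consuming the topic directly (from the Kafka install directory):

bin/kafka-console-consumer.sh --bootstrap-server 192.168.10.74:9092 --topic test --from-beginning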

# Install elasticsearch and kibana

[root@localhost log]# docker run -dit --name es -p 9200:9200 -p 9200:9200/udp elasticsearch:6.5.4
[root@localhost log]# docker run -dit --name kibana -e ELASTICSEARCH_HOST=http://192.168.10.74:9200 -e ELASTICSEARCH_URL=http://192.168.10.74:9200 -p 5601:5601 kibana:6.5.4
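
If the es container exits right away, check vm.max_map_count on the host; Elasticsearch requires at least 262144:

sysctl -w vm.max_map_count=262144
# persist across reboots:
echo "vm.max_map_count=262144" >> /etc/sysctl.conf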

# Configure logstash; installed directly from the rpm, steps omitted.

[root@localhost ~]# cd /usr/share/logstash/
[root@localhost logstash]# cat kafka.conf 
input {

	kafka {
		bootstrap_servers => ["192.168.10.74:9092"]
		client_id => "test1"
		group_id => "test1"
		auto_offset_reset => "latest"
		consumer_threads => 1
		decorate_events => false
		topics => ["test"]
		type => "fromk"
	}
}

filter {

    json {
        # parse the JSON in the message field and promote its keys/values
        # into top-level fields of the event
        source => "message"
        # copy the nested log field into a new field so it can be parsed again below
        add_field => { "@javalog" => "%{log}" }
    }

    # Also import the JSON nested inside the JSON (JSON in JSON) into es,
    # e.g. {"xxx":"xxx","log":{"time":"xxx","path":"xxx"}}
    # i.e. import the contents of log

    # second JSON parse
    json {
        source => "@javalog"
        # drop the fields that are no longer needed
        remove_field => [ "log","@javalog" ]
    }
}

output {

	elasticsearch {
		hosts => "192.168.10.74"
		index => "jar-log-%{+YYYY.MM.dd}"
	}

	stdout {
		codec => rubydebug
	}

}

# Start logstash

[root@localhost logstash]# logstash -f kafka.conf

# The Java application inside the container logs in this format:

{"@timestamp":"2019-08-22T15:09:26.801+08:00","@version":"1","message":"运行时报错:","logger_name":"com.sailei.modules.test.controller.TestController","threa"level":"INFO","level_value":20000}

# The Docker fluentd log driver wraps each application log line with four metadata fields before sending it to fluentd (still JSON); a sample record follows this list:
container_id # The full 64-character container ID.
container_name # The container name at the time it was started. If you use docker rename to rename a container, the new name is not reflected in the journal entries.
source # stdout or stderr
log # The container log; the application's log line becomes the value of the log field
So logstash has to unwrap the nested JSON and promote the data in the log field to the root of the event, where it is easy to search.
(I originally meant to handle the JSON with filebeat, but the log field name clashes with an internal filebeat keyword, so the data in the log field could not be promoted to the root; that is why I switched to logstash.)