Jan 05 2022
 

Security scanning of container images is an increasingly common topic, and DevSecOps is getting more attention. I wanted to verify whether the current K8s cluster contains the Log4j vulnerability, so how should that be done? Quite a few vendors are talking about container security these days, so it pays to build up my own skills first and have something to discuss.

Trivy is an open-source tool from aqua (a company focused on cloud-native security).

At least for now, it can scan:

  • container images
  • container image tar archives
  • k8s and terraform deployment files

Installation

The official docs describe several ways to install:

Official documentation

For me, the rpm package is the simplest way to install.

wget https://hub.fastgit.org/aquasecurity/trivy/releases/download/v0.21.2/trivy_0.21.2_Linux-64bit.rpm
rpm -ivh trivy_0.21.2_Linux-64bit.rpm

# which trivy
/usr/local/bin/trivy

# trivy -v
Version: 0.21.2

It is written in Go, so a single executable is all you need.

Security scanners usually need to download a vulnerability database. Trivy updates its database frequently, and it downloads from GitHub, which is very slow here, so an offline copy is needed.

  • https://github.com/aquasecurity/trivy-db/releases

Download trivy-offline.db.tgz and put it in the trivy cache directory. The default cache directory location shows up in the help output:

# trivy -h
NAME:
   trivy - A simple and comprehensive vulnerability scanner for containers

USAGE:
   trivy [global options] command [command options] target

VERSION:
   0.21.2

COMMANDS:
   image, i          scan an image
   filesystem, fs    scan local filesystem for language-specific dependencies and config files
   rootfs            scan rootfs
   repository, repo  scan remote repository
   client, c         client mode
   server, s         server mode
   config, conf      scan config files
   plugin, p         manage plugins
   help, h           Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --quiet, -q        suppress progress bar and log output (default: false) [$TRIVY_QUIET]
   --debug, -d        debug mode (default: false) [$TRIVY_DEBUG]
   --cache-dir value  cache directory (default: "/root/.cache/trivy") [$TRIVY_CACHE_DIR]
   --help, -h         show help (default: false)
   --version, -v      print the version (default: false)

Copy trivy-offline.db.tgz into the cache directory and extract it:

cp /root/trivy-offline.db.tgz .cache/trivy/db/
cd .cache/trivy/db/
tar zxvf trivy-offline.db.tgz 
# ls
metadata.json  trivy.db  trivy-offline.db.tgz
rm trivy-offline.db.tgz
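The manual steps above can be wrapped in a small reusable function (my own sketch, not from the trivy docs; the two path arguments are hypothetical):

```shell
# Hypothetical helper: unpack an offline trivy DB tarball into a given
# cache directory (the default cache dir is /root/.cache/trivy).
install_offline_db() {
  db_tgz=$1
  cache_dir=$2
  mkdir -p "$cache_dir/db"
  tar zxf "$db_tgz" -C "$cache_dir/db"
}

# install_offline_db /root/trivy-offline.db.tgz /root/.cache/trivy
```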

At this point the offline DB is ready. When scanning images, remember to add the --skip-update flag.

The software moves fast and the flags change a lot, so watch the version number; even 0.21 and 0.22 differ in their flags.

Scanning a first image

# trivy image --skip-update  alpine:3.15.0
2022-01-05T10:34:24.858+0800	INFO	Detected OS: alpine
2022-01-05T10:34:24.858+0800	INFO	Detecting Alpine vulnerabilities...
2022-01-05T10:34:24.858+0800	INFO	Number of language-specific files: 0

alpine:3.15.0 (alpine 3.15.0)
=============================
Total: 0 (UNKNOWN: 0, LOW: 0, MEDIUM: 0, HIGH: 0, CRITICAL: 0)

Find an image with the Log4j vulnerability and verify:

docker pull elasticsearch:5.6.13
docker tag elasticsearch:5.6.13 hub.bj.sugon.tech:5000/elasticsearch:5.6.13
docker push hub.bj.sugon.tech:5000/elasticsearch:5.6.13

# trivy image --skip-update --severity CRITICAL hub.bj.sugon.tech:5000/elasticsearch:5.6.13 | grep 'CVE-2021-44228'
| org.apache.logging.log4j:log4j-api  | CVE-2021-44228   |          | 2.11.1            | 2.15.0        | log4j-core: Remote code execution     |
| org.apache.logging.log4j:log4j-core | CVE-2021-44228   |          |                   | 2.15.0        | log4j-core: Remote code execution     |

I pulled the image locally and pushed it into a private registry before scanning; scanning directly against Docker Hub is slow.

You can also scan the images running in the current k8s cluster: first enumerate all the images in the cluster, then scan each one to see whether it has the Log4j vulnerability.
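One way to sketch that idea (the helper names are my own; assumes kubectl points at the cluster and the offline DB is already in place):

```shell
# Hypothetical helpers: enumerate every unique container image running in the
# cluster, then scan each one with trivy against the offline DB.
list_images() {
  kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{range .spec.containers[*]}{.image}{"\n"}{end}{end}' \
    | sort -u
}

scan_all() {
  list_images | while read -r img; do
    echo "=== $img ==="
    trivy image --skip-update --severity CRITICAL "$img" \
      | grep 'CVE-2021-44228' || echo "  no CVE-2021-44228 match"
  done
}

# scan_all   # uncomment to run against a live cluster
```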

References

  • https://medium.com/linkbynet/cve-2021-44228-finding-log4j-vulnerable-k8s-pods-with-bash-trivy-caa10905744d
Dec 01 2021
 

Notes on day-to-day usage.

## Create a session:
screen -S abc

## List sessions:

screen -ls

## Reattach:

screen -r abc

## Detach:

Ctrl-a then d


## Delete a session:
screen -S abc -X quit
screen -wipe

References

  • https://www.cnblogs.com/mchina/archive/2013/01/30/2880680.html
Nov 26 2021
 

A quick record of the bare-bones process; I'll flesh it out later, now that I know how to do it.

export LIBGUESTFS_BACKEND=direct
export image_name='focal-server-cloudimg-amd64.img' 

virt-customize -a $image_name --run-command 'sudo cp -a /etc/apt/sources.list /etc/apt/sources.list.bak'

virt-customize -a $image_name --run-command 'sudo sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list'

virt-customize -a $image_name --run-command 'sudo sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list'

virt-customize -a $image_name --install qemu-guest-agent
virt-customize -a $image_name --run-command 'systemctl enable qemu-guest-agent'
virt-customize -a $image_name --timezone "Asia/Shanghai" 

virt-customize -a $image_name --edit '/etc/ssh/sshd_config:s/PasswordAuthentication no/PasswordAuthentication yes/'
virt-customize -a $image_name --edit '/etc/ssh/sshd_config:s/#PermitRootLogin prohibit-password/PermitRootLogin yes/'

#zstack
virt-customize -a $image_name --firstboot-command 'sudo /bin/bash -c "$(curl -s -S http://169.254.169.254/vm-tools.sh)"'
virt-customize -a $image_name --firstboot-command "sudo sed -i 's/9100/9104/g' /usr/local/zstack/zwatch-vm-agent/conf.yaml"
virt-customize -a $image_name --firstboot-command "sudo /bin/systemctl restart zwatch-vm-agent.service"
virt-customize -a $image_name --run-command 'mv /usr/lib/virt-sysprep/scripts/0001-sudo--bin-systemctl-restart-zwatch-vm-agent-service /usr/lib/virt-sysprep/scripts/0002-sudo--bin-systemctl-restart-zwatch-vm-agent-service'

# DIB (this part needs adjusting)
export image_name='ubuntu-20.04.qcow2'
export DIB_RELEASE=focal
disk-image-create -a amd64 -o  $image_name -x --image-size 80 vm ubuntu dhcp-all-interfaces

#extend disk
qemu-img info focal-server-cloudimg-amd64.img
virt-filesystems --long --parts --blkdevs -h -a focal-server-cloudimg-amd64.img 
qemu-img resize focal-server-cloudimg-amd64.img +78g
qemu-img info focal-server-cloudimg-amd64.img
Nov 24 2021
 

A pod, a container, with two NICs: I used to think that was absurd, a solution looking for a problem. Then a real-world scenario came along where the requirement was genuine.

Let's run through it once against the official docs to get a feel for it.

What I need to implement is the data network in the diagram below.

I installed with kubekey, using 2 VMs for the experiment and Calico as the network plugin.

# kubectl get nodes
NAME       STATUS   ROLES                         AGE     VERSION
master01   Ready    control-plane,master,worker   3h54m   v1.22.1
work01     Ready    worker                        3h54m   v1.22.1

Download the multus code:

git clone https://github.com/k8snetworkplumbingwg/multus-cni.git && cd multus-cni
cat ./deployments/multus-daemonset.yml | kubectl apply -f -

I haven't figured out why the official docs use multus-daemonset-thick-plugin.yml while every other write-up uses multus-daemonset.yml.

If all goes well:

# kubectl get pods -n kube-system | grep -i multus
kube-multus-ds-tdcf4                           1/1     Running   0          133m
kube-multus-ds-xlf4m                           1/1     Running   0          133m

macvlan-conf.yaml

master: eth0 is the key setting here. If the machine had another NIC, I could presumably point master at that instead.

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-conf
spec:
  config: '{
      "cniVersion": "0.3.0",
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.4.0/24",
        "rangeStart": "192.168.4.200",
        "rangeEnd": "192.168.4.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.4.1"
      }
    }'

Adjust the macvlan subnet to suit your own environment.

kubectl create -f macvlan-conf.yaml
kubectl get network-attachment-definitions
kubectl describe network-attachment-definitions macvlan-conf

Verification

samplepod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: samplepod
  annotations:
    k8s.v1.cni.cncf.io/networks: macvlan-conf
spec:
  containers:
  - name: samplepod
    command: ["/bin/ash", "-c", "trap : TERM INT; sleep infinity & wait"]
    image: alpine

kubectl create -f samplepod.yaml
kubectl exec -it samplepod -- ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2: tunl0@NONE: <NOARP> mtu 1480 qdisc noop state DOWN qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: eth0@if19: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1440 qdisc noqueue state UP 
    link/ether c6:c1:90:ac:c4:e4 brd ff:ff:ff:ff:ff:ff
    inet 10.233.106.10/32 brd 10.233.106.10 scope global eth0
       valid_lft forever preferred_lft forever
5: net1@tunl0: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 82:84:7e:da:6f:b8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.4.200/24 brd 192.168.4.255 scope global net1
       valid_lft forever preferred_lft forever

Check the pod:

# kubectl describe pod samplepod
Name:         samplepod
Namespace:    default
Priority:     0
Node:         master01/192.168.20.21
Start Time:   Wed, 24 Nov 2021 16:48:04 +0800
Labels:       <none>
Annotations:  cni.projectcalico.org/containerID: 75777c27377709cfba50d9f8ef59209869c26d0707f0ad38343351eae904d16a
              cni.projectcalico.org/podIP: 10.233.106.10/32
              cni.projectcalico.org/podIPs: 10.233.106.10/32
              k8s.v1.cni.cncf.io/network-status:
                [{
                    "name": "k8s-pod-network",
                    "ips": [
                        "10.233.106.10"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "default/macvlan-conf",
                    "interface": "net1",
                    "ips": [
                        "192.168.4.200"
                    ],
                    "mac": "82:84:7e:da:6f:b8",
                    "dns": {}
                }]
              k8s.v1.cni.cncf.io/networks: macvlan-conf
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "k8s-pod-network",
                    "ips": [
                        "10.233.106.10"
                    ],
                    "default": true,
                    "dns": {}
                },{
                    "name": "default/macvlan-conf",
                    "interface": "net1",
                    "ips": [
                        "192.168.4.200"
                    ],
                    "mac": "82:84:7e:da:6f:b8",
                    "dns": {}
                }]
Status:       Running
IP:           10.233.106.10
IPs:
  IP:  10.233.106.10
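From the network-status annotation shown above, the macvlan address of a pod can be extracted with a small helper (my own sketch, not from the multus docs; assumes kubectl access and that jq is installed):

```shell
# Hypothetical helper: read the k8s.v1.cni.cncf.io/network-status annotation
# of a pod and print the first IP of its net1 (macvlan) interface.
net1_ip() {
  kubectl get pod "$1" \
    -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}' \
    | jq -r '.[] | select(.interface == "net1") | .ips[0]'
}

# net1_ip samplepod   # would print 192.168.4.200 for the pod above
```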
Nov 24 2021
 

I've migrated docker images twice recently and picked up a few lessons. You definitely need a machine outside the country.

If docker is available:

docker pull ghcr.io/k8snetworkplumbingwg/multus-cni:stable
docker save ghcr.io/k8snetworkplumbingwg/multus-cni > ./multus-cni.tar
## download the tar to a node that runs the docker service
docker load < ./multus-cni.tar
docker images

If the overseas machine has no docker, there is another way to play it:

# yum install works too, but the version is old; I built v1.3 from source
./skopeo --insecure-policy copy docker://ghcr.io/k8snetworkplumbingwg/multus-cni:stable docker-archive:multus-cni.tar

Unfortunately, skopeo cannot yet download through a proxy.

  • https://github.com/containers/skopeo/issues/1433

Nov 07 2021
 

These days everyone should be running a 64-bit system, and on 64-bit you have to install both the 32-bit and 64-bit versions of these two files. Noting it down.

  • https://docs.microsoft.com/en-US/cpp/windows/latest-supported-vc-redist?view=msvc-170

Installing both the x86 and x64 packages of the Visual Studio 2015, 2017, 2019, or 2022 redistributable solves the problem.

VCRUNTIME140.DLL

https://www.dll-files.com/vcruntime140.dll.html

You must install both x86 and x64 to fix it.

msvcp140.dll

The same logic applies here.

https://www.sts-tutorial.com/download/msvcp140

-- 64-Bit Windows:
          - 32-Bit Version - C:\Windows\SysWOW64
          - 64-Bit Version - C:\Windows\System32

Download the two files and place each in its respective directory, and that's it.