Category archives: LINUX

Building a k8s Cluster on CentOS (5): The Helm Package Manager

Helm is a tool for managing Kubernetes release packages; it works much like yum or npm.
Before: hand-written yaml -> xxx.yaml -> kubectl apply -f -> get the xxx component
Now: values.yaml -> xxx.yaml -> helm install/upgrade -> get the xxx component

PS: all of the following steps are run on the master node.
1. Install Helm
Helm 3.x removed the tiller dependency, so there is only a single helm binary:
cd /tmp
wget https://get.helm.sh/helm-v3.3.0-rc.1-linux-amd64.tar.gz
tar -zxvf helm-v3.3.0-rc.1-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

2. Configure environment variables
vi /etc/profile
Append:
export KUBECONFIG=/root/.kube/config
Then run:
source /etc/profile

3. Add Helm chart repositories
helm repo add stable https://kubernetes-charts.storage.googleapis.com/
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

4. Update the repositories and list their charts
helm repo update
helm search repo stable
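With the repositories in place, installing a chart follows the values.yaml -> helm install flow described above. A hypothetical sketch using the stable repo (the release name my-nginx and the value override are placeholders, not from the original post):

```shell
# Install the nginx-ingress chart as a release named "my-nginx";
# --set (or -f my-values.yaml) overrides defaults from the chart's values.yaml
helm install my-nginx stable/nginx-ingress --set controller.replicaCount=2

# Later, roll out changed values with an upgrade
helm upgrade my-nginx stable/nginx-ingress -f my-values.yaml
```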

To be continued...

References:
https://github.com/helm/helm/releases
https://helm.sh/docs/intro/quickstart/

Building a k8s Cluster on CentOS (4): Adding Worker Nodes

1. Run on node1 and node2:
kubeadm join 192.168.101.1:6443 --token mu949z.xkhkw4tq7t79z4v6 \
--discovery-token-ca-cert-hash sha256:0a381d7f750bda8d639b7132bf4db942710d2042b2cef0c6ffe6aa49a4603f5d \
--ignore-preflight-errors=Swap

2. The output:
W0713 04:55:55.810886 12707 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING Swap]: running with swap on is not supported. Please disable swap
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "mu949z"
To see the stack trace of this error execute with --v=5 or higher
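The JWS-signature error above usually means the bootstrap token has expired; kubeadm tokens are only valid for 24 hours by default. A common fix (not part of the original post) is to generate a fresh token on the master:

```shell
# On the master: print a ready-to-run join command with a new token
kubeadm token create --print-join-command
# List existing tokens and their TTLs to confirm
kubeadm token list
```

Re-run the printed join command on each worker node, adding --ignore-preflight-errors=Swap if swap is still enabled.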


Building a k8s Cluster on CentOS (3): Installing the Pod Network

1. Install the flannel network
cd ~
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

2. The pod concept
A pod is the smallest unit of management in k8s; it is a group of one or more containers.

3. The flannel network
The network over which the pods in the cluster communicate with one another. Kubernetes supports a number of CNI network drivers, including Flannel, Calico, and Weave Net.

4. Check the cluster node status again; it should now be Ready
kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 2d21h v1.18.5
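As an extra sanity check (not in the original post), the flannel DaemonSet pods themselves can be inspected; the kube-flannel.yml manifest labels them with app=flannel:

```shell
# All flannel pods in kube-system should be Running once the network is up
kubectl get pods -n kube-system -l app=flannel -o wide
```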


Building a k8s Cluster on CentOS (2): Initializing the Master Node

III. Create a single-control-plane (master) cluster
1. Initialize the master node
Because this is a test environment, some preflight errors are ignored:
kubeadm init --pod-network-cidr=10.122.0.0/16 \
--ignore-preflight-errors=Swap

Flag descriptions:
--apiserver-advertise-address: which of the master's IP addresses to use for communicating with the other cluster nodes
--service-cidr: the service network range, i.e. the address block that load-balancing VIPs are drawn from
--pod-network-cidr: the pod network range, i.e. the address block pods get their IPs from
--ignore-preflight-errors=: preflight errors to ignore
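A fuller invocation using every flag described above might look like the sketch below; the addresses are illustrative, matching this series' node layout. Note that flannel's kube-flannel.yml defaults its pod network to 10.244.0.0/16, so either pass that CIDR here or edit the net-conf in the flannel manifest to match.

```shell
# Hypothetical full init on the master (192.168.101.1).
# 10.96.0.0/12 is kubeadm's default service CIDR; 10.244.0.0/16 matches
# flannel's default net-conf.
kubeadm init \
  --apiserver-advertise-address=192.168.101.1 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=Swap
```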

2. If initialization fails, look for the cause:
systemctl status kubelet
journalctl -xeu kubelet

Reset:
systemctl stop kubelet
kubeadm reset
systemctl daemon-reload


Building a k8s Cluster on CentOS (1): Installing the Software

I. Environment preparation (all nodes)
1. Node information
k8s-master:192.168.101.1
k8s-node1:192.168.101.2
k8s-node2:192.168.101.3

2. System information
CentOS Linux release 7.8.2003 (Core)

3. Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0

4. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

5. Disable swap (optional in a test environment)
vi /etc/fstab
Remove the swap entry, then reboot.
PS: with swap disabled, an application that exhausts memory is killed by the system immediately, so problems surface promptly.
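Editing /etc/fstab only takes effect at the next boot; to turn swap off immediately on the running system as well (a standard companion step, not spelled out in the original):

```shell
# Disable all swap devices right now; the fstab edit covers future boots
swapoff -a
# Verify: the Swap line should report 0 used/total
free -h
```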

6. Set the hostname and hosts entries
Master node: k8s-master
Worker node 1: k8s-node1
Worker node 2: k8s-node2
vi /etc/hosts
1) On the k8s-master node
127.0.0.1 k8s-master
192.168.101.2 k8s-node1
192.168.101.3 k8s-node2

2) On the k8s-node1 node
127.0.0.1 k8s-node1
192.168.101.1 k8s-master
192.168.101.3 k8s-node2

3) On the k8s-node2 node
127.0.0.1 k8s-node2
192.168.101.1 k8s-master
192.168.101.2 k8s-node1


Setting up a private npm registry with verdaccio

verdaccio is a private registry for npm packages, roughly the npm equivalent of a private Maven repository. You can publish your own packages to it, and it also acts as a cache for public packages: anything not present locally is fetched from the central registry.

1. Install node.js
Skipped.

2. Install verdaccio
npm install -g verdaccio

3. Start the service
1) Start it
verdaccio -l 0.0.0.0:4873
*** WARNING: Verdaccio doesn't need superuser privileges. Don't run it under root! ***
warn --- config file - /root/.config/verdaccio/config.yaml
warn --- Verdaccio started
warn --- Plugin successfully loaded: verdaccio-htpasswd
warn --- Plugin successfully loaded: verdaccio-audit
warn --- http address - http://0.0.0.0:4873/ - verdaccio/4.7.2

The registry is now reachable externally at http://IP:4873, but it is still running in the foreground with its logs going to the console.
2) Add configuration
vi /root/.config/verdaccio/config.yaml
Append at the end:
listen: 0.0.0.0:4873
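With the listen address in config.yaml, verdaccio can be restarted without the -l flag. To run it in the background and point an npm client at it, a minimal sketch (IP stands for the server's address; a systemd unit or pm2 would be more robust than nohup):

```shell
# Server: run verdaccio detached, logging to a file
nohup verdaccio > /var/log/verdaccio.log 2>&1 &

# Client: use the private registry for all npm operations
npm config set registry http://IP:4873/
# Register/log in a user so that npm publish works
npm adduser --registry http://IP:4873/
```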


The CentOS 7 log management tool: journal

On CentOS 7, systemd records and manages logs through journal. The journal is stored in a binary format and is maintained with its management tools; the corresponding service is systemd-journald.
You will notice far fewer log files under /var/log/, while /var/log/journal takes up a large amount of disk space.

1. Check the service
[root@docker log]# systemctl status systemd-journald.service
● systemd-journald.service - Journal Service
Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static; vendor preset: disabled)
Active: active (running) since 五 2020-06-26 03:38:23 UTC; 1 day 11h ago
Docs: man:systemd-journald.service(8)
man:journald.conf(5)
Main PID: 79 (systemd-journal)
Status: "Processing requests…"
Tasks: 1
Memory: 7.0M
CGroup: /system.slice/systemd-journald.service
└─79 /usr/lib/systemd/systemd-journald
6月 19 13:54:37 docker systemd-journal[77]: Journal stopped
6月 19 13:54:52 docker systemd-journal[77]: Runtime journal is using 8.0M (max allowed 307.2M, trying to leave 460.8M free of 2.9G available → current limit 307.2M).
6月 19 13:54:52 docker systemd-journal[77]: Journal started
6月 19 13:54:53 docker systemd-journal[77]: Permanent journal is using 328.0M (max allowed 4.0G, trying to leave 4.0G free of 74.2G available → current limit 4.0G).
6月 19 13:54:53 docker systemd-journal[77]: Time spent on flushing to /var is 56.281ms for 31 entries.
6月 26 03:38:12 docker systemd-journal[77]: Journal stopped
6月 26 03:38:23 docker systemd-journal[79]: Runtime journal is using 8.0M (max allowed 307.2M, trying to leave 460.8M free of 2.9G available → current limit 307.2M).
6月 26 03:38:23 docker systemd-journal[79]: Journal started
6月 26 03:38:24 docker systemd-journal[79]: Permanent journal is using 328.0M (max allowed 4.0G, trying to leave 4.0G free of 47.7G available → current limit 4.0G).
6月 26 03:38:24 docker systemd-journal[79]: Time spent on flushing to /var is 51.714ms for 22 entries.

2. Check disk usage
[root@docker log]# journalctl --disk-usage
Archived and active journals take up 336.0M on disk.

3. Cleanup commands
Format:
journalctl --vacuum-size=BYTES: keep at most this much journal data on disk
journalctl --vacuum-time=TIME: remove journal entries older than this

Examples:
Keep only the last week of logs:
journalctl --vacuum-time=1w
Keep at most 500 MB of logs:
journalctl --vacuum-size=500M
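Vacuuming is a one-off cleanup; to cap the journal size permanently, set a limit in journald's own configuration (SystemMaxUse is documented in journald.conf(5)):

```shell
# In /etc/systemd/journald.conf, under [Journal], uncomment/set e.g.:
#   SystemMaxUse=500M
# then restart journald so the limit takes effect:
systemctl restart systemd-journald
```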

4. Follow the live log
journalctl -f

5. View logs for a specific unit
journalctl -xeu httpd

References:
https://www.cnblogs.com/leigepython/p/10302056.html

GitLab访问报错:HTTP 502: Whoops, GitLab is taking too much time to respond.

A strange symptom: GitLab sometimes returns a 502, but loads fine after a few refreshes, so the port is not simply occupied by another process.
A troubleshooting approach found online:
1. gitlab-ctl status
Check whether all processes started correctly.
2. /var/log/gitlab
Check the logs under this directory for errors.
3. gitlab-ctl tail [process name]
Watch the output of the corresponding process.

References:
https://www.liangzl.com/get-article-detail-27713.html
https://segmentfault.com/a/1190000017436142

rpmdb: BDB0113 Thread/process 9751/139849321973824 failed: BDB1507 Thread died in Berkeley DB library

yum update fails with:
error: rpmdb: BDB0113 Thread/process 9751/139849321973824 failed: BDB1507 Thread died in Berkeley DB library
error: db5 error(-30973) from dbenv->failchk: BDB0087 DB_RUNRECOVERY: Fatal error, run database recovery
error: cannot open Packages index using db5 - (-30973)
error: cannot open Packages database in /var/lib/rpm
CRITICAL:yum.main:
Error: rpmdb open failed

Fix:
cd /var/lib/rpm
rm -rf __db*
rpm --rebuilddb
yum repolist

Then run:
yum update