K8S

K8s Openshift

OpenShift v3.11.0 tracks Kubernetes 1.11.

All-in-one OpenShift:
curl -L -O https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar zxf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
cd openshift
export PATH="$(pwd)":$PATH
sudo ./openshift start master

oc setup:
export KUBECONFIG="$(pwd)"/openshift.local.config/master/admin.kubeconfig
export CURL_CA_BUNDLE="$(pwd)"/openshift.local.config/master/ca.crt
sudo chmod +r "$(pwd)"/openshift.local.config/master/admin.kubeconfig
openshift completion bash > /usr/share/bash-completion/completions/openshift.completion.sh

Master and node configuration after installation, /etc/origin/master/master-config.yaml:
identityProviders:
- name: my_allow_provider
  challenge: true
  login: true
  provider:
    apiVersion: v1
    kind: AllowAllPasswordIdentityProvider
corsAllowedOrigins

Identity providers: the OpenShift master includes a built-in OAuth server. The Deny All identity provider is used by default, which denies access for all user names and passwords.
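With AllowAllPasswordIdentityProvider configured, any user name with a non-empty password is accepted. A minimal sanity check, assuming the all-in-one master listens on the default https://127.0.0.1:8443; the user name developer and project demo are arbitrary examples:

oc login https://127.0.0.1:8443 -u developer -p anypassword
oc new-project demo
oc whoami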

K8S SDK Setup

Install godep:
go get -v github.com/tools/godep

Install client-go:
go get k8s.io/client-go/kubernetes
cd $GOPATH/src/k8s.io/client-go
git checkout v10.0.0
godep restore ./...

Out-of-cluster development

In-cluster development
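One way to verify the checkout against a running cluster is to build the out-of-cluster example that ships with client-go. A rough sketch, assuming the examples/out-of-cluster-client-configuration directory and its -kubeconfig flag are present at the v10.0.0 tag and that $HOME/.kube/config points at your cluster:

cd $GOPATH/src/k8s.io/client-go/examples/out-of-cluster-client-configuration
go build -o ./list-pods .
./list-pods -kubeconfig=$HOME/.kube/config

In-cluster programs instead read the service account token mounted into the pod, so no kubeconfig is needed there.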

K8s CNI: Kube Router Implementation

Preparation: to set up a test environment, see Building a K8S development environment from source code.

Streamlining K8s Development with Draft

Preparation

Initialize:
draft init
...
Installing default plugins...
Preparing to install into /home/bigo/.draft/plugins/draft-pack-repo
draft-pack-repo installed into /home/bigo/.draft/plugins/draft-pack-repo/draft-pack-repo
Installed plugin: pack-repo
Installation of default plugins complete
Installing default pack repositories...
Installing pack repo from https://github.com/Azure/draft
Installed pack repository github.com/Azure/draft
Installation of default pack repositories complete
$DRAFT_HOME has been configured at /home/bigo/.draft.
...

Set the Docker image registry:
draft config set registry registry.cn-beijing.aliyuncs.com/k4s

or skip the push process entirely using the --skip-image-push flag.
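Once the registry is configured, a typical iteration loop looks roughly like the following: draft detects the language, generates a Dockerfile and Helm chart, builds the image, and deploys to the current cluster. This is a sketch; myapp is a hypothetical source directory:

cd myapp
draft create
draft up
draft connect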

K8S CSI

PersistentVolume

A PersistentVolume (PV) is a piece of storage in the cluster that has been manually provisioned by an administrator, or dynamically provisioned by Kubernetes using a StorageClass. Many cluster environments have a default StorageClass installed. When a StorageClass is not specified in the PersistentVolumeClaim, the cluster's default StorageClass is used instead.

Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
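Because local volumes are not dynamically provisioned, the PV has to be created by hand and pinned to a node. A minimal sketch; the StorageClass name local-hdd, the node name node-1, and the path /mnt/disks/vol1 are placeholders for your environment:

cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hdd
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-hdd
  local:
    path: /mnt/disks/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1
EOF

A PVC that references storageClassName: local-hdd stays Pending until a pod that uses it is scheduled, because of WaitForFirstConsumer.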

kubectl cheat sheet

Set namespace preference:
kubectl config set-context $(kubectl config current-context) --namespace=<bigo>

Watch a pod:
kubectl get pods pod1 --watch

Check performance:
kubectl top node
kubectl top pod

Copy files between a pod and the local machine:
kubectl cp ~/f1 <namespace>/<pod-name>:/tmp/
kubectl cp <namespace>/<pod-name>:/tmp/ ~/

Enable RBAC (kube-apiserver flag):
- --authorization-mode=RBAC

User CRUD:
openssl genrsa -out bigo.key 2048
openssl req -new -key bigo.key -out bigo.csr -subj "/CN=wubigo/O=bigo LLC"
sudo openssl x509 -req -in bigo.
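Once the CSR has been signed by the cluster CA, the certificate can be wired into kubectl and bound to a role. A sketch assuming the signed certificate was written to bigo.crt; the context, cluster, and rolebinding names below are placeholders:

kubectl config set-credentials wubigo --client-certificate=bigo.crt --client-key=bigo.key
kubectl config set-context wubigo-context --cluster=<cluster-name> --namespace=<bigo> --user=wubigo
kubectl create rolebinding wubigo-edit --clusterrole=edit --user=wubigo --namespace=<bigo>
kubectl --context=wubigo-context get pods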

K8S notes

Node maintenance:
kubectl drain <node name>

Maintain a node that has DaemonSet-managed pods:
kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>
sudo iptables -F
sudo iptables -S

Create a regular pod (must use --restart=Never):
kubectl run -it curl --image=curlimages/curl:7.72.0 --restart=Never -- sh
--restart=Never creates a bare pod that runs once, like a job scheduled immediately; --restart=Always creates a Deployment, which monitors the pod and restarts it on failure.

kubeadm install mirror in China:
apt-get update && apt-get install -y apt-transport-https curl https://mirrors.
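Note that kubectl drain leaves the node cordoned, so after maintenance the node has to be told to accept pods again. A short sketch of the full cycle:

kubectl drain <node name> --ignore-daemonsets
# ... perform maintenance on the node ...
kubectl uncordon <node name>
kubectl get nodes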

K8s Istio Pilot as the Envoy Control Plane

Sidecar proxy

Method 1: namespace labels
kubectl label ns servicea istio-injection=enabled
Istio watches over all the deployments and adds the sidecar container to our pods. This is achieved by leveraging MutatingAdmissionWebhooks, a feature introduced in Kubernetes 1.9. Before the resources get created, the webhook intercepts the request, checks whether Istio injection is enabled for that namespace, and then adds the sidecar container to the pod.
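One way to confirm the webhook is working is to check the namespace label and the container count of a freshly created pod. A sketch assuming an existing deployment in the servicea namespace whose pods carry the hypothetical label app=servicea:

kubectl get namespace -L istio-injection
kubectl -n servicea delete pod -l app=servicea
kubectl -n servicea get pods
# re-created pods should report 2/2 ready containers: the app container plus istio-proxy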

K8s DNS

Before Kubernetes version 1.11, the Kubernetes DNS service was based on kube-dns. Version 1.11 introduced CoreDNS to address some security and stability concerns with kube-dns. Regardless of the software handling the actual DNS records, both implementations work in a similar manner: A service named kube-dns and one or more pods are created. The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed.
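A quick way to check that cluster DNS is answering, whichever implementation backs it, is to resolve a well-known Service name from a throwaway pod. A sketch using busybox (any image with a working nslookup will do):

kubectl run -it dnstest --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default
kubectl get svc -n kube-system kube-dns
kubectl delete pod dnstest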

K8s Kubelet

Pod UID:
kubectl get pod <POD_NAME> -o=jsonpath='{.metadata.uid}'

Pod on disk:
/var/lib/kubelet/pods/<PodUID>/
/var/log/pods/<PodUID>/<container_name>
ls -l /var/log/pods/<PodUID>/<container_name>/
lrwxrwxrwx 1 root root 165 3月 30 06:52 0.log -> /var/lib/docker/containers/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021-json.log

In a production cluster, logs are usually collected, aggregated, and shipped to a remote store where advanced analysis/search/archiving functions are supported. In Kubernetes, the default cluster add-ons include a per-node log collection daemon, fluentd. To facilitate log collection, the kubelet creates symbolic links to all the Docker container logs under /var/log/containers, with pod and container metadata embedded in the filenames.
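Putting the pieces together, a pod's on-disk directories can be located from its UID; a sketch where mypod is a hypothetical pod name and the commands run on the node hosting it:

POD_UID=$(kubectl get pod mypod -o=jsonpath='{.metadata.uid}')
sudo ls /var/lib/kubelet/pods/$POD_UID/
sudo ls -l /var/log/pods/$POD_UID/
ls -l /var/log/containers/ | grep mypod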

K8s Helm Setup

Enable Helm in the cluster:
Create a Service Account named tiller for the Tiller server (in the kube-system namespace). Service Accounts are meant for intra-cluster processes running in Pods.
Bind the cluster-admin ClusterRole to this Service Account. ClusterRoleBindings are applicable in all namespaces, which allows Tiller to manage resources in all namespaces.
Update the existing Tiller deployment (tiller-deploy) to associate its pod with the Service Account tiller.
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
or
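If Tiller has not been deployed yet, helm init can create the deployment already bound to the service account, avoiding the patch step. A sketch assuming the serviceaccount and clusterrolebinding above have already been created:

helm init --service-account tiller
kubectl -n kube-system rollout status deploy/tiller-deploy
helm version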

K8s Private Registry

Configuring nodes to authenticate to a private registry.

Note: Kubernetes as of now only supports the auths and HttpHeaders sections of the Docker config, which means credential helpers (credHelpers or credsStore) are not supported.

Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If one of the files in the search path list below exists, the kubelet uses it as the credential provider when pulling images:
{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.
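Besides node-level Docker config files, credentials can also be supplied per pod through an imagePullSecrets reference. A sketch where the registry URL, user, and secret name are placeholders:

kubectl create secret docker-registry regcred \
  --docker-server=registry.example.com \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
# then reference it from the pod spec:
#   spec:
#     imagePullSecrets:
#     - name: regcred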

K8s HA Setup With Kubeadm

Set up external etcd:
Install docker, kubelet, and kubeadm.
Configure the kubelet to be a service manager for etcd.
Create configuration files for kubeadm, /tmp/${HOST0}/kubeadmcfg.yaml:
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "192.168.1.10"
    peerCertSANs:
    - "192.168.1.10"
    extraArgs:
      initial-cluster: infra0=https://192.168.1.10:2380
      initial-cluster-state: new
      name: infra0
      listen-peer-urls: https://192.168.1.10:2380
      listen-client-urls: https://192.168.1.10:2379
      advertise-client-urls: https://192.168.1.10:2379
      initial-advertise-peer-urls: https://192.168.1.10:2380
Generate the certificate authority:
sudo kubeadm init phase certs etcd-ca
export HOST0="192.168.1.10"
sudo kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.
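Once the etcd static pod is up, its health can be checked with etcdctl run from an etcd image, using the certificates kubeadm generated. A sketch assuming the default certificate paths under /etc/kubernetes/pki/etcd; the image tag 3.2.24 is only an example and should match your cluster's etcd version:

sudo docker run --rm -it --net host \
  -v /etc/kubernetes:/etc/kubernetes k8s.gcr.io/etcd:3.2.24 etcdctl \
  --cert-file /etc/kubernetes/pki/etcd/peer.crt \
  --key-file /etc/kubernetes/pki/etcd/peer.key \
  --ca-file /etc/kubernetes/pki/etcd/ca.crt \
  --endpoints https://${HOST0}:2379 cluster-health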

K8S Microservice Governance

Preparation:
docker pull istio/proxyv2:1.0.6
docker tag istio/proxyv2:1.0.6 gcr.io/istio-release/proxyv2:release-1.0-latest-daily
docker push registry.cn-beijing.aliyuncs.com/co1/istio_proxyv2:1.0.6
docker pull istio/pilot:1.0.6
docker tag istio/pilot:1.0.6 gcr.io/istio-release/pilot:release-1.0-latest-daily
docker pull istio/mixer:1.0.6
docker tag istio/mixer:1.0.6 gcr.io/istio-release/mixer:release-1.0-latest-daily
docker pull istio/galley:1.0.6
docker tag istio/galley:1.0.6 gcr.io/istio-release/galley:release-1.0-latest-daily
docker pull istio/citadel:1.0.6
docker tag istio/citadel:1.0.6 gcr.io/istio-release/citadel:release-1.0-latest-daily
docker pull istio/sidecar_injector:1.0.6
docker tag istio/sidecar_injector:1.0.6 gcr.io/istio-release/sidecar_injector:release-1.0-latest-daily
git clone https://github.com/istio/istio.git
cd istio
git checkout 1.0.6 -b 1.0.6

Installation:
Istio by default uses the LoadBalancer service type. Some platforms do not support LoadBalancer service objects.
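On platforms without a LoadBalancer implementation, the ingress gateway can be switched to NodePort when rendering the charts. A sketch for the 1.0.6 source tree checked out above; the chart path and value name are taken from that tree and may differ in other releases:

kubectl create namespace istio-system
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set gateways.istio-ingressgateway.type=NodePort > istio-nodeport.yaml
kubectl apply -f istio-nodeport.yaml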

K8S Monitor

Set up Prometheus.

Prepare a PV for Prometheus: https://wubigo.com/post/2018-01-11-kubectlcheatsheet/#pvc–using-local-pv

Install:
helm install --name prometheus1 stable/prometheus --set server.persistentVolume.storageClass=local-hdd,alertmanager.enabled=false
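Once the chart is installed, the Prometheus UI can be reached locally with a port-forward. A sketch assuming the chart exposes the server Service as prometheus1-server; check the actual name with kubectl get svc:

kubectl get pvc,pods -l app=prometheus
kubectl port-forward svc/prometheus1-server 9090:80
# then browse http://localhost:9090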

K8S CNI: L2 Network Implementation

Preparation: to set up a test environment, see Building a K8S development environment from source code.

K8s Logging with EFK

Namespace, kube-logging.yaml:
kind: Namespace
apiVersion: v1
metadata:
  name: kube-logging

Headless service:
kubectl create -f kube-logging.yaml
elasticsearch_svc.yaml:
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  namespace: kube-logging
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
  - port: 9200
    name: rest
  - port: 9300
    name: inter-node

Provision a local PV for EFK: local PV

Creating the StatefulSet, elasticsearch_statefulset.yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
  namespace: kube-logging
spec:
  serviceName: elasticsearch
  replicas: 1
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: elasticsearch
        image: docker.
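After the StatefulSet pods are Running, the Elasticsearch cluster state can be checked by port-forwarding the REST port. A sketch assuming the first pod is named es-cluster-0, which follows from the StatefulSet name above:

kubectl -n kube-logging rollout status sts/es-cluster
kubectl -n kube-logging port-forward es-cluster-0 9200:9200 &
curl http://localhost:9200/_cluster/state?pretty
kill %1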

K8S Network Basics

Introduction to K8S
K8S is a container orchestration and management tool for the automated deployment and monitoring of containers. Major cloud vendors and application development platforms provide container services based on K8S. If a managed K8S service is hard to get started with, or does not fit your company's business scenarios well, there are now many tools that help you set up a K8S runtime environment in your own data center or on a private cloud platform:
Minikube
kops
kubeadm
If you want to set up a test environment, see Building a container cluster from the K8S source code (supports the latest stable release v1.13.3) and Deploying K8S with a single script.

Main Kubernetes components:
Master node: its main functions include managing the cluster of worker nodes, service deployment, service discovery, workload scheduling, load balancing, and so on.
Worker node: the execution unit for application workloads.
Service specifications: stateless services, stateful services, daemon services, scheduled jobs, and so on.

K8S network basics
K8S network model:
Every pod has its own IP address.
Any two pods can communicate with each other without NAT.
The agent on each cluster node (the kubelet) can communicate with all pods on that node.
From the perspective of network port allocation, the K8S network model gives containers a clean, backward-compatible specification, which greatly eases and simplifies migrating applications from virtual machines to containers.

Networking problems K8S solves:
Container-to-container communication: handled within the pod via localhost.
Pod-to-pod communication: handled by the CNI.
Pod-to-service communication: handled by Services.
External-system-to-service communication: handled by Services.
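The "any two pods communicate without NAT" property is easy to verify by hand: start two throwaway pods, look up one pod's IP, and reach it directly from the other. A sketch reusing images already mentioned in this document; pod names web and client are arbitrary:

kubectl run web --image=nginx --restart=Never
kubectl run client --image=curlimages/curl:7.72.0 --restart=Never -- sleep 3600
WEB_IP=$(kubectl get pod web -o=jsonpath='{.status.podIP}')
kubectl exec client -- curl -s http://$WEB_IP   # nginx welcome page, no Service or NAT involved
kubectl delete pod web client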

K8S local development setup from source code

Set up a local development environment from source code with kubeadm.