K8S

Deprecate Dockershim

Kubernetes is removing the “dockershim”, the special in-process support the kubelet has for Docker. The kubelet still has the CRI (Container Runtime Interface) to support arbitrary runtimes: containerd is currently supported via the CRI, as is every runtime except Docker. Docker is being moved from special-case support to the same level of support as other runtimes. Does that mean using Docker as your runtime is deprecated?

Kubeadm Note Update

Prepare iptables

```
cat <<EOF | sudo tee /etc/modules
br_netfilter
EOF
sudo modprobe br_netfilter
lsmod | grep br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system
```

Install the container runtime

```
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository \
  "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) \
  stable"
sudo apt-get update
apt-cache madison docker-ce
sudo apt-get install docker-ce=17.12.1~ce-0~ubuntu
sudo usermod -aG docker bigo
```

Set up the VPN and proxy
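These notes target kubeadm v1.13. As a minimal sketch (the mirror registry and pod subnet below are assumptions, not part of the original notes), cluster-wide options such as an alternative image registry can be passed to `kubeadm init --config` via a ClusterConfiguration file:

```yaml
# kubeadm-config.yaml -- hypothetical example for kubeadm v1.13
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.3
# assumption: pull control-plane images from a mirror instead of k8s.gcr.io
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
```

It would then be applied with `sudo kubeadm init --config kubeadm-config.yaml`.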

K8s Tooling for Efficient Application Development

Streamlined local development; integrated development, testing and deployment from the IDE.

K8s Pod Command Override

Docker Entrypoint vs. k8s command:

|           | OCI / Docker | Kubernetes |
|-----------|--------------|------------|
| entry     | ENTRYPOINT   | command    |
| arguments | CMD          | args       |

Kubernetes `command` and `args` override the default OCI Entrypoint and Cmd.

Dockerfile

```
FROM alpine:3.8
RUN apk add --no-cache curl ethtool && rm -rf /var/cache/apk/*
CMD ["--version"]
ENTRYPOINT ["curl"]
```

cmd-override-pod.yaml

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: command-override
  labels:
    purpose: override-command
spec:
  containers:
  - name: command-override-container
    image: bigo/curl:v1
    command: ["curl"]
    args: ["--help"]
  restartPolicy: Never
```

```
docker run -it bigo/curl:v1
curl 7.
```
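For comparison, a minimal sketch (the file name and the curl arguments are my assumptions, not from the original post): if the Pod sets only `args`, the image's ENTRYPOINT is kept and only the default CMD is replaced, so the container below would run `curl --silent --head https://kubernetes.io`.

```yaml
# args-only-pod.yaml -- hypothetical example: keep the image ENTRYPOINT, replace only CMD
apiVersion: v1
kind: Pod
metadata:
  name: args-only-override
spec:
  containers:
  - name: curl
    image: bigo/curl:v1
    # no `command`: the ENTRYPOINT ["curl"] from the Dockerfile is used
    args: ["--silent", "--head", "https://kubernetes.io"]
  restartPolicy: Never
```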

K8s Logging

Node-level logging

System component logs:

| System component              | Run in container (Y/N) | Systemd (W/WO) | Logger location |
|-------------------------------|------------------------|----------------|-----------------|
| kubelet and container runtime |                        | W/O            | /var/log        |
| kubelet and container runtime |                        | W              | journald        |
| scheduler                     | Y                      |                | /var/log        |
| kube-proxy                    | Y                      |                | /var/log        |

Container logs live under:

```
/var/lib/kubelet/pods/<PodUID>/
/var/log/pods/<PodUID>/<container_name>
```

```
ls -l /var/log/pods/<PodUID>/<container_name>/
lrwxrwxrwx 1 root root 165 Mar 30 06:52 0.log -> /var/lib/docker/containers/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021-json.log
```

Cluster-level logging

Use a node-level logging agent that runs on every node.
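A minimal sketch of the node-level agent pattern, assuming fluentd as the agent and the Docker log path shown above (the image, names, and mounts are illustrative assumptions, not from the original post): a DaemonSet mounts the host log directories so that one agent per node can ship all container logs.

```yaml
# fluentd-daemonset.yaml -- hypothetical node-level logging agent
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.3-debian   # placeholder image
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers
```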

K8S CORE DEVELOPMENT

```
git clone git@github.com:wubigo/kubernetes.git
git remote add upstream https://github.com/kubernetes/kubernetes.git
git fetch --all
git checkout tags/v1.13.3 -b v1.13.3
git branch -av | grep 1.13
* fix-1.13                    4807084f79 Add/Update CHANGELOG-1.13.md for v1.13.2.
  remotes/origin/release-1.13 4807084f79 Add/Update CHANGELOG-1.13.md for v1.13.2.
```

Managing Pods

```
func (kl *Kubelet) syncPod(o syncPodOptions) error {
```

Driving the Public Cloud via SDK

Developing with the Tencent Cloud Go SDK

Download the SDK:

```
go get -u github.com/tencentcloud/tencentcloud-sdk-go
```

Provision CVM instances for the cluster

Read the secretId and secretKey credentials from the local development K8s cluster, then pass them to the SDK client:

```
secretId, secretKey := K8SClient.Secrets("namespace=tencent").Get("cloud-pass")
credential := CloudCommon.NewCredential(secretId, secretKey)
client, _ := cvm.NewClient(credential, regions.Beijing)
request := cvm.NewAllocateHostsRequest()
request.FromJsonString(K8SClient.Configs("namespace=tencent").Get("K8S-TENCENT-PROD"))
response, err := client.AllocateHosts(request)
```

Build the K8s cluster on the CVM hosts with Ansible

```
Ansible.Hosts().Get(response.ToJsonString())
```

Invoke Ansible to start deploying the K8s cluster on the CVM instances.
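A minimal sketch of the `cloud-pass` Secret that the snippet above reads (the key names and placeholder values are assumptions for illustration):

```yaml
# cloud-pass-secret.yaml -- hypothetical credentials Secret
apiVersion: v1
kind: Secret
metadata:
  name: cloud-pass
  namespace: tencent
type: Opaque
stringData:
  secretId: "AKIDxxxxxxxxxxxxxxxx"    # placeholder value
  secretKey: "xxxxxxxxxxxxxxxxxxxx"   # placeholder value
```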

K8S CNI Operations Guide

Introduction

CNI is the network plugin specification for K8s, and it is not compatible with Docker's CNM. In the tug-of-war between K8s and Docker, making Docker the default runtime did not win Docker's support for K8s, so K8s decided to back the CNI specification. Many network vendors offer products that support both CNM and CNI. In container network environments it is common that Docker cannot see the IP configuration of K8s Pods, and Docker containers sometimes cannot communicate with Pods.

CNI is a lighter-weight specification than CNM. Network configuration is JSON-based, and a network plugin supports add and delete commands. When a Pod starts, the add command is sent: the Pod runtime first allocates a network namespace, assigns that namespace to the container ID, and then passes the CNI configuration file to the CNI network driver. The driver attaches the container to its network and reports the allocated IP address back to the Pod runtime in a JSON file. When the Pod terminates, the delete command is sent. Currently the CNI commands handle IPAM, L2 and L3, while the Pod runtime handles port mapping (L4).

K8s networking basics

CNI plugins and implementations

CNI has many implementations; only a few familiar ones are listed here, each with detailed documentation:

- Flannel
- Kube-router
- OpenVSwitch
- Calico: Calico can be deployed without encapsulation or overlays to meet the performance and scalability needs of data-center networks (CNI-Calico)
- Weave Net
- Bridge CNI

kubeadm cheat sheet

Version notes

Some commands only work on 1.13.

```
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-16T15:29:34Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
```

Starting with Kubernetes 1.12, the k8s.gcr.io/kube-${ARCH}, k8s.gcr.io/etcd and k8s.gcr.io/pause images don’t require an -${ARCH} suffix.

Get all Pending pods:

```
kubectl get pods --field-selector=status.phase=Pending
```

List images:

```
kubeadm config images list -v 4
I0217 07:28:13.305268 14495 interface.go:384] Looking for default routes with IPv4 addresses
I0217 07:28:13.307275 14495 interface.go:389] Default route transits interface "enp0s3"
I0217 07:28:13.
```

Microservice Security: Authentication, Authorization, and Auditing (AAA)

Key points of microservice security

- Encrypted communication links
- Flexible service access control, including fine-grained access policies
- Auditing of access logs
- Replaceability (batteries included) and integrability of the service provider

Basic concepts

Security identity: in K8s, a security identity (service account) represents a user, a service, or a group of services.

Secure naming: secure naming defines which security identities are allowed to run a given service.

Microservice authentication

- Transport-layer authentication
- End-user authentication: every end-user request is validated via a JWT (JSON Web Token); Auth0 and Firebase are supported (see the sketch below).

https://medium.facilelogin.com/securing-microservices-with-oauth-2-0-jwt-and-xacml-d03770a9a838
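A minimal sketch of request-level JWT validation, assuming an Istio 1.0-era authentication Policy (the target service name, issuer, and jwksUri below are hypothetical placeholders, not from the original notes):

```yaml
# jwt-policy.yaml -- hypothetical end-user authentication policy (Istio 1.0.x API)
apiVersion: authentication.istio.io/v1alpha1
kind: Policy
metadata:
  name: end-user-jwt
  namespace: default
spec:
  targets:
  - name: frontend                                               # placeholder service name
  origins:
  - jwt:
      issuer: "https://example.auth0.com/"                       # placeholder issuer
      jwksUri: "https://example.auth0.com/.well-known/jwks.json" # placeholder JWKS endpoint
  principalBinding: USE_ORIGIN
```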

MicroK8S

Normally, ${SNAP_DATA} points to /var/snap/microk8s/current. snap.microk8s.daemon-docker is the Docker daemon, started with the arguments in ${SNAP_DATA}/args/dockerd.

```
$ snap start microk8s
$ microk8s.docker pull registry.cn-beijing.aliyuncs.com/google_containers/pause:3.1
$ microk8s.docker tag registry.cn-beijing.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
```

For resources in the kube-system namespace (the all-namespaces view may not include kube-system), specify the namespace explicitly:

```
$ microk8s.kubectl describe po calico-node-4sq5r --namespace=kube-system
```

https://events.static.linuxfound.org/sites/events/files/slides/2016%20-%20Linux%20Networking%20explained_0.pdf

Choosing a CNI Network Provider for Kubernetes

The Container Network Interface (CNI) is a library definition and a set of tools under the umbrella of the Cloud Native Computing Foundation; for more information, visit the GitHub project. Kubernetes uses CNI as the interface between network providers and Kubernetes networking.

Why use CNI

Kubernetes' default networking provider, kubenet, is a simple network plugin that works with various cloud providers. Kubenet is a very basic network provider, and basic is good, but it does not have many features.

K8s CNI: The Calico Implementation

Preparation: to set up a test environment, see "Building a K8s development environment from source".

Deploying K8S Applications with Helm, Without Tiller

Problems with Tiller

- It breaks the RBAC model: the cluster-wide Tiller holds the cluster-admin role, so during installation a chart can use that role to access resources beyond its own permissions.
- Release names must be unique and cannot repeat; many charts also embed the release name into service names, which leads to messy service naming.

Using Helm standalone: fetch the templates, render them with your configuration, and generate the YAML files.

```
git clone https://github.com/istio/istio.git
cd istio
git checkout 1.0.6 -b 1.0.6
helm template install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set security.enabled=false \
  --set ingress.enabled=false \
  --set gateways.istio-ingressgateway.enabled=false \
  --set gateways.istio-egressgateway.enabled=false \
  --set galley.enabled=false \
  --set sidecarInjectorWebhook.enabled=false \
  --set mixer.enabled=false \
  --set prometheus.enabled=false \
  --set global.proxy.envoyStatsd.enabled=false \
  --set pilot.sidecar=false > $HOME/istio-minimal.yaml
kubectl create namespace istio-system
kubectl apply -f $HOME/istio-minimal.yaml
```

K8s External Services

Headless services

Without Pod selectors

This creates a Service, but it doesn't know where to send the traffic. It allows you to manually create an Endpoints object that will receive traffic from the Service.

```yaml
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
- addresses:
  - ip: 10.240.0.4
  ports:
  - port: 2701
```

CNAME records for ExternalName

This kind of Service does a simple CNAME redirection at the DNS level, so there is very minimal impact on performance.
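For completeness, a minimal sketch of the two pieces the excerpt refers to (the service names and the external hostname are assumptions for illustration): a selector-less Service that pairs with the Endpoints object above, and an ExternalName Service that returns a CNAME.

```yaml
# selector-less Service paired with the manually created Endpoints "mongo"
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  ports:
  - port: 2701
    targetPort: 2701
---
# hypothetical ExternalName Service: a DNS CNAME to an external host
apiVersion: v1
kind: Service
metadata:
  name: mongo-external
spec:
  type: ExternalName
  externalName: my-mongo.example.com
```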

GRPC Load Balancing on Kubernetes Without Tears

https://kubernetes.io/blog/2018/11/07/grpc-load-balancing-on-kubernetes-without-tears/

K8s Envoy Example

envoy.yaml.tmpl

```yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { host_rewrite: nginx, cluster: nginx_cluster, timeout: 60s }
          http_filters:
          - name: envoy.router
  clusters:
  - name: nginx_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ${ENVOY_LB_ALG}
    hosts: [{ socket_address: { address: ${SERVICE_NAME}, port_value: 80 }}]
```

docker-entrypoint.
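The `${ENVOY_LB_ALG}` and `${SERVICE_NAME}` placeholders suggest the template is rendered at container start (e.g. by the docker-entrypoint script). A minimal sketch of how the values could be supplied from Kubernetes (the image name, labels, and values below are assumptions, not from the original post):

```yaml
# hypothetical Deployment supplying the template variables as environment variables
apiVersion: apps/v1
kind: Deployment
metadata:
  name: envoy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: envoy
  template:
    metadata:
      labels:
        app: envoy
    spec:
      containers:
      - name: envoy
        image: bigo/envoy:v1            # placeholder image built from envoy.yaml.tmpl
        env:
        - name: ENVOY_LB_ALG
          value: "ROUND_ROBIN"
        - name: SERVICE_NAME
          value: "nginx"                # upstream Service resolved via STRICT_DNS
        ports:
        - containerPort: 80
        - containerPort: 9901
```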

K8s Service Headless

To ensure a stable network ID, you need to define a headless Service for stateful applications. StatefulSets are valuable for applications that require one or more of the following:

- Stable, unique network identifiers.
- Stable, persistent storage.
- Ordered, graceful deployment and scaling.
- Ordered, automated rolling updates.

`headless-nginx.yaml`

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .
```
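The excerpt above is cut off mid-StatefulSet; a minimal sketch of a complete StatefulSet paired with the headless Service above (the replica count, image, and port name are assumptions for illustration):

```yaml
# hypothetical complete StatefulSet for the headless Service "nginx"
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx          # must reference the headless Service
  replicas: 2
  selector:
    matchLabels:
      app: nginx              # has to match .spec.template.metadata.labels
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15     # placeholder image
        ports:
        - containerPort: 80
          name: web
```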

Envoy NOTES

Terminology

- Endpoint: Envoy discovers cluster members via EDS.
- Management server: a logical server implementing the v2 Envoy APIs.
- Upstream: an upstream host receives connections and requests from Envoy and returns responses.
- xDS: the CDS/EDS/HDS/LDS/RLS/RDS/SDS APIs.
- Configuration cache: caches Envoy configurations in memory in an attempt to provide fast responses to consumer Envoys.

The simplest way to use Envoy without providing a control plane in the form of a dynamic API is to put the hardcoded configuration in a static YAML file.

K8s IDE Tool: code extension

The VS Code Kubernetes extension is for developers building applications to run in Kubernetes clusters, and for DevOps staff troubleshooting Kubernetes applications. Features include:

- View your clusters in an explorer tree view, and drill into workloads, services, pods and nodes.
- Browse Helm repos and install charts into your Kubernetes cluster.
- IntelliSense for Kubernetes resources and Helm charts and templates.
- Edit Kubernetes resource manifests and apply them to your cluster.
- Build and run containers in your cluster from Dockerfiles in your project.