Kubernetes is removing the “dockershim”, the special in-process support the kubelet has for Docker.
However, the kubelet still has the CRI (Container Runtime Interface) to support arbitrary runtimes. containerd is currently supported via the CRI, as is every runtime except Docker. Docker is being moved from having special-case support to being supported the same way as the other runtimes.
Does that mean using docker as your runtime is deprecated?
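If you want to see which runtime each node is actually using today, you can read it from the node status; the jsonpath expression below is my own sketch, not from the original notes (kubectl get nodes -o wide shows the same information in its CONTAINER-RUNTIME column):

kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'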
Node-level Logging

System component logs:

Component                       Runs in container (Y/N)   systemd (with/without)   Log location
kubelet and container runtime   N                         without systemd          /var/log
kubelet and container runtime   N                         with systemd             journald
scheduler                       Y                         -                        /var/log
kube-proxy                      Y                         -                        /var/log

Per-pod log locations on a node:

/var/lib/kubelet/pods/<PodUID>/
/var/log/pods/<PodUID>/<container_name>
ls -l /var/log/pods/<PodUID>/<container_name>/
lrwxrwxrwx 1 root root 165 Mar 30 06:52 0.log -> /var/lib/docker/containers/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021-json.log

Cluster-level logging

Use a node-level logging agent that runs on every node.
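Such an agent is usually deployed as a DaemonSet that mounts the host log directories. A minimal sketch, assuming a fluentd image (the image tag and names below are assumptions, not part of the original notes):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd            # assumed name
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.4-debian-1   # assumed tag; use whichever logging agent you actually run
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: dockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: dockercontainers
        hostPath:
          path: /var/lib/docker/containers

The /var/lib/docker/containers mount matches the symlink target shown in the ls output above.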
version notes
Some of the notes below only apply to Kubernetes 1.13.
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.3", GitCommit:"721bfa751924da8d1680787490c54b9179b1fed0", GitTreeState:"clean", BuildDate:"2019-02-16T15:29:34Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}

Starting with Kubernetes 1.12, the k8s.gcr.io/kube-${ARCH}, k8s.gcr.io/etcd and k8s.gcr.io/pause images don't require an -${ARCH} suffix.
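In practice that means the control-plane images can be pulled without the architecture in the name. A small illustration using the v1.13.3 version from the output above; the suffixed name is the older convention and may not exist for every release:

docker pull k8s.gcr.io/kube-apiserver:v1.13.3          # 1.12+ style, no -${ARCH} suffix
docker pull k8s.gcr.io/kube-apiserver-amd64:v1.13.3    # older arch-specific style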
get all Pending pods
kubectl get pods --field-selector=status.phase=Pending

images list
kubeadm config images list -v 4
I0217 07:28:13.305268   14495 interface.go:384] Looking for default routes with IPv4 addresses
I0217 07:28:13.307275   14495 interface.go:389] Default route transits interface "enp0s3"
I0217 07:28:13.
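One common way to combine this with a mirror registry is to loop over the listed images, pull each from the mirror, and retag it with the expected k8s.gcr.io name. A hedged sketch; the mirror prefix is an assumption and may not host every listed image:

for image in $(kubeadm config images list 2>/dev/null); do
  mirror="registry.cn-hangzhou.aliyuncs.com/google_containers/${image##*/}"   # assumed mirror
  docker pull "${mirror}"
  docker tag "${mirror}" "${image}"
done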
Normally, ${SNAP_DATA} points to /var/snap/microk8s/current. snap.microk8s.daemon-docker is the Docker daemon, started with the arguments in ${SNAP_DATA}/args/dockerd.
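A sketch of how that args file can be used; the extra flag below is only an example (localhost:32000 is the address microk8s's optional built-in registry listens on, so adjust it for your own registry):

cat ${SNAP_DATA}/args/dockerd                                         # inspect the current dockerd flags
echo '--insecure-registry localhost:32000' | sudo tee -a ${SNAP_DATA}/args/dockerd
sudo systemctl restart snap.microk8s.daemon-docker                    # restart the daemon to pick up the change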
$ snap start microk8s
$ microk8s.docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
$ microk8s.docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1

For resources under the kube-system namespace, pass --namespace=kube-system explicitly; the default listing does not include kube-system.
$ microk8s.kubectl describe po calico-node-4sq5r --namespace=kube-system

https://events.static.linuxfound.org/sites/events/files/slides/2016%20-%20Linux%20Networking%20explained_0.pdf
The Container Network Interface (CNI) is a library definition and a set of tools under the umbrella of the Cloud Native Computing Foundation. For more information, visit their GitHub project. Kubernetes uses CNI as an interface between network providers and Kubernetes networking.
Why Use CNI

Kubernetes' default networking provider, kubenet, is a simple network plugin that works with various cloud providers. Kubenet is very basic, and basic is good, but it does not have many features.
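As a concrete illustration, a CNI provider is configured through a small config file on each node (conventionally under /etc/cni/net.d/). A minimal sketch using the standard bridge and host-local plugins; the network name, bridge name, and subnet below are made up:

{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [
      { "dst": "0.0.0.0/0" }
    ]
  }
}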
Headless services

Without Pod selectors

This creates a Service, but it doesn't know where to send the traffic. Instead, you manually create an Endpoints object that will receive traffic from this Service.
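A sketch of the selector-less (headless) Service side, reusing the mongo name and port from the Endpoints example that follows; treat it as illustrative rather than the original author's manifest:

apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None        # headless: no cluster IP is allocated
  ports:
  - port: 2701           # no selector, so traffic goes wherever the Endpoints object points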
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
  - addresses:
      - ip: 10.240.0.4
    ports:
      - port: 2701

CNAME records for ExternalName

This service does a simple CNAME redirection at the DNS level, so there is very minimal impact on performance.
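For comparison, a sketch of an ExternalName Service; the name and external hostname are made up:

apiVersion: v1
kind: Service
metadata:
  name: mongo-external           # assumed name
spec:
  type: ExternalName
  externalName: db.example.com   # the DNS name the cluster DNS returns as a CNAME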
To ensure a stable network identity, you need to define a headless Service for stateful applications.
StatefulSets are valuable for applications that require one or more of the following:

Stable, unique network identifiers.
Stable, persistent storage.
Ordered, graceful deployment and scaling.
Ordered, automated rolling updates.

`headless-nginx.yaml`
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
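Assuming the full StatefulSet sets spec.serviceName to nginx and runs in the default namespace, each pod gets a stable DNS name such as web-0.nginx.default.svc.cluster.local, which you can check from inside the cluster; the busybox test pod below is only an illustration:

kubectl run -it --rm dns-test --image=busybox --restart=Never -- nslookup web-0.nginx.default.svc.cluster.local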
Terminology

Endpoint: Envoy discovers the cluster members via EDS.
Management server: a logical server implementing the v2 Envoy APIs.
Upstream: an upstream host receives connections and requests from Envoy and returns responses.
xDS: the CDS/EDS/HDS/LDS/RLS/RDS/SDS APIs.
Configuration cache: caches Envoy configurations in memory to provide fast responses to consumer Envoys.

The simplest way to use Envoy without providing a control plane in the form of a dynamic API is to add the hardcoded configuration to a static YAML file.
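A minimal static bootstrap sketch in the v2-era syntax; the listener port, cluster name, and backend address are assumptions, not from the original notes:

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.router
  clusters:
  - name: service_backend
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    hosts:
    - socket_address: { address: backend.default.svc.cluster.local, port_value: 8080 }

Envoy started with -c pointing at this file needs no management server at all; the hardcoded clusters play the role that CDS/EDS would otherwise fill.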
The Kubernetes extension for Visual Studio Code is for developers building applications to run in Kubernetes clusters, and for DevOps staff troubleshooting Kubernetes applications. Features include:
View your clusters in an explorer tree view, and drill into workloads, services, pods and nodes.
Browse Helm repos and install charts into your Kubernetes cluster.
Intellisense for Kubernetes resources and Helm charts and templates.
Edit Kubernetes resource manifests and apply them to your cluster.
Build and run containers in your cluster from Dockerfiles in your project.