Posts

The architecture of gRPC is layered. The lowest layer is the transport: gRPC uses HTTP/2 as its transport protocol. HTTP/2 provides the same basic semantics as HTTP/1.1 (the version with which nearly all developers are familiar) but aims to be more efficient and more secure. The HTTP/2 features that are most obvious at first glance are (1) that it can multiplex many parallel requests over the same network connection and (2) that it allows full-duplex bidirectional communication.
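As a quick illustration of the first feature (a sketch; the host below is just a public HTTP/2-enabled example, not part of gRPC), curl can negotiate HTTP/2 and reports both the ALPN upgrade and connection reuse in its verbose output:

# Ask curl to use HTTP/2; the verbose output shows ALPN negotiation ("h2")
# and that the connection supports multi-use (multiplexing).
curl -sv --http2 https://nghttp2.org -o /dev/null 2>&1 | grep -iE 'alpn|http/2|multi-use'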

Set DOCKER_GATEWAY to a subnet that is not already in use:

export DOCKER_GATEWAY=172.28.0.1
URL=https://github.com/istio/istio/releases/download/1.1.1/istio-1.1.1-linux.tar.gz
curl -L "$URL" | tar xz
cd istio-1.1.1
docker-compose -f install/consul/istio.yaml up -d

Configure kubectl to use the mapped local port for the API server:

kubectl config set-context istio --cluster=istio
kubectl config set-cluster istio --server=http://localhost:8080
kubectl config use-context istio

Deploy the Bookinfo sample and its routing rules:

docker-compose -f samples/bookinfo/platform/consul/bookinfo.yaml up -d
kubectl apply -f samples/bookinfo/platform/consul/destination-rule-all.yaml
kubectl get destinationrules -o yaml
kubectl apply -f samples/bookinfo/platform/consul/virtual-service-all-v1.yaml
docker-compose -f bookinfo.yaml exec details-v1 sh
# cat /etc/resolv.

Microservice platforms: Spring Cloud vs. Kubernetes

https://developers.redhat.com/blog/2016/12/09/spring-cloud-for-microservices-compared-to-kubernetes/

Before Kubernetes version 1.11, the Kubernetes DNS service was based on kube-dns. Version 1.11 introduced CoreDNS to address some security and stability concerns with kube-dns. Regardless of the software handling the actual DNS records, both implementations work in a similar manner: A service named kube-dns and one or more pods are created. The kube-dns service listens for service and endpoint events from the Kubernetes API and updates its DNS records as needed.
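A quick way to see these records in action (a sketch; the Service and namespace names are hypothetical) is to resolve a Service's conventional DNS name from a throwaway pod:

# Resolve <service>.<namespace>.svc.cluster.local via the cluster DNS service
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- \
  nslookup my-svc.my-ns.svc.cluster.local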

Set the container date/timezone

FROM alpine:3.8
# tzdata provides /usr/share/zoneinfo; --no-cache leaves no apk cache behind
RUN apk add --no-cache tzdata
ENV TZ Asia/Shanghai
# -snf overwrites any existing /etc/localtime instead of failing
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone

Alternatively, set TZ at runtime (tzdata must be present in the image for the zone name to resolve):

docker run -it --rm -e TZ=Asia/Shanghai alpine:3.8 ash
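A quick verification sketch (the image tag tz-demo is hypothetical):

docker build -t tz-demo .
docker run --rm tz-demo date   # should print the time in Asia/Shanghai (+0800)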

Create /etc/localtime

ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

Get a Pod's UID:

kubectl get pod <POD_NAME> -o=jsonpath='{.metadata.uid}'

Pod data and logs on disk:

/var/lib/kubelet/pods/<PodUID>/
/var/log/pods/<PodUID>/<container_name>

ls -l /var/log/pods/<PodUID>/<container_name>/
lrwxrwxrwx 1 root root 165 Mar 30 06:52 0.log -> /var/lib/docker/containers/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021/e74eafc4b3f0cfe2e4e0462c93101244414eb3048732f409c29cc54527b4a021-json.log

In a production cluster, logs are usually collected, aggregated, and shipped to a remote store that supports advanced analysis, search, and archiving. In Kubernetes, the default cluster add-ons include a per-node log collection daemon, fluentd. To facilitate log collection, the kubelet creates symbolic links to all of the Docker container logs under /var/log/containers, with pod and container metadata embedded in the filenames.
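On a node you can see that naming convention directly (a sketch; the example filename is illustrative, following the <pod>_<namespace>_<container>-<containerID>.log pattern):

ls /var/log/containers/
# e.g. nginx-7db9fccd9b-abcde_default_nginx-e74eafc4b3f0....log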

Your boss is your most important professional connection

I believe 90% of the people in this world have cursed their boss at some point. Stubborn, twisted, sick: the words we'd most like to set as the boss's label in WeChat. But honestly, if someone forces you to learn, while you're still young, to face the harshness of the workplace, isn't that person a benefactor? So how should we deal with this cruel benefactor? Let me give you some heartfelt direction.

A boss is not there to be liked; he is simply a resource

My old colleague Susan called me recently, very troubled: how did I end up with such a lousy boss? Before I could answer, she answered herself: the boss thinks I'm capable, but feels I don't respect him enough, and just for that he wants to shut me out! I'm quitting!

I laughed. However bad your situation is, can it be worse than Sun Wukong's? Tang Seng recited the headband-tightening spell at him at every turn and told him to pack up and leave every couple of days, yet he still went along with Tang Seng to fetch the scriptures. That is simply because Sun Wukong was a clever enough monkey. He knew that Tang Seng was an important resource for him, not merely his superior. However pedantic and incapable the master was, however blind to right and wrong, he was still the one who could summon Guanyin and the Buddha himself to his side.

"If it were me," I said with a smile, "respect is easy: just keep a good attitude until he has taught me all the tricks of the trade, then leave."

Susan grew thoughtful. Like most young people, she hadn't really grasped this. Most people think: the boss is capital, I am labor, and we are born to opposite ends. After all, in real life he is the one who makes you work, scolds you for being wrong, and hands you Mission Impossible. Smart people see it differently: the boss is a small universe and so am I; as long as I know how to dig, he is my collaborator and my resource pool.

Even the famous Dong Mingzhu of Gree had a former boss, Zhu Jianghong, with whom things did not go smoothly. In 1991 Zhu Jianghong was Gree's general manager and Dong Mingzhu an ordinary salesperson. One a technology fanatic, the other an iron lady: arguments were inevitable. Yet arguments aside, it was with Zhu Jianghong's promotion that Dong Mingzhu rose from head of the operations department to regional manager, vice president, president, and finally chairman of the Gree group. Working together, the two of them took Gree into the hundred-billion club and even past the two-hundred-billion sales mark. Dong Mingzhu was exactly that kind of shrewd person. Quarrels are quarrels, conflicts are conflicts; it doesn't matter how many times you've called the other person an idiot in your heart, what matters is depending on each other, cooperating, and delivering results.

Every boss, however terrible, has at least a sliver of a redeeming quality; otherwise he wouldn't be sitting in the boss's seat. Such a person is an important resource pool along the long river of your life. Cursing him as "sick" a few more times and settling into an adversarial relationship only cuts off your own path to connections, information, help, and advice. Isn't that foolish?

All workplace grudges begin with too little communication

The first company I worked for was a foreign enterprise with a very free communication style. Between managers and subordinates, we talked about work when there was work and about goals when there wasn't; nothing couldn't be said directly. Plenty of people, even across levels, were as close as sisters and brothers.

Then, when I joined my first privately owned domestic company, it felt suffocating. The boss loved flattery, subordinates loved flattering over each other's heads, the office ran on an atmosphere of things half-said, while private cliques and factions flourished. The so-called dark arts of HR politics and information asymmetry were everywhere; it was quite the eye-opener. But I have always been an eager student; when I meet problems and difficulties I like to dig into the why. I found that the ultimate cause of all this office politics was nothing more than communication. When a manager and a subordinate don't communicate clearly, when no one sincerely spells out the requirements, when one sentence is spoken, half a sentence withheld, and the remaining nine are left for you to guess, that is the fatal source of inefficiency.

The Russian writer Chekhov wrote the story "The Death of a Government Clerk." That unlucky clerk is the archetype of someone who cannot communicate. He accidentally sneezed on the general seated in front of him; the clerk hurried to apologize, and the general pursed his lips: "Never mind, forget it." But the clerk read the pursed lips as anger and went to apologize to the general every single day, until the general, annoyed beyond endurance, wanted nothing more to do with him. The little clerk lived in constant dread and in the end was literally frightened to death. This is the classic case of the speaker meaning nothing and the listener reading everything. If you didn't do your reading-comprehension exercises in primary school, why run to the workplace and do so much reading comprehension on your boss's face? Your boss is bad for not communicating with you, but aren't you just as bad for never taking the initiative to communicate with him? Communication in this world is mutual: depending on others leaves you passive; depending on yourself is the way.

There is never a "worst" boss, and job-hopping solves nothing

Have you noticed? Among the friends around you, whoever feels wronged at one company will very likely quit and run. What they keep hoping for is that at the next company they'll meet a boss like a saintly mother, worrying daily whether you're cold or hungry, terrified your glass heart might drop and shatter, murmuring for you to come home like the old mother in the frog-raising game. In reality, the next boss may well be weirder, fiercer, with an even harder KPI. The odds of meeting a good boss in this world are about as low as a daughter-in-law's odds of meeting a good mother-in-law.

Before Andy Lau, the idol everyone knows, became a superstar, he worked with quite a few directors. In the 1980s, shooting The Duke of Mount Deer with Johnnie To, he was scolded from beginning to end as not as good as the lead, Tony Leung. In the 1990s, when he wrote his first song, the famous lyricist James Wong criticized him without mercy: "I've never seen a lyricist write about love so clumsily." And when he pushed into the music scene, its elder statesman Alan Tam told him to his face that he couldn't sing and advised him to drop the idea. You see: even a somewhat famous young star gets berated by his seniors in role after role, so how could you, a nameless small potato, expect a job switch to deliver a completely frustration-free environment? Bad bosses are everywhere. If you can shamelessly hold on to the end and use sheer ability to tell him, "Boss, you misjudged me," that is what real triumph looks like.

The workplace is, in the end, a marketplace of adults, where everyone is their own boss. Your boss, your colleagues, your partners: all of them are your suppliers. Absorb the best in them and steer around the worst, and your gear keeps getting better as you fight monsters and clear levels on the road to promotions and raises. These days it's fashionable to have others "empower" you, but Peter Drucker, the father of modern management, saw this "function" of the boss decades before we did. He said: Making the strength of the boss productive is a key to the subordinate's own effectiveness.

Modules

A module is a collection of related Go packages that are versioned together as a single unit. Modules record precise dependency requirements and create reproducible builds.

go.mod

A module is defined by a tree of Go source files with a go.mod file in the tree's root directory. Module source code may be located outside of GOPATH. There are four directives: module, require, replace, exclude.

List the current module and all its dependencies:

go list -m all

List all version tags of a specific module:

go list -m -versions github.
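A minimal sketch of creating a module (the module path example.com/hello is hypothetical):

go mod init example.com/hello   # writes a go.mod with a module directive
go build ./...                  # adds require directives as dependencies are resolved
cat go.mod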

Enable Helm in the cluster

Create a Service Account named tiller for the Tiller server (in the kube-system namespace); Service Accounts are meant for intra-cluster processes running in Pods. Bind the cluster-admin ClusterRole to this Service Account: a ClusterRoleBinding applies in all namespaces, allowing Tiller to manage resources everywhere. Then update the existing Tiller deployment (tiller-deploy) to associate its pod with the Service Account tiller.

kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

or
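A quick check that the patch took effect (a sketch; assumes a working helm client):

kubectl -n kube-system get deploy tiller-deploy \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'
helm version   # should report both client and server versions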

Configuring Nodes to Authenticate to a Private Registry

Note: Kubernetes currently only supports the auths and HttpHeaders sections of the Docker config, which means credential helpers (credHelpers or credsStore) are not supported. Docker stores keys for private registries in the $HOME/.dockercfg or $HOME/.docker/config.json file. If files exist in the search-path list below, kubelet uses them as the credential provider when pulling images:

{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.
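A sketch, assuming a private registry at registry.example.com: docker login writes an auths entry into $HOME/.docker/config.json, which can then be placed in one of the kubelet search paths above.

docker login registry.example.com   # prompts for username/password
cat $HOME/.docker/config.json       # contains an 'auths' entry for the registry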

ssh client config, ~/.ssh/config:

Host *
    StrictHostKeyChecking no

Enable forwarding of the authentication agent connection, client config .ssh/config:

ForwardAgent yes

Enable ssh-agent on the main device, .bashrc:

SSH_ENV="$HOME/.ssh/environment"

function start_agent {
    echo "Initialising new SSH agent..."
    /usr/bin/ssh-agent | sed 's/^echo/#echo/' > "${SSH_ENV}"
    echo succeeded
    chmod 600 "${SSH_ENV}"
    . "${SSH_ENV}" > /dev/null
    /usr/bin/ssh-add;
}

# Source SSH settings, if applicable
if [ -f "${SSH_ENV}" ]; then
    .

Setting up an external etcd

Install docker, kubelet, and kubeadm. Configure the kubelet to be a service manager for etcd, then create configuration files for kubeadm.

/tmp/${HOST0}/kubeadmcfg.yaml:

apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
      - "192.168.1.10"
    peerCertSANs:
      - "192.168.1.10"
    extraArgs:
      initial-cluster: infra0=https://192.168.1.10:2380
      initial-cluster-state: new
      name: infra0
      listen-peer-urls: https://192.168.1.10:2380
      listen-client-urls: https://192.168.1.10:2379
      advertise-client-urls: https://192.168.1.10:2379
      initial-advertise-peer-urls: https://192.168.1.10:2380

Generate the certificate authority, then the etcd server certificate:

export HOST0="192.168.1.10"
sudo kubeadm init phase certs etcd-ca
sudo kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.

tl;dr

wget https://osdn.net/projects/systemrescuecd/storage/releases/6.0.2/systemrescuecd-6.0.2.iso
sudo mkdir -p /takeover /mnt/cd
sudo mount -t tmpfs tmpfs /takeover/
sudo mount -o loop,ro -t iso9660 ~/systemrescuecd-6.0.2.iso /mnt/cd
cp -rf /mnt/cd/* /takeover/
curl -L https://www.busybox.net/downloads/binaries/1.26.2-defconfig-multiarch/busybox-x86_64 > /takeover/busybox
chmod u+x /takeover/busybox
git clone https://github.com/marcan/takeover.sh.git
gcc takeover.sh/fakeinit.c -o ./fakeinit

Preparation

docker pull istio/proxyv2:1.0.6
docker tag istio/proxyv2:1.0.6 gcr.io/istio-release/proxyv2:release-1.0-latest-daily
docker push registry.cn-beijing.aliyuncs.com/co1/istio_proxyv2:1.0.6
docker pull istio/pilot:1.0.6
docker tag istio/pilot:1.0.6 gcr.io/istio-release/pilot:release-1.0-latest-daily
docker pull istio/mixer:1.0.6
docker tag istio/mixer:1.0.6 gcr.io/istio-release/mixer:release-1.0-latest-daily
docker pull istio/galley:1.0.6
docker tag istio/galley:1.0.6 gcr.io/istio-release/galley:release-1.0-latest-daily
docker pull istio/citadel:1.0.6
docker tag istio/citadel:1.0.6 gcr.io/istio-release/citadel:release-1.0-latest-daily
docker pull istio/sidecar_injector:1.0.6
docker tag istio/sidecar_injector:1.0.6 gcr.io/istio-release/sidecar_injector:release-1.0-latest-daily
git clone https://github.com/istio/istio.git
cd istio
git checkout 1.0.6 -b 1.0.6

Installation

Istio by default uses LoadBalancer service object types. Some platforms do not support LoadBalancer service objects.
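On such platforms, one hedged workaround is to switch the ingress gateway Service to NodePort after installation (istio-ingressgateway is the Service name in Istio's default install):

kubectl -n istio-system patch svc istio-ingressgateway -p '{"spec":{"type":"NodePort"}}'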

GitHub's two main pull-request development models

The fork-and-pull model: any developer can fork the source repository (upstream), clone the fork (origin) to the local filesystem for development and testing, push the tested changes to the fork origin, and send a pull request to the source repository upstream. The maintainers of the source repository review the change and ultimately decide whether to merge it.

Before sending a pull request, several developers may collaborate on one feature, pulling code from each other's repositories. Pulling from another developer's repository comes down to defining a new remote that points the local fork at that repository's address.

https://github.com/wubigo/wubigo.github.io

Click the Fork button (top right). GitHub copies the repository into your own GitHub account, creating the fork. Open a git command-line client and clone the fork into your local environment:

git clone https://github.com/$USER_NAME/wubigo.github.io.git
cd wubigo.github.io
git remote add upstream [email protected]:wubigo/wubigo.github.io.git
# Never push to upstream master
git remote set-url --push upstream no_push
# Confirm that your remotes make sense:
git remote -v
origin https://github.com/Fuang/wubigo.github.io.git (fetch)
origin https://github.com/Fuang/wubigo.github.io.git (push)
upstream [email protected]:wubigo/wubigo.github.io.git (fetch)
upstream [email protected]:wubigo/wubigo.github.io.git (push)

Sync local code with upstream:

git fetch upstream
git checkout master
git rebase upstream/master
git push

Show the latest commit ID on each branch

Preparation

Create the role and role binding:

kubectl create clusterrolebinding "cluster-admin-faas" \
  --clusterrole=cluster-admin \
  --user="cluster-admin-faas"

Create separate namespaces for the OpenFaaS core services and for functions:

kubectl apply -f https://raw.githubusercontent.com/openfaas/faas-netes/master/namespaces.yml

Create credentials:

# generate a random password
PASSWORD=$(head -c 12 /dev/urandom | shasum | cut -d' ' -f1)
kubectl -n openfaas create secret generic basic-auth \
  --from-literal=basic-auth-user=admin \
  --from-literal=basic-auth-password="$PASSWORD"

Add openfaas to the local helm repository list:

helm repo add openfaas https://openfaas.github.io/faas-netes/
"openfaas" has been added to your repositories

Install:

helm repo update \
  && helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas \
  --set basic_auth=true \
  --set functionNamespace=openfaas-fn

By default, the OpenFaaS console is exposed via NodePort.
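To reach the console, look up the NodePort and log in with faas-cli (a sketch; gateway-external is the NodePort Service the chart creates, typically on port 31112, and <NODE_IP> is a placeholder):

kubectl -n openfaas get svc gateway-external -o jsonpath='{.spec.ports[0].nodePort}'
faas-cli login --username admin --password "$PASSWORD" --gateway http://<NODE_IP>:31112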

Container networking

Container networking solution = access + flow control + channel

Docker's default network

Bridge network

Docker macvlan network

macvlan network

Docker host network

Host network

Docker overlay network

Host port binding

Binding flag: -p

Binding forms

ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort | containerPort

containerPort must always be specified.
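Hedged sketches of the four forms (addresses, ports, and the image are illustrative):

docker run -d -p 127.0.0.1:8080:80 nginx:1.14-alpine   # ip:hostPort:containerPort
docker run -d -p 127.0.0.1::80 nginx:1.14-alpine       # ip::containerPort (random host port)
docker run -d -p 8080:80 nginx:1.14-alpine             # hostPort:containerPort
docker run -d -p 80 nginx:1.14-alpine                  # containerPort (random host port)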

docker run --rm --name web -p 80:80 -v /home/bigo/site:/usr/share/nginx/html:ro -d nginx:1.14-alpine

Docker automatically starts a docker-proxy process for each container that has a port binding:

docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.2 -container-port 80
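After a port-binding run you can verify the mapping and the proxy process (a quick sketch, using the web container from the example above):

docker port web
ps -ef | grep docker-proxy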

Getting an SSL Certificate and CloudFront

Create a CloudFront Distribution

Navigate to CloudFront in your AWS console and click "Create Distribution". Click "Get Started" under the Web option (not the RTMP one). You'll arrive on the Create Distribution page. Here you need to change three things:

1. Click inside the input field for "Origin Domain Name". A list of your Amazon S3 buckets should pop up. Select the S3 bucket you want to use.
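The same step can be scripted (a sketch; the bucket domain is hypothetical, and the AWS CLI's --origin-domain-name shorthand creates a distribution with default settings):

aws cloudfront create-distribution --origin-domain-name my-bucket.s3.amazonaws.com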