Posts

The Container Network Interface (CNI) is a specification, a library, and a set of tools under the umbrella of the Cloud Native Computing Foundation; for more information, visit the project on GitHub. Kubernetes uses CNI as the interface between network providers and Kubernetes networking.

Why use CNI? Kubernetes' default networking provider, kubenet, is a simple network plugin that works with various cloud providers. Kubenet is a very basic network provider, and basic is good, but it does not offer many features.
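For context, a CNI plugin is driven by a JSON network configuration. A minimal sketch using the reference bridge plugin might look like this (the network name, bridge name, and subnet below are arbitrary choices, not values from this post):

```json
{
  "cniVersion": "0.3.1",
  "name": "mynet",
  "type": "bridge",
  "bridge": "cni0",
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16",
    "routes": [{ "dst": "0.0.0.0/0" }]
  }
}
```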

https://serverless.com/framework/docs/providers/aws/events/

Get an S3 object-creation notification.

Create a bucket and a queue:

```shell
awslocal s3 mb s3://localstack
awslocal sqs create-queue --queue-name localstack
```

Get the queue ARN:

```shell
awslocal sqs get-queue-attributes --queue-url http://localhost:4576/queue/localstack --attribute-names All
```

```json
{
    "Attributes": {
        "ApproximateNumberOfMessagesNotVisible": "0",
        "ApproximateNumberOfMessagesDelayed": "0",
        "CreatedTimestamp": "1574152022",
        "ApproximateNumberOfMessages": "1",
        "ReceiveMessageWaitTimeSeconds": "0",
        "DelaySeconds": "0",
        "VisibilityTimeout": "30",
        "LastModifiedTimestamp": "1574152022",
        "QueueArn": "arn:aws:sqs:us-east-1:000000000000:localstack"
    }
}
```

Create the S3 notification configuration:

```shell
cat notification.json
```

```json
{
    "QueueConfigurations": [
        {
            "QueueArn": "arn:aws:sqs:local:000000000000:localstack",
            "Events": ["s3:ObjectCreated:*"]
        }
    ]
}
```

Make the notification configuration take effect.

AWS Lambda

By default, all native logs from a Lambda function are stored with the function execution result within Lambda. Additionally, if you would like to review log information immediately after executing a function, invoking the Lambda function with the LogType parameter set to Tail retrieves the last 4 KB of log data generated by the function; it is returned base64-encoded in the x-amz-log-result header of the HTTP response. While these methods are great for testing and debugging individual function calls, they do not offer much in the way of analysis or alerting.
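The decode step can be sketched offline. The function name below is mine, and the sample payload is made up; with boto3, the response of `client.invoke(FunctionName=..., LogType="Tail")` has the same `LogResult` field:

```python
import base64

def extract_logs(invoke_response):
    """Decode the base64 'LogResult' field returned when a Lambda
    function is invoked with LogType='Tail' (last 4 KB of logs)."""
    return base64.b64decode(invoke_response["LogResult"]).decode("utf-8")

# Offline example with a made-up payload.
sample = {"LogResult": base64.b64encode(b"START RequestId: 42\nEND").decode()}
print(extract_logs(sample))
```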

```shell
serverless install -u https://github.com/serverless/examples/tree/master/aws-node-upload-to-s3-and-postprocess -n aws-node-upload-to-s3-and-postprocess
sls deploy -s local
awslocal logs describe-log-groups
```

```json
{
    "logGroups": [
        {
            "arn": "arn:aws:logs:us-east-1:1:log-group:/aws/lambda/upload-local-postprocess",
            "creationTime": 1573867924377.624,
            "metricFilterCount": 0,
            "logGroupName": "/aws/lambda/upload-local-postprocess",
            "storedBytes": 0
        }
    ]
}
```

```shell
awslocal logs describe-log-streams --log-group-name /aws/lambda/upload-local-postprocess
```

```json
{
    "logStreams": []
}
```

```shell
serverless install -u https://github.com/serverless/examples/tree/master/aws-node-s3-file-replicator -n aws-node-s3-file-replicator
sls deploy -s local
awslocal s3api get-bucket-notification-configuration --bucket bbbb
awslocal s3api get-bucket-acl --bucket output-bucket-12345
```

lambda_function.py:

```python
import json

def my_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
```

Install Node.js, then install Serverless:

```shell
npm install -g serverless
npm install serverless-localstack
```

Check the Serverless version:

```shell
serverless -v
```

```
Framework Core: 1.57.0
Plugin: 3.2.3
SDK: 2.2.1
Components Core: 1.1.2
Components CLI: 1.4.0
```

Create a Serverless function:

```shell
serverless create --template aws-nodejs --path my-service
cd my-service
```

serverless.yml:

```yaml
functions:
  hello:
    handler: handler.hello
    events:
      - http:
          path: ping
          method: get

plugins:
  - serverless-localstack

custom:
  localstack:
    debug: true
    stages:
      - local
      - dev
    host: http://localhost
    endpoints:
      S3: http://localhost:4572
      DynamoDB: http://localhost:4570
      CloudFormation: http://localhost:4581
      Elasticsearch: http://localhost:4571
      ES: http://localhost:4578
      SNS: http://localhost:4575
      SQS: http://localhost:4576
      Lambda: http://localhost:4574
      Kinesis: http://localhost:4568
      APIGateway: http://localhost:4567
      CloudWatch: http://localhost:4582
      CloudWatchLogs: http://localhost:4586
      CloudWatchEvents: http://localhost:4587
```

Deploy; redeploy if any functions, events, or resources change.

Note: starting with TensorFlow 1.6, prebuilt binaries use AVX instructions, which may not run on older CPUs. You have to build 1.6 or higher from source to run on an older CPU.

Bazel 0.19.0 doesn't read tools/bazel.rc anymore:

```
WARNING: The following rc files are no longer being read, please transfer their contents or import their path into one of the standard rc files: tensorflow-1.12.0/tools/bazel.rc
```

```shell
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package --cxxopt="-D_GLIBCXX_USE_CXX11_ABI=0" --sandbox_debug > build.
```
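Before grabbing a prebuilt wheel, you can check whether the CPU advertises AVX at all. A small sketch reading /proc/cpuinfo on Linux (the helper name is mine):

```python
def cpu_supports_avx(cpuinfo_text):
    """Return True if any CPU 'flags' line in /proc/cpuinfo lists avx."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            if "avx" in line.split(":", 1)[1].split():
                return True
    return False

# On Linux:
# with open("/proc/cpuinfo") as f:
#     print(cpu_supports_avx(f.read()))
```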

Putting /tmp on tmpfs: https://blog.ubuntu.com/2016/01/20/data-driven-analysis-tmp-on-tmpfs

Interrupt Coalescence (IC), Ubuntu 16 defaults:

```shell
ethtool -c enp0s25
```

```
Coalesce parameters for enp0s25:
Adaptive RX: off  TX: off
```

Pause frames:

```shell
ethtool -a enp0s25
```

```
Pause parameters for enp0s25:
Autonegotiate: on
RX: on
TX: on
```

Network: tune the network adapter (NIC) and use jumbo frames:

```shell
ifconfig eth0 mtu 9000
```

`ip` result for a healthy system with no packet drops:

```shell
ip -s link show eth0
```

Stop irqbalance for home users.
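For the /tmp-on-tmpfs change above, one way to make it persistent is an /etc/fstab entry along these lines (the size cap here is an arbitrary choice, not a recommendation from the linked analysis):

```
tmpfs  /tmp  tmpfs  defaults,noatime,mode=1777,size=2G  0  0
```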

Improve docker container detection and resource configuration usage

https://blog.softwaremill.com/docker-support-in-new-java-8-finally-fd595df0ca54

https://www.oracle.com/technetwork/java/javase/8u191-relnotes-5032181.html

```shell
awslocal lambda add-permission --function-name ServerlessExample --action lambda:InvokeFunction --statement-id sns-topic --principal apigateway.amazonaws.com --source-arn "arn:aws:execute-api:us-east-1:123456789012:pmte6kdjb6/*/*"
```

Status-Line

The first line of a Response message is the Status-Line: the protocol version followed by a numeric status code and its associated textual phrase, with each element separated by SP characters. No CR or LF is allowed except in the final CRLF sequence.

```
Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF
```

Status code vs. status in body:

https://www.codetinkerer.com/2015/12/04/choosing-an-http-status-code.html
https://httpstatuses.com/

The main choice is whether or not you want to treat the HTTP status code as part of your REST API.
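As a quick illustration of that grammar (the helper name is mine):

```python
def parse_status_line(line):
    """Split an HTTP Status-Line into (version, code, reason) per
    Status-Line = HTTP-Version SP Status-Code SP Reason-Phrase CRLF."""
    line = line.rstrip("\r\n")              # drop the final CRLF
    version, code, reason = line.split(" ", 2)  # Reason-Phrase may contain SP
    return version, int(code), reason

print(parse_status_line("HTTP/1.1 404 Not Found\r\n"))
# ('HTTP/1.1', 404, 'Not Found')
```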

Events

The event sent by an event source is in JSON; the Lambda runtime converts the raw JSON event into an object and passes it to the function code. The structure and content of the event are determined by the event source. Services supported as event sources include Kinesis, DynamoDB, and Simple Queue Service.

Permissions

Permissions policies control access to the Lambda API and resources (functions and layers) for IAM users, groups, and roles. A policy can also be attached to a resource itself, allowing a resource or service to access Lambda. Every Lambda function has an execution role that grants the function access to other resources and services; at minimum it includes access to CloudWatch Logs. Lambda also uses the execution role to request read access to event sources.

Resources: functions, versions, aliases, layers.

Example: allow SNS to invoke my-function:

```shell
aws lambda add-permission --function-name my-function \
    --action lambda:InvokeFunction --statement-id sns \
    --principal sns.amazonaws.com --output text
```

```json
{"Sid":"sns","Effect":"Allow","Principal":{"Service":"sns.amazonaws.com"},"Action":"lambda:InvokeFunction","Resource":"arn:aws:lambda:ap-northeast-1:465691908928:function:my-function"}
```

Serverless backend

Lambda allows triggering the execution of code in response to events in AWS, enabling serverless backend solutions. The Lambda function itself includes the source code.
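To make the event-to-object conversion concrete, here is a minimal sketch of a Python handler for an SQS-sourced event; the event shape below is a trimmed-down assumption (real SQS records carry more fields) and the handler name is mine:

```python
import json

def handler(event, context):
    """Sketch: the Lambda runtime has already turned the raw JSON event
    into a Python dict; each SQS message body arrives under
    event['Records'][*]['body'] as a JSON string."""
    bodies = [json.loads(record["body"]) for record in event["Records"]]
    return {"processed": len(bodies)}

# Trimmed-down sample event for local experimentation.
sample_event = {"Records": [{"body": json.dumps({"id": 1})}]}
print(handler(sample_event, None))  # {'processed': 1}
```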

Based on years of consulting experience with IaaS/PaaS platforms, and on helping customers in industries such as healthcare, telecom, and construction migrate to internet IaaS/PaaS platforms, I share some hands-on observations here, hoping to offer small and medium-sized enterprises some useful advice on moving to the cloud.

Some of the advice in this post only applies to SMEs without their own data center. If you already have a data center, choose a cloud vendor based on your current compute capacity.

Cloud service models


From the cloud service models, an enterprise moving to the cloud has the following paths to choose from, but the direction is the same: focus on the business, outsource IT.

  • Migrate from a self-built data center to an IaaS platform
  • Migrate from IaaS to PaaS
  • Migrate between public cloud vendors
  • Migrate from IaaS to FaaS
  • Hybrid cloud

The beginner stage of moving to the cloud: buying cloud hosts. The price/performance metrics of cloud hosts are completely different from those of physical servers. For how to choose the most cost-effective cloud host, see below.

Key price/performance metrics for cloud hosts

Disk rotation speed is no longer a meaningful metric for cloud hosts. The main price/performance metrics are:

  • Average internal/external network bandwidth of the cloud host (the most critical metric)
  • Storage (network disk) I/O throughput
  • Resource utilization right after initialization completes

Average internal bandwidth is the most critical metric. It is not just a matter of money; it has a decisive impact on your application architecture: an architecture carried over unchanged will spend long periods blocked on network I/O on cloud hosts. The root cause is not that your original architecture was bad, but that public cloud vendors' network performance is poor. In 2016, Alibaba Cloud's average bandwidth was still under 100M per second; AWS was around 200M per second.

Network bandwidth is the first big pitfall many enterprises run into when they decide to move to the cloud as part of a strategic transformation.

Average internal bandwidth is also the core metric for measuring a private cloud vendor's service capability.

That is why public cloud vendors have started pitching the hybrid cloud concept to you, even though hybrid cloud does not serve public cloud vendors' core interests.

Benchmark comparison of the main products of Chinese and international cloud vendors

Things to note when moving to the cloud

The intermediate stage of moving to the cloud: PaaS

How to choose a PaaS vendor

The final stage of moving to the cloud: no servers at all.

How to choose a FaaS vendor

https://github.com/wubigo/localstack-examples

Listen Notes (LN) is a podcast search engine and database. The technology behind it is thoroughly boring: no AI, no deep learning, no blockchain. "If anyone insists that we are using AI, then they have never used real AI."

By reading this post, you could fully replicate Listen Notes or a similar website, and you don't need to hire many engineers. Remember that when Facebook acquired Instagram for $1 billion, Instagram had only 13 employees in total, including non-engineering staff. That was in 2012; now, in 2019, cloud computing is far more mature. Standing on the shoulders of giants, a small engineering team is even more likely to build something meaningful, even a one-person company (OPC) like mine.

I've found that this post has been widely shared on HN and reddit, so let me make a few clarifications:

  • This post is not up to date. LN's technology stack keeps evolving, and after two years of full-time development it has become somewhat complex. When LN launched in early 2017, it ran on just 3 VPSes. The "boring" here means I used only technologies I was already familiar with, so I could build the product quickly and focus on the business side.

  • These days I spend only about 20% of my time on engineering; the rest goes into business communication, replying to email, and daily reflection.

  • If I have offended you by not using the technology you recommend, or by not answering a question you asked, please forgive me. I can't make everyone happy.

  • This post describes one way to build an internet product, not the only way, and possibly not the best way. It offers some data points to help you understand the technology world.

https://broadcast.listennotes.com/the-boring-technology-behind-listen-notes-56697c2e347b

Create the function. index.js:

```javascript
exports.handler = async function(event, context) {
  console.log("ENVIRONMENT VARIABLES\n" + JSON.stringify(process.env, null, 2))
  console.log("EVENT\n" + JSON.stringify(event, null, 2))
  return context.logStreamName
}
```

Package and deploy it:

```shell
zip function.zip index.js
aws lambda create-function --function-name my-function --zip-file fileb://function.zip --handler index.handler --runtime nodejs10.x --role arn:aws:iam::123456789012:role/lambda-cli-role --endpoint-url=http://localhost:4574
aws lambda get-function --function-name my-function --endpoint-url=http://localhost:4574
```

```json
{
    "Code": {
        "Location": "http://localhost:4574/2015-03-31/functions/my-function/code"
    },
    "Configuration": {
        "TracingConfig": { "Mode": "PassThrough" },
        "Version": "$LATEST",
        "CodeSha256": "3d149vplmMjIEgZuPhQgnFJ+tndL4I9D11GL1qdgT6M=",
        "FunctionName": "my-function",
        "LastModified": "2019-09-29T01:16:43.
```

On Windows, you must first enable shared drives before you can mount volumes.

Enable shared drives

1: Open "Settings" in Docker Desktop -> 
   "Shared Drives" -> 
   "Reset Credentials" -> 
   select drive "D" -> "Apply"

Verify with a test volume:

```shell
docker run --rm -v d:/tmp:/data alpine ls /data
```

Install the AWS CLI (it can also be installed into the system environment):

```shell
(venv) d:\code\venv> pip install awscli
(venv) d:\code\venv> pip install awscli-local
```

awslocal = aws --endpoint-url=http://localhost:

Configure the AWS CLI:

```shell
(venv) d:\code\venv> aws configure
AWS Access Key ID [None]: any-id-is-ok
AWS Secret Access Key [None]: fake-key
Default region name [local]: local
Default output format [None]:
```

Command-line auto-completion:

```shell
$ which aws_completer
~/code/venv/bin/aws_completer
tee -a ~/.bashrc <<-'EOF'
complete -C '~/code/venv/bin/aws_completer' aws
EOF
```

Install the AWS SAM CLI:

```shell
(venv) d:\code> pip install aws-sam-cli
(venv) d:\code> sam --version
SAM CLI, version 0.
```
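awslocal is just sugar over the plain aws CLI: it injects an --endpoint-url pointing at LocalStack. Conceptually it builds a command like this (a sketch with a made-up helper name, not the real implementation, using the port-per-service mapping from earlier in this post):

```python
def localstack_cmd(service_port, *aws_args):
    """Build the aws CLI argument list that awslocal effectively runs,
    pointing the given service port at LocalStack on localhost."""
    return ["aws", f"--endpoint-url=http://localhost:{service_port}"] + list(aws_args)

print(localstack_cmd(4572, "s3", "ls"))
# ['aws', '--endpoint-url=http://localhost:4572', 's3', 'ls']
```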

Bind an external IP:

```shell
gatsby develop -- --host=0.0.0.0
```

Prettier VS Code plugin.

JSX

The hybrid "HTML-in-JS" is actually a syntax extension of JavaScript for React, called JSX. In pure JavaScript, it looks more like this:

src/pages/index.js:

```javascript
import React from "react"

export default () => React.createElement("div", null, "Hello world!")
```

Now you can spot the use of the 'react' import! But wait: you're writing JSX, not pure HTML and JavaScript. How does the browser read that?

glide

To upgrade dependencies, please make the necessary modifications in glide.yaml and run glide update.