SDN

Cloud Native Data Center Networking

Inline Versus Overlay Virtual Networks

In the inline model, every hop between the source and destination is aware of the virtual network the packet belongs to and uses this information to do lookups in the forwarding table. In the overlay network model, only the edges of the network keep track of the virtual networks; the core of the network is unaware of virtual networks. VLAN and VRF are examples of the inline model of virtual networks, whereas MPLS, VXLAN, and other IP-based VPNs are examples of the overlay model.
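To make the overlay model concrete, here is a minimal sketch of a Linux VXLAN tunnel endpoint (the interface names, VNI 42, and all addresses are made-up placeholders): only the two edge hosts know about the VNI, while the core between them just routes UDP packets.

    # edge host A; host B mirrors this with local/remote swapped
    sudo ip link add vxlan42 type vxlan id 42 dstport 4789 \
        local 192.0.2.1 remote 192.0.2.2 dev eth0
    sudo ip addr add 10.10.10.1/24 dev vxlan42
    sudo ip link set vxlan42 up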

AWS EC2 MACVLAN

Amazon EC2 networking does not allow containers to use VPC private IPs through bridges or macvlan. Dedicating a network interface to a container also makes the container unreachable directly from the host.

    docker network create -d macvlan --subnet 172.30.80.0/20 --gateway 172.30.80.1 -o parent=eth0 pub_net
    docker run -d --network pub_net --ip 172.30.80.10 busybox
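For the host-reachability limitation, one commonly used workaround sketch (addresses follow the example above; the mac0 name and the chosen host IP are assumptions): a host cannot talk to its own macvlan children through the parent interface, but it can through a macvlan sibling of its own.

    # give the host its own macvlan interface on the same parent
    sudo ip link add mac0 link eth0 type macvlan mode bridge
    sudo ip addr add 172.30.80.2/32 dev mac0
    sudo ip link set mac0 up
    # route container traffic via the macvlan sibling, not via eth0
    sudo ip route add 172.30.80.10/32 dev mac0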

SR-IOV vs DPDK/VPP for NFV

The Linux bridge supported GRE tunnels, but not the newer and more scalable VXLAN model: https://vincent.bernat.ch/en/blog/2017-vxlan-linux

This post covers the building blocks available to speed up packet processing, both hardware-based (e.g. SR-IOV, RDT, QAT, VMDq, VT-d) and software-based (e.g. DPDK, FD.io/VPP, OVS), and gives hands-on lab experience: https://www.telcocloudbridge.com/blog/dpdk-vs-sr-iov-for-nfv-why-a-wrong-decision-can-impact-performance/
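As a taste of the hardware side, SR-IOV virtual functions can be created through sysfs on a capable NIC; a sketch, where the interface name, VF count, and MAC address are placeholders:

    # create 4 virtual functions on the physical function behind eth0
    echo 4 | sudo tee /sys/class/net/eth0/device/sriov_numvfs
    ip link show eth0                          # now lists vf 0..3
    # pin a MAC so the VF can be passed through to a VM or container
    sudo ip link set eth0 vf 0 mac 52:54:00:12:34:56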

VPN with WireGuard

Installation:

    sudo add-apt-repository ppa:wireguard/wireguard
    sudo apt-get update
    sudo apt-get install wireguard -y

Open the security group.

Configuration: create the keys.

    wg genkey | tee privatekey | wg pubkey > publickey
    private_key=$(wg genkey)
    public_key=$(echo $private_key | wg pubkey)

Check the current interfaces:

    ip a
    1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
    2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9001 qdisc pfifo_fast state UP group default qlen 1000
        link/ether 0a:81:39:72:97:90 brd ff:ff:ff:ff:ff:ff
        inet 10.
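The note stops before the tunnel configuration itself. A minimal server-side /etc/wireguard/wg0.conf sketch, where the 10.0.0.0/24 tunnel subnet, the port, and the key placeholders are assumptions rather than values from the note:

    [Interface]
    # server's private key from the genkey step above
    PrivateKey = <contents of privatekey>
    Address = 10.0.0.1/24
    ListenPort = 51820

    [Peer]
    # one block per client
    PublicKey = <client's publickey>
    AllowedIPs = 10.0.0.2/32

Bring the tunnel up with: sudo wg-quick up wg0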

VPN Client Setup References

how-to-allow-local-network-when-using-wireguard-vpn-tunnel

Allow untunneled traffic:

1. Open the WireGuard Windows client.
2. In the left pane, select the tunnel you want local network routing to work for, if you have more than one tunnel.
3. Hit the Edit button.
4. Uncheck the Block untunneled traffic (kill-switch) option.
5. Add the local networks:

    AllowedIPs = 192.168.0.0/16, 0.0.0.0/1, 128.0.0.0/1, ::/1, 8000::/1

Installation: https://download.wireguard.com/windows-client/wireguard-amd64-0.0.38.msi

Configuration: change the public key, and set Endpoint to the address of the VPN server.

References:

https://github.com/Nyr/openvpn-install
https://github.com/hwdsl2/setup-ipsec-vpn
https://wireguard.isystem.io/
https://github.com/meshbird/meshbird
https://www.tinc-vpn.org/
https://github.com/isystem-io/wireguard-aws

Alternatively, download and install TunSafe, a WireGuard client for Windows.
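Putting the pieces together, a hypothetical client config using the local-LAN bypass above (the keys, addresses, and endpoint are placeholders, not from the note):

    [Interface]
    PrivateKey = <client privatekey>
    Address = 10.0.0.2/32

    [Peer]
    PublicKey = <server publickey>
    Endpoint = vpn.example.com:51820
    # the two /1 halves cover all of IPv4 without being the literal
    # 0.0.0.0/0 that triggers the client's kill-switch option, so
    # more-specific local LAN routes keep working
    AllowedIPs = 192.168.0.0/16, 0.0.0.0/1, 128.0.0.0/1, ::/1, 8000::/1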

NFV Notes

VLAN

VLAN (802.1Q) is a local area network technology that partitions a LAN's broadcast domain into multiple broadcast domains; it is commonly used to isolate different departments within a site.

Data center network virtualization: NVo3 end-to-end tunnels

NVo3 (Network Virtualization over Layer 3) is a data center virtualization framework proposed by the IETF in October 2014. NVo3 uses IP/MPLS as the transport network and builds large-scale layer-2 tenant networks on top of it through tunnels. In the NVo3 model, the PE devices are called NVEs (Network Virtualization Elements), the VN Context serves as the tag identifying the tenant network, and the P devices are ordinary IP/MPLS routers. When NVo3 was designed, joint deployment of VxLAN and SDN had already become the major trend in data centers, so the NVo3 model explicitly includes an NVA (Network Virtualization Authority) as the controller of the NVE devices, responsible for control logic such as tunnel establishment and address learning.

VxLAN

VxLAN (Virtual eXtensible LAN, RFC 7348) is a layer-2 technology jointly proposed by VMware and Cisco. It breaks the limit of only 4K VLAN IDs and allows tunnels to be carried over the existing IP network. Although the name VxLAN sounds a lot like VLAN, the two have no necessary technical connection: VxLAN is a MAC-in-UDP tunnel.

NvGRE

NvGRE (Network Virtualization using GRE, RFC draft) is Microsoft's data center virtualization technology, a MAC-in-GRE tunnel. It reworks the traditional GRE header, adding a 24-bit VSID field to identify the tenant, while the FlowID field can be used for ECMP. Because the Checksum field was removed from the GRE header, NvGRE does not support checksum verification. NvGRE encapsulates Ethernet frames, and the outer header can be either IPv4 or IPv6.

https://www.sdnlab.com/nv-subject/
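To make the VLAN paragraph above concrete, a sketch of an 802.1Q sub-interface on Linux (the interface name, VLAN ID, and address are assumptions):

    # tag department traffic with VLAN 100 on top of eth0
    sudo ip link add link eth0 name eth0.100 type vlan id 100
    sudo ip addr add 192.168.100.1/24 dev eth0.100
    sudo ip link set eth0.100 up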

Overlay vs Underlay Networks

Overlay vs. underlay in summary: an underlay network is the physical network responsible for the delivery of packets (DWDM, L2, L3, MPLS, the internet, etc.), while an overlay is a logical network that uses network virtualization to build connectivity on top of the physical infrastructure using tunneling encapsulations such as VXLAN, GRE, or IPsec.

https://telcocloudbridge.com/blog/overlay-vs-underlay-networks/
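A minimal sketch of the idea, with made-up addresses: the GRE tunnel is the overlay, and the routed IP path between the two underlay addresses (203.0.113.1 and 203.0.113.2 here) is the underlay.

    sudo ip tunnel add gre1 mode gre local 203.0.113.1 remote 203.0.113.2 ttl 64
    sudo ip addr add 10.99.0.1/30 dev gre1      # overlay address
    sudo ip link set gre1 up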

Linux Bridge

Some things worth noting in br_add_if:

- Only Ethernet-like devices can be added to a bridge, since a bridge is a layer-2 device.
- A bridge cannot be added to another bridge.
- The new interface is set to promiscuous mode: dev_set_promiscuity(dev, 1).

https://goyalankit.com/blog/linux-bridge
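For reference, the userspace side of the same operation (interface names assumed); enslaving a device is what ends up in the br_add_if path discussed above:

    sudo ip link add br0 type bridge
    sudo ip link set eth1 master br0     # kernel calls br_add_if here
    sudo ip link set br0 up
    bridge link show                     # list the bridge's ports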

AWS EC2 Multi-NIC Configuration

We can no longer assign a public IP address to the instance: "The auto-assign public IP address feature for this instance is disabled because you specified multiple network interfaces. Public IPs can only be assigned to instances with one network interface. To re-enable the auto-assign public IP address feature, please specify only the eth0 network interface."

Get the private IPs from the instance metadata:

    MAC=`curl http://169.254.169.254/latest/meta-data/mac`
    curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/${MAC}/local-ipv4s

Configure the second NIC:

    ip a | grep ^[[:digit:]]
    tee -a /etc/network/interfaces.
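The note is truncated after `tee -a /etc/network/interfaces.`. On a Debian/Ubuntu-style AMI, the stanza being appended might look like the following sketch, where the file path, interface name, and addresses are all assumptions:

    # hypothetical /etc/network/interfaces.d/eth1.cfg
    auto eth1
    iface eth1 inet static
        address 172.30.81.10
        netmask 255.255.240.0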

Kernel Bypass Networking

Kernel bypass approaches include RDMA (Remote Direct Memory Access), TOE (TCP Offload Engine), and OpenOnload. More recently, DPDK (Data Plane Development Kit) has been used in some applications to bypass the kernel, and there are new emerging initiatives such as FD.io (Fast Data Input/Output) based on VPP (Vector Packet Processing). More will likely emerge in the future. Technologies like RDMA and TOE create a parallel stack in the kernel and solve the first problem (namely, that the "kernel is too slow"), while OpenOnload, DPDK, and FD.io move packet processing entirely into userspace.
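As a small illustration of the DPDK path (the PCI address is a placeholder, and this assumes DPDK is installed with hugepages configured): the NIC is unbound from the kernel driver and handed to a userspace poll-mode driver.

    # detach the NIC from the kernel and bind it to vfio-pci
    sudo modprobe vfio-pci
    sudo dpdk-devbind.py --bind=vfio-pci 0000:01:00.0
    # run DPDK's reference forwarder entirely in userspace
    sudo dpdk-testpmd -l 0-1 -n 4 -- --forward-mode=macswap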