
Qemu and Virt-manager in macOS (Intel-based)

· 4 min read
Adekabang
Tukang Ngoprek

So, I found this old iMac (Retina 5K, 27-inch, 2017) lying around with pretty decent specs. I'm thinking of turning it into my dedicated x86 playground since my main machine is an M1 MacBook. First things first: getting Qemu and Virt-manager up and running for some virtual machine action. Yeah, I know there are other VM solutions like VMware or VirtualBox, but I want to stay familiar with what's common in enterprise environments (even if nobody's running macOS in production, haha!).

Specification

iMac (Retina 5K, 27-inch, 2017)

  • Processor: 4.2 GHz Quad-Core Intel Core i7
  • Memory: 16 GB 2400 MHz DDR4
  • OS Version: Ventura 13.7.4 (22H420)

Step by Step: Qemu

  1. First, we need Homebrew to install everything.

    Get it here: https://brew.sh/

    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  2. Now, let's install Qemu itself.

    Check it out here: https://formulae.brew.sh/formula/qemu

    brew install qemu
  3. Verify the installation by checking the Qemu version.

    qemu-system-x86_64 --version

    Check qemu version

  4. Let's get Ubuntu running with Qemu.

    1. Before we dive in, grab the Ubuntu ISO from https://releases.ubuntu.com/24.04/. I'll use the desktop version (https://releases.ubuntu.com/24.04/ubuntu-24.04.2-desktop-amd64.iso).

    2. Create a disk image using qemu-img. I'm making a 20GB image in the qcow2 format.

      qemu-img create -f qcow2 ubuntu-24.04.2-desktop-amd64.qcow2 20G
    3. Fire up the VM, booting from the ISO we just downloaded (mounted as a CD drive) and attaching the virtual disk we created.

      qemu-system-x86_64 \
      -machine type=q35,accel=hvf \
      -smp 2 \
      -hda ubuntu-24.04.2-desktop-amd64.qcow2 \
      -cdrom ./ubuntu-24.04.2-desktop-amd64.iso \
      -m 4G \
      -vga virtio \
      -usb \
      -device usb-tablet \
      -display default,show-cursor=on

      Here's a breakdown of what those flags actually mean:

      • qemu-system-x86_64: Tells QEMU we want to emulate a 64-bit Intel/AMD system.
      • -machine type=q35,accel=hvf
        • type=q35: Sets the machine chipset to Q35, emulating a modern PC with PCI Express.
        • accel=hvf: Uses the Hypervisor Framework (HVF) for hardware acceleration on macOS. This makes things much faster!
      • -smp 2: Gives the VM 2 CPU cores.
      • -hda ubuntu-24.04.2-desktop-amd64.qcow2: Specifies the QCOW2 disk image as the primary hard drive.
      • -cdrom ./ubuntu-24.04.2-desktop-amd64.iso: Mounts the ISO as a virtual CD-ROM for booting/installing.
      • -m 4G: Allocates 4GB of RAM to the VM.
      • -vga virtio: Uses a Virtio-based video card for better performance.
      • -usb: Enables USB support.
      • -device usb-tablet: Adds a virtual USB tablet for smoother mouse input.
      • -display default,show-cursor=on: Makes sure the mouse cursor is visible.
    4. The VM should boot up from the ISO. Go ahead and install the OS to the virtual disk we created earlier.

    5. Once the installation is done, you can boot the VM without the ISO.

      qemu-system-x86_64 \
      -machine type=q35,accel=hvf \
      -smp 2 \
      -hda ubuntu-24.04.2-desktop-amd64.qcow2 \
      -m 4G \
      -vga virtio \
      -usb \
      -device usb-tablet \
      -display default,show-cursor=on

      Ubuntu Desktop VM in qemu

    6. To shut down the VM, do it cleanly from within the OS, or, if you're feeling lazy, just hit Ctrl+C in the terminal (but I wouldn't recommend it!).
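
      If you want a middle ground, QEMU's monitor can trigger a clean shutdown without clicking through the guest desktop. A minimal sketch: add -monitor stdio to the launch command above (an extra flag, not part of the original recipe), and QEMU will give you a (qemu) prompt in the terminal.

      # at the (qemu) prompt in the terminal:
      (qemu) system_powerdown    # sends an ACPI power-button event; the guest shuts down cleanly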

Step by Step: Libvirt and Virt-manager

  1. Install libvirt.

    libvirt is a toolkit for managing virtualization platforms like Qemu, KVM, Xen, etc.

    Check it out here: https://formulae.brew.sh/formula/libvirt

    brew install libvirt
  2. Start the libvirt service. (A quick way to verify it's answering is shown after this list.)

    brew services start libvirt
  3. Install virt-manager. To make managing VMs easier, we'll use virt-manager as a GUI.

    Check it out here: https://formulae.brew.sh/formula/virt-manager

    brew install virt-manager
  4. Start a virt-manager session.

    virt-manager -c "qemu:///session" --no-fork
  5. Once the window pops up, you can install Ubuntu VMs and mess around.

    virt-manager window
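
If the window never appears or can't connect, you can verify that libvirt itself is answering. virsh ships with libvirt, and the session URI below is the same one passed to virt-manager above; it should print an (initially empty) table of VMs:

  virsh -c qemu:///session list --all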

Conclusion

So there you have it! Qemu, libvirt, and virt-manager all set up on macOS. It wasn't too bad, right? Now you can spin up VMs to your heart's content. This setup is pretty handy for testing different operating systems, playing around with software, or just generally tinkering without messing up your main system. Plus, getting familiar with these tools can be a real boost if you ever find yourself working with virtualization in a more "serious" environment. Happy virtualizing!

Reference

https://www.arthurkoziel.com/qemu-ubuntu-20-04/

https://www.arthurkoziel.com/running-virt-manager-and-libvirt-on-macos/

Setup Non HA Kubernetes using K3S

· 9 min read
Adekabang
Tukang Ngoprek

Last Update: 25 February 2025

K3S


Non-HA Control Plane Setup and Configuration

Non HA Control Plane Kubernetes

The implementation scenario will be configured as:

  • 1 Master Node
  • 2 Worker Nodes

These three nodes will be deployed on Proxmox VE, each node with the following specification:

  • 4 vCPU
  • 16 GB of memory
  • 50 GB of storage
  • Ubuntu 20.04

Step by step

VM Setup

  1. Update and upgrade the package repository to the newest version.

    sudo apt update -y
    sudo apt upgrade -y
  2. (Optional) Rename each node and set /etc/hosts so that the nodes become:

    1. Master node → kube-master-1
    2. Worker node → kube-worker-1, kube-worker-2
    #edit /etc/hosts and add this at the end of the file (adjust the IP addresses)
    10.0.2.200 kube-master-1
    10.0.2.199 kube-worker-1
    10.0.2.198 kube-worker-2
  3. Reboot to finalize the upgrade and apply the hostname.

  4. Install Docker with this script (Docker: Install using the convenience script)

    curl -fsSL https://get.docker.com -o get-docker.sh
    sudo sh get-docker.sh
  5. Check the docker command and version

    docker ps
    docker version

    Check Docker Version

Master Node Setup

  1. Define the K3s token. Save this token and make sure it is the same on all nodes.

    export K3S_TOKEN=kubernetesdemo123
  2. Install the master node using the script below, which will:

    1. install the server as the master node
    2. disable Traefik for ingress → will use Nginx ingress
    3. disable Servicelb → will use MetalLB
    4. disable local-storage → will use Ceph CSI
    5. declare that it should use Docker
    curl -sfL https://get.k3s.io | sh -s - server --disable traefik --disable servicelb --disable local-storage --docker

    K3S Installation

  3. Check the service status and make sure it is active

    systemctl status k3s.service

    K3S Master Node Status

  4. Check the kubectl command. It will display only one node (the master node).

    kubectl get node

    Get Node Information using kubectl

    ⚠️ If you get an error like unable to read /etc/rancher/k3s/k3s.yaml, you can fix it with these steps.

    mkdir .kube
    sudo cp /etc/rancher/k3s/k3s.yaml .kube/config.yaml
    sudo chown $USER:$GROUP .kube/config.yaml
    export KUBECONFIG=~/.kube/config.yaml

    Or, if you have not started the installation yet, you can add --write-kubeconfig-mode 644 at the end of the script, like this:

    curl -sfL https://get.k3s.io | sh -s - server --disable traefik --disable servicelb --disable local-storage --docker --write-kubeconfig-mode 644

Worker Node Setup

Do these steps on all worker nodes; in this scenario, kube-worker-1 and kube-worker-2.

  1. Define the K3s token. Use the same token as defined on the master node.

    export K3S_TOKEN=kubernetesdemo123

    ⚠️ If you forget the token, you can check this file on the master node. Copy the whole string.

    cat /var/lib/rancher/k3s/server/node-token
  2. Install the worker nodes using the script below, which will:

    1. install k3s as a worker node (agent)
    2. declare that it should use Docker
    3. point the server endpoint at the master node's IP or domain
    curl -sfL https://get.k3s.io | sh -s - agent --docker --server https://kube-master-1:6443
  3. Check the service on the worker nodes

    systemctl status k3s-agent.service

    K3S Worker Node Status

  4. Check the kubectl command on the master node. Now it will display more than one node (including all the installed worker nodes).

    kubectl get node

    Get Nodes Information using kubectl

⚠️ If you get stuck when starting k3s-agent, make sure the worker node and master node can reach each other. The issue could be a firewall, a proxy server, or an incorrect/mismatched MTU. The command below can be used to check connectivity.

curl -ks https://<master-ip>:6443/ping

Install and Configure services in the Master Node

Because we disabled some services on the master node, we are now going to install the replacement services.

HELM - Package Manager Installation

  1. Install Helm using the script below (the latest version can be found in the Helm docs):

    curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
    sudo apt-get install apt-transport-https --yes
    echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
    sudo apt-get update
    sudo apt-get install helm
  2. Check the version

    helm version

    Check Helm Version

MetalLB - Load Balancer Installation and Configuration

Load Balancer Diagram

Services

  • Pods are isolated by default. To enable communication between pods or with the external network, you need to configure Services.
  • There are several types of Service:
    • ClusterIP → The default. ClusterIP services provide an internal IP address within the cluster that other pods can use to communicate. They are not directly accessible from outside the cluster.
    • NodePort (ports 30000-32767) → NodePort services expose a port on each node in the cluster, allowing external access. The range 30000-32767 is the default.
    • LoadBalancer → LoadBalancer services provision an external load balancer that distributes traffic to multiple pods (a minimal manifest sketch follows this list).
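
To make the LoadBalancer type concrete, here is a minimal Service manifest sketch. The name app-demo-1 matches the demo pod used later, and the run: app-demo-1 selector assumes the label that kubectl run applies to its pod:

  # service.yaml - roughly what `kubectl expose` generates in the test below
  apiVersion: v1
  kind: Service
  metadata:
    name: app-demo-1
  spec:
    type: LoadBalancer      # MetalLB will assign the EXTERNAL-IP
    selector:
      run: app-demo-1       # kubectl run labels its pod run=<name>
    ports:
      - port: 80            # service port
        targetPort: 80      # container port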

Installation Step by Step

  1. Add metallb repository to helm

    helm repo add metallb https://metallb.github.io/metallb
  2. Check the repo list

    helm repo ls
  3. Search for metallb

    helm search repo metallb

    Search MetalLB in Helm

  4. Pull metallb from the repository; it will download a .tgz file.

    # run this in the home dir or other directory
    helm pull metallb/metallb
    # extract the tgz
    tar xvf metallb-*
  5. Change directory to metallb and, if there are any configuration changes, edit values.yaml

    cd metallb

    #optional
    vim values.yaml
  6. Install metallb using helm

    • Set the release name to metallb
    • Use values.yaml as the values file
    • Set the namespace to metallb-system
    • Enable debug mode
    • Create the metallb-system namespace
    helm install metallb -f values.yaml . -n metallb-system --debug --create-namespace
  7. Check the status; if it is still Init, wait until it is Running.

    kubectl -n metallb-system get all
    kubectl -n metallb-system get pod -w

    Get All using kubectl

Configuration

  1. First, define the address pool in ipaddresspool.yaml

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: default-pool
      namespace: metallb-system
    spec:
      addresses:
        - 10.0.2.11-10.0.2.100 #adjust this range
  2. Apply the configuration

    kubectl apply -f ipaddresspool.yaml

    Apply IP Address Pool using kubectl

  3. Then, we define the L2 Advertisement config in l2advertisement.yaml

    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: default
      namespace: metallb-system
    spec:
      ipAddressPools:
        - default-pool
  4. Apply the configuration

    kubectl apply -f l2advertisement.yaml

    Apply L2 Advertisement using kubectl

Test the Load Balancer

  1. Run a demo app for testing; it uses the nginx image.

    kubectl run app-demo-1 --image=nginx --port=80

    #check the pod status, wait until running
    kubectl get pod
  2. Since pods cannot communicate with the outside world by default, we need to create a Service to expose the pod.

    kubectl expose pod app-demo-1 --type=LoadBalancer --target-port=80 --port=80 --name app-demo-1

    #check the pods and services
    kubectl get all

    Get All using kubectl

  3. You can access the app using the EXTERNAL-IP of service/app-demo-1, and the Nginx landing page will show up.

    Access External IP
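
    A quick check from another machine on the same network; the address below is just an example from the pool defined earlier, so use the EXTERNAL-IP that kubectl actually shows:

    kubectl get svc app-demo-1
    curl http://10.0.2.11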

Nginx - Ingress Controller Installation and Configuration

Ingress Diagram

Installation Step by Step

  1. Add nginx repository to helm

    helm repo add nginx-stable https://helm.nginx.com/stable
  2. Check the repo list

    helm repo ls
  3. Search for nginx

    helm search repo nginx
  4. Pull nginx-ingress from the repository; it will download a .tgz file.

    # run this in the home dir or other directory
    helm pull nginx-stable/nginx-ingress
    # extract the tgz
    tar xvf nginx-ingress-*
  5. Change directory to nginx-ingress and edit the values.yaml file.

    cd nginx-ingress

    # edit values.yaml
    vim values.yaml

    # locate ingressClass and set the variable below to true
    ...
    ingressClass:
      ...
      setAsDefaultIngress: true
    ...
  6. Install nginx-ingress using helm

    helm -n ingress install nginx-ingress -f values.yaml . --debug --create-namespace
  7. Check the installation

    kubectl -n ingress get all

    Get All Ingress using kubectl

  8. The EXTERNAL-IP is available and reachable, but since no resources use it yet, it displays a 404

    Access External IP but Not Found
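
    You can confirm the same thing from the command line; with no Ingress resources defined, the controller should answer with its default 404 (replace the example address with your ingress EXTERNAL-IP):

    curl -i http://10.0.2.12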

Testing the Ingress

  1. Add the bitnami repository to helm

    helm repo add bitnami https://charts.bitnami.com/bitnami
  2. Check the repo list

    helm repo ls
  3. Search for nginx; we will use bitnami/nginx as the nginx web server

    helm search repo nginx
  4. Pull nginx from the repository; it will download a .tgz file.

    # run this in the home dir or other directory
    helm pull bitnami/nginx
    # extract the tgz
    tar xvf nginx-* #make sure this matches nginx, not nginx-ingress
  5. Change directory to nginx and edit the values.yaml file.

    cd nginx

    # edit
    vim values.yaml

    #locate these variables
    ...
    ingress:
      enabled: true
      ...
      hostname: nginx.demo.local # make sure this FQDN is pointing to the ingress IP
      ...
      ingressClassName: "nginx" # check using `kubectl get ingressclass`
    ...
  6. Install nginx using helm

    helm -n demo install demo-app -f values.yaml . --debug --create-namespace
  7. Check the status; if it is still Init, wait until it is Running.

    kubectl -n demo get all
    kubectl -n demo get ingress

    Get All and Ingress using kubectl

  8. If you point the FQDN to the ingress IP address correctly in DNS or /etc/hosts, it will look like this

    Access using FQDN
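
    A minimal sketch of that mapping plus a check, assuming the ingress EXTERNAL-IP is 10.0.2.12 (adjust to yours):

    # on the client machine
    echo "10.0.2.12 nginx.demo.local" | sudo tee -a /etc/hosts
    curl -i http://nginx.demo.local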

CEPH CSI - StorageClass Installation and Configuration

StorageClass Diagram

TODO - I don't have a CEPH cluster yet 🙈

Personal ASN and IPv6 Journey

· 13 min read
Adekabang
Tukang Ngoprek

Last Update: 25 March 2024

Part 1: ASN Application & Preparation

ASN Application via Lagrange

⛔ Disclaimer: This is not sponsored by Lagrange, just a discount

Lagrange Website

Lagrange Banner

As of 25 March 2024, ASN registration through Lagrange has a 40% discount, from £25 down to £15.

  1. To start the ASN application, just hit “Try now” and register a Lagrange account.

  2. Go straight to “Create an ASN” on the LIR Services - Lagrange page

    Lagrange LIR Service

  3. Follow the purchasing process.

  4. After that there will be an Application; choose guided, and you will be walked through creating an account and creating the objects in RIPE. If you are unsure what to fill in, you can also refer to the video below:

    1. Video on creating the objects in RIPE: Rifqi Arief in the IPv6 Indonesia Telegram Group

    2. Make sure to note down the following information during creation:

      1. RIPE ADMIN-C: [Name_Abbreviation][Identifier]-RIPE (generated by RIPE)
      2. RIPE MNT (mnt-by): any value, but it must be unique
      3. Address: your personal address
      4. abuse-c: ACR[Identifier]-RIPE (generated when filling in abuse-c via the bell icon)
      5. mnt-ref: INFERNO-MNT (from Lagrange) and mnt-by (your own)
      6. Organisation: ORG-[Name_Abbreviation][IdentifierOrg]-RIPE (generated by RIPE)
  5. During this process there will be several submissions in the Lagrange portal that require the information above

VPS with BGP Session Support

Lagrange ASN Form

Once you reach this page, the next step is to purchase a VPS that supports BGP sessions.

⛔ Make sure the VPS you rent is in an EU region, such as Amsterdam, Frankfurt, or the UK

⛔ Disclaimer: This is not sponsored by iFog (the prices are just quite affordable)

  1. Choose a VPS in an EU region (in my case I picked Amsterdam 1) iFog Website

    iFog VPS Pricelist

  2. Follow the purchasing process (payment via credit card or PayPal)

  3. A few notes when configuring the VPS:

    1. Hostname: FQDN [hostname].[domain].[tld]
    2. Password: optional
    3. BGP Session: ✅
    4. FogIXP Peering port: ✅
    5. Additional v6 BGP Transit via ASxxxx: optional
  4. Save the invoice to upload later to the Lagrange ticket

  5. To fill in the Lagrange form, you can use the following guide:

    1. Partner 1 ASN: the ASN of iFog or your other VPS provider. iFog AMS VPS

      iFog BGP Details

    2. Partner 1 NOC: the email of iFog or your other VPS provider. iFog NOC Contact

      iFog Contact

    3. Partner 2 ASN and Partner 2 NOC: you can request a sponsor in the Telegram group https://t.me/IPv6_Indonesia. I requested sponsorship from @malhuda (thanks for the help!)

    4. After submitting, upload the invoice from iFog or your other VPS provider on the ticket page

    5. If you receive an email like this from the VPS provider, you can reply “My ASN is still in the process of application, I will contact you after it finished.”

      iFog Email

    6. Then just wait until the ASN is issued.

Issuing Process

Lagrange Process

After this, there are several approval steps required by Lagrange:

  1. Document Agreement: sign using DocuSign. Make sure to enter the correct access code, which is in the ticket. The Position field in the signature block can be filled in as Individual.
  2. Upload ID: national ID card (KTP), driver's license (SIM), or passport

👨‍🦳 Notes from those who came before

  • If you ever want to drop the EU VPS, you must find an upstream in the EU, e.g. netassist, udn, or bgptunnel
  • my.bgptunnel.com: use this for presence in the EU if you later drop the EU VPS

Peering Application

Once the ASN is issued, there will be a request to fill in the Application Form from FogIXP (if your VPS is with iFog):

FogIXP Application Form:

  • Company/Organization: Full Name
  • Main person of contact (first+surname): Full Name
  • Personal email address of contact: your own email
  • ASN: ASxxxxxx
  • AS-SET: ASxxxxxx:AS-SET (create a new object in the RIPE DB and add your own ASN as a member via the + button)
  • IPv4 Max-Prefix Limit: 0, or as many as you plan to announce
  • IPv6 Max-Prefix Limit: 2, or as many as you plan to announce
  • Email Peering contact: your own email
  • Email NOC contact: your own email
  • 24/7 NOC phone: your own phone number
  • A list of prefixes you plan to announce: the prefixes obtained from Lagrange or your sponsor

👨‍🦳 Notes from those who came before

  • The full AS-SET has the ASN in front
    • You can no longer use the short AS-SET
    • Because, well, there was some drama about it before, haha
    • But if you're curious, just try it
  • AS-UPSTREAMS is for upstreams
    • DO NOT PUT IT IN THE MAIN AS-SET
    • Keep them separate
  • One AS-SET just for your own ASN plus downstreams, and a separate AS-SET just for upstreams
  • ...ASXXXXXX:AS-UPSTREAMS only goes in the ASN's whois, for import/export...
    • Do not enter or send it to PeeringDB or to upstreams; you'll get slapped for it, 100%
    • What you send to PeeringDB and upstreams is ASXXXXXX:AS-SET
    • And when filling it in, its members are only that ASN and its downstreams
    • Do not try to put anything else in, or you might get smacked again
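
To make that concrete, here is a rough sketch of the two objects as they might look in the RIPE DB. All names are placeholders, and a real object needs the usual mandatory attributes (descr, tech-c, admin-c, etc.):

  as-set:     ASxxxxxx:AS-SET        # the one you send to PeeringDB and upstreams
  members:    ASxxxxxx               # your own ASN and downstreams only
  mnt-by:     <your_mnt>
  source:     RIPE

  as-set:     ASxxxxxx:AS-UPSTREAMS  # referenced only in your aut-num import/export
  members:    <upstream_asn>
  mnt-by:     <your_mnt>
  source:     RIPE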

PeeringDB

  1. Register at PeeringDB.
  2. Enter the email registered with RIPE.
  3. Enter your ASN and Organization (full name) in the Affiliate with Organization section.

After the affiliation is approved, you can complete the information in the organization (full name, in the menu near your profile) and in networks (the ASN, found inside the organization)

Configure route6 in RIPE DB

After receiving the IP allocation, create a new object in the RIPE DB of type route6

mnt-by: your own mnt plus the allocator's mnt

route6: the IPv6 prefix (xxxx:xxx::/48)

origin: AS_number

If creating the route6 object fails (in my case, an allocation from Lagrange with the INFERNO-MNT mnt), you can open a ticket in the Lagrange portal, including the prefix and your own mnt.

Request RPKI for the IP Block

After receiving the IP block, you can ask the prefix provider to set up RPKI by opening a ticket.

Install VPS

If you subscribe with iFog, you can install the VM manually after getting access. The recommended OS is Debian 12, set up minimally (without a GUI).

To make things easier, you can add the configuration below to your ~/.ssh/config file.

Host bgp-server
    HostName external_ip
    User username
    IdentityFile /path/to/private/key

Then you can SSH in with the command below:

ssh bgp-server

Part 2: Config BGP

Preparation

  1. Edit the network interfaces and adjust each < > parameter (interface names may differ)

    vim /etc/network/interfaces
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).

    source /etc/network/interfaces.d/*

    # The loopback network interface
    auto lo
    iface lo inet loopback

    # The primary network interface
    allow-hotplug ens18
    iface ens18 inet static
    address <IPv4_Public>/<Subnet_mask>
    gateway <IPv4_Gateway>
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 9.9.9.9
    dns-search <domain>
    # disable multicast
    up ip link set dev $IFACE mtu 1500
    up ip link set multicast off dev $IFACE

    iface ens18 inet6 static
    address <IPv6 Public>
    netmask <Subnet_mask>
    gateway <IPv6_Gateway>
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 2620:fe::fe
    dns-search <domain>
    # disable multicast
    up ip link set dev $IFACE mtu 1500
    up ip link set multicast off dev $IFACE

    # The secondary network interface to FogIXP
    iface ens19 inet6 static
    address <IPv6 Public>
    netmask <Subnet_mask>
    gateway <IPv6_Gateway>
    # dns-* options are implemented by the resolvconf package, if installed
    dns-nameservers 2620:fe::fe
    dns-search <domain>
    # disable multicast
    up ip link set dev $IFACE mtu 1500
    up ip link set multicast off dev $IFACE
    systemctl restart networking
  2. Edit sysctl.conf (back up the default first if needed)

    # cp /etc/sysctl.conf{,.default} 
    vim /etc/sysctl.conf
    fs.file-max = 16777216
    fs.nr_open = 1073741824
    kernel.hung_task_timeout_secs = 0
    kernel.msgmax = 65536
    kernel.msgmnb = 65536
    net.core.default_qdisc = cake
    net.core.netdev_max_backlog = 30000
    net.core.rmem_default = 67108864
    net.core.rmem_max = 67108864
    net.core.somaxconn = 65536
    net.core.wmem_max = 67108864
    net.ipv4.conf.all.accept_redirects = 0
    net.ipv4.conf.all.accept_source_route = 0
    net.ipv4.conf.all.arp_announce = 1
    net.ipv4.conf.all.arp_filter = 1
    net.ipv4.conf.all.arp_ignore = 2
    net.ipv4.conf.all.forwarding = 1
    net.ipv4.conf.all.ignore_routes_with_linkdown = 1
    net.ipv4.conf.all.rp_filter = 0
    net.ipv4.conf.all.secure_redirects = 0
    net.ipv4.conf.all.send_redirects = 0
    net.ipv4.fib_multipath_use_neigh = 1
    net.ipv4.icmp_echo_ignore_broadcasts = 1
    net.ipv4.icmp_errors_use_inbound_ifaddr = 1
    net.ipv4.icmp_ignore_bogus_error_responses = 1
    net.ipv4.icmp_msgs_per_sec = 2500
    net.ipv4.icmp_ratelimit = 0
    net.ipv4.igmp_max_memberships = 100
    net.ipv4.ip_forward = 1
    net.ipv4.ip_local_port_range = 1024 65535
    net.ipv4.neigh.default.base_reachable_time_ms = 14400000
    net.ipv4.neigh.default.gc_thresh1 = 1024
    net.ipv4.neigh.default.gc_thresh2 = 2048
    net.ipv4.neigh.default.gc_thresh2 = 8192
    net.ipv4.neigh.default.gc_thresh3 = 4096
    net.ipv4.neigh.default.gc_thresh3 = 16384
    net.ipv4.route.max_size = 1073741824
    net.ipv4.tcp_congestion_control = bbr
    net.ipv4.tcp_dsack = 1
    net.ipv4.tcp_fin_timeout = 30
    net.ipv4.tcp_keepalive_time = 120
    net.ipv4.tcp_l3mdev_accept = 1
    net.ipv4.tcp_max_syn_backlog = 8192
    net.ipv4.tcp_mem = 4194304 16777216 67108864
    net.ipv4.tcp_no_metrics_save = 1
    net.ipv4.tcp_rmem = 4194304 16777216 67108864
    net.ipv4.tcp_sack = 1
    net.ipv4.tcp_syn_retries = 3
    net.ipv4.tcp_synack_retries = 3
    net.ipv4.tcp_syncookies = 1
    net.ipv4.tcp_timestamps = 1
    net.ipv4.tcp_window_scaling = 1
    net.ipv4.tcp_wmem = 4194304 16777216 67108864
    net.ipv6.conf.all.accept_ra_defrtr = 0
    net.ipv6.conf.all.accept_ra_pinfo = 0
    net.ipv6.conf.all.accept_ra_rtr_pref = 0
    net.ipv6.conf.all.accept_redirects = 0
    net.ipv6.conf.all.accept_source_route = 0
    net.ipv6.conf.all.autoconf = 0
    net.ipv6.conf.all.dad_transmits = 0
    net.ipv6.conf.all.forwarding = 1
    net.ipv6.conf.all.ignore_routes_with_linkdown = 1
    net.ipv6.conf.all.router_solicitations = -1
    net.ipv6.icmp.ratelimit = 0
    #net.ipv6.conf.all.send_redirects = 0
    net.ipv6.neigh.default.base_reachable_time_ms = 14400000
    net.ipv6.neigh.default.gc_thresh1 = 1024
    net.ipv6.neigh.default.gc_thresh2 = 2048
    net.ipv6.neigh.default.gc_thresh2 = 8192
    net.ipv6.neigh.default.gc_thresh3 = 4096
    net.ipv6.neigh.default.gc_thresh3 = 16384
    net.ipv6.route.max_size = 32768000
    vm.max_map_count = 1048575
    vm.swappiness = 100

    net.ipv6.conf.all.accept_ra = 0
    #net.ipv6.conf.all.proxy_ndp = 1

    #net.netfilter.nf_conntrack_acct = 1
    #net.netfilter.nf_conntrack_checksum = 0
    #net.netfilter.nf_conntrack_max = 65535
    #net.netfilter.nf_conntrack_tcp_timeout_established = 7440
    #net.netfilter.nf_conntrack_udp_timeout = 60
    #net.netfilter.nf_conntrack_udp_timeout_stream = 180
    #net.netfilter.nf_conntrack_helper = 1

    # net.ipv6.conf.all.disable_ipv6 = 1

    Configure these according to the IXP interface you have

    ### Disable multicast FogIXP Interface ###
    net.ipv4.conf.<interface_name>.arp_announce = 1
    net.ipv4.conf.<interface_name>.arp_filter = 1
    net.ipv4.conf.<interface_name>.arp_ignore=2
    net.ipv4.conf.<interface_name>.arp_notify=1
    net.ipv4.conf.<interface_name>.proxy_arp = 0
    net.ipv4.conf.<interface_name>.rp_filter=0
    net.ipv4.neigh.default.gc_interval = 30
    net.ipv4.neigh.<interface_name>.base_reachable_time_ms = 14400000
    net.ipv4.neigh.<interface_name>.gc_stale_time = 60
    net.ipv6.conf.<interface_name>.accept_ra = 0
    net.ipv6.conf.<interface_name>.autoconf = 0
    net.ipv6.conf.<interface_name>.router_solicitations = -1
    net.ipv6.neigh.default.gc_interval = 30
    net.ipv6.neigh.<interface_name>.base_reachable_time_ms = 14400000
    net.ipv6.neigh.<interface_name>.gc_stale_time = 60
    sysctl -p
    # or
    reboot
  3. Pre Setup (optional)

    Github: Xeoncross/lowendscript, scope:

    • update_timezone
    • remove_unneeded
    • update_upgrade
    • install_dash
    • install_vim
    • install_nano
    • install_htop
    • install_mc
    • install_iotop
    • install_iftop
    • install_syslogd
    • install_curl
    • apt_clean
    wget --no-check-certificate https://raw.github.com/Xeoncross/lowendscript/master/setup-debian.sh
    chmod a+rx setup-debian.sh
    ./setup-debian.sh system
  4. Timezone and NTP

    apt install ntp -y

    Edit NTP configuration

    vim /etc/ntp.conf
    server 0.debian.pool.ntp.org iburst
    server 1.debian.pool.ntp.org iburst
    server 2.debian.pool.ntp.org iburst
    server 3.debian.pool.ntp.org iburst
    sudo systemctl restart ntp

    Set Timezone and see status

    timedatectl set-timezone Asia/Jakarta
    timedatectl

    timedatectl status

Pathvector Configuration

  1. Install Pathvector and Bird2

    curl https://repo.pathvector.io/pgp.asc > /usr/share/keyrings/pathvector.asc
    echo "deb [signed-by=/usr/share/keyrings/pathvector.asc] https://repo.pathvector.io/apt/ stable main" > /etc/apt/sources.list.d/pathvector.list
    apt update && apt install -y pathvector bird2
  2. Configure /etc/pathvector.yml

    vim /etc/pathvector.yml

    The first block:

    asn: <your_asn>
    bgpq-args: "-S AFRINIC,APNIC,ARIN,LACNIC,RIPE"
    default-route: false
    irr-query-timeout: 30
    irr-server: "rr.ntt.net"
    merge-paths: true
    peeringdb-api-key: "<peeringdb_api_key>"
    peeringdb-query-timeout: 30
    prefixes:
    - "<prefix_plan_to_be_announce>"
    - "<prefix_plan_to_be_announce>"
    router-id: "<an_ipv4_address>"
    rtr-server: "172.65.0.2:8282"

    Notes

    • Change your_asn to your own ASN.
    • IRR queries are only valid for AFRINIC, APNIC, ARIN, LACNIC, RIPE
    • Do not accept a default route
    • The IRR server used is NTT's
    • Change peeringdb_api_key to your own key. Use read-only permissions on PeeringDB
    • Change prefix_plan_to_be_announce to the prefixes you will use.
    • Change router-id to your VPS's IPv4 address (or use a local IP if it has no public one, since it is only an identifier).
    • The rtr-server used is Cloudflare's

    Enable kernel route learning, since we will be taking the full table

    kernel:
      learn: true

    # BGP Large Communities
    # <your_asn>:1:1 - Learned from upstream
    # <your_asn>:1:2 - Learned from route server
    # <your_asn>:1:3 - Learned from peer
    # <your_asn>:1:4 - Learned from downstream
    # <your_asn>:1:5 - Learned from iBGP

    Templates (replace your_asn with the ASN you hold; use them as needed)

    templates:
      upstream:
        add-on-import:
          - <your_asn>:1:1
        allow-local-as: true
        announce:
          - <your_asn>:1:4
        local-pref: 100
        remove-all-communities: <your_asn>

      routeserver:
        add-on-import:
          - <your_asn>:1:2
        announce:
          - <your_asn>:1:4
        auto-import-limits: true
        enforce-first-as: false
        enforce-peer-nexthop: false
        filter-transit-asns: true
        local-pref: 200
        remove-all-communities: <your_asn>

      peer:
        add-on-import:
          - <your_asn>:1:3
        announce:
          - <your_asn>:1:4
        auto-as-set: true
        auto-import-limits: true
        filter-irr: true
        filter-transit-asns: true
        irr-accept-child-prefixes: true
        local-pref: 300
        remove-all-communities: <your_asn>

      downstream:
        add-on-import:
          - <your_asn>:1:4
        allow-blackhole-community: true
        announce:
          - <your_asn>:1:1
          - <your_asn>:1:2
          - <your_asn>:1:3
        auto-as-set: true
        auto-import-limits: true
        filter-irr: true
        filter-transit-asns: true
        irr-accept-child-prefixes: true
        local-pref: 400
        remove-all-communities: <your_asn>

      ibgp:
        allow-local-as: true
        asn: <your_asn>
        direct: true
        enforce-first-as: false
        enforce-peer-nexthop: false
        filter-irr: false
        filter-rpki: false
        next-hop-self: true
        remove-all-communities: <your_asn>

    Notes:

    • The upstream template is only used for transit ASes
    • The routeserver template is only used for peering with route servers
    • The peer template is only used for bilateral peering
    • The downstream template may only be used for ASes you provide downstream service to
    • The ibgp template is only used within the same AS

    The peers block (use the entries as needed)

    peers:
      ############
      # UPSTREAM #
      ############
      upstream_name:
        asn: neigh_asn
        disabled: false
        listen6: "<local_address>"
        local-pref: 100
        neighbors:
          - "<neigh_address>"
        template: upstream

      ###############
      # ROUTESERVER #
      ###############
      routeserver_name:
        asn: neigh_asn
        disabled: false
        listen4: "<local_address>"
        listen6: "<local_address>"
        local-pref: 200
        neighbors:
          - "<neigh_address>"
          - "<neigh_address>"
        template: routeserver

      ###########
      # PEERING #
      ###########
      peering_name:
        asn: neigh_asn
        disabled: false
        listen6: "<local_address>"
        local-pref: 300
        neighbors:
          - "<neigh_address>"
        template: peer

      ############
      # INTERNAL #
      ############
      ibgp_name:
        add-on-import:
          - <your_asn>:1:1
        announce:
          - <your_asn>:1:1
        disabled: false
        listen6: "<local_address>"
        local-port: 179
        local-pref: 150
        neighbor-port: 179
        neighbors:
          - "<neigh_address>"
        template: ibgp

    Generate bird config:

    pathvector generate

    Check whether the routes have been installed

    ip -6 route

    Check the BGP status

    birdc show protocol
  3. Set a cron job to update the IRR prefix lists and PeeringDB prefix limits every 12 hours.

    #crontab -e
    0 */12 * * * pathvector generate


Part 3: Set Up Interface

  1. Create a dummy interface script

    vim /usr/local/bin/setup-dummy-interface.sh
    #!/bin/bash
    ip link add dummy0 type dummy
    ip addr add <YOUR_IPV4_INTERFACE_IP>/24 dev dummy0
    ip addr add <YOUR_IPV6_INTERFACE_IP>/64 dev dummy0
    ip link set dummy0 up
    chmod +x /usr/local/bin/setup-dummy-interface.sh
  2. Create a systemd service

    vim /etc/systemd/system/dummy-interface.service
    [Unit]
    Description=Setup Dummy Interface
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/setup-dummy-interface.sh
    RemainAfterExit=yes

    [Install]
    WantedBy=multi-user.target
    systemctl enable dummy-interface.service
    systemctl start dummy-interface.service
  3. Test the connection

    1. Ping

      ping -I <YOUR_IPV4_INTERFACE_IP> 8.8.8.8
      ping6 -I <YOUR_IPV6_INTERFACE_IP> 2001:4860:4860::8888
    2. Curl

      curl --interface <YOUR_IPV4_INTERFACE_IP> -4 ifconfig.me
      curl --interface <YOUR_IPV6_INTERFACE_IP> -6 ifconfig.me
    3. mtr

      mtr -I dummy0 google.com

Part 4: Tunnel to Home or a DC

Main Server (BGP Router)

  1. Install wireguard

    apt update
    apt install wireguard
  2. Generate the WireGuard private and public keys

    wg genkey | tee /etc/wireguard/private.key
    chmod 600 /etc/wireguard/private.key
    cat /etc/wireguard/private.key | wg pubkey | tee /etc/wireguard/public.key

    # First line: private key
    # Second line: public key -> save it for the client server configuration
  3. Create the WireGuard configuration in /etc/wireguard/wg0.conf

    [Interface]
    PrivateKey = <Your-Main-Server-Private-Key>
    Address = <Your-Main-Server-IPv6-Allocation>/64
    ListenPort = 51820
    Table = off

    [Peer]
    PublicKey = <Your-Client-Public-Key> # after it has been generated
    AllowedIPs = <Your-Client-IPv6-Allocation>/128
  4. Enable and start WireGuard after the client public key has been generated

    systemctl enable wg-quick@wg0
    systemctl start wg-quick@wg0
  5. Configure forwarding in /etc/sysctl.conf

    # Port forwarding for IPv4 - optional, if announcing IPv4
    net.ipv4.ip_forward=1

    # Port forwarding for IPv6
    net.ipv6.conf.all.forwarding=1
  6. Apply the forwarding configuration

    sudo sysctl -p

Client Server (Home Server) - Debian/Ubuntu

  1. Install wireguard

    apt update
    apt install wireguard
  2. Generate the WireGuard private and public keys

    wg genkey | tee /etc/wireguard/private.key
    chmod 600 /etc/wireguard/private.key
    cat /etc/wireguard/private.key | wg pubkey | tee /etc/wireguard/public.key

    # First line: private key
    # Second line: public key -> enter it into the main server configuration
  3. Create the WireGuard configuration in /etc/wireguard/wg0.conf

    [Interface]
    PrivateKey = <Your-Client-Private-Key>
    Address = <Your-Client-IPv6-Allocation>/64
    DNS = 2606:4700:4700::1111
    DNS = 2606:4700:4700::1001

    [Peer]
    PublicKey = <Your-Main-Server-Public-Key>
    Endpoint = <Main-Server-Public-IPv6>:51820
    AllowedIPs = ::/0
    PersistentKeepalive = 25
  4. Enable and start WireGuard after the client public key has been generated

    systemctl enable wg-quick@wg0
    systemctl start wg-quick@wg0

Testing on the Client Server

  1. Ping

    ping6 -I <YOUR_IPV6_INTERFACE_IP> 2001:4860:4860::8888
  2. Curl

    curl --interface <YOUR_IPV6_INTERFACE_IP> -6 ifconfig.me
  3. mtr

    mtr -I dummy0 google.com

Miscellaneous

Generate a WireGuard QR Code

apt install qrencode
qrencode -t ansiutf8 < /etc/wireguard/wg-client.conf

https://www.cyberciti.biz/faq/how-to-generate-wireguard-qr-code-on-linux-for-mobile/


Welcome

· One min read
Adekabang
Tukang Ngoprek

Docusaurus blogging features are powered by the blog plugin.

Simply add Markdown files (or folders) to the blog directory.

Regular blog authors can be added to authors.yml.
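
For reference, a minimal sketch of an authors.yml entry; the key and field values below are made up to match this blog's byline, so adjust them to your own:

  adekabang:
    name: Adekabang
    title: Tukang Ngoprek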

The blog post date can be extracted from filenames, such as:

  • 2019-05-30-welcome.md
  • 2019-05-30-welcome/index.md

A blog post folder can be convenient to co-locate blog post images:

Docusaurus Plushie

The blog supports tags as well!

And if you don't want a blog: just delete this directory, and use blog: false in your Docusaurus config.