Skip to content

Blog

How keys are defined in the terminal

This had never been an issue until I gave up VimR and moved to kitty + neovim: my <S-Fn> mappings no longer worked.

Well, don't panic: use infocmp, keybind or keycode to find out how the key is defined (you can also use cat or sed -n l). For kitty the <S-F1> key code is ^[[1;2P. Here ^[ means <Esc> or \E.
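
For example (a quick sketch; the infocmp output depends on the terminfo entry installed for kitty):

    # run cat and press Shift-F1; the terminal echoes the escape sequence
    $ cat
    ^[[1;2P
    # or dump kitty's terminfo entry and look at the function-key capabilities
    $ infocmp -x xterm-kitty | tr ',' '\n' | grep kf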

vim and neovim handle key codes slightly differently.

For neovim, <S-Fn> is mapped to F(12+n), e.g. <S-F1> maps to <F13>. So you can do this:

    map <F13> <S-F1>
vim is slightly different (see :help keycode):

    set <S-F1>=^[[1;2P
    map <Esc>[1;2P <S-F1>

So, putting it all together:

if !has("gui_running")
  if !has('nvim')
    set <S-F1>=^[[1;2P
    map <Esc>[1;2P <S-F1>
    set <S-F2>=^[[1;2Q
    map <Esc>[1;2Q <S-F2>
    set <S-F3>=^[[1;2R
    map <Esc>[1;2R <S-F3>
  else
    map <F13> <S-F1>
    map <F14> <S-F4>
    map <F15> <S-F5>
    map <F16> <S-F6>
  endif
endif

Vim as a programming IDE

I used to use SlickEdit, Qt Creator, IDEA (WebStorm, GoLand) and VS Code, but I am back to vi now. Thanks to Plug, I do not need to reconfigure my setup every time... I am still using Sublime Text (as a notepad).

VimR is one of the best nvim GUIs, but it has not been in active development for the last 3 months (it is hard for a one-developer project), and some of the crashes during coding are annoying. I only use nvim/vim + kitty now.

  • nvim+kitty configured with a popup menu (screenshot)

  • nvim-clap preview (screenshot)

  • nvim+kitty with coc + ALE (screenshot)

Vim Plugins

I use the following plugins a lot:

  • Plug

Plugin management tool

  • vim-clap

The best plugin for searching anything. I use it to replace fzf, LeaderF, leaderP, NERDTree, Ag/Ack/Rg, yank(ring), project management, the undo list and many more.

  • coc.nvim

I disabled vim-go and switched to coc-go, replaced defx with coc-explorer, use coc-spell for spell checking, and coc-snippets replaced my ultisnips. Also, there are coc extensions for yml, json, prettier, python, rust, PHP (any language VS Code supports)...

  • ALE

Well, I am still using ALE and have configured lots of lint tools with it.

  • Programming support:

YCM (used to be my favourite, only for C++ and Python now), though I am using coc.nvim more often these days; vim-go (for Go testing and highlighting, gopls disabled), CompleteParameter, emmet-vim, tagbar/vista, polyglot, some language-specific plugins (e.g. HTML, JS/TS, Swift), ctags/gutentags, vim-less, govim (MacVim only, with some cool AST features)

  • Debug:

vimspector

  • Theme, look&feel:

onedark, eleline, devicons, startify, powerline, indentLine(with nerdfont),

  • Color:

nvim-colorizer.lua (display hex and color in highlight), rainbow, log-highlight, limelight, interestingwords

  • Git:

fugitive, gv, coc-git

  • Format:

tabular, coc-prettier(or, sometimes prettier), auto-pair

  • Menu and tab: quickui (I created a menu for the functions/keybinds I use less often; I cannot remember all the commands and keybinds...). wintab: one of the best buffer management tools

  • Tools: floatterm, coc-todolist

  • Move and Edit:

easymotion, multi-cursor (has some bugs with auto-complete; check this: You don’t need more than one cursor in vim), vim-anyfold (better folding)

Shell

  • Oh-My-Zsh is good and iTerm2 is popular, but I switched to zprezto (with powerlevel10k) + kitty. It is cool and faster; check this:

Some of the benefits of kitty:

  • Fully GPU/OpenGL rendering

  • Easy splits/tabs

  • Configurable fonts. You can configure multiple fonts for display, e.g. my configuration:

font_family      FiraCode Retina
italic_font      InconsolataLGC Nerd Font Italic
bold_font        FiraCode Semibold
bold_italic_font InconsolataLGC Nerd Font BoldItalic

# Font size (in pts)
font_size        16.0

Why am I doing this:

  • bold font is too heavy… semibold is less distracting

  • Retina font is better than regular (I guess…)

  • nerd font support

  • Some fonts do not have an italic variant (e.g. Cascadia)

nvim+kitty split view (screenshot)

Check my repo

Kafka SASL client setup




I found a few GUI clients that can be used to connect to Kafka:

Conductor

I have been using this tool for a while; it does have a nice UI design and lots of features, but I need a professional licence to use it to connect to this Kafka cluster. (Conductor UI screenshot)

Kafka Tool

Kafka Tool also supports SASL. Please refer to the documentation here: Kafka Tool SASL setup / JAAS Setup.

The JAAS connection string:

sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="your name" password="your password";
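
If you want to verify the credentials outside a GUI first, the same properties can be fed to the stock Kafka CLI tools (a sketch; the broker address and the SCRAM mechanism are assumptions, adjust them to your cluster):

    # client.properties -- the same settings the GUI clients ask for
    cat > client.properties <<'EOF'
    security.protocol=SASL_PLAINTEXT
    sasl.mechanism=SCRAM-SHA-256
    sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="your name" password="your password";
    EOF
    # list topics through the SASL listener
    kafka-topics.sh --bootstrap-server broker1:9092 --command-config client.properties --list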

AKHQ (previously known as KafkaHQ)

Kafka GUI for Apache Kafka to manage topics, topics data, consumers group, schema registry, connect and more… Github

AKHQ

The simplest way to install the software is using the docker image tchiotludo/akhq.

You need to update the application.yml and run

docker run -d \
    -p 8080:8080 \
    -v /tmp/application.yml:/app/application.yml \
    tchiotludo/akhq
Here is an example application.yml.

To set up SASL, you need to change the bootstrap, security and sasl configuration:

akhq:
  server:
    base-path: "" # if behind a reverse proxy, path to akhq without trailing slash (optional). Example: akhq is
                  # behind a reverse proxy with url http://my-server/akhq, set base-path: "/akhq".
                  # Not needed if you're behind a reverse proxy with subdomain http://akhq.my-server/
    access-log: # Access log configuration (optional)
      enabled: true # true by default
      name: org.akhq.log.access # Logger name
      format: "[Date: {}] [Duration: {} ms] [Url: {} {} {}] [Status: {}] [Ip: {}] [Length: {}] [Port: {}]" # Logger format

  # default kafka properties for each clients, available for admin / producer / consumer (optional)
  clients-defaults:
    consumer:
      properties:
        isolation.level: read_committed

  # list of kafka cluster available for akhq
  connections:
    my-cluster-sasl:
      properties:
        bootstrap.servers: "1.236.23.21:9092,3.15.1.12:9092,3.15.16.69:9092"
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: PLAIN
        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="your username" password="your password";

  pagination:
    page-size: 25 # number of elements per page (default : 25)
    threads: 16 # Number of parallel threads to resolve page

Docker network

Namespaces in Linux

UTS (hostname + domain name), User, Mount, IPC, PID, Net

Docker uses virtual network interfaces, and a bridged network is used to connect the containers.

A good reference Understanding Docker Networking Drivers and their use cases

Docker network bridge (diagram from docker.com)

Connecting a container to a docker network (diagram from docker.com)

Containers on different hosts can be connected through an overlay network.

A diagram from docker-k8s-lab shows how the overlay network works (docker overlay network).

Docker networks:

  • bridge
    • allocated by default when a container starts
    • network: docker0; 172.17.0.1/16 is the IP address of your docker host on the bridge
    • use brctl show to check the connections (two busybox containers running at the moment):

                bridge name     bridge id               STP enabled     interfaces
                docker0         8000.02429af16b57       no              veth317e9b3
                                                                        vetha74a241

      docker0 is connected to two interfaces, one for container base1 and one for base2. base1 can access base2, e.g. wget -q -O - 172.17.0.3 (- means output to stdout)
  • host
    • shares the host's network (shares UTS/NET/IPC with the host)
  • none
    • NULL, only has a loopback interface

You can list these networks with docker network ls, as shown below.
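
The output will look roughly like this (the IDs are placeholders; yours will differ):

    $ docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    0f3a1c2d4e5f   bridge    bridge    local
    1a2b3c4d5e6f   host      host      local
    2b3c4d5e6f70   none      null      local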

A slightly complicated docker run command

docker run --name mydocker1 -it --network bridge -h mydocker1.rayx.me --dns 8.8.8.8 --dns-search rayx.me --add-host www.rayx.me:54.12.17.68 --rm busybox:latest

The above command creates a container named mydocker1 on the bridge network, with hostname mydocker1.rayx.me and DNS server 8.8.8.8. Inside the container, /etc/hosts will have:

54.12.17.68     www.rayx.me
172.17.0.4      mydocker1.rayx.me mydocker1
So access to www.rayx.me will use IP 54.12.17.68

Inbound communications

docker [container] run -p or docker [container] run -P

  • -P exposes all ports; -p exposes specific ports

  • -p <containerPort>: expose the container port on a dynamic host port. e.g. start the container docker run --name busybox-web1 --rm -p 80 ray-x/httpd-busybox:v0.2

    check dynamic port

    sudo iptables -t nat -vnL

    got:

    Chain DOCKER (2 references)
    pkts bytes target     prot opt in     out     source               destination         
        0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
        0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:32768 to:172.17.0.5:80
Container port 80 was mapped to host port 32768, so you can access the web server with curl 127.0.0.1:32768, or check the mapping with docker port:
      docker port 7a96c7ddeb1e
      80/tcp -> 0.0.0.0:32768
  • -p <hostPort>:<containerPort>: map containerPort to hostPort

    e.g. docker run --name busybox-web1 --rm -p 8080:80 ray-x/httpd-busybox:v0.2

  • -p <ip>::<containerPort>: map containerPort to a dynamic port on a specific host IP

    e.g. docker run --name busybox-web1 --rm -p 192.168.10.10::80 ray-x/httpd-busybox:v0.2 All access to container port 80 must go through the 192.168.10.10 interface plus a dynamic port

  • -p <ip>:<hostPort>:<containerPort>: map containerPort to a specific host IP and port

    e.g. docker run --name busybox-web1 --rm -p 192.168.10.10:8080:80 ray-x/httpd-busybox:v0.2

join other container’s network (share UTS, IPC, Net )

Start up container b1:

docker run --name b1 --rm -it busybox

Start up container b2 and join the network of b1:

docker run --name b2 --rm -it --network container:b1 busybox

Run ifconfig in both containers and the IP address will be the same (in my test both are 172.17.0.2). b1 and b2 share the network stack, and 127.0.0.1 in b1 is the same as 127.0.0.1 in b2; e.g. start a web server in b1 and you can use 127.0.0.1:80 to access it from b2.

host network

docker run --name b2 --rm  -it  --network host busybox
/ #ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:32:D3:57:27  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
......

enp1s0    Link encap:Ethernet  HWaddr 00:25:22:26:4E:F8  
          inet addr:192.168.199.88  Bcast:192.168.199.255  Mask:255.255.255.0
          inet6 addr: fe80::225:22ff:fe26:4ef8/64 
          .....

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          .....

veth62789e1 Link encap:Ethernet  HWaddr 22:3A:F7:11:B3:92  
          inet6 addr: fe80::203a:f7ff:fe11:b392/64 Scope:Link
          .....
By adding --network host, the container shares the host network.

Multiple bridges in a host

We can use docker network create to create a new bridge (or host, lo, macvlan, overlay etc.).

e.g. docker network create -d bridge --subnet "172.26.0.0/16" --gateway "172.26.0.2" bridge1

Then we can attach container to the bridge1

docker run --name b2 --rm -it --net bridge1 busybox
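
You can verify the new network and see which containers are attached to it (a sketch):

    docker network inspect bridge1
    # look for the "Subnet" / "Gateway" values under IPAM and
    # the "Containers" section, which should list b2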

Create a docker image

Create a new image from a container’s changes

  • Usage:

  • docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

e.g.: get a busybox image and create a file inside the container

docker run --name base1 -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d9cbbca60e5f: Pull complete
Digest: sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c
Status: Downloaded newer image for busybox:latest
/ # mkdir -p /data/html
/ # vi /data/html/index.html
<h1>Busybox httpd server. </h1>

In another terminal create a new image and check it

docker commit -p  -a "rayx1" -m "a base image with index.html" base1
docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
<none>              <none>              64e381aeac90        55 seconds ago       1.22MB

You will see an unnamed image. To tag an image, use docker tag:

Usage:  docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

e.g. tag it for user rayx1 with the name httpd-busybox and tag v0.1:

docker tag 64e381aeac90 rayx1/httpd-busybox:v0.1

You can also add multiple tags:

docker tag rayx1/httpd-busybox:v0.1 rayx1/httpd:latest

You will see:

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
rayx1/httpd-busybox   v0.1                64e381aeac90        6 minutes ago       1.22MB
rayx1/httpd           latest              64e381aeac90        6 minutes ago       1.22MB

You can remove a tag with docker image rm:

docker image rm rayx1/httpd
Untagged: rayx1/httpd:latest

You can check the container's config fields (e.g. Cmd) with docker inspect, and change them when committing:

docker inspect base1

[
    {
        ...
        "Config": {
            ...
            "Cmd": [
                "sh"
            ],
            ...
        },

    }
]

Create a new image whose Cmd starts httpd, tagged v0.2:

docker commit -a "rayx1 <ray@myemail.com>" -c 'CMD ["/bin/httpd", "-f", "-h", "/data/html"]' -p base1 rayx1/httpd-busybox:v0.2

Run the new image

docker run : Run a command in a new container

docker run --name base2 rayx1/httpd-busybox:v0.2

Use docker inspect base2 to get the IP address ("IPAddress": "172.17.0.3"), and you can check the http server with lynx 172.17.0.3.

start/stop/restart

docker start|stop|restart [OPTIONS] CONTAINER [CONTAINER...]

Options:
  -a, --attach               Attach STDOUT/STDERR and forward signals
      --detach-keys string   Override the key sequence for detaching a container
  -i, --interactive          Attach container's STDIN

Note: you cannot docker run an existing container again; you have to start/stop/restart it. docker [container] run can only be used to start a new container.

Log into Docker Hub and push the image

docker login -u rayx1
docker push rayx1/httpd-busybox

Note: you can also log into other docker image registries, e.g. to log into Aliyun: docker login -u rayx1 registry.cn-beijing.aliyuncs.com, and then push: docker push registry.cn-beijing.aliyuncs.com/rayx1/httpd-busybox:v0.2

Export and import (save/load) your local docker image (without push to server)

If you’d like to distribute your images to your teammates without pushing to a server (maybe for testing purposes):

Export

docker save [OPTIONS] IMAGE [IMAGE...]

e.g. pack httpd-busybox into httpd.gz:

docker save -o httpd.gz rayx1/httpd-busybox:v0.2

or save multiple images into one file:

docker save -o httpd.gz rayx1/httpd-busybox:v0.2 rayx1/busybox

import

docker load -i httpd.gz

What is Kubernetes?

Kubernetes, a.k.a. k8s, is a production-grade container orchestration system. It automates the deployment, scaling, and management of containerized applications.

What can Kubernetes do?

  • Service discovery and load balancing
  • Storage orchestration (automatically mount a storage system)
  • Automated rollouts and rollbacks
  • Automatic bin packing
  • Self-healing
  • Secret and configuration management
  • etc

Kubernetes Clusters

K8S Architecture

Master node:

API requests go to the k8s master (control plane), and the master routes the work to the nodes.

(Diagram: Kubernetes cluster with all the components tied together.)

API Server

kube-apiserver is a component of the Kubernetes control plane that exposes the Kubernetes API.

Scheduler

kube-scheduler is a control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

controller and kube-controller-manager

  • controller: Control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.
  • Node controller: Monitor nodes. Noticing and responding when nodes go down.
  • Endpoints controller: Populates the Endpoints object
  • Replication controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

POD And Nodes

Pod

Kubernetes creates a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:

  • Shared storage, as Volumes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use

A Pod acts as a logical host for your containers/applications. (Pods overview diagram)

Normally, we group containers that are logically coupled together into a Pod, but in most cases we run a single container per Pod.

Node

A Node is a machine that hosts Pods (Node overview). A Node can be either a virtual machine or a physical machine. A Node consists of:

  • kubelet
  • kube-proxy
  • Pods
  • docker (or another container runtime)

kube-cluster

A cluster of Nodes

Labels and Selectors

Labels are key/value pairs (key=value) that are attached to objects, such as pods. A selector is used to filter pods by their labels.
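
A minimal sketch with kubectl (the pod name and label values are made up for illustration):

    # attach a label to a pod
    kubectl label pod my-pod app=web
    # select pods by that label
    kubectl get pods -l app=web
    # set-based selectors are also supported
    kubectl get pods -l 'app in (web,api)'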

Pod management

ReplicationController

Manage and maintain number of Replica of Pod. (Scale up and down )

Replica Set

Manage and maintain number of Replica of Pod. (Scale up and down )

Deployments

Provides declarative updates for Pods and ReplicaSets.

StatefulSets

Workload API object used to manage stateful applications.

DaemonSet

Ensures that all (or some) Nodes run a copy of a Pod.

Job

A Job creates one or more Pods and ensures that a specified number of them successfully terminate.

CronJob

Creates Jobs on a repeating schedule.

HPA (Horizontal Pod Autoscaler)

The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization or other metrics.
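
A rough sketch of how these workload objects are typically driven from kubectl (names and numbers are illustrative):

    # a Deployment manages a ReplicaSet, which manages the Pods
    kubectl create deployment web --image=nginx
    # scale it manually
    kubectl scale deployment web --replicas=3
    # or let an HPA scale it based on CPU utilization
    kubectl autoscale deployment web --cpu-percent=80 --min=2 --max=10
    # a one-off Job and a repeating CronJob
    kubectl create job hello --image=busybox -- echo hello
    kubectl create cronjob hello-cron --image=busybox --schedule="*/5 * * * *" -- echo hello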

Service

An abstract way to expose an application running on a set of Pods as a network service.

Limit container resources

By default, a container has no resource constraints and can use as much of a given resource as the kernel scheduler allows.

We can restrict CPU, memory and GPU usage for a container.

Memory and OOM

On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. (Most likely a Java application :-P) Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system.

  • --oom-score-adj to adjust the OOM priority
  • --oom-kill-disable
  • -m or --memory=, e.g. -m 32m
  • --memory-swap: the amount of memory this container is allowed to swap to disk

| --memory-swap | -m (--memory) | explanation |
| ------------- | ------------- | ----------- |
| positive S    | positive M    | container total space S, RAM M, swap S-M; if S == M, no swap is allocated |
| 0             | positive M    | swap unset (same as below) |
| unset         | positive M    | if swap is enabled on the host, total swap is 2*M |
| -1            | positive M    | if the host has swap enabled, the container can use up all of the host's swap space |
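
For example (a sketch; the image and sizes are arbitrary):

    # cap RAM at 256 MB and RAM+swap at 512 MB
    docker run -it -m 256m --memory-swap 512m ubuntu /bin/bash
    # verify the limits that were applied (values are reported in bytes)
    docker inspect -f '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}}' <container>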

CPU

--cpus: how many CPUs can be used; 0.5 means half of a CPU, 1.5 means one and a half CPUs.

e.g

docker run -it --cpus=".5" ubuntu /bin/bash

Build private docker registry

If you do not want to publish your images to a public registry (e.g. dockerhub, aws, aliyun etc.), you can use a local/private registry. Docker provides a registry (which is itself a docker image).

A good reference by DigitalOcean: how to set up a private docker registry on ubuntu 18.04, and Docker's own Deploy a registry server.
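
The short version of those guides, using the official registry image (a sketch; the port and host path are up to you):

    # run a private registry on port 5000, keeping image data on the host
    docker run -d -p 5000:5000 --name registry \
      -v /opt/registry:/var/lib/registry \
      registry:2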

Publish image to private registry

Note: you need an https server, or add your registry to "insecure-registries": ["private.docker.domain.name.com:5000"] in /etc/docker/daemon.json

docker tag mydocker:v0.1-11 private.docker.domain.name.com:5000/mydocker:v0.3-11
docker push private.docker.domain.name.com:5000/mydocker:v0.3-11

Harbor

Trusted cloud native repository for Kubernetes. Installation: How To Install Harbor Docker Image Registry on CentOS / Debian / Ubuntu (https://sxi.io/how-to-install-harbor-docker-image-registry-on-centos-debian-ubuntu/)

Harbor login (screenshot)

Harbor projects (screenshot)

Dockerfile

We can build a customized container using:

  • storage volumes
  • docker exec
  • ansible (and similar software)
  • docker run with options
  • containers based on other containers
  • etc.

But with a Dockerfile it is easy to build a customized image.

Share system variable

As discussed earlier, we can use an infrastructure container to share system variables with other containers (e.g. using consul) and generate config files based on those variables, e.g. an nginx file in /etc/nginx/conf.d/:

# server.config

server {
  server_name $MY_NGX_SERVER_NAME;
  listen $NGX_IP:$NGX_PORT;
  root $WEB_ROOT;
}
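
One way to render such a template at container start is envsubst (a sketch; it assumes envsubst from gettext is available in the image and that the template ships as server.config.tmpl):

    #!/bin/sh
    # substitute only the variables we expect, then hand control to nginx
    envsubst '$MY_NGX_SERVER_NAME $NGX_IP $NGX_PORT $WEB_ROOT' \
      < /etc/nginx/conf.d/server.config.tmpl \
      > /etc/nginx/conf.d/server.config
    exec nginx -g 'daemon off;'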

Dockerfile

  • The source code for building Docker images. It contains all the commands needed to assemble an image
  • Use docker build to build an image from the Dockerfile (see the example below)
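
For example (the tag name is arbitrary):

    # build an image from ./Dockerfile, using the current directory as the build context
    docker build -t myimage:v0.1 .
    # or point at a specific Dockerfile
    docker build -t myimage:v0.1 -f docker/Dockerfile .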

Dockerfile format

  • # Comment
  • INSTRUCTION arguments
    • Instructions are NOT case-sensitive, but it is a convention to use UPPERCASE to distinguish them from arguments more easily
  • Docker runs instructions in a Dockerfile in order
  • The first instruction must be FROM, in order to specify the base Docker image from which you are building.

Docker ignore file .dockerignore

Same as .gitignore

Environment variable and replacement

  • Environment variables (declared with the ENV statement) can also be used in instructions as variables to be interpreted by the Dockerfile
  • Environment variables are notated in the Dockerfile with $variable_name or ${variable_name}
  • ${variable_name} supports some bash modifiers (the shell behaves the same way, see the example below)
  • ${var:-word}: if var is set, return its value, otherwise return word
  • ${var:+word}: if var is set, return word, otherwise return empty
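
A quick check in a shell:

    unset var
    echo "${var:-word}"    # -> word    (var unset: fall back to "word")
    var=value
    echo "${var:-word}"    # -> value   (var set: use its value)
    echo "${var:+word}"    # -> word    (var set: substitute "word")
    unset var
    echo "${var:+word}"    # ->         (var unset: substitute nothing)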

Dockerfile instructions

FROM

  • Must be the first non-comment instruction.
  • The base image can be a local image or one from a docker registry (e.g. docker hub)
  • Syntax (either)
    • FROM <repository>[:<tag>] (repository is the image name, e.g. nginx, redis, or ray-x/busybox-httpd)
    • FROM <repository>@<digest>
  • e.g.
      FROM busybox:latest
      

MAINTAINER (deprecated)

e.g MAINTAINER "rayxu <rayx@rayx.me>"

LABELS

  • Usage: LABEL <key>=<value> [<key>=<value> ...]
  • The LABEL instruction adds metadata to an image in the format key=value, e.g. MAINTAINER="rayxu <rayx@rayx.me>"

COPY

  • Syntax:
    • COPY <src> [<src> ...] <dest>
    • COPY ["<src>", ... "<dest>"]
  • Copies new files or directories from <src> and adds them to the filesystem of the image at the path <dest>.
  • <src> may contain wildcards; matching is done using Go’s filepath.Match rules.
  • <dest> is an absolute path, or a path relative to WORKDIR.
  • If <dest> doesn’t exist, it is created along with all missing directories in its path.
  • If a space exists in <src>, use quotes, e.g. "my src folder"

ADD

ADD is similar to COPY, except that it can also add and unpack compressed files (gz, Z, bz2, zip) and fetch files from a URL, e.g. ADD http://nginx.org/download/nginx-1.18.0.tar.gz /usr/local/ or ADD nginx-1.18.0.tar.gz /usr/local (the archive will be untarred into /usr/local, so the image will have /usr/local/nginx-1.18.0).

  • Syntax:
    • ADD <src> [<src> ...] <dest>
    • ADD ["<src>", ... "<dest>"]
  • Same as COPY bullet points 1-5
  • To auto-extract, <src> must not end with /
  • If you use ADD ["<src>", ... "<dest>"] and a wildcard exists in <src>, <dest> should end with /; if <dest> does not end with / it will be treated as a single file instead of a directory

WORKDIR

The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.

  • Syntax:
    • WORKDIR /path/to/workdir

  • WORKDIR can be used multiple times
    WORKDIR /usr
    RUN pwd  #output /usr
    WORKDIR /bin   
    RUN pwd # output /bin
    
  • WORKDIR instruction can resolve environment variables
    ENV DIRPATH /path
    WORKDIR $DIRPATH/$DIRNAME
    

VOLUME

The VOLUME instruction creates a mount point (volume) and marks it as holding externally mounted volumes from the native host or other containers.

  • Syntax:
    • VOLUME <mountpoint>, e.g. VOLUME /var/log
    • VOLUME ["<mountpoint>"], e.g. VOLUME ["/opt"]
  • VOLUME is used to share folders between the container and the host or other containers
  • VOLUME is similar to the -v option of docker run; the difference is that VOLUME does not specify the host side of the mapping. It is normally used to gather logs produced inside the container. More specifically, VOLUME /var/log will expose the folder as something like /var/lib/docker/volumes/3207....84e4, and the container will know the mapping; any logs written to /var/log in the container will also appear in /var/lib/docker/volumes/3207....84e4

EXPOSE

Specifies the port/protocol the container listens on at runtime.

  • Syntax: EXPOSE <port>[/<protocol>] [<port>[/<protocol>] ...]
  • e.g. EXPOSE 11211/udp 11211/tcp, or EXPOSE 80 (defaults to tcp)
  • Check the ports with docker port <container>

ENV

ENV sets the environment variable <key> to the value <value>. Note that ENV is set at docker build time and is also passed to docker run; it can be overridden with docker run -e <key>=<value>.

  • Syntax:
    • ENV <key> <value>
    • ENV <key>=<value> ...
  • Refer to the env variable with $variable_name or ${variable_name}
  • e.g. ENV myName John Doe is equal to ENV myName="John Doe", and multiple values can be set at once:

ENV myName="John Doe" myDog=Rex\ The\ Dog \
    myCat=fluffy

  • To set a value for a single command, use RUN <key>=<value> <command>

RUN

Runs an executable during docker build. The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.

  • Syntax
  • RUN <command> (shell form, /bin/sh -c <command>)
  • RUN ["executable", "param1", "param2"] (exec form)
  • In shell form the command's PID is not 1 and it cannot receive Unix signals
  • Usage
    ADD http://nginx.org/download/nginx-1.18.0.tar.gz /usr/local/src
    RUN cd /usr/local/src && \
    tar xf nginx-1.18.0.tar.gz
    
  • The exec form does not support shell features (e.g. wildcards, &, >, | etc.); to use a shell you need to run RUN ["/bin/bash", "-c", "<command>", "<argument1>", "<argument2>" ... ]

nohub, exec

To prevent a daemon from stopping after the shell stops, use nohup or exec.

Note:
  • exec replaces the current process image with a new process image; the command takes over the shell's PID
  • nohup ("no hangup") runs a command or shell script so it keeps running even after you log out
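
A small illustration (nginx is just an example daemon; long-job.sh is a placeholder script):

    # started from a shell script without exec: sh stays as PID 1 and
    # nginx will not receive the SIGTERM sent by `docker stop`
    /usr/sbin/nginx -g 'daemon off;'

    # with exec: nginx replaces the shell, becomes PID 1 and receives signals
    exec /usr/sbin/nginx -g 'daemon off;'

    # nohup instead detaches a command from the terminal's hangup signal
    nohup ./long-job.sh &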

CMD

The main purpose of a CMD is to provide defaults for an executing container, e.g. the busybox default CMD is sh and the nginx default is nginx.

  • Syntax
    • CMD ["executable","param1","param2"] (exec form, this is the preferred form)
    • CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
    • CMD command param1 param2 (shell form)
  • If multiple CMDs are provided, only the last one is effective
  • Pitfalls: to build a busybox httpd, which of these is correct?
    • form 1: CMD /bin/httpd -f -h ${WEB_ROOT}
    • form 2: CMD ["/bin/httpd", "-f", "-h", "${WEB_ROOT}"]
    • form 3: CMD ["/bin/sh", "-c", "/bin/httpd", "-f", "-h ${WEB_ROOT}"]
    • form 4: CMD ["/bin/sh", "-c", "/bin/httpd", "-f", "-h /opt/data/web"]
  • form 1: works, but you cannot enter interactive mode with -it; if you need to inspect the container, run docker exec '/bin/sh'
  • form 2: will not work, ${WEB_ROOT} is not expanded in the exec form
  • form 3: will not work, it starts and then exits (httpd daemonizes into the background; sh -c httpd returns, so PID 1 exits and the container stops)
  • form 4: will not work, it starts and then exits (same as above)

ENTRYPOINT

An ENTRYPOINT allows you to configure a container that will run as an executable.

  • Syntax
    • ENTRYPOINT ["executable", "param1", "param2"] (exec form)
    • ENTRYPOINT command param1 param2 (shell form)
  • Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e. docker run <image> -d will pass the -d argument to the entry point. You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.
  • The shell form prevents any CMD or run command line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means the executable will not be the container’s PID 1 and will not receive Unix signals, so it will not receive a SIGTERM from docker stop <container>.

  • Only the last ENTRYPOINT instruction in the Dockerfile will have an effect.
  • docker run --entrypoint <cmd> <args> overrides the ENTRYPOINT in the Dockerfile
  • ENTRYPOINT solves the issue that CMD ["/bin/sh", "-c", "/bin/httpd", "-f", "-h /opt/data/web"] has: ENTRYPOINT /bin/httpd -f -h /opt/data/web will not exit
  • If both CMD and ENTRYPOINT exist, the arguments of CMD will be passed to ENTRYPOINT as arguments

    CMD ["/bin/httpd", "-f", "-h", "/opt/data/web"]
    ENTRYPOINT /bin/sh -c

    is equal to

    ENTRYPOINT /bin/sh -c /bin/sh -c /bin/httpd -f -h /opt/data/web

    CMD ["/bin/httpd", "-f", "-h", "/opt/data/web"]
    ENTRYPOINT ["/bin/sh", "-c"]

    is equal to

    ENTRYPOINT /bin/sh -c /bin/httpd -f -h /opt/data/web

    If you run docker run --name bbxhttpd -it -P bbxhttpd:v0.1 "ls /opt"

"ls /opt" will overwrite CMD ["/bin/httpd", "-f", "-h", "/opt/data/web"]

  • Use ENTRYPOINT to set env variables and start the daemon

file: entrypoint.sh

#!/bin/sh
cat > /etc/nginx/conf.d/www.conf << EOF
server {
  server_name ${HOSTNAME};
  listen ${IP:-0.0.0.0}:${PORT:-80};
  root ${NGX_DOC_ROOT:-/usr/share/nginx/html};
}
EOF
exec "$@"   # PID=1

file Dockerfile

FROM nginx:1.18-alpine
ENV NGX_ROOT="/usr/data/html"
ADD index.html ${NGX_ROOT}
ADD entrypoint.sh /bin/
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
ENTRYPOINT ["/bin/entrypoint.sh"]

Run:

$ docker build -t nginx_demo:v0.1 ./
$ docker run --name ngx1 --rm -P nginx_demo:v0.1
log into the container
$ docker exec -it ngx1 /bin/sh
# ps
PID  USER  TIME  COMMAND
1    ROOT  0:00  nginx: master process /usr/bin/nginx -g daemon off;

You will see that nginx started as the root user, which is not good for security reasons.

USER

Sets the user name (or UID) used for RUN, CMD and ENTRYPOINT instructions.

  • Syntax
    • USER <user>[:<group>]
    • USER <UID>[:<GID>]
  • check /etc/passwd for the available users

HEALTHCHECK

The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working (e.g. not responding, stuck in an infinite loop).

  • Syntax
    • HEALTHCHECK [OPTIONS] CMD command (check container health by running a command inside the container)
    • HEALTHCHECK NONE (disable any healthcheck inherited from the base image)
  • The options that can appear before CMD are:
    • --interval=DURATION (default: 30s)
    • --timeout=DURATION (default: 30s)
    • --start-period=DURATION (default: 0s)
    • --retries=N (default: 3)
  • The command's exit status indicates the health of the container:
    • 0: success - the container is healthy and ready for use
    • 1: unhealthy - the container is not working correctly
    • 2: reserved - do not use this exit code

example:

HEALTHCHECK --interval=5m --timeout=5s --start-period=1m \
  CMD curl -f http://localhost/ || exit 1

A more complex example

FROM nginx:1.18-alpine
ENV NGX_ROOT="/usr/data/html"
ADD index.html ${NGX_ROOT}
ADD entrypoint.sh /bin/
EXPOSE 80
HEALTHCHECK --start-period=3s --interval=10s --timeout=1s CMD wget -O - -q http://${IP:-0.0.0.0}:${PORT:-80}/
CMD ["/usr/sbin/nginx", "-g", "daemon off;"]
ENTRYPOINT ["/bin/entrypoint.sh"]
The check result will show in the console:
docker run --name web1 --rm -P -e "PORT=8080" ngx:v0.1
127.0.0.1 - - [10/May/2020:18:11:20 +0000] "GET / HTTP/1.1" 200 32 "-" "Wget" "-"
127.0.0.1 - - [10/May/2020:18:11:23 +0000] "GET / HTTP/1.1" 200 32 "-" "Wget" "-"

SHELL

The SHELL instruction allows the default shell used for the shell form of commands to be overridden. The default shell on Linux is ["/bin/sh", "-c"], and on Windows is ["cmd", "/S", "/C"]. The SHELL instruction must be written in JSON form in a Dockerfile.

  • Syntax
    • SHELL ["executable", "parameters"]
  • Example
    • SHELL ["powershell", "-command"]
    • SHELL ["/usr/bin/zsh", "-c"]

STOPSIGNAL

Sets the system call signal that will be sent to the container to make it exit.

  • Syntax
    • STOPSIGNAL signal

ARG

The ARG instruction defines a variable that users can pass at build time to the builder with the docker build command, using the --build-arg <varname>=<value> flag. If a user specifies a build argument that was not defined in the Dockerfile, the build outputs a warning. This provides a way to use one Dockerfile to meet different requirements.

  • Syntax
    • ARG <name>[=<default value>]

example:

...

ARG auther="ray-x"
LABEL maintainer="${auther}"
...
Use --build-arg to pass the ARG: docker build --build-arg auther="ray-x rayx@mail.com"

ONBUILD

The ONBUILD instruction adds a trigger instruction to be executed at a later time, when the image is used as the base for another build. The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.

  • Syntax
    • ONBUILD <INSTRUCTION>
  • ONBUILD cannot be nested: ONBUILD ONBUILD CMD ["ls"] is illegal
  • Use an onbuild tag for base images that contain ONBUILD instructions
  • COPY and ADD may not work as expected (the downstream build has a different context)

e.g. image nginx1:v0-onbuild contains an ONBUILD instruction, and a downstream Dockerfile uses it as its base:

...
ONBUILD ADD http://nginx.org/download/nginx-1.18.0.tar.gz /usr/local/src

FROM nginx1:v0-onbuild

Docker storage

Quote from docker.com “Copy-on-write is a strategy of sharing and copying files for maximum efficiency. If a file or directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or running the container), the file is copied into that layer and modified. This minimizes I/O and the size of each of the subsequent layers. These advantages are explained in more depth below.”

COW has a performance cost, so I/O-intensive applications should mount a data volume from the host into the container.

Storage volumes keep data persistent after the docker image is removed, and also separate the data from the binary executable.

Bind mounts

A bind mount is a volume that points to a user-specified location on the host file system: the container path /my/bind/volume is bound to a host directory such as /user/configured/directory.

Docker managed volume

The Docker daemon creates managed volumes in a portion of the host's file system that is owned by docker: /var/lib/docker/vfs/dir/

Usage: Use -v to use volume

  • Docker-managed volume:
  • docker run -it --name bbox1 -v /data busybox
  • docker inspect -f {{.Mounts}} bbox1
    • Inspect the bbox1 container volume, its volume id and the directory on the host ([{volume bb23e94e907dc29f3e62deddd332520d34f489177c5bbd5b03a8a75426430a19 /var/lib/docker/volumes/bb23e94e907dc29f3e62deddd332520d34f489177c5bbd5b03a8a75426430a19/_data /data local true }])
  • Bind-mount volume
  • docker run -it --name bbox2 -v HOSTDIR:VOLUMEDIR busybox, e.g. docker run -it --name bbox2 -v /data/volumes/b2:/data busybox
  • docker inspect -f {{.Mounts}} bbox2
    • output: [{bind /data/volumes/b2 /data true rprivate}]

Share folders and join containers with -v and --volumes-from

  • Use case: duplicated setup/data
    • Container A starts up and accesses file/setup F on the host.
    • Container B starts up and accesses F through container A
    • Container C starts up and accesses F through container A
    • Container A can stop/pause
  • Use case: network duplication
    • Container A starts up with network network1, loopback and a filesystem
    • Container nginx starts up and uses A to access network1, loopback and the nginx setup
    • Container tomcat starts up and uses A to access loopback and the tomcat setup
    • Container mysql starts up and uses A to access loopback and the data volume
    • The application server starts up and uses A to access loopback

Instruction example

Start up an infrastructure container infracon that mounts the host folder /data/infracon/volume:

docker run --name infracon -it -v /data/infracon/volume/:/data/web/html busybox

Start up httpd:

docker run --name httpd --network container:infracon --volumes-from infracon -it busybox
Startup httpd docker run --name httpd --network container:infracon --volumes-from infracon -it bosybox