

Setup postgres debugger env with docker

pldebugger setup

pldebugger normally needs to be compiled against the PostgreSQL source code, which makes it a bit hard to set up. Luckily, Debian provides an already compiled package: Stretch has version 10, Buster has version 12.

FROM postgres:12

MAINTAINER ray@ray-x
ENV PG_MAJOR 12
ENV PG_VERSION 12.3-1.pgdg100+1

# Install the postgresql debugger
RUN apt-get update \
  && apt-get install -y --no-install-recommends \
  postgresql-$PG_MAJOR-pldebugger


EXPOSE 5432
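
The debugger only works when its plugin_debugger library is preloaded by the server. A minimal sketch of building and starting the container (image/container names and the password are placeholders; the official postgres image passes any extra arguments straight to the server):

docker build -t pg-pldebugger .
docker run -d --name pgdbg \
  -e POSTGRES_PASSWORD=postgres \
  -p 5432:5432 \
  pg-pldebugger \
  -c shared_preload_libraries=plugin_debugger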

Once the container starts you should see:

pgdbg           |
pgdbg           | PostgreSQL Database directory appears to contain a database; Skipping initialization
pgdbg           |
pgdbg           | 2020-06-11 03:00:46.211 UTC [1] LOG:  starting PostgreSQL 12.3 (Debian 12.3-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
pgdbg           | 2020-06-11 03:00:46.211 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
pgdbg           | 2020-06-11 03:00:46.211 UTC [1] LOG:  listening on IPv6 address "::", port 5432
pgdbg           | 2020-06-11 03:00:46.214 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
pgdbg           | 2020-06-11 03:00:46.290 UTC [26] LOG:  database system was shut down at 2020-06-21 03:00:32 UTC
pgdbg           | 2020-06-11 03:00:46.314 UTC [1] LOG:  database system is ready to accept connections

Note that the log lines begin with pgdbg instead of postgres.

To debug with DBeaver, install the extension in the database:

CREATE EXTENSION pldbgapi;

Install the debug extension in DBeaver (if not installed yet):

Help -> Install New Software, search for the debugger and install it. Click "OK", "Accept", "Confirm"... through the wizard. After restarting DBeaver, you should see a debug icon.

Create a demo SQL function:

CREATE SCHEMA test;
DROP function if exists test.somefunc(var integer);
CREATE FUNCTION test.somefunc(var integer) RETURNS integer AS $$
DECLARE
   quantity integer := 30+var;
BEGIN
   RAISE NOTICE 'Quantity here is %', quantity;      -- quantity here is 30 + var
   quantity := 50;
   --
   -- create a sub-block
   --
   DECLARE
      quantity integer := 80;
   BEGIN
      RAISE NOTICE 'Quantity here is %', quantity;   -- quantity here is 80
   END;
   RAISE NOTICE 'Quantity here is %', quantity;      -- quantity here is 50
   RETURN quantity;
END;
$$ LANGUAGE plpgsql;

SELECT test.somefunc(12);
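
If you prefer the command line, you can load the script and call the function with psql (host, user, and file name here are just assumptions):

psql -h 127.0.0.1 -U postgres -f demo.sql
psql -h 127.0.0.1 -U postgres -c 'SELECT test.somefunc(12);'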

Configure a debug session: specify the database, function, and arguments.

Start debugging in the debug window.

vim-clap

vim-clap is a combination of fzf, ctrlp, LeaderF, Ag/Ack, and (to some extent) NERDTree...

Check this: Clap

And this: Clap providers

Yes, it also provides a preview window... Clap preview window

You can replace your fzf commands with vim-clap, e.g. my vimrc:

noremap <leader><s-F> :Clap grep2 ++query=<cword><CR>
cmap <leader><S-F>h :Clap command_history<CR>
noremap <leader>ch :Clap command_history<CR>
noremap <leader>cf :Clap history<CR>


function! s:history(arg)
  let l:query=''
  let l:subcommand=''
  echo a:arg
  if len(a:arg) > 0
    let l:query=' ++query=' . a:arg[1]
  endif

  if a:arg[0] == ':'
    let l:subcommand = 'command_history'
    let l:query=trim(a:arg[1:])
  elseif a:arg[0] == '/'
    let l:subcommand = 'search_history'
    let l:query=trim(a:arg[1:])
  else
    let l:subcommand = 'history'
    let l:query=trim(a:arg)
  endif

  if len(l:query) > 1
    let l:query=' ++query=' . l:query
  endif
  exec 'Clap '. l:subcommand . l:query

endfunction

" noremap <c-F>:Clap grep2 ++query=@visual<CR>
noremap <s-T> :Clap tags<CR>
nmap <S-F2> :Clap filer<CR>

command! -bang -nargs=* History call s:history(<q-args>)
command! Files :Clap files
command! Buffers :Clap buffers
command! Tags :Clap proj_tags
command! Commits :Clap commits
command! Gdiff :Clap git_diff_files
command! Jumps :Clap jumps
command! Yanks :Clap yanks
command! Windows :Clap windows
command! Ag :Clap grep ++query=<cword>
command! Ag2 :Clap grep2 ++query=<cword>

So in command mode, when you type History, History!, or History:, it provides an interface similar to fzf.

How keys are defined in the terminal

This had never been an issue until I gave up vimr and switched to kitty + neovim, where I found that my <S-Fn> mappings no longer work.

Well, don't panic: use infocmp, or check the key bindings / key codes, to find out how the key is defined (you can also use cat or sed -n l). For kitty the <S-F1> key code is ^[[1;2P. Here ^[ means <Esc> or \E.
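
For example, pressing <S-F1> at a sed -n l prompt and then hitting Enter prints the sequence in escaped form (the session below assumes kitty; other terminals may send different codes):

$ sed -n l
^[[1;2P
\033[1;2P$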

vim and neovim handle key codes slightly differently.

For neovim, <S-Fn> is mapped to F(12+n), e.g. <S-F1> is mapped to <F13>. So you can do this:

    map <F13> <S-F1>
vim is slightly different (see :help keycodes):

    set <S-F1>=^[[1;2P
    map <Esc>[1;2P <S-F1>

So, putting it all together:

if !has("gui_running")
  if !has('nvim')
    set <S-F1>=^[[1;2P
    map <Esc>[1;2P <S-F1>
    set <S-F2>=^[[1;2Q
    map <Esc>[1;2Q <S-F2>
    set <S-F3>=^[[1;2R
    map <Esc>[1;2R <S-F3>
  else
    map <F13> <S-F1>
    map <F14> <S-F4>
    map <F15> <S-F5>
    map <F16> <S-F6>
  endif
endif

vim as a programming ide

I used to use SlickEdit, Qt Creator, IntelliJ IDEA (WebStorm, GoLand), and VSCode, but I am back to vi now. Thanks to Plug, I do not need to reconfigure my setup every time... I am still using Sublime Text (as a notepad).

vimr is one of the best nvim GUIs, but it has not been in active development for the last 3 months (it is hard for a one-developer project), and the occasional crashes during coding are annoying. I only use nvim/vim + kitty now.

  • nvim+kitty configured with pop menu:

    vim_ide with nvim+kitty

  • nvim clap preview:

    vim_ide with nvim+kitty

  • nvim+kitty coc+ale:

    vim_ide with nvim+kitty

Vim Plugins

I use the following plugins a lot:

  • Plug

Plugin management tool

  • vim-clap

The best plugin for searching anything. I use it to replace fzf, LeaderF, CtrlP, NERDTree, Ag/Ack/Rg, yankring, project management, the undo list and many more.

  • coc.nvim

I disabled vim-go and turned to coc-go, replaced defx with coc-explorer, use coc-spell for spell checking, and coc-snippets replaced my UltiSnips. Also, there are coc extensions for yaml, json, prettier, python, rust, PHP (any language VS Code supports)...

  • ALE

Well, I am still using ALE and configure lots of lint tools with it.

  • Programming support:

YCM (used to be my favourite, only for C++ and Python now), but I am using coc.nvim more often now; vim-go (for Go testing and highlighting, gopls disabled), CompleteParameter, emmet-vim, tagbar/vista, polyglot, and some language-specific plugins (e.g. HTML, JS/TS, Swift), ctags/gutentags, vim-less, govim (MacVim only, with some cool AST features).

  • Debug:

vimspector

  • Theme, look&feel:

onedark, eleline, devicons, startify, powerline, indentLine(with nerdfont),

  • Color:

nvim-colorizer.lua (display hex and color in highlight), rainbow, log-highlight, limelight, interestingwords

  • Git:

fugitive, gv, coc-git

  • Format:

tabular, coc-prettier(or, sometimes prettier), auto-pair

  • Menu and tab: quickui (I created a menu for the functions/keybindings I use less often; I can not remember all the commands and keybindings...); wintab: one of the best buffer management tools

  • Tools: floatterm, coc-todolist

  • Move and Edit:

easymotion, multi-cursor (has some bugs with auto-complete; check this: You don’t need more than one cursor in vim), vim-anyfold (better folding)

Shell

  • Oh My Zsh is good, iTerm2 is popular, but I turned to zprezto (with powerlevel10k) + kitty. It is cooool and faster.

Some of the benefits of kitty:

  • Full GPU/OpenGL rendering

  • Easy splits and tabs

  • Configurable fonts. You can configure multiple fonts for display, e.g. my config:

font_family      FiraCode Retina
italic_font      InconsolataLGC Nerd Font Italic
bold_font        FiraCode Semibold
bold_italic_font InconsolataLGC Nerd Font BoldItalic

# Font size (in pts)
font_size        16.0

Why am I doing this?

  • bold font is too heavy… semibold is less distracting

  • Retina font is better than regular (I guess…)

  • nerd font support

  • Some fonts do not have an italic variant (e.g. Cascadia)

nvim+kitty split view:

vim_ide with nvim+kitty

Check my repo

kafka SASL client setup




I found a few GUI clients that can be used to connect to Kafka:

Conductor

I have been using this tool for a while; it does have a nice UI design and lots of features, but I need a professional licence to connect to Kafka with it.

Kafka Tool

Kafka Tool also supports SASL. Please refer to the documentation here: Kafka Tool SASL setup (JAAS Setup).

The JAAS connection string sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="your name" password="your password";

AKHQ (previously known as KafkaHQ)

Kafka GUI for Apache Kafka to manage topics, topics data, consumers group, schema registry, connect and more… Github

AKHQ

The simplest way to install the software is using the docker image tchiotludo/akhq.

You need to update the application.yml and run

docker run -d \
    -p 8080:8080 \
    -v /tmp/application.yml:/app/application.yml \
    tchiotludo/akhq
Here is an example application.yml.

To set up SASL you need to change the bootstrap, security, and sasl settings:

akhq:
  server:
    base-path: "" # if behind a reverse proxy, path to akhq without trailing slash (optional). Example: akhq is
                  # behind a reverse proxy with url http://my-server/akhq, set base-path: "/akhq".
                  # Not needed if you're behind a reverse proxy with subdomain http://akhq.my-server/
    access-log: # Access log configuration (optional)
      enabled: true # true by default
      name: org.akhq.log.access # Logger name
      format: "[Date: {}] [Duration: {} ms] [Url: {} {} {}] [Status: {}] [Ip: {}] [Length: {}] [Port: {}]" # Logger format

  # default kafka properties for each clients, available for admin / producer / consumer (optional)
  clients-defaults:
    consumer:
      properties:
        isolation.level: read_committed

  # list of kafka cluster available for akhq
  connections:
    my-cluster-sasl:
      properties:
        bootstrap.servers: "1.236.23.21:9092,3.15.1.12:9092,3.15.16.69:9092"
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: PLAIN
        sasl.jaas.config: org.apache.kafka.common.security.scram.ScramLoginModule required username="your username" password="your password";

  pagination:
    page-size: 25 # number of elements per page (default : 25)
    threads: 16 # Number of parallel threads to resolve page
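
To sanity-check the SASL credentials outside of a GUI, the console consumer shipped with Kafka can reuse the same settings (broker address, topic name and the SCRAM mechanism below are assumptions; adjust them to match your cluster):

cat > client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="your username" password="your password";
EOF

kafka-console-consumer.sh --bootstrap-server 1.236.23.21:9092 \
  --consumer.config client.properties \
  --topic some-topic --from-beginning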

Docker network

Namespaces in Linux

UTS (Hostname+DomainName), User, Mount, IPC, Pid, Net

Docker uses virtual network interfaces; a bridge network is used to connect the containers.

A good reference Understanding Docker Networking Drivers and their use cases

Docker network bridge (from docker.com)

Connecting a container to a docker network (from docker.com)

Containers on different hosts can be connected through an overlay network.

A diagram from docker-k8s-lab shows how the overlay network works.

Docker networks:

  • bridge
      • allocated by default when a container starts
      • network: docker0, IP 172.17.0.1/16 (the IP address of your docker bridge)
      • use brctl show to check the connections (two busybox containers are running at the moment); see also the docker network commands after this list:
                bridge name     bridge id               STP enabled     interfaces
                docker0         8000.02429af16b57       no              veth317e9b3
                                                                        vetha74a241

    docker0 is connected to the interfaces of the two containers base1 and base2. base1 can access base2, e.g. wget -q -O - 172.17.0.3 (- means output to stdout)
  • host
      • the container uses the host's network; it shares UTS/NET/IPC with the host
  • none
      • null network, only has a loopback interface
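
You can also list and inspect these networks with the Docker CLI itself:

docker network ls
docker network inspect bridge     # shows the subnet, gateway and attached containers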

A slightly complicated docker run command

docker run --name mydocker1 -it --network bridge -h mydocker1.rayx.me --dns 8.8.8.8 --dns-search rayx.me --add-host www.rayx.me:54.12.17.68 --rm busybox:latest

The command above creates a container named mydocker1 on the bridge network, with hostname mydocker1.rayx.me, using DNS server 8.8.8.8. /etc/hosts will contain:

54.12.17.68     www.rayx.me
172.17.0.4      mydocker1.rayx.me mydocker1
So access to www.rayx.me will use IP 54.12.17.68

Inbound communications

docker [container] run -p or docker [container] run -P

  • -P exposes all (EXPOSEd) ports, -p exposes specific ports

  • -p <containerPort>: expose a container port on a dynamic host port. E.g. start the container: docker run --name busybox-web1 --rm -p 80 ray-x/httpd-busybox:v0.2

    check dynamic port

    sudo iptables -t nat -vnL

    got:

    Chain DOCKER (2 references)
    pkts bytes target     prot opt in     out     source               destination         
        0     0 RETURN     all  --  docker0 *       0.0.0.0/0            0.0.0.0/0           
        0     0 DNAT       tcp  --  !docker0 *       0.0.0.0/0            0.0.0.0/0            tcp dpt:32768 to:172.17.0.5:80
Port 80 of the container was mapped to host port 32768, so you could access the web server with curl 127.0.0.1:32768. Or check with docker port:
      docker port 7a96c7ddeb1e
      80/tcp -> 0.0.0.0:32768
  • -p <hostPort>:<containerPort>: map a container port to a specific host port

    e.g. `docker run --name busybox-web1 --rm -p 8080:80 ray-x/httpd-busybox:v0.2`

  • -p <hostIP>::<containerPort>: map a container port to a dynamic port on a specific host IP (e.g. 192.168.0.2:8080)

    e.g. docker run --name busybox-web1 --rm -p 192.168.10.10::80 ray-x/httpd-busybox:v0.2

    All access to container port 80 has to go through the 192.168.10.10 address plus a dynamic port.

  • -p <hostIP>:<hostPort>:<containerPort>: map a container port to a specific host IP and port

    e.g. docker run --name busybox-web1 --rm -p 192.168.10.10:8080:80 ray-x/httpd-busybox:v0.2

Join another container's network (share UTS, IPC, Net)

Start container b1:

docker run --name b1 --rm -it busybox

Start container b2 and join b1's network:

docker run --name b2 --rm -it --network container:b1 busybox

Run ifconfig in both containers and the IP address will be the same (in my test both are 172.17.0.2). b1 and b2 share the network stack, so 127.0.0.1 in b1 is the same as 127.0.0.1 in b2, e.g. start a web server in b1 and you can access it from b2 at 127.0.0.1:80.

host network

docker run --name b2 --rm  -it  --network host busybox
/ #ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:32:D3:57:27  
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
......

enp1s0    Link encap:Ethernet  HWaddr 00:25:22:26:4E:F8  
          inet addr:192.168.199.88  Bcast:192.168.199.255  Mask:255.255.255.0
          inet6 addr: fe80::225:22ff:fe26:4ef8/64 
          .....

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          .....

veth62789e1 Link encap:Ethernet  HWaddr 22:3A:F7:11:B3:92  
          inet6 addr: fe80::203a:f7ff:fe11:b392/64 Scope:Link
          .....
By adding --network host, the container shares the host's network stack.
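
For example, a server started with --network host is reachable on the host without any -p mapping (a sketch using the httpd-busybox image from the port-mapping examples above; it assumes port 80 is free on the host):

docker run --rm -d --network host ray-x/httpd-busybox:v0.2
curl http://127.0.0.1/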

Multiple bridges in a host

We can use docker network create to create a new network (bridge, host, overlay, macvlan, etc.).

e.g. docker network create -d bridge --subnet "172.26.0.0/16" --gateway "172.26.0.2" bridge1

Then we can attach a container to bridge1:

docker run --name b2 --rm -it --net bridge1 busybox

Create a docker image

Create a new image from a container’s changes

  • Usage: docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]

E.g.: get a busybox image and create a file inside the container:

docker run --name base1 -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d9cbbca60e5f: Pull complete
Digest: sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c
Status: Downloaded newer image for busybox:latest
/ # mkdir -p /data/html
/ # vi /data/html/index.html
<h1>Busybox httpd server. </h1>

In another terminal create a new image and check it

docker commit -p  -a "rayx1" -m "a base image with index.html" base1
docker image ls
REPOSITORY          TAG                 IMAGE ID            CREATED              SIZE
<none>              <none>              64e381aeac90        55 seconds ago       1.22MB

You will see an unnamed image. To tag an image, use docker tag:

Usage:  docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]

Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

E.g. tag it for user rayx1 with the name httpd-busybox and tag v0.1:

docker tag 64e381aeac90 rayx1/httpd-busybox:v0.1

Also you could use multiple tags

docker tag rayx1/httpd-busybox:v0.1 rayx1/httpd:latest

You will see:

REPOSITORY            TAG                 IMAGE ID            CREATED             SIZE
rayx1/httpd-busybox   v0.1                64e381aeac90        6 minutes ago       1.22MB
rayx1/httpd           latest              64e381aeac90        6 minutes ago       1.22MB

You can remove a tag with docker image rm:

docker image rm rayx1/httpd
Untagged: rayx1/httpd:latest

You can inspect the container's config fields (and override them when committing):

docker inspect base1

[
    {
        ...
        "Config": {
            ...
            "Cmd": [
                "sh"
            ],
            ...
        },

    }
]

Create a new image whose Cmd starts httpd, tagged v0.2:

docker commit -a "rayx1 <ray@myemail.com>" -c 'CMD ["/bin/httpd", "-f", "-h", "/data/html"]' -p base1 rayx1/httpd-busybox:v0.2

Run the new image

docker run : Run a command in a new container

docker run --name base2 rayx1/httpd-busybox:v0.2

Use docker inspect base2 to get the IP address ("IPAddress": "172.17.0.3"), then you can check the http server with lynx 172.17.0.3.

start/stop/restart

docker start|stop|restart [OPTIONS] CONTAINER [CONTAINER...]

Options:
  -a, --attach               Attach STDOUT/STDERR and forward signals
      --detach-keys string   Override the key sequence for detaching a container
  -i, --interactive          Attach container's STDIN

Note: you cannot re-run an existing container with docker [container] run; that command only creates and starts a new container. Use start/stop/restart on an existing container.

Log in to Docker Hub and push the image

docker login -u rayx1
docker push rayx1/httpd-busybox

Note: you can also log in to other docker image registries, e.g. to log in to Aliyun: docker login -u rayx1 registry.cn-beijing.aliyuncs.com, and push: docker push registry.cn-beijing.aliyuncs.com/rayx1/httpd-busybox:v0.2

Export and import (save/load) your local docker image (without push to server)

If you’d like to distribute your images to your teammates without pushing to a server (maybe for testing purposes):

Export

docker save [OPTIONS] IMAGE [IMAGE...]

E.g. pack httpd-busybox into httpd.gz:

docker save -o httpd.gz rayx1/httpd-busybox:v0.2

or save multiple images into one file:

docker save -o httpd.gz rayx1/httpd-busybox:v0.2 rayx1/busybox

import

docker load -i httpd.gz

What is Kubernetes?

Kubernetes, a.k.a. k8s, is a production-grade container orchestration system. It automates the deployment, scaling, and management of containerized applications.

What can Kubernetes do?

  • Service discovery and load balancing
  • Storage orchestration (automatically mount a storage system)
  • Automated rollouts and rollbacks
  • Automatic bin packing
  • Self-healing
  • Secret and configuration management
  • etc

Kubernetes Clusters

K8S Architecture

Master node:

API requests go to the k8s master, and k8s routes them to the nodes.

Kubernetes cluster with all the components tied together.

API Server

kube-apiserver is a component of the Kubernetes control plane that exposes the Kubernetes API.

Scheduler

kube-scheduler is a control plane component that watches for newly created Pods with no assigned node, and selects a node for them to run on.

controller and kube-controller-manager

  • controller: Control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.
  • Node controller: Monitor nodes. Noticing and responding when nodes go down.
  • Endpoints controller: Populates the Endpoints object
  • Replication controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
  • Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.

POD And Nodes

Pod

Kubernetes creates a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:

  • Shared storage, as Volumes
  • Networking, as a unique cluster IP address
  • Information about how to run each container, such as the container image version or specific ports to use

A Pod acts like a virtual host for your docker containers/applications. Pods overview

Normally, we group containers that are logically coupled together in a Pod, but in most cases we run a single container per Pod.

Node

A Node is a machine that hosts Pods. A Node can be either a virtual or a physical machine. A Node consists of:

  • kubelet
  • kube-proxy
  • Pods
  • docker (or another container runtime)

kube-cluster

A cluster of Nodes

Labels and Selectors

Labels are key/value pairs (key=value) attached to objects, such as Pods. A selector is used to filter Pods by their labels.
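
For example (pod name and label are hypothetical):

kubectl label pod my-pod app=nginx                        # attach a label to a pod
kubectl get pods -l app=nginx                             # equality-based selector
kubectl get pods -l 'environment in (production, qa)'     # set-based selector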

Pod management

ReplicationController

Manages and maintains the number of replicas of a Pod (scales up and down).

ReplicaSet

Manages and maintains the number of Pod replicas (scales up and down); it is the next-generation replacement for the ReplicationController.

Deployments

Provides declarative updates for Pods and ReplicaSets.
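
A minimal sketch (names and image are placeholders):

kubectl create deployment web --image=nginx
kubectl scale deployment web --replicas=3     # the underlying ReplicaSet keeps 3 Pods running
kubectl rollout status deployment/web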

StatefulSets

Workload API object used to manage stateful applications.

DaemonSet

Ensures that all (or some) Nodes run a copy of a Pod.

Job

A Job creates one or more Pods and ensures that a specified number of them successfully terminate.

Cronjob

Creates Jobs on a repeating schedule.

HPA (Horizontal Pod Autoscaler)

The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization or other metrics.

Service

An abstract way to expose an application running on a set of Pods as a network service.
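
Continuing the sketch from the Deployments section, the Deployment can be exposed as a Service (names are placeholders):

kubectl expose deployment web --port=80 --type=ClusterIP
kubectl get service web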

Limit container resources

By default, a container has no resource constraints and can use as much of a given resource as the host's kernel scheduler allows.

We can restrict CPU, memory, and GPU usage for a container.

Memory and OOM

On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. (Most likely a Java application :-P) Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system.

  • --oom-score-adj: adjust the container's OOM priority
  • --oom-kill-disable: do not kill the container when an OOM error occurs
  • -m or --memory=: the maximum amount of memory the container can use, e.g. -m 32m
  • --memory-swap: the amount of memory this container is allowed to swap to disk

--memory-swap   -m (memory)   explanation
positive S      positive M    container total space is S, RAM is M, swap is S-M; if S == M, no swap is allocated
0               positive M    swap unset (same as below)
unset           positive M    if swap is enabled on the host, total swap is 2*M
-1              positive M    if the host has swap enabled, the container can use up all of the swap space
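
For example, to cap a container at 256 MB of RAM plus 256 MB of swap (the values are arbitrary):

docker run -it -m 256m --memory-swap 512m busybox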

CPU

--cpus: how many CPUs the container may use; 0.5 means half of a CPU, 1.5 means one and a half CPUs.

e.g

docker run -it --cpus=".5" ubuntu /bin/bash