This had never been an issue until I gave up VimR and switched to kitty + neovim: I found that my <S-Fn> keys no longer work.
Well, don't panic; use infocmp, keybind or keycode to find out how the key is defined (you can also use cat or sed -n -l).
For kitty the <S-F1> key code is ^[[1;2P. Here ^[ means <Esc> or \E.
vim and neovim handle key codes slightly differently.
For neovim, S-Fn is mapped to F(12+n), e.g. S-F1 is mapped to F13.
So you can do this:
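A minimal init.vim sketch (the :echo target is just a placeholder for your own mapping):

    " kitty sends ^[[1;2P for Shift+F1, which neovim sees as <F13>
    nnoremap <F13> :echo 'Shift-F1 pressed'<CR>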
I used to use SlickEdit, Qt Creator, IDEA (WebStorm, GoLand), VS Code, but I am back to vi now. Thanks to Plug I do
not need to reconfigure my setup every time....... I am still using Sublime Text (as a notepad).
VimR is one of the best nvim GUIs, but it has not been in active development for the last 3 months (it is hard for a one-developer
project), and some of the crashes during coding are annoying. I only use nvim/vim + kitty now.
The best plugin for searching anything. I used it to replace fzf, LeaderF, leaderP, NERDTree, Ag/Ack/Rg, yank(ring), project management, undolist and many more.
coc.nvim
I disabled vim-go and turned to coc-go, replaced defx with coc-explorer, and use coc-spell for spell checking.
coc-snippet replaced my UltiSnips. Also, there are coc extensions for yaml, json, prettier, python, rust, PHP (any language VS Code
supports)......
ALE
Well, I am still using ALE, with lots of lint tools configured through it.
Programming support:
YCM (used to be my favourite, now only for C++ and Python), but I am using coc.nvim more often now,
vim-go (for Go testing and highlighting, gopls disabled), CompleteParameter, emmet-vim, tagbar/vista, polyglot,
and some language-specific plugins (e.g. HTML, JS/TS, Swift), ctags/gutentags, vim-less, govim (MacVim only, with some cool AST features)
nvim-colorizer.lua (displays hex codes and colours with highlighting), rainbow, log-highlight, limelight, interestingwords
Git:
fugitive, gv, coc-git
Format:
tabular, coc-prettier (or sometimes prettier), auto-pair
Menu and tab:
quickui (I created a menu for the functions/keybinds I use less often; I cannot remember all the commands and keybinds....)
wintab: one of the best buffer management tools
I have been using this tool for a while; it does have a nice UI design and lots of features. But I need a professional
licence to use it to connect to Kafka.
To set up SASL you need to change the bootstrap, security and sasl configuration:
akhq:
  server:
    base-path: "" # if behind a reverse proxy, path to akhq without trailing slash (optional). Example: akhq is
                  # behind a reverse proxy with url http://my-server/akhq, set base-path: "/akhq".
                  # Not needed if you're behind a reverse proxy with subdomain http://akhq.my-server/
    access-log: # Access log configuration (optional)
      enabled: true # true by default
      name: org.akhq.log.access # Logger name
      format: "[Date: {}] [Duration: {} ms] [Url: {} {} {}] [Status: {}] [Ip: {}] [Length: {}] [Port: {}]" # Logger format
  # default kafka properties for each client, available for admin / producer / consumer (optional)
  clients-defaults:
    consumer:
      properties:
        isolation.level: read_committed
  # list of kafka clusters available for akhq
  connections:
    my-cluster-sasl:
      properties:
        bootstrap.servers: "1.236.23.21:9092,3.15.1.12:9092,3.15.16.69:9092"
        security.protocol: SASL_PLAINTEXT
        sasl.mechanism: PLAIN
        # the PLAIN mechanism uses PlainLoginModule; for SCRAM use ScramLoginModule with sasl.mechanism SCRAM-SHA-256/512
        sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="your username" password="your password";
  pagination:
    page-size: 25 # number of elements per page (default: 25)
    threads: 16 # number of parallel threads used to resolve pages
docker run --name mydocker1 -it --network bridge -h mydocker1.rayx.me --dns 8.8.8.8 --dns-search rayx.me --add-host www.rayx.me:54.12.17.68 --rm busybox:latest
The above command creates a container named mydocker1 on the bridge network, with hostname mydocker1.rayx.me, using DNS 8.8.8.8 and search domain rayx.me.
/etc/hosts inside the container will have (from --add-host):
54.12.17.68 www.rayx.me
Container port 80 was mapped to a dynamic host port (32768 in this run),
so you could access the web server through curl 127.0.0.1:32768
Or with docker port:
docker port 7a96c7ddeb1e
80/tcp -> 0.0.0.0:32768
* -p <hostPort>:<containerPort> : map containerPort to hostPort
e.g. `docker run --name busybox-web1 --rm -p 8080:80 ray-x/httpd-busybox:v0.2`
* -p <hostIP>::<containerPort> : map containerPort to a dynamic port on a specific host IP
e.g. docker run --name busybox-web1 --rm -p 192.168.10.10::80 ray-x/httpd-busybox:v0.2
All access to container port 80 must go through the 192.168.10.10 interface plus a dynamic port
* -p <hostIP>:<hostPort>:<containerPort> : map containerPort to a specific host IP and port
e.g. docker run --name busybox-web1 --rm -p 192.168.10.10:8080:80 ray-x/httpd-busybox:v0.2
start up container b1
docker run --name b1 --rm -it busybox
start up container b2 and join network of b1
docker run --name b2 --rm -it --network container:b1 busybox
Run ifconfig in both containers and the IP addresses will be the same (in my test both are 172.17.0.2).
b1 and b2 share the network namespace, and 127.0.0.1 in b1 is the same as 127.0.0.1 in b2; e.g. start a web server in b1 and you can use 127.0.0.1:80 to access it from b2.
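To verify, a sketch using busybox's built-in httpd (serving /tmp is an arbitrary choice):

    # in b1: serve /tmp on port 80
    / # httpd -p 80 -h /tmp
    # in b2: loopback is shared with b1, so this request reaches b1's server
    / # wget -qO- http://127.0.0.1/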
e.g. get a busybox image and create a file inside the container:
docker run --name base1 -it busybox
Unable to find image 'busybox:latest' locally
latest: Pulling from library/busybox
d9cbbca60e5f: Pull complete
Digest: sha256:836945da1f3afe2cfff376d379852bbb82e0237cb2925d53a13f53d6e8a8c48c
Status: Downloaded newer image for busybox:latest
/ # mkdir -p /data/html
/ # vi index.html
<h1>Busybox httpd server. </h1>
In another terminal, create a new image from the container and check it:
docker commit -p -a "rayx1" -m "a base image with index.html" base1
docker image ls
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 64e381aeac90 55 seconds ago 1.22MB
you will see an unnamed image. To tag an image use docker tag:
Usage: docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
e.g. tag it for user rayx1 with name httpd-busybox and tag v0.1:
docker tag 64e381aeac90 rayx1/httpd-busybox:v0.1
You can also apply multiple tags:
docker tag rayx1/httpd-busybox:v0.1 rayx1/httpd:latest
You will see:
REPOSITORY TAG IMAGE ID CREATED SIZE
rayx1/httpd-busybox v0.1 64e381aeac90 6 minutes ago 1.22MB
rayx1/httpd latest 64e381aeac90 6 minutes ago 1.22MB
docker run --name base2 rayx1/httpd-busybox:v0.2
Use docker inspect base2 to get the IP address ("IPAddress": "172.17.0.3"), and you can check the http server with lynx 172.17.0.3
Options:
-a, --attach Attach STDOUT/STDERR and forward signals
--detach-keys string Override the key sequence for detaching a container
-i, --interactive Attach container's STDIN
Note: you cannot re-run an existing container with docker run; you have to start/stop/restart it instead. docker container run
can only be used to create and start a new container.
Note: you can also log in to other docker image registries, e.g. to log in to Aliyun docker:
docker login -u rayx1 registry.cn-beijing.aliyuncs.com and push
docker push registry.cn-beijing.aliyuncs.com/rayx1/httpd-busybox:v0.2
docker save [OPTIONS] IMAGE [IMAGE...]
e.g. pack httpd-busybox into httpd.gz:
docker save -o httpd.gz rayx1/httpd-busybox:v0.2
or pack multiple images into one file:
docker save -o httpd.gz rayx1/httpd-busybox:v0.2 rayx1/busybox
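On the target machine, the counterpart is docker load, which restores the image(s) from the archive:

    docker load -i httpd.gz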
Kubernetes, a.k.a. k8s, is a production-grade container orchestration system. It automates the deployment, scaling, and management of containerized applications.
controller: Control loops that watch the state of your cluster, then make or request changes where needed. Each controller tries to move the current cluster state closer to the desired state.
Node controller: Monitors nodes; notices and responds when nodes go down.
Endpoints controller: Populates the Endpoints object
Replication controller: Responsible for maintaining the correct number of pods for every replication controller object in the system.
Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
Kubernetes created a Pod to host your application instance. A Pod is a Kubernetes abstraction that represents a group of one or more application containers (such as Docker or rkt), and some shared resources for those containers. Those resources include:
Shared storage, as Volumes
Networking, as a unique cluster IP address
Information about how to run each container, such as the container image version or specific ports to use
A Pod is like a virtual machine that hosts docker containers/applications.
[Figure: Pods overview]
Normally, we group containers that are logically coupled together in a Pod, but in most cases we run a single container per Pod.
A Node is a machine that hosts Pods.
A Node can be either a virtual machine or a physical machine.
A Node consists of:
* kubelet
* kube-proxy
* Pods
* docker (or another container runtime)
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set or stateful set based on observed CPU utilization or other metrics.
On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. (Most likely a Java application :-P)
Docker attempts to mitigate these risks by adjusting the OOM priority on the Docker daemon so that it is less likely to be killed than other processes on the system.
--oom-score-adj to adjust the priority
--oom-kill-disable
* -m or --memory= e.g. -m 32m
* --memory-swap : the amount of memory this container is allowed to swap to disk
--memory-swap | -m         | explanation
positive S    | positive M | container total space S: RAM M, swap S-M; if S == M, no swap is allocated
0             | positive M | swap unset (same as unset below)
unset         | positive M | if swap is enabled on the host, container total space is 2*M
-1            | positive M | if the host has swap enabled, the container can use up all of the host's swap space
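For example (a sketch): cap RAM at 256 MB and total RAM+swap at 512 MB, so at most 256 MB can be swapped:

    docker run -it --rm -m 256m --memory-swap 512m busybox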
If you do not want to publish your docker images to a public registry (e.g. Docker Hub, AWS, Aliyun etc.), you can use a local/private registry.
Docker provides a registry (which is itself shipped as a docker image).
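A minimal sketch using the official registry image (reusing the image names from the examples above):

    docker run -d -p 5000:5000 --name registry registry:2
    docker tag rayx1/httpd-busybox:v0.2 localhost:5000/httpd-busybox:v0.2
    docker push localhost:5000/httpd-busybox:v0.2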
As discussed earlier, we can use an infrastructure container to share system variables with other containers (e.g. using consul)
and generate configuration files based on those variables, e.g. an nginx file in /etc/nginx/conf.d/.
ADD is similar to COPY, except that it can add and unzip compressed files (gz, Z, bz2, zip). It can also fetch files from a URL, e.g. ADD http://nginx.org/download/nginx-1.18.0.tar.gz /usr/local/,
or ADD nginx-1.18.0.tar.gz /usr/local/ (a local tarball will be untarred into /usr/local, so docker will have /usr/local/nginx-1.18.0); see the sketch after the syntax list.
* Syntax:
* ADD <src> [<src> ...] <dest>
* ADD ["<src>", ... "<dest>"]
* Same as COPY bullet points 1~5
* to un-compress, <src> must not end with /
* If using ADD ["<src>", ... "<dest>"] with a wildcard in <src>, <dest> should end with /; if <dest> does not end with /, it will be treated as a single file instead of a directory
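A minimal Dockerfile sketch of both behaviours (note that a URL <src> is downloaded but not auto-extracted):

    FROM busybox:latest
    # remote file: fetched into /tmp/, but NOT unpacked
    ADD http://nginx.org/download/nginx-1.18.0.tar.gz /tmp/
    # local tarball from the build context: auto-extracted under /usr/local/
    ADD nginx-1.18.0.tar.gz /usr/local/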
The WORKDIR instruction sets the working directory for any RUN, CMD, ENTRYPOINT, COPY and ADD instructions that follow it in the Dockerfile. If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
* Syntax:
* WORKDIR /path/to/workdir
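WORKDIR can appear multiple times; a relative path resolves against the previous WORKDIR (the canonical example from the Docker docs):

    FROM busybox:latest
    WORKDIR /a
    WORKDIR b
    WORKDIR c
    RUN pwd   # prints /a/b/c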
The VOLUME instruction creates a mount point(volume) and marks it as holding externally mounted volumes from native host or other containers.
* Syntax:
* VOLUME <path> e.g. VOLUME /var/log
* VOLUME ["<path>"] e.g. VOLUME ["/opt"]
* VOLUME is used to share folders between the container and the host/other containers
* Docker VOLUME is similar to the -v option of docker run. The difference is that VOLUME does not specify the host directory of the mapping. Normally it is used to gather the logs produced in the container. More specifically, VOLUME /var/log will expose the folder as a host directory like /var/lib/docker/volumes/3207....84e4 and the container will know the mapping. Any logs written to /var/log in the container will also appear in /var/lib/docker/volumes/3207....84e4
ENV sets the environment variable <key> to the value <value>.
Note ENV is set during docker build and is also passed to docker run.
The ENV setting can be overridden with docker run -e <key>=<value>
* Syntax:
ENV <key> <value>
ENV <key>=<value> ...
Refer to the env variable with $variable_name or ${variable_name}
e.g
ENV myName John Doe is equal to ENV myName="John Doe"
ENV myName="John Doe" myDog=Rex\ The\ Dog \
    myCat=fluffy
* To set a value for a single command, use RUN <key>=<value> <command>
RUN executes commands inside the container during docker build.
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
Syntax
RUN <command> (shell form, runs as /bin/sh -c <command>)
RUN ["executable", "param1", "param2"] (exec form)
In shell form the command's PID is not 1, so it cannot receive Unix signals.
The exec form does not support shell operators (e.g. wildcards, &, >, |, etc.); to use a shell, you need to run RUN ["/bin/bash", "-c", "<command>", "<argument1>", "<argument2>" ... ]
To prevent a daemon from stopping after the shell stops, you need to use
nohup or exec.
Note:
nohup <command>
exec: replaces the current process image with a new process image. This means the shell is replaced, so the executable becomes PID 1.
nohup: no hangup; run a command or shell script that keeps running even after you log out.
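For example (a sketch reusing the httpd command from the CMD section below): with exec in a shell-form CMD, the shell replaces itself, so the daemon becomes PID 1 and still receives signals from docker stop:

    FROM busybox:latest
    # /bin/sh -c 'exec ...' replaces the shell with httpd, making it PID 1
    CMD exec /bin/httpd -f -h /opt/data/web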
The main purpose of a CMD is to provide defaults for an executing container, e.g. busybox's default CMD is /bin/sh, nginx's default is nginx.
* Syntax
* CMD ["executable","param1","param2"] (exec form, this is the preferred form)
* CMD ["param1","param2"] (as default parameters to ENTRYPOINT)
* CMD command param1 param2 (shell form)
* If multiple CMDs are provided, only the last one takes effect
* To build a busybox httpd, which form is correct?
* Pitfall
* form 1: CMD /bin/httpd -f -h ${WEB_ROOT}
* form 2: CMD ["/bin/httpd", "-f", "-h", "${WEB_ROOT}"]
* form 3: CMD ["/bin/sh", "-c", "/bin/httpd", "-f", "-h ${WEB_ROOT}"]
* form 4: CMD ["/bin/sh", "-c", "/bin/httpd", "-f", "-h /opt/data/web"]
* form 1: works, but you cannot enter interactive mode with -it; if you need to inspect, you must run `docker exec '/bin/sh'`
* form 2: will not work, ${WEB_ROOT} is not expanded (exec form does no variable substitution)
* form 3: will not work, starts and then exits (httpd daemonizes into the background, sh -c returns, so PID 1 exits and the container stops)
* form 4: will not work, starts and then exits (same as above)
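A form that does work (a sketch): bake a concrete path into the exec form, so no shell or variable expansion is needed, and -f keeps httpd in the foreground as PID 1:

    FROM busybox:latest
    RUN mkdir -p /opt/data/web
    CMD ["/bin/httpd", "-f", "-h", "/opt/data/web"]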
An ENTRYPOINT allows you to configure a container that will run as an executable.
* Syntax
* ENTRYPOINT ["executable", "param1", "param2"] (exec form)
* ENTRYPOINT command param1 param2 (shell form)
* Command line arguments to docker run <image> will be appended after all elements in an exec form ENTRYPOINT, and will override all elements specified using CMD. This allows arguments to be passed to the entry point, i.e., docker run <image> -d will pass the -d argument to the entry point. You can override the ENTRYPOINT instruction using the docker run --entrypoint flag.
* The shell form prevents any CMD or run command line arguments from being used, but has the disadvantage that your ENTRYPOINT will be started as a subcommand of /bin/sh -c, which does not pass signals. This means that the executable will not be the container's PID 1 and will not receive Unix signals, so your executable will not receive a SIGTERM from docker stop <container>.
Only the last ENTRYPOINT instruction in the Dockerfile will have an effect.
docker run --entrypoint <cmd> <image> <args> overrides the ENTRYPOINT in the Dockerfile
ENTRYPOINT solves the issue that CMD ["/bin/sh", "-c", "/bin/httpd", "-f", "-h /opt/data/web"] has:
ENTRYPOINT /bin/httpd -f -h /opt/data/web will not exit
If both CMD and ENTRYPOINT exist, the arguments of CMD are passed to ENTRYPOINT as arguments, as in the sketch below
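A sketch of the interplay (my-httpd is a hypothetical image name): ENTRYPOINT stays fixed while CMD supplies default arguments that docker run can override:

    FROM busybox:latest
    ENTRYPOINT ["/bin/httpd", "-f"]
    # default arguments; replaced by anything given after the image name
    CMD ["-h", "/opt/data/web"]

    docker run my-httpd             # runs /bin/httpd -f -h /opt/data/web
    docker run my-httpd -h /tmp     # runs /bin/httpd -f -h /tmp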
The HEALTHCHECK instruction tells Docker how to test a container to check that it is still working. (e.g. not responding, infinite loop)
Syntax
* HEALTHCHECK [OPTIONS] CMD command (check container health by running a command inside the container)
* The options that can appear before CMD are:
* --interval=DURATION (default: 30s)
* --timeout=DURATION (default: 30s)
* --start-period=DURATION (default: 0s)
* --retries=N (default: 3)
* Response:
* 0: success - the container is healthy and ready for use
* 1: unhealthy - the container is not working correctly
* 2: reserved - do not use this exit code
* HEALTHCHECK NONE (disable any healthcheck inherited from the base image)
example:
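A sketch (the curl-based check from the Docker docs; it assumes curl exists in the image):

    HEALTHCHECK --interval=5m --timeout=3s \
      CMD curl -f http://localhost/ || exit 1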
The SHELL instruction allows the default shell used for the shell form of commands to be overridden. The default shell on Linux is ["/bin/sh", "-c"], and on Windows is ["cmd", "/S", "/C"]. The SHELL instruction must be written in JSON form in a Dockerfile.
* Syntax
* SHELL ["executable", "parameters"]
* Example
* SHELL ["powershell", "-command"]
* SHELL ["/usr/bin/zsh", "-c"]
The ARG instruction defines a variable that users can pass at build time to the builder with the docker build command, using the --build-arg <varname>=<value> flag. If a user specifies a build argument that was not defined in the Dockerfile, the build outputs a warning.
This provides a way to use one Dockerfile to meet different requirements.
* Syntax
* ARG <name>[=<default value>]
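A sketch: one Dockerfile that can build against different base versions (note an ARG declared before FROM must be re-declared after it to stay visible):

    ARG VERSION=latest
    FROM busybox:${VERSION}
    ARG VERSION
    RUN echo "built from busybox:${VERSION}"

    docker build --build-arg VERSION=1.32 -t my-busybox .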
The ONBUILD instruction adds a trigger instruction to be executed at a later time, when the image is used as the base for another build.
The trigger will be executed in the context of the downstream build, as if it had been inserted immediately after the FROM instruction in the downstream Dockerfile.
* Syntax
* ONBUILD <INSTRUCTION>
* ONBUILD cannot be chained: ONBUILD ONBUILD CMD ["ls"] is illegal
* Use an onbuild tag for base images that contain ONBUILD triggers
* COPY and ADD may not work as expected.... (the trigger runs with a different build context); see the sketch below
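A sketch of a base image that pulls the downstream project's files in at child-build time (image names are hypothetical):

    # Dockerfile for the base image, built as e.g. rayx1/app-base:onbuild
    FROM busybox:latest
    # runs in the downstream build, right after its FROM line
    ONBUILD COPY . /app/src

    # downstream Dockerfile
    FROM rayx1/app-base:onbuild
    # /app/src now holds the downstream build's context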
Quote from docker.com “Copy-on-write is a strategy of sharing and copying files for maximum efficiency. If a file or
directory exists in a lower layer within the image, and another layer (including the writable layer) needs read access
to it, it just uses the existing file. The first time another layer needs to modify the file (when building the image or
running the container), the file is copied into that layer and modified. This minimizes I/O and the size of each of the
subsequent layers. These advantages are explained in more depth below.”
COW has low efficiency, so I/O-intensive applications will need to mount a data volume from the host into the container.
Storage volumes help data persist after the container is removed. They also separate data from the binary executable.
Inspect the bbox1 container's volumes; the volume id and host directory look like:
[{volume bb23e94e907dc29f3e62deddd332520d34f489177c5bbd5b03a8a75426430a19 /var/lib/docker/volumes/bb23e94e907dc29f3e62deddd332520d34f489177c5bbd5b03a8a75426430a19/_data /data local true }]
Bind mount volume
docker run -it --name bbox2 -v HOSTDIR:VOLUMEDIR busybox e.g.
docker run -it --name bbox2 -v /data/volumes/b2:/data busybox