ERP System Implementation on Kubernetes Cluster with Sticky Sessions:
01. Security Features Enabled in Kubernetes Cluster.
02. SNMP, Syslog and audit logs enabled.
03. ERP no-login service user enabled.
04. Auto-scaling enabled for both ESB and JBoss pods.
05. Reduced power consumption via the scale-in feature during off-peak days.
06. NFS enabled as usual with the ERP service user.
07. External Ingress (load balancing) enabled.
08. Cluster load balancer enabled by default.
09. SSH enabled via both putty.exe and Kubernetes management console.
10. Network Monitoring enabled on Kubernetes dashboard.
11. Isolated Private and external network ranges to protect backend servers (pods).
12. OS of the pods is updated with the latest kernel version.
13. Core Linux OS reduces security threats.
14. Lightweight OS with a small HDD footprint.
15. Reduced RAM usage.
16. AWS ready.
17. Possible for exporting into Public cloud ENV.
18. L7 and L4 Heavy Load Balancing Enabled.
19. Snapshot Versioning Control Enabled.
20. …and many more.
1. Docker to Kube Cluster
pg. 1 By: chanaka.lasantha@gmail.com
ERP SYSTEM IMPLEMENTATION KUBERNETES CLUSTER
WITH AUTO-SCALING (AWS READY).
Wednesday, April 15, 2020
CREATING NFS SERVER:
apt -y install nfs-kernel-server
vim /etc/exports
/opt/bkpdata *(rw,async,no_wdelay,insecure_locks,no_root_squash)
root@master:/var/sheared# showmount -e 192.168.2.28
Export list for 192.168.2.28:
/opt/bkpdata *
MOUNT NFS CLIENT ON ALL NODES AND MASTER:
apt -y install nfs-common
vim /etc/fstab
192.168.2.28:/opt/bkpdata /var/sheared nfs rw 0 0
mount /var/sheared
df -hT
192.168.2.28:/opt/bkpdata nfs4 49G 9.0G 38G 20% /var/sheared
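Instead of (or alongside) the host-level fstab mount, the same export can be surfaced inside Kubernetes as a PersistentVolume and claim. A minimal sketch, assuming the 192.168.2.28 export above (object names and the 10Gi size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: bkpdata-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany        # NFS allows many pods to mount read-write
  nfs:
    server: 192.168.2.28
    path: /opt/bkpdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: bkpdata-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # bind to the statically created PV above
  resources:
    requests:
      storage: 10Gi
```

Pods can then reference bkpdata-pvc in a volumes entry rather than relying on each node's /var/sheared mount.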
DOCKERFILE OF ESB:
# Base system is the latest LTS version of Ubuntu.
FROM ubuntu
# Make sure we don't get notifications we can't answer during building.
ENV DEBIAN_FRONTEND noninteractive
# Prepare scripts and configs
ADD supervisor.conf /etc/supervisor.conf
# Download and install everything from the repos.
RUN apt-get -q -y update; apt-get -q -y upgrade && \
    apt-get -q -y install sudo openssh-server supervisor vim iputils-ping net-tools curl htop tcpdump unzip alien && \
    apt-get clean all && \
    mkdir /var/run/sshd
# Create script folder
RUN mkdir -p /app/scripts
# Set working dir
WORKDIR /app
# Adding Jboss PID kill script into the docker container with permission.
#RUN chmod 775 -R /app/scripts/*
# Adding JDK package as deb install.
COPY jdk-7u76-linux-x64.rpm /app
RUN alien --scripts -i /app/jdk-7u76-linux-x64.rpm
# Adding Jboss application into the /app folder.
COPY wso2esb-4.8.0.zip /app
RUN unzip /app/wso2esb-4.8.0.zip
RUN chmod 775 -R /app/wso2esb-4.8.0
# Set custom ENV for the node
ENV JAVA_HOME=/usr/java/jdk1.7.0_76
3. Docker to Kube Clsuter
pg. 3 By: chanaka.lasantha@gmail.com
# Set ENV (note: only the last CMD in a Dockerfile takes effect, and the exec
# form cannot run the shell builtin "source", so this line is a no-op)
#CMD ["source /etc/profile"]
# Set root password
RUN echo 'root:z80cpu' >> /root/passwdfile
# Create user and its password
RUN useradd -m -G sudo chanakan
RUN echo 'chanakan:z80cpu' >> /root/passwdfile
# Apply both passwords
RUN chpasswd -c SHA512 < /root/passwdfile
RUN rm -rf /root/passwdfile
# Enable ROOT access for the root user (Optional)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
# Port 22 is used for ssh
EXPOSE 22 8280 8243 9443 11111 35399 9999 9763
# Assign /data as static volume.
VOLUME ["/data"]
# Start supervisord, which in turn keeps sshd running
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
USER root
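The supervisor.conf pulled in by the ADD instruction above is not reproduced in these slides. A minimal sketch of what it might contain, assuming the WSO2 ESB 4.8.0 layout unpacked above (program names and paths are assumptions):

```ini
[supervisord]
nodaemon=true                 ; keep supervisord in the foreground for Docker

[program:sshd]
command=/usr/sbin/sshd -D     ; -D keeps sshd in the foreground
autorestart=true

[program:wso2esb]
command=/app/wso2esb-4.8.0/bin/wso2server.sh
autorestart=true
```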
DOCKERFILE OF JBOSS:
# Base system is the latest LTS version of Ubuntu.
FROM ubuntu
# Make sure we don't get notifications we can't answer during building.
ENV DEBIAN_FRONTEND noninteractive
# Prepare scripts and configs
ADD supervisor.conf /etc/supervisor.conf
# Download and install everything from the repos.
RUN apt-get -q -y update; apt-get -q -y upgrade && \
    apt-get -q -y install sudo openssh-server supervisor vim iputils-ping net-tools curl unzip tcpdump alien && \
    apt-get clean all && \
    mkdir /var/run/sshd
# Create script folder
RUN mkdir -p /app/scripts
RUN mkdir -p /app/JAVADIR
RUN mkdir -p /app/logs
RUN mkdir -p /opt/images/temp/daily/
RUN mkdir -p /opt/images/approval/
RUN mkdir -p /opt/images/documents/
RUN mkdir -p /opt/images/signatures/
RUN mkdir -p /opt/images/documents/insurance/renewal
RUN mkdir -p /opt/images/documents/officerupload
RUN mkdir -p /opt/images/documents/cheque/statementUpload
4. Docker to Kube Clsuter
pg. 4 By: chanaka.lasantha@gmail.com
RUN mkdir -p /opt/images/documents/budget/
RUN mkdir -p /opt/images/documents/finance/jrnlUpload/
RUN mkdir -p /opt/images/documents/bulkReceipt/
RUN mkdir -p /opt/images/documents/recovery/bulkInteract/
RUN mkdir -p /opt/images/documents/borrow/scheduleUpload/
# Set working dir
WORKDIR /app
# Adding Jboss PID kill script into the docker container with permission.
COPY JBOSS_STOP.sh /app/scripts
RUN chmod 775 -R /app/scripts/*
# Adding JDK package as deb install.
COPY jdk-7u76-linux-x64.rpm /app
RUN alien --scripts -i /app/jdk-7u76-linux-x64.rpm
# Adding Jboss application into the /app folder.
COPY jboss-as-7.1.3.Final.zip /app
RUN unzip /app/jboss-as-7.1.3.Final.zip
RUN chmod 775 -R /app/jboss-as-7.1.3.Final
#ADD cc-erp-ear-4.0.0.ear /app/jboss-as-7.1.3.Final/standalone/deployments/
#RUN chown root:root /app/jboss-as-7.1.3.Final/standalone/deployments/cc-erp-ear-4.0.0.ear
# Set custom ENV for the node
ENV JAVA_HOME=/usr/java/jdk1.7.0_76
RUN echo "export JBOSS_HOME=/app/jboss-as-7.1.3.Final" >> /etc/profile
# Set ENV (the exec-form CMD cannot "source" a file, so set JBOSS_HOME directly)
ENV JBOSS_HOME=/app/jboss-as-7.1.3.Final
# Set root password
RUN echo 'root:z80cpu' >> /root/passwdfile
# Create user and its password
RUN useradd -m -G sudo chanakan
RUN echo 'chanakan:z80cpu' >> /root/passwdfile
# Apply both passwords
RUN chpasswd -c SHA512 < /root/passwdfile
RUN rm -rf /root/passwdfile
# Enable ROOT access for the root user (Optional)
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/g' /etc/ssh/sshd_config
# Port 22 is used for ssh
EXPOSE 22 9191
# Assign /data as static volume.
VOLUME ["/data"]
# Start supervisord, which in turn keeps sshd running
CMD ["supervisord", "-c", "/etc/supervisor.conf"]
USER root
          readOnly: false
      # This is necessary for sticky sessions, which can only be routed
      # consistently to the same node, not the same pod.
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: esb-ssh
            topologyKey: kubernetes.io/hostname
TO APPLY SERVICE AND DEPLOYMENT:
root@master:~# kubectl apply -f esb-ssh.yaml
root@master:~# watch -n 0.2 'kubectl get pods --all-namespaces -o wide'
root@master:~# kubectl describe service esb-ssh
RESTART A CONTAINER INSIDE OF POD:
root@master:~/ESB# kubectl delete pod esb-ssh-675995598d-szwp7
You can use the following command to clean up these components:
root@master:~/ESB# docker system prune
The following warning will be shown:
WARNING! This will remove:
- all stopped containers
- all volumes not used by at least one container
- all networks not used by at least one container
- all dangling images
RESOURCE REQUESTS AND LIMITS OF POD AND CONTAINER:
Each Container of a Pod can specify one or more of the following:
spec.containers[].resources.limits.cpu
spec.containers[].resources.limits.memory
spec.containers[].resources.limits.hugepages-<size>
spec.containers[].resources.requests.cpu
spec.containers[].resources.requests.memory
spec.containers[].resources.requests.hugepages-<size>
Although requests and limits can only be specified on individual Containers, it is convenient to talk about Pod resource requests and limits. A Pod
resource request/limit for a particular resource type is the sum of the resource requests/limits of that type for each Container in the Pod.
MEANING OF CPU:
Limits and requests for CPU resources are measured in cpu units. One cpu, in Kubernetes, is equivalent to 1 vCPU/Core for cloud providers and 1
hyperthread on bare-metal Intel processors.
Fractional requests are allowed. A Container with spec.containers[].resources.requests.cpu of 0.5 is guaranteed half as much CPU as one that asks for 1
CPU. The expression 0.1 is equivalent to the expression 100m, which can be read as “one hundred millicpu”. Some people say “one hundred millicores”,
and this is understood to mean the same thing. A request with a decimal point, like 0.1, is converted to 100m by the API, and precision finer than 1m is
not allowed. For this reason, the form 100m might be preferred.
CPU is always requested as an absolute quantity, never as a relative quantity; 0.1 is the same amount of CPU on a single-core, dual-core, or 48-core
machine.
MEANING OF MEMORY:
Limits and requests for memory are measured in bytes. You can express memory as a plain integer or as a fixed-point integer using one of these suffixes:
E, P, T, G, M, K. You can also use the power-of-two equivalents: Ei, Pi, Ti, Gi, Mi, Ki. For example, the following represent roughly the same value:
128974848, 129e6, 129M, 123Mi
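To make the suffix arithmetic concrete, here is a small standalone Python sketch (not part of Kubernetes) that converts such quantities to bytes; note that 123Mi is exactly 128974848 bytes:

```python
# Convert Kubernetes-style memory quantities to plain bytes, to verify
# that 128974848, 129e6, 129M and 123Mi are roughly the same value.

SUFFIXES = {"K": 10**3, "M": 10**6, "G": 10**9,
            "Ki": 2**10, "Mi": 2**20, "Gi": 2**30}

def to_bytes(quantity: str) -> int:
    """Parse a quantity such as '129M' or '123Mi' into bytes."""
    # Try the two-letter (power-of-two) suffixes before the one-letter ones.
    for suffix in sorted(SUFFIXES, key=len, reverse=True):
        if quantity.endswith(suffix):
            return int(float(quantity[:-len(suffix)]) * SUFFIXES[suffix])
    # Plain integers and scientific notation such as 129e6.
    return int(float(quantity))

print(to_bytes("123Mi"))  # 128974848 (exactly 123 * 2**20)
print(to_bytes("129M"))   # 129000000
print(to_bytes("129e6"))  # 129000000
```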
Here’s an example. The following Pod has two Containers. Each Container has a request of 0.25 cpu and 64 MiB (2^26 bytes) of memory, and a limit of 0.5 cpu
and 128 MiB of memory. You can say the Pod has a request of 0.5 cpu and 128 MiB of memory, and a limit of 1 cpu and 256 MiB of memory.
apiVersion: v1
kind: Pod
metadata:
  name: frontend
spec:
  containers:
  - name: db
    image: mysql
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: "password"
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
  - name: wp
    image: wordpress
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
      limits:
        memory: "128Mi"
        cpu: "500m"
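As a sanity check on the Pod-level totals quoted above, the per-container quantities can be summed with a few lines of standalone Python (an illustration, not a Kubernetes API):

```python
# Sum per-container requests and limits to obtain the Pod-level totals,
# mirroring the two-container Pod spec above.

def parse_cpu(quantity: str) -> float:
    """'250m' -> 0.25 cores, '1' -> 1.0 cores."""
    if quantity.endswith("m"):
        return int(quantity[:-1]) / 1000
    return float(quantity)

def parse_mem_mi(quantity: str) -> int:
    """'128Mi' -> 128 (MiB)."""
    assert quantity.endswith("Mi")
    return int(quantity[:-2])

containers = [  # db and wp, as in the Pod spec above
    {"requests": {"cpu": "250m", "memory": "64Mi"},
     "limits":   {"cpu": "500m", "memory": "128Mi"}},
    {"requests": {"cpu": "250m", "memory": "64Mi"},
     "limits":   {"cpu": "500m", "memory": "128Mi"}},
]

req_cpu = sum(parse_cpu(c["requests"]["cpu"]) for c in containers)
lim_cpu = sum(parse_cpu(c["limits"]["cpu"]) for c in containers)
req_mem = sum(parse_mem_mi(c["requests"]["memory"]) for c in containers)
lim_mem = sum(parse_mem_mi(c["limits"]["memory"]) for c in containers)

print(req_cpu, lim_cpu)  # 0.5 1.0  -> Pod requests 0.5 cpu, limit 1 cpu
print(req_mem, lim_mem)  # 128 256  -> Pod requests 128 MiB, limit 256 MiB
```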
TO SET THE RESOURCE / REVOKE REQUESTS AND LIMITS OF THE DEPLOYMENT:
root@master:~# kubectl set resources deployment test-ssh --limits cpu=200m,memory=512Mi --requests cpu=100m,memory=256Mi
root@master:~# kubectl set resources deployment nginx --limits cpu=0,memory=0 --requests cpu=0,memory=0
root@master:~# watch -n 0.2 'kubectl get pods -o wide'
TO SCALE UP:
root@master:~# kubectl scale deployment test-ssh --replicas=3
root@master:~# kubectl scale deployment esb-ssh --replicas=3
root@master:~# watch -n 0.2 'kubectl get pods -o wide'
CREATE HORIZONTAL POD AUTOSCALER:
The following commands create Horizontal Pod Autoscalers that maintain between 1 and 10 replicas of the Pods controlled by the test-ssh and esb-ssh
deployments created earlier. Roughly speaking, each HPA will increase and decrease the number of replicas (via the deployment) to maintain an average
CPU utilization across all Pods of 50%; since each pod requests 200 millicores in this example, that means an average CPU usage of 100 millicores. See the
Kubernetes HPA documentation for more details on the algorithm.
root@master:~# kubectl autoscale deployment test-ssh --cpu-percent=50 --min=1 --max=10
root@master:~# kubectl autoscale deployment esb-ssh --cpu-percent=50 --min=1 --max=10
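The kubectl autoscale commands above are shorthand for applying an HPA manifest; an equivalent autoscaling/v1 sketch for the esb-ssh deployment would be:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: esb-ssh
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: esb-ssh
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
```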
TO EXPOSE PORT 2202 FOR EXTERNAL ACCESS (OPTIONAL):
root@master:~# kubectl expose deployment test-ssh --port=2202 --target-port=22
root@master:~# kubectl expose deployment test-ssh --port=9191 --target-port=9191
CREATE SSL CERTIFICATES FOR HAPROXY – SELF-SIGNED:
root@master# apt -y install haproxy
root@master# mkdir -p /etc/pki/tls/certs
root@master# openssl req -x509 -nodes -newkey rsa:2048 -keyout /etc/pki/tls/certs/haproxy.pem -out /etc/pki/tls/certs/haproxy.pem -days 3650
root@master# chmod 600 /etc/pki/tls/certs/haproxy.pem
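With the certificate in place, HAProxy can terminate TLS and keep sessions sticky by inserting a cookie that names the chosen server. A sketch of the relevant haproxy.cfg sections (backend addresses and ports are illustrative assumptions):

```
frontend https-in
    bind *:443 ssl crt /etc/pki/tls/certs/haproxy.pem
    mode http
    default_backend erp_nodes

backend erp_nodes
    mode http
    balance roundrobin
    # sticky sessions: insert a cookie recording the chosen server
    cookie SRV insert indirect nocache
    server node1 192.168.2.29:9191 check cookie n1
    server node2 192.168.2.30:9191 check cookie n2
```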