# Killer Coda
- [Standard Commands](#Standard%20Commands)
- [Monitor Cluster Components](#Monitor%20Cluster%20Components)
- [POD Port Access](#POD%20Port%20Access)
- [Container & POD Logging](#Container%20&%20POD%20Logging)
- [Utility - crictl](#Utility%20-%20crictl)
- [Kubelet Misconfigured](#Kubelet%20Misconfigured)
- [Application Misconfigured 1](#Application%20Misconfigured%201)
- [Application Misconfigured 2](#Application%20Misconfigured%202)
- [Application Multi Container Issue](#Application%20Multi%20Container%20Issue)
- [Issue One: Gather Logs](#Issue%20One:%20Gather%20Logs)
  - [Issue Two: Fix Deployment](#Issue%20Two:%20Fix%20Deployment)
- [Create ConfigMaps](#Create%20ConfigMaps)
- [ConfigMap Access in Pods](#ConfigMap%20Access%20in%20Pods)
- [Ingress Create](#Ingress%20Create)
- [Create Services for existing Deployments pt. 1](#Create%20Services%20for%20existing%20Deployments%20pt.%201)
- [Create Ingress for existing Services pt. 2](#Create%20Ingress%20for%20existing%20Services%20pt.%202)
- [NetworkPolicy Namespace Selector](#NetworkPolicy%20Namespace%20Selector)
- [Create new NPs](#Create%20new%20NPs)
- [NP - Space1](#NP%20-%20Space1)
- [NP - Space2](#NP%20-%20Space2)
- [Solution Verification](#Solution%20Verification)
- [NetworkPolicy Misconfigured](#NetworkPolicy%20Misconfigured)
- [Verify Solution](#Verify%20Solution)
- [RBAC ServiceAccount Permissions](#RBAC%20ServiceAccount%20Permissions)
- [Control ServiceAccount permissions using RBAC](#Control%20ServiceAccount%20permissions%20using%20RBAC)
- [RBAC User Permissions](#RBAC%20User%20Permissions)
- [Control User permissions using RBAC](#Control%20User%20permissions%20using%20RBAC)
- [Scheduling Priority](#Scheduling%20Priority)
- [POD Priorities](#POD%20Priorities)
- [Create Pod with higher priority](#Create%20Pod%20with%20higher%20priority)
- [Scheduling Pod Affinity](#Scheduling%20Pod%20Affinity)
- [Select Node by Pod Affinity](#Select%20Node%20by%20Pod%20Affinity)
- [Scheduling Pod Anti Affinity](#Scheduling%20Pod%20Anti%20Affinity)
- [Select Node by Pod Anti Affinity](#Select%20Node%20by%20Pod%20Anti%20Affinity)
- [Persistent Volumes](#Persistent%20Volumes)
- [Create a Persistent Volume](#Create%20a%20Persistent%20Volume)
- [PVC Resize](#PVC%20Resize)
- [Services & Networking](#Services%20&%20Networking)
- [Ingress](#Ingress)
- [Network Policies](#Network%20Policies)
## Standard Commands
These are commands from the Udemy course that I felt were relevant to overall training/practice.
### Monitor Cluster Components
- View resource metrics for Nodes & PODs.
- Unless the metrics-server is installed/enabled, this won't work.
```
kubectl top node
kubectl top pod
```
### POD Port Access
- A `NodePort` Service is for external POD access.
- You can't set the node port number via the command line & have to edit it on the Service after creation.
- A `ClusterIP` Service is for internal (in-cluster) POD access.
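As a sketch of the difference, a hypothetical `NodePort` Service manifest (name, label & ports are assumptions) showing the `nodePort` field you'd edit after creation:
```
# Hypothetical NodePort Service: external traffic to <nodeIP>:30080 is
# forwarded to port 80 of PODs labeled app=web.
apiVersion: v1
kind: Service
metadata:
  name: web-nodeport    # hypothetical name
spec:
  type: NodePort
  selector:
    app: web            # hypothetical POD label
  ports:
  - port: 80            # cluster-internal Service port
    targetPort: 80      # container port
    nodePort: 30080     # edit this field after creation
```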
### Container & POD Logging
On the controller node the following directories are where logs are located:
- `/var/log/containers`
- `/var/log/pods`
- `/var/log/syslog` (e.g. `grep apiserver /var/log/syslog` for kube-apiserver logs)
### Utility - crictl
The `crictl` command can show which containers/PODs are running & their logs.
- Show running containers/PODs:
- `crictl ps`
- Show container logs:
- `crictl logs $CONTAINER_UUID`
---
### Kubelet Misconfigured
- Problem: kubelet is configured with an incorrect option.
- Solution: Use the `find` utility to locate files that inject extra args to the kubelet service.
```
node01 $ cd /
node01 $ find . -iname 'kubeadm*'
... output truncated for brevity ...
./var/lib/kubelet/kubeadm-flags.env
```
### Application Misconfigured 1
- Problem: fix the deployment.
- Issue: the deployment referenced a `configMap` object with an incorrect value.
- Solution: updated the deployment to use the correct value for the `configMap` reference.
### Application Misconfigured 2
- Problem: fix the deployment.
- Issue: the deployment was pinned to a specific node by node affinity set in its config.
- Solution: update the deployment to remove the node affinity.
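For reference, the kind of block that had to be removed from the deployment's POD template (`spec.template.spec`); the key & value here are hypothetical:
```
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname    # hypothetical key/value
          operator: In
          values:
          - node01
```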
### Application Multi Container Issue
#### Issue One: Gather Logs
- Problem: obtain all logs for the deployment containers.
- Solution: `k -n $NS logs deployments/$deploy --all-containers=true > logs.log`
#### Issue Two: Fix Deployment
- Problem: Both containers are using the same port, 80.
- Solution: Edit the deployment and delete one of the containers.
### Create ConfigMaps
- Problem: Create a `cm` from specified literal data __AND__ another from an existing `configMap` file.
- Solution:
- `k create cm $CM --from-literal=$KEY=$VALUE`
- `k create -f $CM.yaml`
### ConfigMap Access in Pods
- Problem:
1. Create a POD and set it to use the specified `cm` "trauerweide" as an ENV variable TREE1.
2. Use the `cm` of "birke" as a volume.
- Solution:
1. https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-environment-variables
2. https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod
1. Make sure to read the question and not just blindly specify an incorrect dir, e.g. I used "/etc/birke/\*" & that isn't correct!
2. Just use the base level dir & the `cm` will automagically mount each key as a file in that dir.
- Verifying solution:
```
# for solution 1
k exec pod1 -- env | grep "TREE"
TREE1=trauerweide
# for solution 2
kubectl exec pod1 -- cat /etc/birke/tree
kubectl exec pod1 -- cat /etc/birke/level
kubectl exec pod1 -- cat /etc/birke/department
```
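A sketch of the POD combining both approaches (the image & the key inside "trauerweide" are assumptions; the mount path matches the verification above):
```
apiVersion: v1
kind: Pod
metadata:
  name: pod1
spec:
  containers:
  - name: pod1
    image: nginx:alpine          # image is an assumption
    env:
    - name: TREE1                # ENV variable from cm "trauerweide"
      valueFrom:
        configMapKeyRef:
          name: trauerweide
          key: tree              # key name is an assumption
    volumeMounts:
    - name: birke
      mountPath: /etc/birke      # each key of cm "birke" becomes a file here
  volumes:
  - name: birke
    configMap:
      name: birke
```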
### Ingress Create
#### Create Services for existing Deployments pt. 1
- Problem: Create 2 `ClusterIP` Services for the 2 Deployments in `ns` __world__.
- Deployment names: __asia__ & __europe__
- Solution:
```
k -n world create service clusterip asia --tcp=80:80
k -n world create service clusterip europe --tcp=80:80
```
#### Create Ingress for existing Services pt. 2
- Problem: Create an nginx ingress for the 2 previously created services to be accessed via their respective paths.
- http://world.universe.mine:30080/asia/
- http://world.universe.mine:30080/europe/
- Solution:
- https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/#create-an-ingress
- Remember to set the port of the __service__ created previously. In this example, port 80.
- It will automatically create the **30080** port upon ingress creation.
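A sketch of the resulting Ingress, following the linked doc (the ingress class name is an assumption; apps are assumed to serve at their root path, so no rewrite annotation is shown):
```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: world
  namespace: world
spec:
  ingressClassName: nginx        # class name is an assumption
  rules:
  - host: world.universe.mine
    http:
      paths:
      - path: /europe
        pathType: Prefix
        backend:
          service:
            name: europe
            port:
              number: 80         # Service port from pt. 1
      - path: /asia
        pathType: Prefix
        backend:
          service:
            name: asia
            port:
              number: 80
```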
### NetworkPolicy Namespace Selector
- Show `ns` labels:
```
k get ns --show-labels
```
#### Create new NPs
- Problem:
  - Create a NP for namespace space1 allowing only egress to space2. Ingress is not affected.
  - Create a NP for namespace space2 allowing only ingress from space1. Egress is not affected.
- Both should still allow DNS. This is mainly related to space1 since we're managing egress for it.
##### NP - Space1
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space1
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: space2
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
```
##### NP - Space2
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np
  namespace: space2
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: space1
```
##### Solution Verification
- Should Work:
```
k -n space1 exec app1-0 -- curl -Isqm 1 microservice1.space2.svc.cluster.local
k -n space1 exec app1-0 -- curl -Isqm 1 microservice2.space2.svc.cluster.local
k -n space1 exec app1-0 -- nslookup tester.default.svc.cluster.local
k -n kube-system exec -it validate-checker-pod -- curl -Isqm 1 app1.space1.svc.cluster.local
```
- Shouldn't Work:
```
k -n space1 exec app1-0 -- curl -Isqm 1 tester.default.svc.cluster.local
k -n kube-system exec -it validate-checker-pod -- curl -Isqm 1 microservice1.space2.svc.cluster.local
k -n kube-system exec -it validate-checker-pod -- curl -Isqm 1 microservice2.space2.svc.cluster.local
k -n default run nginx --image=nginx:1.21.5-alpine --restart=Never -i --rm -- curl -Isqm 1 microservice1.space2.svc.cluster.local
```
### NetworkPolicy Misconfigured
- Problem: Fix the existing NP **np100x** to allow egress to the PODs in namespaces, _level-1000_, _level-1001_ & _level-1002_
- Solution: update network policy **np100x**
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-100x
  namespace: default
spec:
  podSelector:
    matchLabels:
      level: 100x
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1000
      podSelector:
        matchLabels:
          level: 100x
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1001
      podSelector:
        matchLabels:
          level: 100x
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: level-1002
      podSelector:
        matchLabels:
          level: 100x
  - ports:
    - port: 53
      protocol: TCP
    - port: 53
      protocol: UDP
```
#### Verify Solution
```
kubectl exec tester-0 -- curl -Isqm 1 tester.level-1000.svc.cluster.local
kubectl exec tester-0 -- curl -Isqm 1 tester.level-1001.svc.cluster.local
kubectl exec tester-0 -- curl -Isqm 1 tester.level-1002.svc.cluster.local
```
### RBAC ServiceAccount Permissions
#### Control ServiceAccount permissions using RBAC
- Problem:
  - Create a service account _pipeline_ in namespaces __ns1__ & __ns2__.
  - Bind the service account _pipeline_ to the predefined cluster role _view_ in both namespaces.
  - Create a cluster role & cluster role bindings allowing the service account _pipeline_ to create & delete deployments in each of the namespaces.
- Create the service account in each NS:
```
k -n ns1 create serviceaccount pipeline
k -n ns2 create serviceaccount pipeline
```
- Create the cluster role binding view objects:
```
k create clusterrolebinding ns1-pipeline-view --clusterrole=view --serviceaccount=ns1:pipeline
k create clusterrolebinding ns2-pipeline-view --clusterrole=view --serviceaccount=ns2:pipeline
```
- Create cluster role & binding to create & delete deployment objects:
```
k create clusterrole ns1-pipeline-deployments --verb=create,delete --resource=deployments
k create clusterrole ns2-pipeline-deployments --verb=create,delete --resource=deployments
k create clusterrolebinding ns1-pipeline-deployments-bind --clusterrole=ns1-pipeline-deployments --serviceaccount=ns1:pipeline
k create clusterrolebinding ns2-pipeline-deployments-bind --clusterrole=ns2-pipeline-deployments --serviceaccount=ns2:pipeline
```
- Verify the service account can perform requested tasks:
```
k auth can-i list deployments --as=system:serviceaccount:ns1:pipeline
k auth can-i list deployments --as=system:serviceaccount:ns2:pipeline
k auth can-i create deployments --as=system:serviceaccount:ns2:pipeline
k auth can-i create deployments --as=system:serviceaccount:ns1:pipeline
k auth can-i delete deployments --as=system:serviceaccount:ns1:pipeline
k auth can-i delete deployments --as=system:serviceaccount:ns2:pipeline
```
### RBAC User Permissions
#### Control User permissions using RBAC
- Problem: There is an existing Namespace, applications.
  - User _smoke_ should be allowed to create and delete Pods, Deployments and StatefulSets in Namespace applications.
  - User _smoke_ should have view permissions (like the permissions of the default ClusterRole named view) in all Namespaces except kube-system.
  - Verify everything using `kubectl auth can-i`.
- Solution - Create _smoke_ role & role binding (one possible imperative approach; `sts` is the resource short name for StatefulSets):
```
k -n applications create role smoke --verb=create,delete --resource=pods,deployments,sts
k -n applications create rolebinding smoke --role=smoke --user=smoke
```
- Solution - Create _smoke_ view role & role binding:
```
k get ns # get all namespaces
k -n applications create rolebinding smoke-view --clusterrole view --user smoke
k -n default create rolebinding smoke-view --clusterrole view --user smoke
k -n kube-node-lease create rolebinding smoke-view --clusterrole view --user smoke
k -n kube-public create rolebinding smoke-view --clusterrole view --user smoke
```
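To verify with `kubectl auth can-i` as the task asks (expected answers follow from the task requirements, assuming the role, role binding & view bindings for _smoke_ are in place):
```
k -n applications auth can-i create deployments --as smoke    # should be yes
k -n applications auth can-i delete statefulsets --as smoke   # should be yes
k -n applications auth can-i list pods --as smoke             # should be yes (view)
k -n default auth can-i list pods --as smoke                  # should be yes (view)
k -n kube-system auth can-i list pods --as smoke              # should be no
```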
### Scheduling Priority
#### POD Priorities
- Problem: Remove the higher-priority POD.
- Solution: Review the PODs' _priority_ fields; the one with the larger number is the higher-priority POD.
```
k get pod -o yaml | grep priority
k get priorityclass
```
#### Create Pod with higher priority
- Problem: Create a POD with 1Gi memory & a higher priority class in order to get it scheduled.
- Solution: Create the POD manifest via the `run` sub-command and then edit it to include the `priorityClassName` option.
- https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/
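A minimal sketch of such a POD (class name & image are assumptions; pick an existing class from `k get priorityclass`):
```
apiVersion: v1
kind: Pod
metadata:
  name: important-pod                # hypothetical name
spec:
  priorityClassName: high-priority   # assumption: an existing higher class
  containers:
  - name: app
    image: nginx:alpine              # image is an assumption
    resources:
      requests:
        memory: 1Gi
```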
### Scheduling Pod Affinity
#### Select Node by Pod Affinity
- Solution: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
- Note: This was frustrating because there are multiple ways to accomplish this & the simplest is not provided. I may need to practice more on this in general.
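One possible shape, following the linked docs (labels are assumptions): schedule this POD onto a node, per the hostname topology key, that already runs a POD labeled app=web.
```
# Goes under the POD's spec.
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web                   # hypothetical label of the target POD
      topologyKey: kubernetes.io/hostname
```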
### Scheduling Pod Anti Affinity
#### Select Node by Pod Anti Affinity
- Note: Be aware of the very specific naming of the affinity & anti-affinity fields.
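The anti-affinity variant differs only in the `podAntiAffinity` field name (labels again hypothetical): keep this POD off nodes already running app=web PODs.
```
# Goes under the POD's spec.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: web                   # hypothetical label to avoid
      topologyKey: kubernetes.io/hostname
```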
### Persistent Volumes
#### Create a Persistent Volume
```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-volume
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  hostPath:
    path: "/mnt/data/"
  claimRef:
    name: pv-claim
    namespace: default
```
### PVC Resize
- In this instance you will edit the `resources` field in the `spec`. For example:
```
spec:
  ... output truncated for brevity ...
  resources:
    requests:
      storage: 60Mi # increased from 40Mi
```
### Services & Networking
#### Ingress
- Task: Create an nginx ingress & disable ssl-redirect.
- Solution:
```
k create ing nginx-ingress-resource --rule="/shop*=nginx-service:80,http" --annotation=nginx.ingress.kubernetes.io/ssl-redirect="false"
```
#### Network Policies
- Task: my-app-deployment and cache-deployment are deployed, and my-app-deployment is exposed through a service named my-app-service. Create a NetworkPolicy named my-app-network-policy to restrict incoming and outgoing traffic to my-app-deployment pods with the following specifications:
- Allow incoming traffic only from pods.
- Allow incoming traffic from a specific pod with the label `app=trusted`
- Allow outgoing traffic to pods.
- Deny all other incoming and outgoing traffic.
- Solution:
```
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: my-app-network-policy
spec:
  policyTypes:
  - Ingress
  - Egress
  podSelector:
    matchLabels:
      app: my-app
  ingress:
  - from:
    - podSelector: {}
  - from:
    - podSelector:
        matchLabels:
          app: trusted
  egress:
  - to:
    - podSelector: {}
```
- Describing the network policy:
```
k describe networkpolicies my-app-network-policy
Name: my-app-network-policy
Namespace: default
Created on: 2024-11-15 08:44:44 +0000 UTC
Labels: <none>
Annotations: <none>
Spec:
PodSelector: app=my-app
Allowing ingress traffic:
To Port: <any> (traffic allowed to all ports)
From:
PodSelector: <none>
----------
To Port: <any> (traffic allowed to all ports)
From:
PodSelector: app=trusted
Allowing egress traffic:
To Port: <any> (traffic allowed to all ports)
To:
PodSelector: <none>
Policy Types: Ingress, Egress
```