※ Use this when multiple clusters for multiple accounts are stored in the kubeconfig file ('~/.kube/config') and,

when switching contexts, you want the prompt to show which cluster context is currently in use.

 

  • Configuration in a Bash shell

Add the following line to '~/.bashrc' or '~/.bash_profile':

export PS1='\[\e]0;\u@\h: \w\a\]\[\033[01;32m\]\u@\h\[\033[00m\]:\[\033[01;34m\]\w\[\033[00m\] \[\033[01;36m\]($(kubectl config current-context))\[\033[00m\] $ '

 

Apply the configuration:

source ~/.bashrc

 

For AWS EKS, the context name does not have to stay the full ARN; you can rename it to just the cluster name for convenience.
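One way is to rename the context once with `kubectl config rename-context`; another is to trim only the prompt display. A sketch, assuming EKS context names shaped like `arn:aws:eks:<region>:<account>:cluster/<name>` (the ARN and the helper name `kube_ctx_short` are illustrative):

```shell
# One-time rename (context ARN is hypothetical):
# kubectl config rename-context \
#   arn:aws:eks:ap-northeast-2:111122223333:cluster/dev-cluster dev-cluster

# Or keep the ARN and shorten only what the prompt shows:
# print everything after the last "/" of a context name
kube_ctx_short() {
  printf '%s\n' "$1" | awk -F'/' '{print $NF}'
}

# In PS1, wrap the current context with the helper:
# ($(kube_ctx_short "$(kubectl config current-context)"))
kube_ctx_short "arn:aws:eks:ap-northeast-2:111122223333:cluster/dev-cluster"
```

Renaming changes the kubeconfig itself, while the helper only affects the prompt; both avoid an unwieldy full-ARN prompt.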

ETCDCTL version 2

 

etcdctl backup
etcdctl cluster-health
etcdctl mk
etcdctl mkdir
etcdctl set

 

ETCDCTL version 3

 

etcdctl snapshot save 
etcdctl endpoint health
etcdctl get
etcdctl put

 

Setting the API version:

export ETCDCTL_API=<number>

Setting the paths to the certificate files so that etcdctl can authenticate to the ETCD API server:

--cacert /etc/kubernetes/pki/etcd/ca.crt     
--cert /etc/kubernetes/pki/etcd/server.crt     
--key /etc/kubernetes/pki/etcd/server.key

An example command run through kubectl:

kubectl exec etcd-master01 -n kube-system -- sh -c "ETCDCTL_API=3 etcdctl get / --prefix --keys-only --limit=10 --cacert /etc/kubernetes/pki/etcd/ca.crt --cert /etc/kubernetes/pki/etcd/server.crt  --key /etc/kubernetes/pki/etcd/server.key"
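The same certificate flags apply to v3 backups with `snapshot save`; a sketch (the snapshot path and the pod name `etcd-master01` are assumptions carried over from the example above):

```shell
# Take an etcd v3 snapshot from inside the etcd pod
kubectl exec etcd-master01 -n kube-system -- sh -c \
  "ETCDCTL_API=3 etcdctl snapshot save /var/lib/etcd/snapshot.db \
     --cacert /etc/kubernetes/pki/etcd/ca.crt \
     --cert /etc/kubernetes/pki/etcd/server.crt \
     --key /etc/kubernetes/pki/etcd/server.key"
```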

 

Error message:

│ Error: Failed to construct REST client
│
│   with module.kubernetes.kubernetes_manifest.eniconfig["xx-northeast-xx"],
│   on modules/kubernetes/xx_network.tf line 41, in resource "kubernetes_manifest" "eniconfig":
│   41: resource "kubernetes_manifest" "eniconfig" {
│
│ cannot create REST client: no client config

From the official HashiCorp documentation:

This resource requires API access during planning time. This means the cluster has to be accessible at plan time and thus cannot be created in the same apply operation. We recommend only using this resource for custom resources or resources not yet fully supported by the provider.

 

When running `terraform plan` with a tfvars file passed via `-var-file=`, a "no client config" error occurred on a `kubernetes_manifest` resource in one of the modules.

According to the HashiCorp documentation, this resource requires API access during planning, meaning the target cluster (the EKS cluster) must be reachable when `terraform plan` runs. Since the cluster had not been created yet, the deployment succeeded after switching to a different tfvars file that excluded the custom-networking (ENIConfig) resource.
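Another common workaround is a two-phase apply: create the cluster first with `-target`, then apply the rest once the API server is reachable at plan time. A sketch; the module address `module.eks` is an assumption, and the tfvars path is taken from the examples below:

```shell
# Phase 1: create only the EKS cluster (module address is hypothetical)
terraform apply -var-file=env/an1-dev.tfvars -target=module.eks

# Phase 2: the API server now exists, so planning the
# kubernetes_manifest resources can succeed
terraform apply -var-file=env/an1-dev.tfvars
```

Note that HashiCorp treats `-target` as an escape hatch rather than a routine workflow, so the separate-tfvars approach described above is equally valid.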

 

 

To replace the value "vpc-abab" with the string "vpc-cdcd" in vpc_id:

 

# search the vpc_id variable
$ grep -r "vpc_id" .
grep: ./.terraform/providers...: binary file matches
./env/an1-dev.tfvars:vpc_id                                  = "vpc-abab"
./examples/install.tfvars:vpc_id             = "vpc-abab"
./examples/add_role.tfvars:vpc_id             = "vpc-abab"
./examples/account_install.tfvars:vpc_id             = "vpc-abab"
./examples/spot_instance.tfvars:vpc_id             = "vpc-abab"
....

# replace the target value "vpc-abab" with "vpc-cdcd"
$ find . -type f -name "*.tf*" -exec sed -i 's/vpc-abab/vpc-cdcd/g' {} +

# verify if it's correctly replaced
$ grep -r "vpc_id" .
grep: ./.terraform/providers/registry.terraform.io/...: binary file matches
./env/an1-dev.tfvars:vpc_id                                  = "vpc-cdcd"
./examples/install.tfvars:vpc_id             = "vpc-cdcd"
./examples/add_role.tfvars:vpc_id             = "vpc-cdcd"
./examples/account_install.tfvars:vpc_id             = "vpc-cdcd"
./examples/spot_instance.tfvars:vpc_id             = "vpc-cdcd"
....

 

 

1. Local env: Ubuntu CLI, terraform

2. Error messages: `sts get-caller-identity` fails with "SignatureDoesNotMatch"

 

2.1 With terraform

soyi@SOYI /mnt/c/Users/SOYI/tf-test/fundamental-aws-infra/terraform $ terraform init

Initializing the backend...
Initializing modules...
╷
│ Error: error configuring S3 Backend: error validating provider credentials: error calling sts:GetCallerIdentity: SignatureDoesNotMatch: Signature expired: 20230807T110917Z is now earlier than 20230808T053756Z (20230808T055256Z - 15 min.)
│       status code: 403, request id: 557xxxx-xxxx-xxxx-xxxx-7b40xxxxxxxxx
│
│
╵

 

2.2 Checking with the AWS CLI (same error)

soyi@SOYI /mnt/c/Users/SOYI/tf-test/fundamental-aws-infra/terraform $ aws sts get-caller-identity

An error occurred (SignatureDoesNotMatch) when calling the GetCallerIdentity operation: Signature expired: 20230807T110934Z is now earlier than 20230808T053813Z (20230808T055313Z - 15 min.)

 

3. Resolution:

3.1 First, check the credentials set via `aws configure`.

3.2 If the credentials are fine, synchronize the system clock (on Ubuntu, for example):

# run the following command on Ubuntu
$ sudo ntpdate ntp.ubuntu.com
 8 Aug 14:55:29 ntpdate[1292]: step time server 91.189.91.157 offset +67418.817306 sec

# the "step time server ..." line means the clock has been adjusted.

# if the ntpdate command is not found, install it first and run it again:
$ sudo apt install -y ntpdate && sudo ntpdate ntp.ubuntu.com
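The `/mnt/c/...` path in the prompt suggests this is WSL, where the clock often drifts after the Windows host sleeps; in that case two more options (a sketch, requiring root) are syncing from the hardware clock or enabling systemd's NTP sync:

```shell
# On WSL: resync the Linux clock from the Windows hardware clock
sudo hwclock -s

# On systemd-based hosts: turn on continuous NTP synchronization
sudo timedatectl set-ntp true
timedatectl status
```

Either way, the point is the same: the "Signature expired" error means the local clock is more than 15 minutes off from AWS's clock.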

 

 

 

What are the leading infrastructure automation technologies?


 

  • AWS CloudFormation: AWS CloudFormation is Amazon Web Services' (AWS) native IaC service that allows users to create and manage AWS resources using JSON or YAML templates.

 

  • Google Cloud Deployment Manager: Similar to CloudFormation, Google Cloud Deployment Manager enables the creation and management of Google Cloud Platform (GCP) resources through configuration files.

 

  • Azure Resource Manager (ARM) Templates: Microsoft Azure's ARM Templates provide IaC capabilities for defining and managing Azure resources.

 

  • Ansible: Ansible is an open-source automation tool that supports IaC for provisioning, configuration management, and application deployment.

 

  • Chef: Chef is another open-source automation platform that enables configuration management and infrastructure automation.

 

  • Puppet: Puppet is a configuration management tool that helps manage the state of IT infrastructure through code.

 

  • Jenkins: Jenkins is an open-source automation server that can be used for continuous integration and continuous deployment (CI/CD) pipelines.

 

  • GitLab CI/CD: GitLab provides built-in CI/CD capabilities for automating the deployment and testing of applications using GitLab's infrastructure.

 

 

A brief description of some of HashiCorp's key services


  • Terraform: Terraform is an infrastructure as code (IaC) tool that enables users to define and manage cloud infrastructure using a declarative configuration language. With Terraform, you can create, modify, and destroy infrastructure resources across various cloud providers, data centers, and services, all in a version-controlled and repeatable manner.

 

  • Consul: Consul is a service networking platform that provides features for service discovery, health checking, and key-value storage. It simplifies the management of distributed applications and microservices by enabling them to locate and communicate with each other reliably.

 

  • Vault: Vault is a secrets management tool that securely stores, accesses, and manages sensitive data, such as passwords, tokens, and encryption keys. It ensures that applications and services can access secrets securely without hardcoding or exposing them.

 

  • Nomad: Nomad is a lightweight and flexible job scheduler and orchestrator. It allows users to deploy and manage applications across various infrastructure platforms, including virtual machines, containers, and bare-metal servers.

  • Packer: Packer is a tool for creating machine images for various platforms, such as Amazon EC2, Microsoft Azure, and Docker. It automates the process of creating consistent, ready-to-use machine images, ensuring that environments are reproducible and scalable.

  • Vagrant: Vagrant is a development environment automation tool that simplifies the setup and configuration of virtual development environments. It allows developers to create and share reproducible development environments easily.


 

-. A summary of the YAML template kinds used in Kubernetes to deploy, update, and otherwise manage pods.

 

1. DaemonSet

  • Purpose: A DaemonSet ensures a copy of a specific pod runs on each node in the Kubernetes cluster. It is commonly used for log collection, monitoring agents, background tasks, and similar per-node workloads.
  • Advantage: By running an identical pod on every node, DaemonSets help maintain consistency across the cluster, making it easy to distribute workloads and collect data from every host.

Example of creating a service as a DaemonSet (YAML-based deployment):

Create a YAML file for the service as shown below, set kind to DaemonSet, and run it with `kubectl create` or `kubectl apply`.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: servicename-daemonset
spec:
  selector:
    matchLabels:
      app: servicename
  template:
    metadata:
      labels:
        app: servicename
    spec:
      containers:
      - name: example
        image: example:latest

 

2. StatefulSet

  • Purpose: Used to deploy stateful applications, that is, services whose pods need a unique identity or a stable network identity: databases like MySQL, messaging systems like Kafka, or any application that needs stable, persistent network identities.
  • Advantage: StatefulSets ensure pods are created and scaled in order while preserving their identities, which suits workloads whose data must be retained and move with the pods. They are a good fit for stateful applications that need pod changes without data loss.

Example of creating a service as a StatefulSet (YAML-based deployment):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: yourservice-name
spec:
  selector:
    matchLabels:
      app: service-name
  serviceName: "example"
  replicas: 3
  template:
    metadata:
      labels:
        app: service-name
    spec:
      containers:
      - name: service-name
        image: service-name:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: service-name-persistent-storage
          mountPath: /var/lib/service-name
  volumeClaimTemplates:
  - metadata:
      name: service-name-persistent-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "standard"
      resources:
        requests:
          storage: 1Gi

 

3. ReplicaSet

  • Purpose: Maintains a desired number of identical pods in the cluster, ensuring high availability and fault tolerance. Suitable for scaling and managing stateless applications where the order of pod creation and termination does not matter, and useful for guaranteeing a specific number of replicas regardless of the underlying nodes' availability.
  • Selector and Labels: A ReplicaSet uses a label selector to identify the pods it manages and ensures the specified number of pods with matching labels are always running.
  • Pod Template: New and replacement pods are created from the pod template; updating the template makes the ReplicaSet create new pods based on the updated template.
  • Scaling: When scaling up, Kubernetes increases the number of pods to match the replica count; lowering the replica count terminates some of the existing pods until the desired number is reached.

4. Job: Used for short-lived tasks or batch jobs that run to completion, such as data processing or backups.

An example command to create a CronJob:

kubectl create cronjob my-cronjob --image=my-image:latest --schedule="* * * * *"

Creating a typical Job:

kubectl create job my-job --image=my-image:latest
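The same Job can also be written as a manifest in the YAML style used above (the name `my-job` and image `my-image:latest` are the placeholders from the commands):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: my-job
spec:
  backoffLimit: 3          # retry the pod up to 3 times on failure
  template:
    spec:
      restartPolicy: Never # a Job pod must use Never or OnFailure
      containers:
      - name: my-job
        image: my-image:latest
```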

 

 

 

-. How to restart (redeploy) pods using kubectl commands

 

1. Restarting a Pod:

The simplest way to restart a single pod is to delete the running pod and let the controller (ReplicaSet, etc.) that owns it recreate a new one automatically.

# Replace "your-pod-name" with the name of your pod
kubectl delete pod your-pod-name

 

2. Restarting a Deployment:

For a gradual, rolling redeploy and restart (for example for more stateful services), use rollout:

kubectl rollout restart deployment your-deployment-name

 

3. Restarting a StatefulSet:

Likewise, to perform a rolling-style redeploy of a workload defined as a StatefulSet:

kubectl rollout restart statefulset your-statefulset-name

4. Restarting a DaemonSet:

  • Rolling update

According to the official documentation, in Kubernetes 1.5 and earlier, updating a DaemonSet template required manually deleting the DaemonSet pods so that new ones would be created (the OnDelete strategy); from 1.6 onward, rolling updates are available as the default. (Rolling update: when the template is modified, the old pods are terminated automatically and new pods reflecting the new template are created.)

To use rolling updates, set the following field in the DaemonSet template:

.spec.updateStrategy.type: RollingUpdate

Any update to ".spec.template" of a RollingUpdate DaemonSet will trigger a rolling update; you can update the DaemonSet by applying a new YAML file.
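In manifest form the strategy sits under the DaemonSet spec; a sketch reusing the names from the DaemonSet example earlier (maxUnavailable is an optional tuning knob):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: servicename-daemonset
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace pods one node at a time
  # selector and template as in the DaemonSet example above
```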

 

See the official documentation: https://kubernetes.io/docs/tasks/manage-daemon/update-daemon-set/

 


 

  • Cascading deletion

See the official documentation: https://kubernetes.io/docs/tasks/administer-cluster/use-cascading-deletion/

 


 

This concerns deleting a resource together with its dependent resources (such as the pods it manages) during garbage collection, and you can choose in advance whether the deletion runs in the foreground or the background. For kubectl, background cascading deletion is the default, and foreground deletion is opt-in.

When you delete a DaemonSet with `kubectl delete daemonset your-daemonset-name` without disabling cascading (the legacy `--cascade=false` flag is now spelled `--cascade=orphan`), Kubernetes performs a cascading delete by default: not only is the DaemonSet resource deleted, but all the pods managed by that DaemonSet are terminated as well.
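With current kubectl, the propagation policy is chosen via the `--cascade` flag (the DaemonSet name below is a placeholder):

```shell
# Default: background cascading deletion (DaemonSet removed first,
# pods cleaned up by the garbage collector afterwards)
kubectl delete daemonset your-daemonset-name

# Foreground: wait for the dependent pods to be deleted first
kubectl delete daemonset your-daemonset-name --cascade=foreground

# Orphan: delete only the DaemonSet object and leave the pods running
kubectl delete daemonset your-daemonset-name --cascade=orphan
```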

A MySQL user hit the following errors while creating a new function:

SQL Error [1418]

This function has none of DETERMINISTIC, NO SQL, or READS SQL DATA in its declaration and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

SQL Error [1419] 

You do not have the SUPER privilege and binary logging is enabled

(you *might* want to use the less safe log_bin_trust_function_creators variable)

 

The MySQL global variable "log_bin_trust_function_creators" is off by default; if the user lacks the SUPER privilege, function creation can fail even when the user has the TRIGGER privilege.

 

You can resolve this either by turning the parameter on, which is slightly less safe but lets ordinary accounts create functions, or by granting the user the SUPER privilege.
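A sketch of both options, run as an administrative account (the user and host are placeholders; on managed services such as RDS, the variable is changed via the parameter group rather than SET GLOBAL):

```sql
-- Option 1: trust function creators (slightly less safe; lasts until
-- restart unless also persisted in the server configuration)
SET GLOBAL log_bin_trust_function_creators = 1;

-- Option 2: grant the creating user SUPER instead
GRANT SUPER ON *.* TO 'app_user'@'%';
```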

 

From dev.mysql.com:

This variable applies when binary logging is enabled. It controls whether stored function creators can be trusted not to create stored functions that may cause unsafe events to be written to the binary log. If set to 0 (the default), users are not permitted to create or alter stored functions unless they have the SUPER privilege in addition to the CREATE ROUTINE or ALTER ROUTINE privilege. A setting of 0 also enforces the restriction that a function must be declared with the DETERMINISTIC characteristic, or with the READS SQL DATA or NO SQL characteristic. If the variable is set to 1, MySQL does not enforce these restrictions on stored function creation. This variable also applies to trigger creation. See Section 25.7, “Stored Program Binary Logging”.

 


https://dev.mysql.com/doc/refman/8.0/en/replication-options-binary-log.html
