Writing a TCP connection test script for an EKS TCP application pod

 

tcp_conn.py

import socket

def tcp_health_check(host, port):
    message = "!!!HEALTH_CHECK!!!"
    expected_responses = ["ok", "OK", "Ok"]
    
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.connect((host, port))
            print(f"Connected to {host} on port {port}")
            sock.sendall(message.encode())
            print(f"Sent: {message}")
            response = sock.recv(1024).decode()
            print(f"Received: {response}")
            if response in expected_responses:
                print("Health check passed: received expected response.")
            else:
                print("Health check failed: unexpected response.")
    
    except socket.error as e:
        print(f"Socket error: {e}")

# Target port and IP or hostname
eks_pod_ip = "<pod ip or hostname>"
port = <target port>

# execute health check
tcp_health_check(eks_pod_ip, port)
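
For local verification before pointing the script at a real pod, a minimal responder that replies "OK" to any incoming payload can be run first. This is only a sketch under assumptions: the listen port (9000 here) and the fixed "OK" reply are placeholders and should match what the actual application returns.

# tcp_ok_server.py - minimal test responder (assumption: replies "OK" to any payload)
import socket

def run_ok_server(host="0.0.0.0", port=9000):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        print(f"Listening on {host}:{port}")
        while True:
            conn, addr = srv.accept()
            with conn:
                data = conn.recv(1024)   # read the health-check message
                print(f"From {addr}: {data!r}")
                conn.sendall(b"OK")      # reply with one of the expected responses

if __name__ == "__main__":
    run_ok_server()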

 

Discard all changes in the current local working branch and keep only the main branch pulled from git up to date

 

Running git reset --hard discards all changes in the current branch and returns it to the last committed state.

Then check out the main branch and pull it to update it to the latest version.

git reset --hard HEAD
git checkout main
git pull origin main

 

※ Create a new branch based on the current detached HEAD,

then merge that branch into the main branch

 

# Create a new branch
git checkout -b mybranch-new
# Add and commit the modified files (if the merge causes conflicts, resolve them and repeat this step)
git add <filename1> <filename2> <filename3>
git commit -m "Feature: Changes according to the request"
# Switch to the main branch and update it to the latest version
git checkout main
git pull origin main
# Merge the new mybranch-new branch into main (run the merge while on main)
git merge mybranch-new
# Push the changes to main
git push origin main

 

Even when configured according to the guide below, stopping did not work properly with an Aurora MySQL RDS.

( Guide for stopping RDS for more than 7 days: https://aws.amazon.com/premiumsupport/knowledge-center/rds-stop-seven-days/?nc1=h_ls )

Checking the RDS options showed that the stop option exists on the cluster, not on the instance, so the sample code was changed to operate on clusters instead of instances.

The boto3 API documentation below was used as a reference; the fields in the responses logged from actual API calls were inspected to confirm they could be filtered on, and the code was changed accordingly.

Python boto3 API documentation: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/rds.html#RDS.Client.describe_db_instances

Since RDS starts again automatically every 7 days, a cron(00 00 ? * 5 *) Event trigger was added; the hour and minute were set appropriately to the time the cluster was manually stopped, plus the number of minutes RDS takes to start.
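
A minimal sketch of wiring that cron schedule to the stop Lambda with boto3 is shown below; the rule name, Lambda name/ARN, and statement id are placeholders, and the same thing can be done from the EventBridge console instead.

# Sketch: attach the cron schedule to the stop Lambda (names and ARNs below are placeholders)
import boto3

events = boto3.client('events')
lambda_client = boto3.client('lambda')

lambda_arn = 'arn:aws:lambda:ap-northeast-2:[AWS account number]:function:[your stop-rds lambda]'

# Create (or update) the scheduled rule with the cron expression used above
rule = events.put_rule(
    Name='stop-aurora-cluster-weekly',
    ScheduleExpression='cron(00 00 ? * 5 *)',
    State='ENABLED',
)

# Allow EventBridge to invoke the Lambda function
lambda_client.add_permission(
    FunctionName='[your stop-rds lambda]',
    StatementId='allow-eventbridge-stop-rds',
    Action='lambda:InvokeFunction',
    Principal='events.amazonaws.com',
    SourceArn=rule['RuleArn'],
)

# Point the rule at the Lambda function
events.put_targets(
    Rule='stop-aurora-cluster-weekly',
    Targets=[{'Id': 'stop-rds-lambda', 'Arn': lambda_arn}],
)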

 

When the error below occurs, delete the tag that was added to the DB instance (here, autostop=yes),

then select the DB cluster and add it as an RDS cluster tag instead.

Cannot stop instance blahblah.
An error occurred (InvalidParameterCombination) when calling the StopDBInstance operation: aurora-mysql DB instances are not eligible for stopping and starting.
Cannot stop instance portal-rds-1.
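
Moving the tag can also be scripted. The sketch below removes the autostop tag from the instance and adds it to the cluster with boto3; both ARNs are placeholders.

# Sketch: move the autostop=yes tag from the DB instance to the DB cluster (ARNs are placeholders)
import boto3

rds = boto3.client('rds')

instance_arn = 'arn:aws:rds:ap-northeast-2:[AWS account number]:db:[your db instance]'
cluster_arn = 'arn:aws:rds:ap-northeast-2:[AWS account number]:cluster:[your db cluster]'

# Remove the tag from the instance ...
rds.remove_tags_from_resource(ResourceName=instance_arn, TagKeys=['autostop'])

# ... and add it to the cluster so the stop Lambda picks the cluster up
rds.add_tags_to_resource(ResourceName=cluster_arn, Tags=[{'Key': 'autostop', 'Value': 'yes'}])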

 

Below is the changed code, which stops the cluster.

On success, the following log is produced.

 

  • Log
Stopping cluster: my-cluster-name
  • code in Python
import boto3
rds = boto3.client('rds')

def lambda_handler(event, context):

    # Log DB instances for reference, then work on DB clusters
    dbi = rds.describe_db_instances()
    print(dbi)
    dbs = rds.describe_db_clusters()
    print(dbs)

    # Stop DB clusters
    for db in dbs['DBClusters']:
        # Check that the DB cluster is not already stopped
        if db['Status'] == 'available':
            try:
                GetTags = rds.list_tags_for_resource(ResourceName=db['DBClusterArn'])['TagList']
                for tags in GetTags:
                    # If tag "autostop=yes" is set for the cluster, stop it
                    if tags['Key'] == 'autostop' and tags['Value'] == 'yes':
                        result = rds.stop_db_cluster(DBClusterIdentifier=db['DBClusterIdentifier'])
                        print("Stopping cluster: {0}".format(db['DBClusterIdentifier']))
            except Exception as e:
                print("Cannot stop rds {0}.".format(db['DBClusterIdentifier']))
                print(e)
                
if __name__ == "__main__":
    lambda_handler(None, None)

 

Below is a sample Python AWS Lambda function, based on AWS documentation, that syncs Amazon WorkDocs documents to Amazon S3.

import json
import boto3
import requests
import logging

sns_client = boto3.client('sns')
ssm_client = boto3.client('ssm')
workdocs_client = boto3.client('workdocs')
s3_client = boto3.client('s3')
sns_topic_arn = 'arn:aws:sns:ap-northeast-2:[AWS account number]:[your sns]'

logger = logging.getLogger()
logger.setLevel(logging.INFO)

## The function to confirm the subscription from Amazon Workdocs
def confirmsubscription (topicArn, subToken):
    try:
        response = sns_client.confirm_subscription(
            TopicArn=topicArn,
            Token=subToken
        )
        logger.info ("Amazon Workdocs Subscripton Confirmaiton Message : " + str(response)) 
    except Exception as e:
        logger.error("Error with subscription confirmation : " + " Exception Stacktrace : " + str(e) )
        # This would result in failing the AWS Lambda function and the event will be retried.
        # One mechanism to handle retries would be to configure a Dead Letter Queue (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) as part of the Amazon SQS service.
        # Another mechanism could be to skip raising the error; Amazon CloudWatch can be used to detect logged error messages, collect error metrics, and trigger a corresponding retry process.
        raise Exception("Error Confirming Subscription from Amazon Workdocs")
    
def copyFileworkdocstos3 (documentid):

    # ssm parameter code
    # Reading the Amazon S3 prefixes to Amazon Workdocs folder id mapping, Bucket Name and configured File Extensions from AWS System Manager.
    try:
        bucketnm = str(ssm_client.get_parameter(Name='/[your_bucket_param]')['Parameter']['Value'])
        folder_ids = json.loads(ssm_client.get_parameter(Name='/[your workdocs folder id param]')['Parameter']['Value'])
        file_exts = str(json.loads(ssm_client.get_parameter(Name='/[your workdocs extension param]')['Parameter']['Value'])['file_ext']).split(",")
        
        logger.info ("Configured Amazon S3 Bucket Name : " + bucketnm)
        logger.info ("Configured Folder Ids to be synced : : " + str(folder_ids))
        logger.info ("Configured Supported File Extensions : " + str(file_exts))

        resp_doc = workdocs_client.get_document (DocumentId = documentid)
        logger.info ("Amazon Workdocs Metadata Response : " + str(resp_doc))
        
        # Retrieving the Amazon Workdocs Metadata
        parentfolderid = str(resp_doc['Metadata']['ParentFolderId'])
        docversionid = str(resp_doc['Metadata']['LatestVersionMetadata']['Id'])
        docname = str(resp_doc['Metadata']['LatestVersionMetadata']['Name'])
        
        logger.info ("Amazon Workdocs Parent Folder Id : " + parentfolderid)
        logger.info ("Amazon Workdocs Document Version Id : " + docversionid)
        logger.info ("Amazon Workdocs Document Name : " + docname)
        
        prefix_path = folder_ids.get(parentfolderid, None)
        logger.info ("Retrieving Amaozn S3 Prefix Path : " + prefix_path)
        
        ## Currently the provided sample code supports syncing documents for the configured Amazon Workdocs Folder Ids in AWS System Manager and not for the sub-folders.
        ## It can be extended to supported syncing documents for the sub-folders.
        if ( (prefix_path != None) and (docname.endswith( tuple(file_exts) )) ):
            resp_doc_version = workdocs_client.get_document_version (DocumentId = documentid,
                                                     VersionId= docversionid,
                                                     Fields = 'SOURCE'
            )
            logger.info ("Retrieve Amazon Workdocs Document Latest Version Details : " + str(resp_doc_version))
            
            ## Retrieve Amazon Workdocs Download Url
            url = resp_doc_version["Metadata"]["Source"]["ORIGINAL"]
            logger.info ("Amazon Workdocs Download url : " + url)
            ## Retrieve Amazon Workdocs Document contents
            ## As part of this sample code, we are reading the document in memory but it can be enhanced to stream the document in chunks to Amazon S3 to improve memory utilization 
            workdocs_resp = requests.get(url)
            ## Uploading the Amazon Workdocs Document to Amazon S3
            response = s3_client.put_object(
                Body=bytes(workdocs_resp.content),
                Bucket=bucketnm,
                Key=f'{prefix_path}/{docname}',
            )
            logger.info ("Amazon S3 upload response : " + str(response))
        else:
            logger.info ("Unsupported File type")
    except Exception as e:
        logger.error("Error with processing Document : " + str(documentid) + " Exception Stacktrace : " + str(e) )
        # This would result in failing the AWS Lambda function and the event will be retried.
        # One mechanism to handle retries would be to configure a Dead Letter Queue (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) as part of the Amazon SQS service.
        # Another mechanism could be to skip raising the error; Amazon CloudWatch can be used to detect logged error messages, collect error metrics, and trigger a corresponding retry process.
        raise Exception("Error Processing Amazon Workdocs Events.")
    
    
def lambda_handler(event, context):
    # Added from here: assume a cross-account role and rebuild the S3 client with its credentials
    global s3_client
    sts_connection = boto3.client('sts')
    acct_b = sts_connection.assume_role(
        RoleArn="arn:aws:iam::[cross account number]:role/[your sts role]",
        RoleSessionName="cross_acct_lambda"
    )

    ACCESS_KEY = acct_b['Credentials']['AccessKeyId']
    SECRET_KEY = acct_b['Credentials']['SecretAccessKey']
    SESSION_TOKEN = acct_b['Credentials']['SessionToken']

    # Create a service client using the assumed role credentials, e.g. S3.
    # The module-level s3_client is replaced so copyFileworkdocstos3 uploads with these credentials.
    s3_client = boto3.client(
        's3',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        aws_session_token=SESSION_TOKEN,
    )
    # To here
    
    
    logger.info ("Event Recieved from Amazon Workdocs : " + str(event))
        
    msg_body = json.loads(str(event['Records'][0]['body']))

    ## To Process Amazon Workdocs Subscription Confirmation Event
    if msg_body['Type'] == 'SubscriptionConfirmation':
        confirmsubscription (msg_body['TopicArn'], msg_body['Token'])
    ## To Process Amazon Workdocs Notifications
    elif (msg_body['Type'] == 'Notification') :
        event_msg = json.loads(msg_body['Message'])
        ## To Process Amazon Workdocs Move Document Event
        if (event_msg['action'] == 'move_document'):
            copyFileworkdocstos3 (event_msg['entityId'])
        ## To Process Amazon Workdocs Upload Document when a new version of the document is updated
        elif (event_msg['action'] == 'upload_document_version'):
            copyFileworkdocstos3 (event_msg['parentEntityId'])
        else:
        ## Currently the provided sample code supports two Amazon Workdocs Events but it can be extended to process other Amazon Workdocs Events.
        ## Refer this link for details on other supported Amazon Workdocs https://docs.aws.amazon.com/workdocs/latest/developerguide/subscribe-notifications.html.
            logger.info("Unsupported Action Type")
    else:
    ## Currently the provided sample code supports two Amazon Workdocs Events but it can be extended to process other Amazon Workdocs Events.
    ## Refer this link for details on other supported Amazon Workdocs https://docs.aws.amazon.com/workdocs/latest/developerguide/subscribe-notifications.html.
        logger.info("Unsupported Event Type")
   
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Amazon Workdoc sync to Amazon S3 Lambda!')
    }
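
The handler above reads three AWS Systems Manager parameters whose names are up to you. A rough sketch of seeding them is shown below; the parameter names, bucket, folder id, prefix, and extensions are placeholders, with the value formats inferred from how the code parses them.

# Sketch: seed the SSM parameters the handler reads (names and values are placeholders)
import json
import boto3

ssm = boto3.client('ssm')

# Target S3 bucket name (plain string)
ssm.put_parameter(Name='/[your_bucket_param]', Value='my-workdocs-sync-bucket',
                  Type='String', Overwrite=True)

# Mapping of WorkDocs folder id -> S3 prefix (JSON object)
ssm.put_parameter(Name='/[your workdocs folder id param]',
                  Value=json.dumps({'<workdocs folder id>': 'synced/docs'}),
                  Type='String', Overwrite=True)

# Allowed file extensions as a comma-separated list under the "file_ext" key (JSON object)
ssm.put_parameter(Name='/[your workdocs extension param]',
                  Value=json.dumps({'file_ext': '.pdf,.docx,.txt'}),
                  Type='String', Overwrite=True)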
# Send e-mail function
import smtplib
from email.mime.text import MIMEText


def send_email(_me):
    emailhost = 'send.mx.example.com'
    title = 'It is a test e-mail'
    bodytext = ('This mail informs you of an error detected in a service by the probe checking logic.'
                + '\n If you received this email, please check it.\n \n' + _me)
    sender = 'me@example.com'
    receiver = ['who@example.com']

    msg = MIMEText(bodytext)
    msg['Subject'] = title
    msg['From'] = sender
    msg['To'] = ', '.join(receiver)
    s = smtplib.SMTP(emailhost)
    s.sendmail(sender, receiver, msg.as_string())
    s.close()

# _me (the error message) and callIEM1 come from the surrounding probe script, which is not shown here
send_email(_me)
callIEM1(_me)
