This is sample Python code for an AWS Lambda function, based on the AWS documentation.

import json
import boto3
import requests
import logging

sns_client = boto3.client('sns')
ssm_client = boto3.client('ssm')
workdocs_client = boto3.client('workdocs')
s3_client = boto3.client('s3')
sns_topic_arn = 'arn:aws:sns:ap-northeast-2:[AWS account number]:[your sns]'

logger = logging.getLogger()
logger.setLevel(logging.INFO)

## The function to confirm the subscription from Amazon Workdocs
def confirmsubscription(topicArn, subToken):
    try:
        response = sns_client.confirm_subscription(
            TopicArn=topicArn,
            Token=subToken
        )
        logger.info ("Amazon Workdocs Subscription Confirmation Message : " + str(response))
    except Exception as e:
        logger.error("Error with subscription confirmation : " + " Exception Stacktrace : " + str(e) )
        # Raising here fails the AWS Lambda function and the event will be retried.
        # One mechanism to handle retries is to configure a Dead Letter Queue (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) on the Amazon SQS queue.
        # Another option is to skip raising the error and use Amazon CloudWatch to detect the logged error messages, collect error metrics, and trigger a corresponding retry process.
        raise Exception("Error Confirming Subscription from Amazon Workdocs")
    
def copyFileworkdocstos3(documentid):

    # ssm parameter code
    # Reading the Amazon S3 prefixes to Amazon Workdocs folder id mapping, Bucket Name and configured File Extensions from AWS System Manager.
    try:
        bucketnm = str(ssm_client.get_parameter(Name='/[your_bucket_param]')['Parameter']['Value'])
        folder_ids = json.loads(ssm_client.get_parameter(Name='/[your workdocs folder id param]')['Parameter']['Value'])
        file_exts = str(json.loads(ssm_client.get_parameter(Name='/[your workdocs extension param]')['Parameter']['Value'])['file_ext']).split(",")
        
        logger.info ("Configured Amazon S3 Bucket Name : " + bucketnm)
        logger.info ("Configured Folder Ids to be synced : : " + str(folder_ids))
        logger.info ("Configured Supported File Extensions : " + str(file_exts))

        resp_doc = workdocs_client.get_document (DocumentId = documentid)
        logger.info ("Amazon Workdocs Metadata Response : " + str(resp_doc))
        
        # Retrieving the Amazon Workdocs Metadata
        parentfolderid = str(resp_doc['Metadata']['ParentFolderId'])
        docversionid = str(resp_doc['Metadata']['LatestVersionMetadata']['Id'])
        docname = str(resp_doc['Metadata']['LatestVersionMetadata']['Name'])
        
        logger.info ("Amazon Workdocs Parent Folder Id : " + parentfolderid)
        logger.info ("Amazon Workdocs Document Version Id : " + docversionid)
        logger.info ("Amazon Workdocs Document Name : " + docname)
        
        prefix_path = folder_ids.get(parentfolderid, None)
        logger.info ("Retrieving Amazon S3 Prefix Path : " + str(prefix_path))
        
        ## Currently the provided sample code supports syncing documents only for the Amazon Workdocs Folder Ids configured in AWS Systems Manager, not for their sub-folders.
        ## It can be extended to support syncing documents for the sub-folders.
        if (prefix_path is not None) and docname.endswith(tuple(file_exts)):
            resp_doc_version = workdocs_client.get_document_version (DocumentId = documentid,
                                                     VersionId= docversionid,
                                                     Fields = 'SOURCE'
            )
            logger.info ("Retrieve Amazon Workdocs Document Latest Version Details : " + str(resp_doc_version))
            
            ## Retrieve Amazon Workdocs Download Url
            url = resp_doc_version["Metadata"]["Source"]["ORIGINAL"]
            logger.info ("Amazon Workdocs Download url : " + url)
            ## Retrieve Amazon Workdocs Document contents
            ## As part of this sample code, we are reading the document in memory but it can be enhanced to stream the document in chunks to Amazon S3 to improve memory utilization 
            workdocs_resp = requests.get(url)
            ## Uploading the Amazon Workdocs Document to Amazon S3
            response = s3_client.put_object(
                Body=bytes(workdocs_resp.content),
                Bucket=bucketnm,
                Key=f'{prefix_path}/{docname}',
            )
            logger.info ("Amazon S3 upload response : " + str(response))
        else:
            logger.info ("Unsupported File type")
    except Exception as e:
        logger.error("Error with processing Document : " + str(documentid) + " Exception Stacktrace : " + str(e) )
        # Raising here fails the AWS Lambda function and the event will be retried.
        # One mechanism to handle retries is to configure a Dead Letter Queue (https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/sqs-dead-letter-queues.html) on the Amazon SQS queue.
        # Another option is to skip raising the error and use Amazon CloudWatch to detect the logged error messages, collect error metrics, and trigger a corresponding retry process.
        raise Exception("Error Processing Amazon Workdocs Events.")
    
    
def lambda_handler(event, context):
    # Assume a role in the other account so the upload can target a cross-account S3 bucket.
    sts_connection = boto3.client('sts')
    acct_b = sts_connection.assume_role(
        RoleArn="arn:aws:iam::[cross account number]:role/[your sts role]",
        RoleSessionName="cross_acct_lambda"
    )

    ACCESS_KEY = acct_b['Credentials']['AccessKeyId']
    SECRET_KEY = acct_b['Credentials']['SecretAccessKey']
    SESSION_TOKEN = acct_b['Credentials']['SessionToken']

    # Create a service client using the assumed role credentials, e.g. S3.
    # Rebind the module-level s3_client so copyFileworkdocstos3 uses the cross-account credentials.
    global s3_client
    s3_client = boto3.client(
        's3',
        aws_access_key_id=ACCESS_KEY,
        aws_secret_access_key=SECRET_KEY,
        aws_session_token=SESSION_TOKEN,
    )
    
    
    logger.info ("Event Recieved from Amazon Workdocs : " + str(event))
        
    msg_body = json.loads(str(event['Records'][0]['body']))

    ## To Process Amazon Workdocs Subscription Confirmation Event
    if msg_body['Type'] == 'SubscriptionConfirmation':
        confirmsubscription (msg_body['TopicArn'], msg_body['Token'])
    ## To Process Amazon Workdocs Notifications
    elif (msg_body['Type'] == 'Notification') :
        event_msg = json.loads(msg_body['Message'])
        ## To Process Amazon Workdocs Move Document Event
        if (event_msg['action'] == 'move_document'):
            copyFileworkdocstos3 (event_msg['entityId'])
        ## To Process Amazon Workdocs Upload Document when a new version of the document is updated
        elif (event_msg['action'] == 'upload_document_version'):
            copyFileworkdocstos3 (event_msg['parentEntityId'])
        else:
        ## Currently the provided sample code supports two Amazon Workdocs Events, but it can be extended to process other Amazon Workdocs Events.
        ## Refer to this link for details on other supported Amazon Workdocs events: https://docs.aws.amazon.com/workdocs/latest/developerguide/subscribe-notifications.html.
            logger.info("Unsupported Action Type")
    else:
    ## Currently the provided sample code supports two Amazon Workdocs Events, but it can be extended to process other Amazon Workdocs Events.
    ## Refer to this link for details on other supported Amazon Workdocs events: https://docs.aws.amazon.com/workdocs/latest/developerguide/subscribe-notifications.html.
        logger.info("Unsupported Event Type")
   
    return {
        'statusCode': 200,
        'body': json.dumps('Hello from Amazon Workdocs sync to Amazon S3 Lambda!')
    }
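
For reference, the AWS Systems Manager parameters that the function reads could be seeded like this. This is only a minimal sketch: the parameter names are the same placeholders used above, and the bucket name, folder IDs, prefixes, and extensions are made-up example values.

import json
import boto3

ssm = boto3.client('ssm')

# Parameter names and values below are placeholders matching what the Lambda function expects.
ssm.put_parameter(Name='/[your_bucket_param]', Value='my-sync-bucket', Type='String', Overwrite=True)
ssm.put_parameter(
    Name='/[your workdocs folder id param]',
    # Mapping of Amazon Workdocs folder IDs to Amazon S3 prefixes.
    Value=json.dumps({'workdocs-folder-id-1': 'prefix/path1', 'workdocs-folder-id-2': 'prefix/path2'}),
    Type='String',
    Overwrite=True,
)
ssm.put_parameter(
    Name='/[your workdocs extension param]',
    # Comma-separated list of supported file extensions under the "file_ext" key.
    Value=json.dumps({'file_ext': '.pdf,.docx,.txt'}),
    Type='String',
    Overwrite=True,
)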
# Send e-mail function
import smtplib
from email.mime.text import MIMEText
 
 
def send_email(_me):
    emailhost='send.mx.example.com'
    title='This is a test e-mail'
    bodytext='This mail informs you of an error about a certain service detected by the probe-checking logic.' + '\nIf you received this email, please check it.\n\n' + _me
    sender='me@example.com'
    receiver=['who@example.com']
     
    msg=MIMEText(bodytext)
    msg['Subject']=title
    msg['From']=sender
    msg['To']=', '.join(receiver)
    s=smtplib.SMTP(emailhost)
    s.sendmail(sender, receiver, msg.as_string())
    s.close()
 
# Example invocation: `_me` (the error message text) and `callIEM1` are expected to be defined elsewhere.
# send_email(_me)
# callIEM1(_me)

Sample of error: 
Unable to connect to the server: getting credentials: decoding stdout: no kind "ExecCredential" is registered for version "client.authentication.k8s.io/v1alpha1 " in scheme "pkg/client/auth/exec/exec.go:62"

 

Preferred checklist:

- Verify that communication with the cluster API endpoint is working.

- Make sure the AWS CLI version is up to date.

- Check whether there is a version gap between the kubectl client and the server; this gap can cause this kind of error.

The command to check each version:

kubectl version --short

- Update the apiVersion value in the user section of the .kube/config file to client.authentication.k8s.io/v1beta1,
or run "aws eks update-kubeconfig --region region-code --name my-cluster" after updating kubectl.

 

Reference:

https://github.com/aws/aws-cli/issues/6920
https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
https://kubernetes.io/releases/version-skew-policy/
https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html
https://stackoverflow.com/questions/73744199/unable-to-connect-to-the-server-getting-credentials-decoding-stdout-no-kind

 

Tteunteun, the leopard tortoise I've been raising with great care..

Compared to other tortoises it hadn't grown much and its shell stayed the same,

but this summer's heat made its appetite explode..

Its shell has grown ring-like layers and the white base color has come in.

Of course it has also gotten a bit bigger..

I bought a new basin for its warm baths, and it is already outgrowing it..

I bought it a stone slab, and it thoroughly enjoys basking on it..

A tortoise sprawled out because it's comfortable

A tortoise that loves burrowing into people's arms

Soft and warm things make you sleepy..

A tortoise that burrows in and falls asleep using a human arm as a pillow

A tortoise that sleeps with its head resting on something, just like a person..

Sleeping with your head on something is comfortable for everyone.

But it's a tortoise that notices when a photo is being taken

If a human arm comes into view.. it rests its head on it first

A human arm is a pillow.

I hope the weather warms up soon so it can become an even happier tortoise..

And I hope its nostrils get a bit bigger too..

 

If you are running a Windows EC2 instance joined to Active Directory on AWS and have configured SSM for it, you can remove it from the domain by entering the following commands line by line.

$securePass = ConvertTo-SecureString "******" -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential ("Domain\User", $securePass)
Remove-Computer -Credential $cred -Force -Verbose
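
If you prefer to push the same commands through SSM instead of logging in, a minimal boto3 sketch could look like the following; the instance ID, region, and credentials are placeholders.

import boto3

ssm = boto3.client('ssm', region_name='ap-northeast-2')

# Run the domain-removal commands on the target instance via the standard
# AWS-RunPowerShellScript document (instance ID and credentials are placeholders).
response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],
    DocumentName='AWS-RunPowerShellScript',
    Parameters={
        'commands': [
            '$securePass = ConvertTo-SecureString "******" -AsPlainText -Force',
            '$cred = New-Object System.Management.Automation.PSCredential ("Domain\\User", $securePass)',
            'Remove-Computer -Credential $cred -Force -Verbose',
        ]
    },
)
print(response['Command']['CommandId'])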

 

First, create an IAM role with the permissions below for the replication job.

 

Trust relationships

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": [
                    "s3.amazonaws.com",
                    "batchoperations.s3.amazonaws.com"
                ]
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

IAM Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "s3:ListBucket",
                "s3:GetReplicationConfiguration",
                "s3:GetObjectVersionForReplication",
                "s3:GetObjectVersionAcl",
                "s3:GetObjectVersionTagging",
                "s3:GetObjectRetention",
                "s3:GetObjectLegalHold"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::source-bucket-A",
                "arn:aws:s3:::source-bucket-A/*",
                "arn:aws:s3:::destination-bucket-B",
                "arn:aws:s3:::destination-bucket-B/*"
            ]
        },
        {
            "Action": [
                "s3:ReplicateObject",
                "s3:ReplicateDelete",
                "s3:ReplicateTags",
                "s3:ObjectOwnerOverrideToBucketOwner"
            ],
            "Effect": "Allow",
            "Resource": [
                "arn:aws:s3:::source-bucket-A/*",
                "arn:aws:s3:::destination-bucket-B/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:InitiateReplication",
                "s3:GetReplicationConfiguration",
                "s3:PutInventoryConfiguration"
            ],
            "Resource": [
                "arn:aws:s3:::source-bucket-A",
                "arn:aws:s3:::source-bucket-B/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::source-bucket-A/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:Decrypt",
                "kms:GenerateDataKey"
            ],
            "Resource": [
                "arn:aws:kms:ap-northeast-2:[Source A Account ID]:key/[keystrings]"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "kms:GenerateDataKey",
                "kms:Encrypt"
            ],
            "Resource": [
                "arn:aws:kms:ap-northeast-2:[Destination B Account ID]:key/[keystrings]"
            ]
        }
    ]
}
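
For reference, the role can also be created programmatically. This is a minimal boto3 sketch; the role and policy names are placeholders, and the IAM policy document above is assumed to be saved locally as replication-policy.json.

import json
import boto3

iam = boto3.client('iam')

# Trust policy allowing S3 and S3 Batch Operations to assume the role (same as the JSON above).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": ["s3.amazonaws.com", "batchoperations.s3.amazonaws.com"]},
            "Action": "sts:AssumeRole",
        }
    ],
}

# The replication permissions above are assumed to be saved as replication-policy.json.
with open('replication-policy.json') as f:
    replication_policy = f.read()

iam.create_role(
    RoleName='s3-batch-replication-role',
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName='s3-batch-replication-role',
    PolicyName='s3-batch-replication-policy',
    PolicyDocument=replication_policy,
)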

The required KMS actions depend on the encryption settings of the S3 buckets.

 

To meet the requirement, a replication rule was created under the S3 bucket's Management tab in source account A with the following settings.

Object ownership: Transfer to destination bucket owner
AWS KMS key for encrypting destination objects: The KMS key ARN of destination account B was put in place.
The batch job will complete once the role is properly attached with sufficient permissions.
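
The same rule can also be created with code. Below is a minimal boto3 sketch; the bucket names, account ID, role name, and KMS key ARN are placeholders.

import boto3

s3 = boto3.client('s3')

# Bucket names, account ID, role ARN, and KMS key ARN below are placeholders.
s3.put_bucket_replication(
    Bucket='source-bucket-A',
    ReplicationConfiguration={
        'Role': 'arn:aws:iam::[Source A Account ID]:role/s3-batch-replication-role',
        'Rules': [
            {
                'ID': 'replicate-to-account-b',
                'Priority': 1,
                'Status': 'Enabled',
                'Filter': {},
                'DeleteMarkerReplication': {'Status': 'Disabled'},
                # Replicate objects encrypted with SSE-KMS in the source bucket.
                'SourceSelectionCriteria': {
                    'SseKmsEncryptedObjects': {'Status': 'Enabled'}
                },
                'Destination': {
                    'Bucket': 'arn:aws:s3:::destination-bucket-B',
                    'Account': '[Destination B Account ID]',
                    # Object ownership: transfer to destination bucket owner.
                    'AccessControlTranslation': {'Owner': 'Destination'},
                    # KMS key in account B used to encrypt the replicas.
                    'EncryptionConfiguration': {
                        'ReplicaKmsKeyID': 'arn:aws:kms:ap-northeast-2:[Destination B Account ID]:key/[keystrings]'
                    },
                },
            }
        ],
    },
)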
 
To receive the replicated objects in account B, the destination S3 bucket must grant the relevant permissions in its bucket policy.
 
{
    "Sid": "S3PolicyStmt-DO-NOT-MODIFY-1670218832531",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::[Source A Account ID]:root"
    },
    "Action": [
        "s3:GetBucketVersioning",
        "s3:PutBucketVersioning",
        "s3:ReplicateObject",
        "s3:ReplicateDelete"
    ],
    "Resource": [
        "arn:aws:s3:::destination-bucket-B",
        "arn:aws:s3:::destination-bucket-B/*"
    ]
}
 

Below is an extended version of the above statement that additionally allows changing object ownership to the destination bucket owner.

{
    "Sid": "S3PolicyStmt-DO-NOT-MODIFY-1670219041656",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::[Source Account A]:root"
    },
    "Action": [
        "s3:GetBucketVersioning",
        "s3:PutBucketVersioning",
        "s3:ReplicateObject",
        "s3:ReplicateDelete",
        "s3:ObjectOwnerOverrideToBucketOwner"
    ],
    "Resource": [
        "arn:aws:s3:::destination-bucket-B",
        "arn:aws:s3:::destination-bucket-B/*"
    ]
}
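
If you would rather apply the bucket policy with code than through the console, a minimal boto3 sketch follows; it assumes the statement above is wrapped in a full policy document and saved locally as bucket-policy.json.

import boto3

s3 = boto3.client('s3')

# bucket-policy.json is assumed to contain a full policy document
# ({"Version": "2012-10-17", "Statement": [ ... ]}) including the statement above.
with open('bucket-policy.json') as f:
    policy = f.read()

s3.put_bucket_policy(Bucket='destination-bucket-B', Policy=policy)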

The KMS key policy in account B also needs to allow the source account, as shown below.

{
    "Sid": "Enable cross account encrypt access for S3 Cross Region Replication",
    "Effect": "Allow",
    "Principal": {
        "AWS": "arn:aws:iam::[Source Account A]:root"
    },
    "Action": [
        "kms:Encrypt"
    ],
    "Resource": "*"
}
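
The statement can also be appended to the existing key policy with boto3. This is a minimal sketch; the key ID is the same placeholder as above, and it assumes the key's policy name is the default one.

import json
import boto3

kms = boto3.client('kms', region_name='ap-northeast-2')
key_id = '[keystrings]'  # placeholder: key ID of the destination key in account B

# Fetch the current key policy, append the cross-account statement above, and put it back.
current = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName='default')['Policy'])
current['Statement'].append({
    "Sid": "Enable cross account encrypt access for S3 Cross Region Replication",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::[Source Account A]:root"},
    "Action": ["kms:Encrypt"],
    "Resource": "*",
})
kms.put_key_policy(KeyId=key_id, PolicyName='default', Policy=json.dumps(current))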


 

Code Sample

#!/bin/bash

InstanceID=("i-xxxxxxxxx" "i-xxxxxxxxxxxxxxxxxxx" "i-xxxxxxxxxxxx65")
ImageName="create-ami"
CurrentTime=`date +%Y%m%d`
ImageVersion=${2:-v1.1.1}
ImageDescription="create-ami-by-script"
num=1


for value in "${InstanceID[@]}";
do
    aws ec2 create-image \
        --instance-id "$value" \
        --name "$ImageName-$CurrentTime-$ImageVersion-$value-$num" \
        --description "$ImageDescription" \
        --tag-specifications 'ResourceType=image,Tags=[{Key=Name,Value=create-ami}]' \
                             'ResourceType=snapshot,Tags=[{Key=Name,Value=create-ami}]' \
        --region ap-northeast-2 --debug --profile awsprofile
    ((num+=1))
    echo "$value"
done

 

To conveniently get each EC2 instance ID with the AWS CLI, the sample command below can be used.

 

$ aws ec2 describe-instances \
 --profile awsprofile \
 --query 'Reservations[*].Instances[*].{Instance:InstanceId,Subnet:SubnetId}' \
 --output text --region ap-northeast-2

 

Adding the option below prevents inaccurate ordering of AMI creation, since the reboot would otherwise take some time for each target server.

--no-reboot
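
For reference, a minimal boto3 sketch of the same loop is shown below; the instance IDs are placeholders, and the reboot is skipped via NoReboot=True.

import datetime
import boto3

ec2 = boto3.client('ec2', region_name='ap-northeast-2')

instance_ids = ["i-xxxxxxxxx", "i-xxxxxxxxxxxxxxxxxxx", "i-xxxxxxxxxxxx65"]  # placeholders
current_time = datetime.date.today().strftime('%Y%m%d')

for num, instance_id in enumerate(instance_ids, start=1):
    response = ec2.create_image(
        InstanceId=instance_id,
        Name=f"create-ami-{current_time}-v1.1.1-{instance_id}-{num}",
        Description="create-ami-by-script",
        NoReboot=True,  # equivalent to the --no-reboot option above
        TagSpecifications=[
            {'ResourceType': 'image', 'Tags': [{'Key': 'Name', 'Value': 'create-ami'}]},
            {'ResourceType': 'snapshot', 'Tags': [{'Key': 'Name', 'Value': 'create-ami'}]},
        ],
    )
    print(instance_id, response['ImageId'])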

 

reference: https://docs.aws.amazon.com/cli/latest/reference/ec2/create-image.html

 

 

[As of June 2022]

 

When you are using an IAM account with MFA configured and need to replace the registered device because of a phone change,

you may run into an "entity already exists" error (MFA Device entity at the same path and name already exists) even though you have been granted the required permissions, and the console will not move on to the MFA token registration screen.

As the message says, the error occurs because the entity already exists, but it can also happen even after you have already deleted the device and guided the user to register again.

It turns out that when you start registering through the virtual MFA device setup button, the entity is created automatically as soon as the initial guidance screen appears, before registration is actually completed. Because entities keep being created in this half-registered state, the error keeps coming back.

In this case, I used the AWS CLI to confirm that the entity had been created, deleted it immediately, and then re-entered the registration screen; the error disappeared and the QR code registration screen came up.

 

 

MFA Device error

 

Checking a specific user's MFA entity with the AWS CLI

$ aws iam list-virtual-mfa-devices | grep testuser
            "SerialNumber": "arn:aws:iam::265919665173:mfa/testuser"

 

After deleting the entity with the delete-virtual-mfa-device command in the AWS CLI, I ran list-virtual-mfa-devices to confirm it was deleted. Then, as soon as I went back to the initial setup screen via the MFA device setup button, I could see that the entity had been created again.

$ aws iam delete-virtual-mfa-device --serial-number arn:aws:iam::265919665173:mfa/testuser
$ aws iam list-virtual-mfa-devices | grep testuser
$ aws iam list-virtual-mfa-devices | grep testuser
            "SerialNumber": "arn:aws:iam::265919665173:mfa/testuser"

 

If you delete the entity that was created before the user completes device registration, the error is skipped and the QR code registration guidance screen comes up.

 

 
