At this point we can easily build our Docker image and even run it as a container.

docker build -t akocukcu/s3-to-rds-postgresql-pandas .

As you can see in the Python script, there are 8 different parameters which come from environment variables. We can pass these one by one or with an env file. (I prefer the second one.)

docker run -d --env-file env.txt akocukcu/s3-to-rds-postgresql-pandas

Run Container/Task on AWS ECS

Our project works well on our local machine. Now, we can run it on AWS ECS. Again, in order to work with AWS ECS and ECR we need to create an IAM user, roles and policies. There are two required roles related to Fargate-based ECS tasks:

- ECSRole: the role which authorizes ECS to manage resources on your behalf.
  - Go to the IAM page.
  - Inside IAM > Roles, click "Create role".
  - Under "Select your use case" choose "Elastic Container Service".
- ECSTaskExecutionRole: the role attached to ECS tasks.
  - Again, inside IAM > Roles, click "Create role".
  - Under "Select your use case" choose "Elastic Container Service Task".
  - Attach the policy "AmazonECSTaskExecutionRolePolicy".
  - Give it the name "ecsTaskExecutionRole".

After role creation, we can easily create our ECS cluster via the CLI:

aws ecs create-cluster --cluster-name "airbnb-ecs-cluster"

The ECR repository can be created the same way as well:

aws ecr create-repository --repository-name s3-to-rds-postgresql-pandas --region eu-central-1 --image-scanning-configuration scanOnPush=true

We are going to push the local image to the newly created ECR repository with the commands below (you should have the necessary permission configuration locally; replace <account-id> with your AWS account id):

aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.eu-central-1.amazonaws.com
docker push <account-id>.dkr.ecr.eu-central-1.amazonaws.com/s3-to-rds-postgresql-pandas:latest

Our image is published on the ECR repository now. Since everything is ready, let's create the first task definition on our ECS cluster via the console (after that, all of the task definitions/revisions will be created and updated by the CI/CD pipeline):

- Inside the ECS page, navigate to "Task Definitions".
- Select FARGATE as the launch type compatibility.
- Enter a task definition name (s3-to-rds-postgresql-pandas-td).
- Select ecsTaskExecutionRole as the "Task role" (optional).
- Select ecsTaskExecutionRole as the "Task execution role".
- Give "Task size" values (Task memory = 1024, Task CPU = 512).
- Under "Container definitions", enter the ECR image URL and the environment variables.

Inside your newly created task definition (s3-to-rds-postgresql-pandas-td, latest revision), click Actions > Run Task:

- Select "Cluster VPC" (ideally the VPC which your RDS instance belongs to).
- Select "Security Group(s)" (select an existing one or create a new one if you wish).

Real-world scenario

I promise, after this we will run our tasks programmatically :)

In order to build the real-world scenario (which we talked about), we need to create a blueprint function from the Lambda UI:

- Inside the Lambda page, click "Create Function".
- Select "s3-get-object-python" from Blueprints and click "Configure".
- Select/create a role with permission to upload logs to Amazon CloudWatch Logs.
- Under "S3 trigger", select your bucket, the event type (PUT), a prefix (e.g. "./data") and, optionally, a suffix.

With this configuration, whenever a file is uploaded to our S3 bucket, this function will be triggered. In order to run the Fargate task from it, we need to modify the function code.
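The eight environment-driven parameters can be read defensively at startup, so a missing variable fails fast instead of crashing mid-import. This is a minimal sketch only: the article doesn't list the actual variable names, so the ones below (DB credentials, bucket, key, region) are hypothetical placeholders for whatever your script really uses.

```python
import os


def load_config(env=None):
    """Collect required settings from environment variables.

    The names below are illustrative -- substitute the eight
    parameters your own script actually defines.
    """
    if env is None:
        env = os.environ
    required = [
        "DB_HOST", "DB_PORT", "DB_NAME", "DB_USER",
        "DB_PASSWORD", "S3_BUCKET", "S3_KEY", "AWS_REGION",
    ]
    missing = [name for name in required if name not in env]
    if missing:
        raise RuntimeError(f"Missing environment variables: {missing}")
    return {name: env[name] for name in required}
```

With `docker run --env-file env.txt`, each `KEY=VALUE` line in the env file becomes one of these variables inside the container.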
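Once the CI/CD pipeline takes over new revisions, the console-created task definition can also be registered programmatically. A sketch with boto3, assuming the same values entered in the console (FARGATE, CPU 512, memory 1024, ecsTaskExecutionRole); the container name and the role-ARN parameter are illustrative assumptions:

```python
def fargate_task_definition(image_uri, execution_role_arn):
    """Build the same task definition we created in the console:
    Fargate compatibility, 512 CPU units, 1024 MiB memory."""
    return {
        "family": "s3-to-rds-postgresql-pandas-td",
        "requiresCompatibilities": ["FARGATE"],
        "networkMode": "awsvpc",  # required for Fargate tasks
        "cpu": "512",
        "memory": "1024",
        "executionRoleArn": execution_role_arn,
        "taskRoleArn": execution_role_arn,  # the optional "Task role"
        "containerDefinitions": [
            {
                # Hypothetical container name -- match your own definition.
                "name": "s3-to-rds-postgresql-pandas",
                "image": image_uri,  # the ECR image URL
                "essential": True,
            }
        ],
    }


def register(image_uri, execution_role_arn, region="eu-central-1"):
    """Register a new revision of the task definition on ECS."""
    import boto3  # AWS SDK for Python

    ecs = boto3.client("ecs", region_name=region)
    return ecs.register_task_definition(
        **fargate_task_definition(image_uri, execution_role_arn)
    )
```

Each call to `register_task_definition` with the same family creates a new revision, which is exactly what a pipeline wants on every image build.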
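Running the task programmatically boils down to a Lambda handler that reads the S3 PUT event and calls `ecs.run_task`. A minimal sketch, assuming the cluster and task definition names from this walkthrough; the subnet/security-group IDs and the environment-variable names passed as overrides are placeholders you'd replace with your own:

```python
def build_run_task_params(bucket, key, subnets, security_groups):
    """Parameters for ecs.run_task: launch the latest ACTIVE revision
    and hand the uploaded object to the container via env overrides."""
    return {
        "cluster": "airbnb-ecs-cluster",
        "launchType": "FARGATE",
        "taskDefinition": "s3-to-rds-postgresql-pandas-td",
        "count": 1,
        "networkConfiguration": {
            "awsvpcConfiguration": {
                "subnets": subnets,
                "securityGroups": security_groups,
                # Needed to pull the ECR image from a public subnet
                # without a NAT gateway.
                "assignPublicIp": "ENABLED",
            }
        },
        "overrides": {
            "containerOverrides": [
                {
                    "name": "s3-to-rds-postgresql-pandas",
                    "environment": [
                        {"name": "S3_BUCKET", "value": bucket},
                        {"name": "S3_KEY", "value": key},
                    ],
                }
            ]
        },
    }


def lambda_handler(event, context):
    """Triggered by the S3 PUT event configured on the bucket."""
    import boto3

    record = event["Records"][0]["s3"]
    params = build_run_task_params(
        bucket=record["bucket"]["name"],
        key=record["object"]["key"],
        subnets=["subnet-00000000"],      # placeholder: your VPC subnets
        security_groups=["sg-00000000"],  # placeholder: your security group
    )
    return boto3.client("ecs").run_task(**params)
```

Keeping the parameter-building pure (no AWS calls) makes the trigger logic easy to unit-test outside Lambda.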