Automate custom docker image deployment on AWS
Tags: image, docker, automate, custom, aws, deployment
Problem
This question is related to my previous one, but is a bit more descriptive.
I am trying to automate the generation and deployment of a Docker container. The Docker image is built with Packer.
After the image is built, I use Packer's docker-push post-processor to upload it to AWS ECR:
{
  "type": "docker-push",
  "ecr_login": true,
  "aws_access_key": "",
  "aws_secret_key": "",
  "login_server": "https://.dkr.ecr.eu-west-1.amazonaws.com/"
}
This works just fine. The Docker image was uploaded to my repository in ECR.
Now, the next step is where I am stuck.
I want to start the container on AWS.
I am thinking a little script that uses aws-cli commands could be suited for that.
I do not want to use the web UI - I am sure this can be done from the command line, especially when someone wants to integrate this into a pipeline of some sort.
However, I have a few questions, as I am not sure I understand the big picture correctly. My Docker image is available via ECR. To run it, I need some base instance, right? According to a reply in the previous thread, there are various ways of running Docker containers, either by using ECS or by spinning up an EC2 instance. Since this is related to a research project in which Docker and virtualization are analyzed, I am wondering what would be the best option for my case. Probably EC2? It's important that I can automate it so that no user interaction is required.
I read the docs and checked several tutorials, but I am still not entirely sure how to do everything correctly in the right order, especially the part where the environment is set up and the image is started.
I know that I can list my images with:
aws ecr list-images --repository-name prototypes
How would I fire up a new instance and run my image on it?
If anything is unclear, please let me know so that I can update my question to be as detailed as possible.
Solution
I will give it a try and explain all required steps with examples.
Please add comments with questions and improvement suggestions if anything is not explained well enough.
The OP (original poster) refers to awscli; while I provide corresponding examples, I also discuss the limitations of this approach and give examples of doing the same with Python.
EC2 deployment
The purpose of this answer is to demonstrate that, as the OP correctly assumed, neither Web UI access nor human interaction is required to deploy a container on an AWS EC2 instance.
Scope and limitations
In this use case, we want to start just one container for demo/research purposes, so we explicitly decide against any orchestration.
This example is therefore not reusable for a complex, more realistic deployment scenario. For the same reason of simplicity, we will use the EC2 service and create a single virtual machine there.
A short look at the documentation suggests there is no official AMI with a pre-installed Docker daemon, so we need to deploy a default EC2 instance, install the Docker daemon there, and start it.
When using a private registry, a login is required; for simplicity, I also give an example of running a container from the public Docker Hub registry.
Also, I am not going to provide a copy-paste automation script, but single steps which can easily be adapted to individual needs. In any case, the intermediate results require parsing and passing between steps, which is then purely a programming concern.
A boilerplate automation could be done either in Bash using awscli, or in Python using the boto3 SDK for AWS.
Python is worth considering because it lets you process the JSON responses you get from AWS much more conveniently. You can also use Python to open an SSH session and run shell commands on the remote host.
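As an illustration of this point, the AMI lookup from step A can be sketched in boto3. This is a minimal sketch, not the answer's original code: the region and the name filter are copied from the awscli example further down, and `latest_image_id` / `find_amazon_linux2_ami` are hypothetical helper names.

```python
def latest_image_id(images):
    # pick the most recently created image from a describe_images response;
    # CreationDate is an ISO-8601 string, so lexical order is chronological
    newest = max(images, key=lambda img: img['CreationDate'])
    return newest['ImageId']

def find_amazon_linux2_ami(region='eu-west-1'):
    # requires the boto3 package and configured AWS credentials
    import boto3
    ec2 = boto3.client('ec2', region_name=region)
    resp = ec2.describe_images(
        Owners=['amazon'],
        Filters=[
            {'Name': 'name',
             'Values': ['amzn2-ami-hvm-2.0.????????.?-x86_64-gp2']},
            {'Name': 'state', 'Values': ['available']},
        ],
    )
    # the response is already a plain dict, so no extra JSON parsing is needed
    return latest_image_id(resp['Images'])
```

The separation into two functions keeps the selection logic testable without touching the AWS API.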
Execution plan
- A. Identify recent Linux AMI (Amazon Machine Image) ID
- B. Create keypair and instance (further, also referenced as VM)
- C. Connect to the instance
- D. Install and configure Docker daemon
- E. Run container
A. Identify recent Linux AMI image ID
Search for a recent and up-to-date Amazon Linux image following the official documentation:
$ aws ec2 describe-images --owners amazon --filters 'Name=name,Values=amzn2-ami-hvm-2.0.????????.?-x86_64-gp2' 'Name=state,Values=available' --query 'reverse(sort_by(Images, &CreationDate))[:1].ImageId' --output text
ami-01f14919ba412de34
B. Create keypair and VM
With awscli, you would just say:
$ aws ec2 create-key-pair --key-name ec2-docker-test
However, the API sends back your private key wrapped in a JSON message. It is therefore much easier to process this message into a .pem file using Python:
import boto3

ec2 = boto3.resource('ec2')
# call the boto3 EC2 function to create a key pair
key_pair = ec2.create_key_pair(KeyName='ec2-docker-test')
# capture the private key material and store it in a local .pem file
with open('ec2-keypair.pem', 'w') as outfile:
    outfile.write(str(key_pair.key_material))
With the AMI ID we obtained previously, we can now create the VM:
instances = ec2.create_instances(
    ImageId=AMI_ID,
    MinCount=1,
    MaxCount=1,
    InstanceType=instance_type,  # e.g. 't2.micro'
    KeyName='ec2-docker-test'
)
The AWS API also allows you to look up the instance state and IP address. With a little more Python/boto3 code you can retrieve this data to know when you can proceed.
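A minimal sketch of that lookup, assuming `instances` is the list returned by `create_instances()` above; `wait_for_public_ip` is a hypothetical helper name:

```python
def wait_for_public_ip(instance):
    # block until EC2 reports the instance as running (boto3 resource waiter),
    # then refresh the cached attributes and return the public IP
    instance.wait_until_running()
    instance.reload()
    return instance.public_ip_address

# usage, given the create_instances() call above:
# ip = wait_for_public_ip(instances[0])
```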
Note: Firewall consideration
To configure firewall access, create a security group in the machine's VPC and attach it to the instance, allowing at least inbound SSH.
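A minimal sketch of such a security group with boto3; the group name and the wide-open 0.0.0.0/0 CIDR are illustrative assumptions (narrow the CIDR in real use), and `open_ssh_ingress` is a hypothetical helper name:

```python
def open_ssh_ingress(ec2_client, vpc_id, group_name='ec2-docker-test-sg'):
    # create a security group in the instance's VPC ...
    sg = ec2_client.create_security_group(
        GroupName=group_name,
        Description='SSH access for the docker test instance',
        VpcId=vpc_id,
    )
    # ... and allow inbound SSH on port 22
    ec2_client.authorize_security_group_ingress(
        GroupId=sg['GroupId'],
        IpProtocol='tcp',
        FromPort=22,
        ToPort=22,
        CidrIp='0.0.0.0/0',  # illustrative only; restrict this in practice
    )
    return sg['GroupId']
```

The returned group ID can then be passed to `create_instances()` via its `SecurityGroupIds` parameter.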
C. Test SSH connection
As soon as the instance is up and running, you can do everything you like by means of remote command execution.
In a Bash script, you would use ssh alone, or scp to upload a local bash script and run it remotely:
ssh -t -i <KEY_FILE> ec2-user@<REMOTE_IP> <COMMAND>
With Python, you can use additional modules like paramiko to establish the SSH connection and run commands remotely.
import paramiko

client = paramiko.SSHClient()
# accept the as-yet-unknown host key of the freshly created instance
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(ip, username=username, key_filename=key_filename, port=port,
               timeout=timeout, auth_timeout=timeout)
stdin, stdout, stderr = client.exec_command("whoami")
For readability purposes, I list further commands unwrapped. In your script, you might want to use small helper subroutines so that you can write calls like
ssh_call(command).
D. Install and configure Docker daemon
As said above, everything from here on is just running remote commands.
The concluding steps, following an example in the official documentation, are the same as you would run locally; you only need to wrap them in your script and pass intermediate results along where required:
- install and configure Docker daemon
- login to your Docker registry if needed
- pull image
- run Docker
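The four steps above can be sketched as a command list fed to an `ssh_call()`-style helper. This is a sketch under assumptions: an Amazon Linux 2 host, awscli with suitable IAM permissions available on the instance, and placeholder registry/image values; `docker_bootstrap_commands` is a hypothetical helper name.

```python
def docker_bootstrap_commands(registry, image, region='eu-west-1'):
    # remote commands for steps D and E on an Amazon Linux 2 host
    return [
        # install and configure the Docker daemon
        'sudo yum update -y',
        'sudo amazon-linux-extras install docker -y',
        'sudo service docker start',
        'sudo usermod -a -G docker ec2-user',
        # login, only needed for a private registry such as ECR
        f'aws ecr get-login-password --region {region} | '
        f'docker login --username AWS --password-stdin {registry}',
        # pull the image and run the container
        f'docker pull {image}',
        f'docker run -d {image}',
    ]

# a container from the public Docker Hub needs no login step, e.g.:
# ssh_call('docker run -d nginx')
```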
Context
StackExchange DevOps Q#10312, answer score: 3