Quick Fix For the POODLE SSLv3 Vulnerability On AWS ELB

Another day, another SSL vulnerability. Today’s SSL vulnerability is called POODLE and you can read more about it here. In a nutshell, SSLv3 needs to be disabled on all AWS ELBs. If you only have a single ELB, you can easily switch to the newest ELB policy, ELBSecurityPolicy-2014-10, via the console. Select your ELB in the console, click the Listeners tab, and then click Change under Cipher. Select ELBSecurityPolicy-2014-10 from the Predefined Security Policy drop-down.
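
If you want to double-check which SSL policies a given ELB is using before or after the change, the CLI can show you. This is just a quick sketch, and my-elb is a placeholder for your load balancer name:

aws elb describe-load-balancer-policies --load-balancer-name my-elb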

If you have a large number of ELBs, you will need to use the CLI and a short script. To get a list of your ELB names run:

aws elb describe-load-balancers | grep LoadBalancerName | awk -F\" '{print $4}' > /tmp/lbs.txt

This will parse the output and dump the ELB names into a text file called /tmp/lbs.txt.
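
If you would rather not screen-scrape the JSON, a roughly equivalent approach is to let the CLI do the filtering with a JMESPath query; the tr at the end turns the tab-separated text output into one name per line:

aws elb describe-load-balancers --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text | tr '\t' '\n' > /tmp/lbs.txt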

The CLI does not allow a new policy to be applied to an ELB that already has a policy. To work around this, I apply an empty policy and then the new policy. There is a potential for a second or two of downtime between the two calls; I have not had an opportunity to verify this. I ran a for loop through my ELB names and made the policy change:

[dcolon@dcolonbuntu ~]$ for i in $(cat /tmp/lbs.txt)
> do
>    echo "Modifying $i: "
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names [] --load-balancer-port 443
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names ELBSecurityPolicy-2014-10 --load-balancer-port 443
> done

At this point, all ELBs with a listener on port 443 will be using the ELBSecurityPolicy-2014-10 policy.
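
To spot-check the result, you can list the policies attached to one of the ELBs and, if you like, attempt an SSLv3-only handshake, which should now be refused. Treat this as a sketch: my-elb and www.example.com are placeholders, and the -ssl3 option only exists in openssl builds that still support SSLv3.

aws elb describe-load-balancer-policies --load-balancer-name my-elb
openssl s_client -connect www.example.com:443 -ssl3 < /dev/null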

The Heartbleed Bug and AWS ELBs

In an earlier post, I outlined the steps to patch a Linux system and regenerate an SSL certificate in response to the Heartbleed bug. Amazon has announced that the OpenSSL code in the ELB service has been patched. If you are terminating SSL on an ELB, you need to regenerate your SSL certificate and upload it to AWS. Here are the steps to do that.

First create a new private key and a CSR.
Generate an RSA private key:

$ openssl genrsa 2048 > private-key.pem
Generating RSA private key, 2048 bit long modulus
..................................................................................................................................................................................................................+++
.................+++
e is 65537 (0x10001)

Generate the CSR using the new private key:

$ openssl req -new -key private-key.pem -out csr.pem

Next, upload the CSR to your SSL registrar. The details differ for each provider, but you want to find the option to re-key or regenerate your certificate. When you select this option, your registrar will ask you to upload a new CSR. Copy and paste the CSR that you just created. Within a few minutes, the registrar will either email the new certificate or make it available on their website.
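
Before uploading, it doesn’t hurt to double-check the contents of the CSR (subject, key size, and so on) so you don’t re-key against the wrong details:

openssl req -in csr.pem -noout -text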

After you download the new certificate, log into the AWS Console and go to the Load Balancer section. Select the ELB and click on the Listeners tab. Click on the Change link next to your certificate name. Unfortunately you can’t simply overwrite the existing certificate with the regenerated certificate. Click Upload a new SSL Certificate. You will be presented with a box that looks like this:
[Screenshot: ELB SSL certificate upload dialog]

Create a new Certificate Name in the first box. Then copy the private key that you created in the first step into the Private Key box. Finally, copy the newly issued certificate from your registrar into the Public Key Certificate box. If your registrar provided the certificate chain bundle, you can copy that into the Certificate Chain box; this last step is optional. Click Save and the new certificate should be in use by the ELB within a few seconds.
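
If you would rather script this than click through the console, the same change can in principle be made from the CLI: upload the new certificate to IAM and then point the HTTPS listener at the new certificate ARN. This is only a sketch; mysite-2014, my-elb, certificate.pem, chain.pem, and the account ID in the ARN are all placeholders, and the real ARN comes from the output of the upload command.

aws iam upload-server-certificate --server-certificate-name mysite-2014 --certificate-body file://certificate.pem --private-key file://private-key.pem --certificate-chain file://chain.pem
aws elb set-load-balancer-listener-ssl-certificate --load-balancer-name my-elb --load-balancer-port 443 --ssl-certificate-id arn:aws:iam::123456789012:server-certificate/mysite-2014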

To verify that the new certificate is being used, open your website in a browser and click the lock icon in the address field. View the certificate details and verify that the issue date is today and not the original date the certificate was issued. Here is a snippet from my updated certificate:

[Screenshot: SSL certificate details showing the updated issue date]
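
You can also check the dates from the command line instead of a browser; replace www.example.com with your own hostname:

echo | openssl s_client -connect www.example.com:443 2>/dev/null | openssl x509 -noout -dates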

If you have any questions or comments, please post them below.

The Dreaded ‘Resource Has A Dependent Object’ Error

If you spend a moderate amount of time creating and modifying AWS security groups, you will inevitably encounter the “Error deleting security group sg-12345678: resource sg-12345678 has a dependent object” error message.

[Screenshot: "resource has a dependent object" error in the console]

Trying to find, through the console, which security group references the group you want to delete can be an exercise in futility. Instead, I make use of the CLI. I dumped all of my security groups into a text file:

aws ec2 describe-security-groups > securitygroups.txt

From there, I opened the securitygroups.txt file in vim and searched for sg-12345678. One entry is for the security group itself and all other matches are for security groups that include the group I want to delete.
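
Alternatively, you can let the API do the searching. The ip-permission.group-id filter should return only the security groups whose rules reference the group in question (using the same example group ID):

aws ec2 describe-security-groups --filters Name=ip-permission.group-id,Values=sg-12345678 --query 'SecurityGroups[].[GroupId,GroupName]' --output text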

It’s also possible that the group is attached to a network interface. I found the solution for this situation here.
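
To check for that case from the CLI, you can filter network interfaces by security group; any interface IDs returned are still using the group (again using the example group ID):

aws ec2 describe-network-interfaces --filters Name=group-id,Values=sg-12345678 --query 'NetworkInterfaces[].NetworkInterfaceId' --output text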

Painlessly Increase Your EBS-Based Root Volume Without Rebuilding

You created an AWS instance and decided to make a small root volume. Maybe you were shortsighted or just being cheap. Now the root volume is nearly full and you can’t expand it because you didn’t use LVM.

[ec2-user@ip-172-16-14-78 ~]$ df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda1 8256952 8173060 8 100% /

To add insult to injury, AWS recently cut the price of EBS storage in half. This issue can be fixed with a handful of commands from the CLI.

Before you begin, take note of a few important details about the instance. You need to know the instance-id and the availability-zone that your instance is running in. You can get the instance-id from the running instance by querying the instance metadata service. More details about the metadata service are available here.

[ec2-user@ip-172-16-14-78 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id/
i-2db6840d

Determine which availability-zone your instance is running in.

dcolon@dcolonbuntu:~$ aws ec2 describe-instances --instance-id i-2db6840d --output text | egrep ^PLACEMENT
PLACEMENT default None us-east-1a

Shut down the instance. You can’t get a reliable snapshot of a volume while it’s attached to a running instance.

dcolon@dcolonbuntu:~$ aws ec2 stop-instances --instance-ids i-2db6840d --output text
i-2db6840d
CURRENTSTATE 64 stopping
PREVIOUSSTATE 16 running
RESPONSEMETADATA e8b8f0ab-7cd1-478f-a7c1-bb5bf81b2355

To determine the volume-id of the root volume run describe-instances:

dcolon@dcolonbuntu:~$ aws ec2 describe-instances --instance-ids i-2db6840d --output text | egrep ^EBS
EBS attached True vol-b6b98bfb 2014-02-11T20:36:00.000Z

Take a snapshot of vol-b6b98bfb:

dcolon@dcolonbuntu:~$ aws ec2 create-snapshot --volume-id vol-b6b98bfb --output text
None vol-b6b98bfb pending 8 None 2014-02-11T21:10:41.000Z snap-e76a3b25 647956677678
RESPONSEMETADATA 03f83444-a473-4bc1-b138-1065f6d5cee0

Now that you have a snapshot (snap-e76a3b25) of the root volume, you can create a larger copy of it. If the snapshot state is still pending, you will get an error. Do not forget to specify the size, or the new volume will be the same size as the old one.
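
If you want to confirm that the snapshot has finished before creating the volume, you can check its state and wait until it reports completed:

aws ec2 describe-snapshots --snapshot-ids snap-e76a3b25 --output text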

dcolon@dcolonbuntu:~$ aws ec2 create-volume --availability-zone us-east-1a --size 40 --snapshot-id snap-e76a3b25 --output text
us-east-1a standard vol-b4aa98f9 creating snap-e76a3b25 2014-02-11T21:14:51.280Z 40
RESPONSEMETADATA 2257dac5-1685-488c-b545-2f8f9417a6ac

Finally detach the original volume (vol-b6b98bfb) and attach the newly created volume (vol-b4aa98f9).

dcolon@dcolonbuntu:~$ aws ec2 detach-volume --volume-id vol-b6b98bfb --output text
2014-02-11T20:36:00.000Z i-2db6840d vol-b6b98bfb detaching /dev/sda1
RESPONSEMETADATA ea10cbbd-2adf-43c9-b5e3-cca19c1e5c26
dcolon@dcolonbuntu:~$ aws ec2 attach-volume --volume-id vol-b4aa98f9 --instance-id i-2db6840d --device /dev/sda1 --output text
2014-02-11T21:20:00.318Z i-2db6840d vol-b4aa98f9 attaching /dev/sda1
RESPONSEMETADATA 146b0ecc-6916-49aa-931c-24c6cdbaca8c

Start your instance and verify that your root volume is now 40 GB.

dcolon@dcolonbuntu:~$ aws ec2 start-instances --instance-id i-2db6840d --output text
i-2db6840d
CURRENTSTATE 0 pending
PREVIOUSSTATE 80 stopped
RESPONSEMETADATA 9320cb43-589f-4ab5-a1ce-d23718c02297
dcolon@dcolonbuntu:~$ ssh -i .ssh/prd.pem ec2-user@172.16.14.78
ssh: connect to host 172.16.14.78 port 22: Connection refused
dcolon@dcolonbuntu:~$ ssh ec2-user@172.16.14.78
Last login: Tue Feb 11 21:06:04 2014 from

__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|

[ec2-user@ip-172-16-14-78 ~]$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.9G 7.8G 92M 99% /

The filesystem is still 8 GB even though the new volume is 40 GB. This is because the filesystem was created from a snapshot of the smaller volume and has not yet been grown. Run resize2fs to expand the filesystem to fill the 40 GB volume.

[ec2-user@ip-172-16-14-78 ~]$ sudo resize2fs /dev/xvda1
resize2fs 1.42.3 (14-May-2012)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/xvda1 is now 10485760 blocks long.
[ec2-user@ip-172-16-14-78 ~]$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 40G 7.8G 32G 20% /

For details on how to grow an EBS-based LVM filesystem, see this article that I wrote.

If you have any questions, please post them in the comments below.