Quick Fix For the POODLE SSLv3 Vulnerability On AWS ELB

Another day, another SSL vulnerability. Today’s is called POODLE and you can read more about it here. In a nutshell, SSLv3 needs to be disabled on all AWS ELBs. If you only have a single ELB, you can easily switch to the newest ELB security policy, ELBSecurityPolicy-2014-10, via the console. Select your ELB in the console, click the Listeners tab, and then click Change under Cipher. Select ELBSecurityPolicy-2014-10 from the Predefined Security Policy drop-down.

If you have a large number of ELBs, you will need to use the CLI and a short script. To get a list of your ELB names run:

aws elb describe-load-balancers | grep LoadBalancerName | awk -F\" '{print $4}' > /tmp/lbs.txt

This will parse the JSON output and dump the ELB names, one per line, into a text file called /tmp/lbs.txt.
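
If your version of the aws CLI supports the --query option, the same list can be produced without the grep/awk parsing. A sketch (the tr is only there to put one name per line):

aws elb describe-load-balancers \
    --query 'LoadBalancerDescriptions[].LoadBalancerName' \
    --output text | tr '\t' '\n' > /tmp/lbs.txt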

The CLI does not allow a new policy to be applied to an ELB listener that already has a policy attached. To work around this, I first apply an empty policy and then the new policy. There is potential for a second or two of downtime while no policy is attached; I have not had an opportunity to verify this. I ran a for loop through my ELB names and made the policy change:

[dcolon@dcolonbuntu ~]$ for i in $(cat /tmp/lbs.txt)
> do
>    echo "Modifying $i: "
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names [] --load-balancer-port 443
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names ELBSecurityPolicy-2014-10 --load-balancer-port 443
> done

At this point, all ELBs with a listener on port 443 will be using the ELBSecurityPolicy-2014-10 policy.
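
To spot-check an individual load balancer, you can ask the CLI which policies its listeners are using. Something along these lines should work (my-elb is a placeholder for your ELB name); the port 443 listener should report ELBSecurityPolicy-2014-10:

aws elb describe-load-balancers --load-balancer-names my-elb \
    --query 'LoadBalancerDescriptions[].ListenerDescriptions[].PolicyNames' \
    --output text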

The Heartbleed Bug and AWS ELBs

In an earlier post, I outlined the steps to patch a Linux system and regenerate an SSL certificate in response to the Heartbleed bug. Amazon announced that the OpenSSL code in the ELB service has been patched. If you are terminating SSL on an ELB, you need to regenerate your SSL certificate and upload it to AWS. Here are the steps to do that.

First create a new private key and a CSR.
Generate an RSA private key:

$ openssl genrsa 2048 > private-key.pem
Generating RSA private key, 2048 bit long modulus
..................................................................................................................................................................................................................+++
.................+++
e is 65537 (0x10001)

Generate the CSR using the new private key:

$ openssl req -new -key private-key.pem -out csr.pem
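
Before submitting the CSR, it doesn't hurt to confirm that it contains the details you expect (common name, organization, and so on):

$ openssl req -in csr.pem -noout -text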

Next upload the CSR to your SSL registrar. The details are different for each provider but you want to find the option to re-key or regenerate your certificate. When you select this option, your registrar will ask you to upload a new CSR. Copy and paste the CSR that you just created. Within a few minutes, the registrar will either email the new certificate or make it available on their website.

After you download the new certificate, log into the AWS Console and go to the Load Balancer section. Select the ELB and click on the Listeners tab. Click on the Change link next to your certificate name. Unfortunately you can’t simply overwrite the existing certificate with the regenerated certificate. Click Upload a new SSL Certificate. You will be presented with a box that looks like this:
ELB SSL Certificate Dialog

Enter a new Certificate Name in the first box. Then copy the new private key that you created in the first step into the Private Key box. Finally, copy the newly generated certificate from your registrar into the Public Key Certificate box. If your registrar provided the certificate chain bundle, you can copy that into the Certificate Chain box; this last step is optional. Click Save and the new certificate should be in use on the ELB within a few seconds.
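
If you prefer the CLI, the regenerated certificate can also be uploaded through IAM and then selected on the listener. A rough sketch, where mysite-rekeyed, mysite.crt, and chain.pem are placeholders for your own certificate name and the files your registrar returned:

aws iam upload-server-certificate \
    --server-certificate-name mysite-rekeyed \
    --certificate-body file://mysite.crt \
    --private-key file://private-key.pem \
    --certificate-chain file://chain.pem

Once uploaded, the new certificate name should show up in the same Change dialog on the ELB listener.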

To verify that the new certificate is being used, open your website in a browser and click the lock icon in the address field. View the certificate details and verify that the issue date is today and not the original date the certificate was issued. Here is a snippet from my updated certificate:

SSL Certificate Details
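
You can also check the dates from the command line; a quick sketch, substituting your own hostname for www.example.com:

$ openssl s_client -connect www.example.com:443 </dev/null 2>/dev/null | openssl x509 -noout -dates

The notBefore date should match the day the certificate was re-keyed.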

If you have any questions or comments, please post them below.


Mysqldump Specific Tables From An RDS Database and Archive To S3

I was recently tasked with doing a daily backup of specific tables from an RDS database and storing that backup under a date-formatted S3 path. I made use of the awesome s3cmd CLI tool.

The first thing I did was manually dump the desired tables from the database to get the correct syntax.


mysqldump -h database.dcolon.org -u dbuser -pABCD1234 somedb table1 table2 table3 > dump.sql

This worked as expected. The dump.sql file contains table1, table2, and table3. Next I created a shell script and defined a number of variables. The format for the date in the S3 bucket is year/month/day. Today is 3/24/2014 so the date format for the bucket s3://net.dcolon.backups/mysql/ is:


s3://net.dcolon.backups/mysql/2014/03/24

Using the date command I get each of the values that I need and store them in variables in the script. The script writes the mysqldump to local disk and verifies that the dump completed without an error before copying it to S3. After the copy, the local file is renamed with the date appended. You can also add some logic to keep a certain number of recent copies on local disk and delete everything older.

Here is the complete script:


#!/bin/bash

export PATH=/bin:/usr/bin

DBHOST=db.dcolon.net
DBUSER=dbuser
DBPASSWD=ABCD1234
DATABASE=somedb
TABLES="table1 table2 table3"
YEAR=$(date +"%Y")
MONTH=$(date +"%m")
DAY=$(date +"%d")
S3BUCKET="s3://net.dcolon.backups/mysql/$YEAR/$MONTH/$DAY/"
DUMPFILE="/storage/backups/dump.sql"

# Dump only the listed tables; $TABLES is left unquoted on purpose so it
# expands into separate arguments.
mysqldump -h "$DBHOST" -u "$DBUSER" -p"$DBPASSWD" "$DATABASE" $TABLES > "$DUMPFILE"

# if successful copy dump.sql to S3
if [ $? -eq 0 ]; then
        s3cmd put "$DUMPFILE" "$S3BUCKET"
fi

# keep a dated copy on local disk
mv "$DUMPFILE" "$DUMPFILE.$YEAR$MONTH$DAY"
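
Since the goal is a daily backup, the script can simply be run from cron. A sketch of a crontab entry, assuming the script lives at a hypothetical /usr/local/bin/rds-table-backup.sh:

# run the table backup every day at 02:00 and log the output
0 2 * * * /usr/local/bin/rds-table-backup.sh >> /var/log/rds-table-backup.log 2>&1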

Note: There is an inherent security risk in storing the password in clear text in a script or configuration file. mysqldump will mask the password on its command line while the process is running, so another user can’t read it from the process list:


dcolon    4668  0.0  0.0  22816  1776 pts/3    R+   00:45   0:00 mysqldump -u root -px xxxxxxxxxxxxxxxxxx zm

This post shows how to use mysql_config_editor to generate a config file with your password encrypted. Note that this requires MySQL 5.6 or greater.
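
A rough sketch of what that looks like, reusing the host and user from the script above (mysql_config_editor prompts for the password rather than taking it on the command line):

mysql_config_editor set --login-path=backup --host=db.dcolon.net --user=dbuser --password
mysqldump --login-path=backup somedb table1 table2 table3 > dump.sql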

If you have any questions, please ask below.

The Dreaded ‘Resource Has A Dependent Object’ Error

If you spend a moderate amount of time creating and modifying AWS security groups, you will inevitably encounter the “Error deleting security group sg-12345678: resource sg-12345678 has a dependent object” error message.
AWS Security Group Dependent Object Error
Trying to find the security group that includes the group you want to delete can be an exercise in futility in the console. Instead, I make use of the CLI. I dumped all of my security groups into a text file:

aws ec2 describe-security-groups > securitygroups.txt

From there, I opened the securitygroups.txt file in vim and searched for sg-12345678. One entry is for the security group itself and all other matches are for security groups that include the group I want to delete.
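
You can also have the CLI do the searching. Something like the following should list every security group with a rule that references sg-12345678:

aws ec2 describe-security-groups \
    --filters Name=ip-permission.group-id,Values=sg-12345678 \
    --query 'SecurityGroups[].[GroupId,GroupName]' --output text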

It’s also possible that the group is attached to a network interface. I found the solution for this situation here.
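
If that is the case, a filter on describe-network-interfaces should turn up the interface holding the group; a sketch:

aws ec2 describe-network-interfaces \
    --filters Name=group-id,Values=sg-12345678 \
    --query 'NetworkInterfaces[].NetworkInterfaceId' --output text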

Painlessly Increase Your EBS Based Root Volume Without Rebuilding

You created an AWS instance and decided to make a small root volume. Maybe you were shortsighted or just being cheap. Now the root volume is nearly full and you can’t expand it because you didn’t use LVM.

[ec2-user@ip-172-16-14-78 ~]$ df
Filesystem     1K-blocks    Used Available Use% Mounted on
/dev/xvda1       8256952 8173060         8 100% /

To add insult to injury, AWS recently cut the price of EBS storage in half. This issue can be fixed with a handful of commands from the CLI.

Before you begin, take note of a few important details about the instance. You need to know the instance-id and the availability-zone that your instance is running in. You can get the instance-id from the running instance by querying the instance metadata service. More details about the metadata service are available here.

[ec2-user@ip-172-16-14-78 ~]$ curl http://169.254.169.254/latest/meta-data/instance-id/
i-2db6840d

Determine which availability-zone your instance is running in.

dcolon@dcolonbuntu:~$ aws ec2 describe-instances --instance-id i-2db6840d --output text | egrep ^PLACEMENT
PLACEMENT default None us-east-1a
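
The availability zone is also exposed through the instance metadata service, so you can grab it from the instance itself; this should return us-east-1a here:

[ec2-user@ip-172-16-14-78 ~]$ curl http://169.254.169.254/latest/meta-data/placement/availability-zone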

Shut down the instance. You can’t get a reliable snapshot of a volume while it’s attached to a running instance.

dcolon@dcolonbuntu:~$ aws ec2 stop-instances --instance-ids i-2db6840d --output text
i-2db6840d
CURRENTSTATE 64 stopping
PREVIOUSSTATE 16 running
RESPONSEMETADATA e8b8f0ab-7cd1-478f-a7c1-bb5bf81b2355

To determine the volume-id of the root volume run describe-instances:

dcolon@dcolonbuntu:~$ aws ec2 describe-instances --instance-ids i-2db6840d --output text | egrep ^EBS
EBS attached True vol-b6b98bfb 2014-02-11T20:36:00.000Z

Take a snapshot of vol-b6b98bfb:

dcolon@dcolonbuntu:~$ aws ec2 create-snapshot --volume-id vol-b6b98bfb --output text
None vol-b6b98bfb pending 8 None 2014-02-11T21:10:41.000Z snap-e76a3b25 647956677678
RESPONSEMETADATA 03f83444-a473-4bc1-b138-1065f6d5cee0

Now that you have a snapshot (snap-e76a3b25) of the root volume, you can create a larger copy of it. If the snapshot state is still pending, create-volume will return an error. Do not forget to specify the size, or the new volume will be the same size as the old one.
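
If you would rather not poll the snapshot status by hand, newer versions of the CLI can block until it completes; a sketch:

dcolon@dcolonbuntu:~$ aws ec2 wait snapshot-completed --snapshot-ids snap-e76a3b25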

dcolon@dcolonbuntu:~$ aws ec2 create-volume --availability-zone us-east-1a --size 40 --snapshot-id snap-e76a3b25 --output text
us-east-1a standard vol-b4aa98f9 creating snap-e76a3b25 2014-02-11T21:14:51.280Z 40
RESPONSEMETADATA 2257dac5-1685-488c-b545-2f8f9417a6ac

Finally detach the original volume (vol-b6b98bfb) and attach the newly created volume (vol-b4aa98f9).

dcolon@dcolonbuntu:~$ aws ec2 detach-volume --volume-id vol-b6b98bfb --output text
2014-02-11T20:36:00.000Z i-2db6840d vol-b6b98bfb detaching /dev/sda1
RESPONSEMETADATA ea10cbbd-2adf-43c9-b5e3-cca19c1e5c26
dcolon@dcolonbuntu:~$ aws ec2 attach-volume --volume-id vol-b4aa98f9 --instance-id i-2db6840d --device /dev/sda1 --output text
2014-02-11T21:20:00.318Z i-2db6840d vol-b4aa98f9 attaching /dev/sda1
RESPONSEMETADATA 146b0ecc-6916-49aa-931c-24c6cdbaca8c

Start your instance and verify that your root volume is now 40 GB.

dcolon@dcolonbuntu:~$ aws ec2 start-instances --instance-id i-2db6840d --output text
i-2db6840d
CURRENTSTATE 0 pending
PREVIOUSSTATE 80 stopped
RESPONSEMETADATA 9320cb43-589f-4ab5-a1ce-d23718c02297
dcolon@dcolonbuntu:~$ ssh -i .ssh/prd.pem ec2-user@172.16.14.78
ssh: connect to host 172.16.14.78 port 22: Connection refused
dcolon@dcolonbuntu:~$ ssh ec2-user@172.16.14.78
Last login: Tue Feb 11 21:06:04 2014 from

__| __|_ )
_| ( / Amazon Linux AMI
___|\___|___|

[ec2-user@ip-172-16-14-78 ~]$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.9G  7.8G   92M  99% /

The filesystem is still 8 GB. This is because the volume was created from a snapshot of the smaller disk, and the filesystem has not yet been grown to fill the new space. Run resize2fs to expand it to the full 40 GB:

[ec2-user@ip-172-16-14-78 ~]$ sudo resize2fs /dev/xvda1
resize2fs 1.42.3 (14-May-2012)
Filesystem at /dev/xvda1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 3
The filesystem on /dev/xvda1 is now 10485760 blocks long.
[ec2-user@ip-172-16-14-78 ~]$ df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       40G  7.8G   32G  20% /

For details on how to grow an EBS based LVM filesystem, see this article that I wrote.

If you have any questions, please post them in the comments below.

Quick and Dirty LVM Volume Expansion Using EBS on Amazon EC2

Problem: I have a central rsyslog server that is filling up. I used LVM on the log partition so that I can grow it on demand. Here are the exact steps that I took to grow the partition.

I allocate fifteen 25 GB volumes and write the ec2-create-volume output to a file disks.txt. Volumes must be allocated in the same availability zone (AZ) as the instance (server) that they will be attached to.

dcolon@dcolonbuntu:~/f$ for ((i=1; i<16; i++));
> do
>    ec2-create-volume -s 25 -z us-east-1b >> disks.txt
> done
dcolon@dcolonbuntu:~/f$ awk '{print $2}' disks.txt 
vol-61dac038
vol-0adac053
vol-3bdac062
vol-d5dac08c
vol-efdac0b6
vol-92dac0cb
vol-84dac0dd
vol-acdac0f5
vol-45dbc11c
vol-15dbc14c
vol-07dbc15e
vol-34dbc16d
vol-25dbc17c
vol-e1dbc1b8
vol-aedbc1f7

Next I attach the fifteen volumes to the instance. I found that you can’t have more than 15 partitions per device letter, i.e., you can’t go above /dev/sdk15.

dcolon@dcolonbuntu:~/f$ export i=1
dcolon@dcolonbuntu:~/f$ awk '{print $2}' disks.txt | while read disk
> do
>    ec2-attach-volume $disk -i i-1c4b3265 -d /dev/sdk$i
>    i=$((i+1))
> done
ATTACHMENT      vol-61dac038    i-1c4b3265      /dev/sdk1       attaching      2013-05-02T16:31:43+0000
ATTACHMENT      vol-0adac053    i-1c4b3265      /dev/sdk2       attaching      2013-05-02T16:31:45+0000
ATTACHMENT      vol-3bdac062    i-1c4b3265      /dev/sdk3       attaching      2013-05-02T16:31:48+0000
ATTACHMENT      vol-d5dac08c    i-1c4b3265      /dev/sdk4       attaching      2013-05-02T16:31:50+0000
ATTACHMENT      vol-efdac0b6    i-1c4b3265      /dev/sdk5       attaching      2013-05-02T16:31:53+0000
ATTACHMENT      vol-92dac0cb    i-1c4b3265      /dev/sdk6       attaching      2013-05-02T16:31:55+0000
ATTACHMENT      vol-84dac0dd    i-1c4b3265      /dev/sdk7       attaching      2013-05-02T16:31:58+0000
ATTACHMENT      vol-acdac0f5    i-1c4b3265      /dev/sdk8       attaching      2013-05-02T16:32:00+0000
ATTACHMENT      vol-45dbc11c    i-1c4b3265      /dev/sdk9       attaching      2013-05-02T16:32:03+0000
ATTACHMENT      vol-15dbc14c    i-1c4b3265      /dev/sdk10      attaching      2013-05-02T16:32:05+0000
ATTACHMENT      vol-07dbc15e    i-1c4b3265      /dev/sdk11      attaching      2013-05-02T16:32:09+0000
ATTACHMENT      vol-34dbc16d    i-1c4b3265      /dev/sdk12      attaching      2013-05-02T16:32:11+0000
ATTACHMENT      vol-25dbc17c    i-1c4b3265      /dev/sdk13      attaching      2013-05-02T16:32:14+0000
ATTACHMENT      vol-e1dbc1b8    i-1c4b3265      /dev/sdk14      attaching      2013-05-02T16:32:16+0000
ATTACHMENT      vol-aedbc1f7    i-1c4b3265      /dev/sdk15      attaching      2013-05-02T16:32:18+0000
dcolon@dcolonbuntu:~/f$

Up to this point, I have been working on my local Linux box. The next steps need to be done directly on the instance.

I loop through all fifteen partitions and create physical volumes on each using pvcreate so they can be used by LVM.

[root@syslog]# for ((i=1; i<16; i++));
> do
>   pvcreate /dev/sdk$i
> done
  Writing physical volume data to disk "/dev/sdk1"
  Physical volume "/dev/sdk1" successfully created
  Writing physical volume data to disk "/dev/sdk2"
  Physical volume "/dev/sdk2" successfully created
  Writing physical volume data to disk "/dev/sdk3"
  Physical volume "/dev/sdk3" successfully created
  Writing physical volume data to disk "/dev/sdk4"
  Physical volume "/dev/sdk4" successfully created
  Writing physical volume data to disk "/dev/sdk5"
  Physical volume "/dev/sdk5" successfully created
  Writing physical volume data to disk "/dev/sdk6"
  Physical volume "/dev/sdk6" successfully created
  Writing physical volume data to disk "/dev/sdk7"
  Physical volume "/dev/sdk7" successfully created
  Writing physical volume data to disk "/dev/sdk8"
  Physical volume "/dev/sdk8" successfully created
  Writing physical volume data to disk "/dev/sdk9"
  Physical volume "/dev/sdk9" successfully created
  Writing physical volume data to disk "/dev/sdk10"
  Physical volume "/dev/sdk10" successfully created
  Writing physical volume data to disk "/dev/sdk11"
  Physical volume "/dev/sdk11" successfully created
  Writing physical volume data to disk "/dev/sdk12"
  Physical volume "/dev/sdk12" successfully created
  Writing physical volume data to disk "/dev/sdk13"
  Physical volume "/dev/sdk13" successfully created
  Writing physical volume data to disk "/dev/sdk14"
  Physical volume "/dev/sdk14" successfully created
  Writing physical volume data to disk "/dev/sdk15"
  Physical volume "/dev/sdk15" successfully created
[root@syslog]#

Next I loop through and extend the volume group varlogVG to include the new physical volumes.

[root@syslog]# for ((i=1; i<16; i++));
> do
>   vgextend varlogVG /dev/sdk$i
> done
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
[root@syslog]#

Finally I extend the logical volume and the filesystem.

[root@syslog]# df -h /var/log
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/varlogVG-varlogLV
                      374G  140G  235G  38% /var/log
[root@syslog]# lvextend -L+375G /dev/mapper/varlogVG-varlogLV
  Extending logical volume varlogLV to 749.00 GB
  Logical volume varlogLV successfully resized
[root@syslog]# xfs_growfs /var/log
meta-data=/dev/mapper/varlogVG-varlogLV isize=256    agcount=41, agsize=2441216 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=98041856, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=19072, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 98041856 to 196345856
[root@syslog-1b-217 servers]# df -h /var/log
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/varlogVG-varlogLV
                      749G  140G  610G  19% /var/log
[root@syslog]#

Note: because I’m using XFS, the filesystem can be grown while it is mounted read/write. An ext3 filesystem can also be grown online (while mounted) using resize2fs.
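
For an ext3 logical volume in the same layout, the equivalent two steps would look roughly like this (resize2fs grows the mounted filesystem to fill the extended logical volume):

lvextend -L+375G /dev/mapper/varlogVG-varlogLV
resize2fs /dev/mapper/varlogVG-varlogLV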

To learn how to increase the size of an EBS based root filesystem without rebuilding, see this article that I wrote on the topic.