Quick Fix For the POODLE SSLv3 Vulnerability On AWS ELB

Another day, another SSL vulnerability. Today’s SSL vulnerability is called POODLE and you can read more about it here. In a nutshell, SSLv3 needs to be disabled on all AWS ELBs. If you only have a single ELB, you can easily switch to the newest ELB policy, ELBSecurityPolicy-2014-10, via the console. Select your ELB in the console, click the Listeners tab, and then click Change under Cipher. Select ELBSecurityPolicy-2014-10 from the Predefined Security Policy drop-down.
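
If you want to check whether SSLv3 is still enabled on a particular ELB before (or after) making the change, something like the following should work; my-elb is just a placeholder for your load balancer name, and the Protocol-SSLv3 attribute should come back false once the new policy is in place:

aws elb describe-load-balancer-policies --load-balancer-name my-elb | grep -C1 Protocol-SSLv3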

If you have a large number of ELBs, you will need to use the CLI and a short script. To get a list of your ELB names run:

aws elb describe-load-balancers | grep LoadBalancerName | awk -F\" '{print $4}' > /tmp/lbs.txt

This will parse the output and dump the ELB names into a text file called /tmp/lbs.txt.
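
If you would rather not grep the JSON output, the CLI's --query option can pull the names out directly. This should produce the same list, one name per line:

aws elb describe-load-balancers --query 'LoadBalancerDescriptions[].LoadBalancerName' --output text | tr '\t' '\n' > /tmp/lbs.txt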

The CLI does not allow a new policy to be applied to an ELB that already has a policy. To work around this, I apply an empty policy first and then the new policy. There may be a second or two of downtime during the switch; I have not had a chance to verify this. I ran a for loop over my ELB names and made the policy change:

[dcolon@dcolonbuntu ~]$ for i in $(cat /tmp/lbs.txt)
> do
>    echo "Modifying $i: "
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names [] --load-balancer-port 443
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names ELBSecurityPolicy-2014-10 --load-balancer-port 443
> done

At this point, all ELBs with a listener on port 443 will be using the ELBSecurityPolicy-2014-10 policy.
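
To spot check the result, a quick loop along these lines should print the policy names attached to each ELB's listeners:

for i in $(cat /tmp/lbs.txt)
do
    echo -n "$i: "
    aws elb describe-load-balancers --load-balancer-names "$i" \
        --query 'LoadBalancerDescriptions[].ListenerDescriptions[].PolicyNames[]' --output text
done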

Quick and Dirty LVM Volume Expansion Using EBS on Amazon EC2

Problem: I have a central rsyslog server that is filling up. I used LVM on the log partition so that I can grow it on demand. Here are the exact steps that I took to grow the partition.

I allocate fifteen 25 GB volumes and write the ec2-create-volume output to a file disks.txt. Volumes must be allocated in the same availability zone (AZ) as the instance (server) that they will be attached to.

dcolon@dcolonbuntu:~/f$ for ((i=1; i<16; i++));
> do
>    ec2-create-volume -s 25 -z us-east-1b >> disks.txt
> done
dcolon@dcolonbuntu:~/f$ awk '{print $2}' disks.txt 
vol-61dac038
vol-0adac053
vol-3bdac062
vol-d5dac08c
vol-efdac0b6
vol-92dac0cb
vol-84dac0dd
vol-acdac0f5
vol-45dbc11c
vol-15dbc14c
vol-07dbc15e
vol-34dbc16d
vol-25dbc17c
vol-e1dbc1b8
vol-aedbc1f7
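
For reference, the same allocation with the newer unified aws CLI would look roughly like this. It writes only the volume IDs to disks.txt, so the awk '{print $2}' step used below to pull the IDs out of the output would not be needed:

for ((i=1; i<16; i++))
do
    # --query 'VolumeId' returns just the new volume's ID
    aws ec2 create-volume --size 25 --availability-zone us-east-1b \
        --query 'VolumeId' --output text >> disks.txt
done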

Next I attach the fifteen volumes to the instance. I found that you can’t attach more than 15 volumes per device letter, i.e., you can’t go above /dev/sdk15.

dcolon@dcolonbuntu:~/f$ export i=1
dcolon@dcolonbuntu:~/f$ awk '{print $2}' disks.txt | while read disk
> do
>    ec2-attach-volume $disk -i i-1c4b3265 -d /dev/sdk$i
>    i=$((i+1))
> done
ATTACHMENT      vol-61dac038    i-1c4b3265      /dev/sdk1       attaching      2013-05-02T16:31:43+0000
ATTACHMENT      vol-0adac053    i-1c4b3265      /dev/sdk2       attaching      2013-05-02T16:31:45+0000
ATTACHMENT      vol-3bdac062    i-1c4b3265      /dev/sdk3       attaching      2013-05-02T16:31:48+0000
ATTACHMENT      vol-d5dac08c    i-1c4b3265      /dev/sdk4       attaching      2013-05-02T16:31:50+0000
ATTACHMENT      vol-efdac0b6    i-1c4b3265      /dev/sdk5       attaching      2013-05-02T16:31:53+0000
ATTACHMENT      vol-92dac0cb    i-1c4b3265      /dev/sdk6       attaching      2013-05-02T16:31:55+0000
ATTACHMENT      vol-84dac0dd    i-1c4b3265      /dev/sdk7       attaching      2013-05-02T16:31:58+0000
ATTACHMENT      vol-acdac0f5    i-1c4b3265      /dev/sdk8       attaching      2013-05-02T16:32:00+0000
ATTACHMENT      vol-45dbc11c    i-1c4b3265      /dev/sdk9       attaching      2013-05-02T16:32:03+0000
ATTACHMENT      vol-15dbc14c    i-1c4b3265      /dev/sdk10      attaching      2013-05-02T16:32:05+0000
ATTACHMENT      vol-07dbc15e    i-1c4b3265      /dev/sdk11      attaching      2013-05-02T16:32:09+0000
ATTACHMENT      vol-34dbc16d    i-1c4b3265      /dev/sdk12      attaching      2013-05-02T16:32:11+0000
ATTACHMENT      vol-25dbc17c    i-1c4b3265      /dev/sdk13      attaching      2013-05-02T16:32:14+0000
ATTACHMENT      vol-e1dbc1b8    i-1c4b3265      /dev/sdk14      attaching      2013-05-02T16:32:16+0000
ATTACHMENT      vol-aedbc1f7    i-1c4b3265      /dev/sdk15      attaching      2013-05-02T16:32:18+0000
dcolon@dcolonbuntu:~/f$
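
EBS attachments are not instantaneous, so before doing anything on the instance it is worth confirming that every volume has moved from attaching to attached. A loop along these lines should do it; each ATTACHMENT line should show attached in the status column:

awk '{print $2}' disks.txt | while read disk
do
    ec2-describe-volumes $disk | grep ATTACHMENT
done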

Up to this point, I have been working on my local Linux box. The next steps need to be done directly on the instance.

I loop through all fifteen partitions and create physical volumes on each using pvcreate so they can be used by LVM.

[root@syslog]# for ((i=1; i<16; i++));
> do
>   pvcreate /dev/sdk$i
> done
  Writing physical volume data to disk "/dev/sdk1"
  Physical volume "/dev/sdk1" successfully created
  Writing physical volume data to disk "/dev/sdk2"
  Physical volume "/dev/sdk2" successfully created
  Writing physical volume data to disk "/dev/sdk3"
  Physical volume "/dev/sdk3" successfully created
  Writing physical volume data to disk "/dev/sdk4"
  Physical volume "/dev/sdk4" successfully created
  Writing physical volume data to disk "/dev/sdk5"
  Physical volume "/dev/sdk5" successfully created
  Writing physical volume data to disk "/dev/sdk6"
  Physical volume "/dev/sdk6" successfully created
  Writing physical volume data to disk "/dev/sdk7"
  Physical volume "/dev/sdk7" successfully created
  Writing physical volume data to disk "/dev/sdk8"
  Physical volume "/dev/sdk8" successfully created
  Writing physical volume data to disk "/dev/sdk9"
  Physical volume "/dev/sdk9" successfully created
  Writing physical volume data to disk "/dev/sdk10"
  Physical volume "/dev/sdk10" successfully created
  Writing physical volume data to disk "/dev/sdk11"
  Physical volume "/dev/sdk11" successfully created
  Writing physical volume data to disk "/dev/sdk12"
  Physical volume "/dev/sdk12" successfully created
  Writing physical volume data to disk "/dev/sdk13"
  Physical volume "/dev/sdk13" successfully created
  Writing physical volume data to disk "/dev/sdk14"
  Physical volume "/dev/sdk14" successfully created
  Writing physical volume data to disk "/dev/sdk15"
  Physical volume "/dev/sdk15" successfully created
[root@syslog]#
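
A quick pvs (or pvscan) is an easy sanity check that all fifteen new physical volumes are visible to LVM before extending the volume group:

pvs | grep /dev/sdk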

Next I loop through and extend the volume group varlogVG to include the new physical volumes.

[root@syslog]# for ((i=1; i<16; i++));
> do
>   vgextend varlogVG /dev/sdk$i
> done
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
  Volume group "varlogVG" successfully extended
[root@syslog]#
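
At this point vgs (or vgdisplay) should show roughly 375 GB of free space in the volume group, ready to be handed to the logical volume:

vgs varlogVG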

Finally I extend the logical volume and the filesystem.

[root@syslog]# df -h /var/log
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/varlogVG-varlogLV
                      374G  140G  235G  38% /var/log
[root@syslog]# lvextend -L+375G /dev/mapper/varlogVG-varlogLV
  Extending logical volume varlogLV to 749.00 GB
  Logical volume varlogLV successfully resized
[root@syslog]# xfs_growfs /var/log
meta-data=/dev/mapper/varlogVG-varlogLV isize=256    agcount=41, agsize=2441216 blks
         =                       sectsz=512   attr=0
data     =                       bsize=4096   blocks=98041856, imaxpct=25
         =                       sunit=0      swidth=0 blks, unwritten=1
naming   =version 2              bsize=4096
log      =internal               bsize=4096   blocks=19072, version=1
         =                       sectsz=512   sunit=0 blks, lazy-count=0
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 98041856 to 196345856
[root@syslog]# df -h /var/log
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/varlogVG-varlogLV
                      749G  140G  610G  19% /var/log
[root@syslog]#
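
On reasonably recent LVM2 versions, lvextend can also grow the filesystem in the same step via its -r (--resizefs) option, which calls fsadm and handles both ext and XFS. Something like this should be equivalent to the two commands above:

lvextend -r -L+375G /dev/mapper/varlogVG-varlogLV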

Note: because I’m using XFS, the filesystem can be grown while it is mounted read/write. An ext3 filesystem can also be grown online (while mounted) using resize2fs.
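
Had this volume been ext3 instead of XFS, the grow step would have looked roughly like this; resize2fs with no size argument grows the filesystem to fill the logical volume:

lvextend -L+375G /dev/mapper/varlogVG-varlogLV
resize2fs /dev/mapper/varlogVG-varlogLV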

To learn how to increase the size of an EBS-based root filesystem without rebuilding, see this article that I wrote on the topic.