Problem: I have a central rsyslog server whose log partition is filling up. I used LVM on the log partition so that I can grow it on demand. Here are the exact steps I took to grow it.
First I allocate fifteen 25 GB EBS volumes, appending the ec2-create-volume output to a file, disks.txt. Volumes must be allocated in the same availability zone (AZ) as the instance (server) to which they will be attached.
user@local:~/f$ for ((i=1; i<16; i++));
> do
> ec2-create-volume -s 25 -z us-east-1b >> disks.txt
> done
user@local:~/f$ awk '{print $2}' disks.txt
vol-61dac038
vol-0adac053
vol-3bdac062
vol-d5dac08c
vol-efdac0b6
vol-92dac0cb
vol-84dac0dd
vol-acdac0f5
vol-45dbc11c
vol-15dbc14c
vol-07dbc15e
vol-34dbc16d
vol-25dbc17c
vol-e1dbc1b8
vol-aedbc1f7
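The volumes usually become usable within seconds, but it's worth confirming they have all reached the available state before attaching them. A minimal sketch, assuming the standard ec2-describe-volumes behavior of accepting a list of volume IDs and printing one VOLUME line per volume:

user@local:~/f$ # Should print 15 once every new volume is available.
user@local:~/f$ awk '{print $2}' disks.txt | xargs ec2-describe-volumes | grep -c available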
Next I attach the fifteen volumes to the instance. I found that you can't have more than 15 partitions per device letter, i.e., you can't go above /dev/sdk15.
user@local:~/f$ export i=1
user@local:~/f$ awk '{print $2}' disks.txt | while read disk
> do
> ec2-attach-volume $disk -i i-1c4b3265 -d /dev/sdk$i
> i=$((i+1))
> done
ATTACHMENT vol-61dac038 i-1c4b3265 /dev/sdk1 attaching 2013-05-02T16:31:43+0000
ATTACHMENT vol-0adac053 i-1c4b3265 /dev/sdk2 attaching 2013-05-02T16:31:45+0000
ATTACHMENT vol-3bdac062 i-1c4b3265 /dev/sdk3 attaching 2013-05-02T16:31:48+0000
ATTACHMENT vol-d5dac08c i-1c4b3265 /dev/sdk4 attaching 2013-05-02T16:31:50+0000
ATTACHMENT vol-efdac0b6 i-1c4b3265 /dev/sdk5 attaching 2013-05-02T16:31:53+0000
ATTACHMENT vol-92dac0cb i-1c4b3265 /dev/sdk6 attaching 2013-05-02T16:31:55+0000
ATTACHMENT vol-84dac0dd i-1c4b3265 /dev/sdk7 attaching 2013-05-02T16:31:58+0000
ATTACHMENT vol-acdac0f5 i-1c4b3265 /dev/sdk8 attaching 2013-05-02T16:32:00+0000
ATTACHMENT vol-45dbc11c i-1c4b3265 /dev/sdk9 attaching 2013-05-02T16:32:03+0000
ATTACHMENT vol-15dbc14c i-1c4b3265 /dev/sdk10 attaching 2013-05-02T16:32:05+0000
ATTACHMENT vol-07dbc15e i-1c4b3265 /dev/sdk11 attaching 2013-05-02T16:32:09+0000
ATTACHMENT vol-34dbc16d i-1c4b3265 /dev/sdk12 attaching 2013-05-02T16:32:11+0000
ATTACHMENT vol-25dbc17c i-1c4b3265 /dev/sdk13 attaching 2013-05-02T16:32:14+0000
ATTACHMENT vol-e1dbc1b8 i-1c4b3265 /dev/sdk14 attaching 2013-05-02T16:32:16+0000
ATTACHMENT vol-aedbc1f7 i-1c4b3265 /dev/sdk15 attaching 2013-05-02T16:32:18+0000
user@local:~/f$
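Before touching LVM it's worth verifying that the kernel actually sees all fifteen new devices. A quick sketch (note that on some kernels EBS volumes show up as /dev/xvdk* rather than /dev/sdk*):

[root@instance]# # Count the new block devices; this should print 15.
[root@instance]# grep -c sdk /proc/partitions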
Up to this point, I have been working on my local Linux box. The next steps need to be done directly on the instance.
I loop through all fifteen devices and run pvcreate on each, creating the physical volumes that LVM will use.
[root@instance]# for ((i=1; i<16; i++));
> do
> pvcreate /dev/sdk$i
> done
Writing physical volume data to disk "/dev/sdk1"
Physical volume "/dev/sdk1" successfully created
Writing physical volume data to disk "/dev/sdk2"
Physical volume "/dev/sdk2" successfully created
Writing physical volume data to disk "/dev/sdk3"
Physical volume "/dev/sdk3" successfully created
Writing physical volume data to disk "/dev/sdk4"
Physical volume "/dev/sdk4" successfully created
Writing physical volume data to disk "/dev/sdk5"
Physical volume "/dev/sdk5" successfully created
Writing physical volume data to disk "/dev/sdk6"
Physical volume "/dev/sdk6" successfully created
Writing physical volume data to disk "/dev/sdk7"
Physical volume "/dev/sdk7" successfully created
Writing physical volume data to disk "/dev/sdk8"
Physical volume "/dev/sdk8" successfully created
Writing physical volume data to disk "/dev/sdk9"
Physical volume "/dev/sdk9" successfully created
Writing physical volume data to disk "/dev/sdk10"
Physical volume "/dev/sdk10" successfully created
Writing physical volume data to disk "/dev/sdk11"
Physical volume "/dev/sdk11" successfully created
Writing physical volume data to disk "/dev/sdk12"
Physical volume "/dev/sdk12" successfully created
Writing physical volume data to disk "/dev/sdk13"
Physical volume "/dev/sdk13" successfully created
Writing physical volume data to disk "/dev/sdk14"
Physical volume "/dev/sdk14" successfully created
Writing physical volume data to disk "/dev/sdk15"
Physical volume "/dev/sdk15" successfully created
[root@instance]#
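A quick pvs at this point should list the fifteen new /dev/sdk* physical volumes; a sketch:

[root@instance]# # The new PVs show an empty VG column until vgextend runs.
[root@instance]# pvs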
Next I loop through the new physical volumes and extend the volume group varlogVG to include each one.
[root@instance]# for ((i=1; i<16; i++));
> do
> vgextend varlogVG /dev/sdk$i
> done
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
Volume group "varlogVG" successfully extended
[root@instance]#
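Before resizing the logical volume, vgdisplay can confirm how much free space the volume group gained; the Free PE / Size line should now report roughly 375 GB:

[root@instance]# vgdisplay varlogVG | grep -i free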
Finally I extend the logical volume and the filesystem.
[root@instance]# df -h /var/log
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/varlogVG-varlogLV
374G 140G 235G 38% /var/log
[root@instance]# lvextend -L+375G /dev/mapper/varlogVG-varlogLV
Extending logical volume varlogLV to 749.00 GB
Logical volume varlogLV successfully resized
[root@instance]# xfs_growfs /var/log
meta-data=/dev/mapper/varlogVG-varlogLV isize=256 agcount=41, agsize=2441216 blks
= sectsz=512 attr=0
data = bsize=4096 blocks=98041856, imaxpct=25
= sunit=0 swidth=0 blks, unwritten=1
naming =version 2 bsize=4096
log =internal bsize=4096 blocks=19072, version=1
= sectsz=512 sunit=0 blks, lazy-count=0
realtime =none extsz=4096 blocks=0, rtextents=0
data blocks changed from 98041856 to 196345856
[root@instance servers]# df -h /var/log
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/varlogVG-varlogLV
749G 140G 610G 19% /var/log
[root@instance]#
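As an aside, instead of computing the increment by hand (fifteen volumes × 25 GB = 375 GB), lvextend can be told to absorb every free extent in the volume group, which avoids arithmetic mistakes; a sketch equivalent to the -L+375G above:

[root@instance]# lvextend -l +100%FREE /dev/mapper/varlogVG-varlogLV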
Note: because I'm using XFS, the filesystem can be grown while it is mounted read/write. An ext3 filesystem can also be grown online (while mounted) using resize2fs.
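For comparison, a sketch of the equivalent final step on ext3, which can run while /var/log remains mounted (with no size argument, resize2fs grows the filesystem to fill the underlying device):

[root@instance]# resize2fs /dev/mapper/varlogVG-varlogLV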
To learn how to increase the size of an EBS based root filesystem without rebuilding, see this article that I wrote on the topic.