package is in a very bad inconsistent state; you should reinstall it before attempting a removal

I encountered this error in the course of applying updates on an Ubuntu 16.04.5 LTS server. The error prevented the apt-get upgrade process from completing. Attempting to remove the package (dpkg --remove unattended-upgrades) failed. A quick Google search found this solution that worked.

# dpkg --remove --force-remove-reinstreq unattended-upgrades
dpkg: warning: overriding problem because --force enabled:
dpkg: warning: package is in a very bad inconsistent state; you should
reinstall it before attempting a removal
(Reading database ... 100727 files and directories currently installed.)
Removing unattended-upgrades (0.90ubuntu0.9) ...
Processing triggers for man-db (2.7.5-1) ...

That removed the package and then I was able to reinstall it and complete my package updates.
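
The reinstall and the remaining updates are just the usual apt commands; something along these lines should do it:

# apt-get install unattended-upgrades
# apt-get upgrade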

Moving Blocks of Text in Vim

This gem comes courtesy of Mastering Vim Quickly. Before learning this trick, to move text I would yank/delete (Y/dd) the lines and then paste (p/P) where I wanted it to go. Inevitably I would either copy the wrong number of lines or paste the text in the wrong spot.

Add these lines to your .vimrc:

vnoremap J :m '>+1<CR>gv=gv
vnoremap K :m '<-2<CR>gv=gv

Open your file and enter Visual Line mode (V) at the beginning of the block of text you want to move. Using j or k, extend the visual selection over the text. Then press J to move the block down or K to move it up. Hit Escape once the text is where you want it.
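
For the curious, the J mapping is just vim's :move command on the selection followed by a reindent; typed out by hand (my own annotation, not from the book) it is roughly:

:'<,'>move '>+1    " move the selected lines to just after the next line (one line down)
gv                 " reselect the moved block
=                  " reindent the selection
gv                 " reselect it again so you can keep moving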

Here it is in action:

Touch ID Authentication For Sudo Commands in iTerm2

While reading an article about possible new Apple Watch features, one sentence caught my eye: “The biometric mechanism can be used to unlock the Mac, authorize Apple Pay purchases, autofill usernames and passwords, and (for the more advanced users) authenticate with sudo in Terminal.” I have a MacBook Pro with Touch ID and this was the first I’d heard of using Touch ID to authenticate sudo commands. A quick Google search uncovered the simple change. Add the following line to /etc/pam.d/sudo:

auth sufficient pam_tid.so

Simple enough. I made the change but didn’t get the Touch ID prompt when I issued a sudo command. I opened a new window and restarted iTerm2, but I still got prompted for a password when issuing a sudo command. After some more Googling I found this ticket in the iTerm2 issue tracker. Turning off the following iTerm2 option fixed the problem:

Turn off Prefs > Advanced > Allow sessions to survive logging out and back in

This is what it looks like:
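
Once that preference is off, you can re-test by clearing sudo's cached credentials so the next command has to authenticate again:

$ sudo -k
$ sudo true

The second command should now trigger the Touch ID prompt instead of asking for a password.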

Let’s Encrypt Automatic SSL Certificate Renewal Does Not Work With Certbot

By now everyone should have an SSL-enabled website thanks to the Let’s Encrypt project, which provides free SSL certificates that are good for three months at a time. If you don’t, check out their site and generate your free SSL certificate. The main problem with a three-month SSL certificate is remembering to renew it before it expires. Fortunately, Let’s Encrypt provides an application called Certbot that handles automatic certificate renewal. The instructions were simple: set up a daily cron job that calls certbot and it will automatically renew your SSL certificate once it is within thirty days of expiring. No problem, I set up the following:

0 0 * * *       certbot renew --quiet --no-self-upgrade

Because certbot does not renew the certificate until it reaches the thirty-day threshold, there was no easy way to test it. Fortunately I stuck a reminder on my calendar to check. Sixty days later I opened my website, clicked the padlock, and checked the certificate expiration date. It still had the same expiration date, i.e. it didn’t get renewed. Puzzled, I ran the command manually:

# certbot renew --quiet --no-self-upgrade
#

That seemed to work fine. I checked the certificate’s expiration date and it was now ninety days into the future. There was nothing obvious in the logs so I did a Google search. I found a lot of other people reporting the same issue. It wasn’t until I saw this post that I understood the issue. The crontab had a PATH that did not include the location of certbot. When I downloaded certbot, I installed it in /usr/local/bin. By default, my crontab’s PATH did not include /usr/local/bin. There were a few ways to fix this.

  1. Update the crontab to contain the complete path to the certbot application:
    0 0 * * *       /usr/local/bin/certbot renew --quiet --no-self-upgrade
    
  2. Update the PATH variable to include the missing PATH location. If no PATH is defined in the crontab, simply add the following at the top:
    PATH=$PATH:/usr/local/bin
    
  3. Move the certbot application to a common location like /usr/bin
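
If you want to see exactly what PATH your cron jobs are getting, a throwaway debugging entry that dumps the environment once a minute does the trick (remove it once you have your answer):

* * * * *       env > /tmp/cron-env.txt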

If you were struggling to figure out why your Let’s Encrypt SSL certificate was not renewing automatically with certbot, I hope you found this page before spending a lot of time debugging.

Google Music Extension Hijacks MacBook Pro Media Keys

I have a MacBook Pro and use the media keys (play/pause, skip, volume) extensively when listening to iTunes. I recently noticed that the media keys stopped working. I tried the standard things like restarting iTunes and rebooting. A few weeks back I upgraded to El Capitan (OSX 10.11). I was a bit skeptical of that being the issue since iTunes is a core part of OSX, but you never know.

I did some Google searches that pointed at options in Accessibility solving the issue. None of these solutions worked. Finally I came upon this post on the Mac Rumors forums. I remembered installing the Google Music extension in Chrome a few weeks back, likely in a moment of weakness. I disabled that Chrome extension and BOOM, my media keys started working again.

Quick Fix For the POODLE SSLv3 Vulnerability On AWS ELB

Another day, another SSL vulnerability. Today’s SSL vulnerability is called POODLE and you can read more about it here. In a nutshell, SSLv3 needs to be disabled on all AWS ELBs. If you only have a single ELB, you can easily switch to the newest ELB policy, ELBSecurityPolicy-2014-10, via the console. Select your ELB in the console, click the Listeners tab, and then click Change under Cipher. Select ELBSecurityPolicy-2014-10 from the Predefined Security Policy drop down.

If you have a large number of ELBs, you will need to use the CLI and a short script. To get a list of your ELB names run:

aws elb describe-load-balancers | grep LoadBalancerName | awk -F\" '{print $4}' > /tmp/lbs.txt

This will parse the output and dump the ELB names into a text file called /tmp/lbs.txt.

The CLI does not allow a new policy to be applied to an ELB that already has a policy. To work around this, I apply an empty policy and then the new policy. There is a potential for a second or two of downtime, although I have not had an opportunity to verify this. I ran a for loop through my ELB names and made the policy change:

[dcolon@dcolonbuntu ~]$ for i in $(cat /tmp/lbs.txt)
> do
>    echo "Modifying $i: "
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names [] --load-balancer-port 443
>    aws elb set-load-balancer-policies-of-listener --load-balancer-name $i --policy-names ELBSecurityPolicy-2014-10 --load-balancer-port 443
> done

At this point, all ELBs with a listener on port 443 will be using the ELBSecurityPolicy-2014-10 policy.
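
To spot-check that the change took, you can pull a load balancer's listener policies back out of the API and look for the new policy name (my-elb-name below is a placeholder for one of your ELB names):

aws elb describe-load-balancers --load-balancer-names my-elb-name | grep -A 2 PolicyNames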

The Heartbleed Bug and AWS ELBs

In an earlier post, I outlined the steps to patch a Linux system and regenerate an SSL certificate in response to the Heartbleed bug. Amazon announced that the openssl code has been patched in the ELB service. If you are terminating SSL on an ELB, you need to regenerate your SSL certificate and upload it to AWS. Here I present the steps to do that.

First create a new private key and a CSR.
Generate an RSA private key:

$ openssl genrsa 2048 > private-key.pem
Generating RSA private key, 2048 bit long modulus
..................................................................................................................................................................................................................+++
.................+++
e is 65537 (0x10001)

Generate the CSR using the new private key:

$ openssl req -new -key private-key.pem -out csr.pem

Next upload the CSR to your SSL registrar. The details are different for each provider but you want to find the option to re-key or regenerate your certificate. When you select this option, your registrar will ask you to upload a new CSR. Copy and paste the CSR that you just created. Within a few minutes, the registrar will either email the new certificate or make it available on their website.

After you download the new certificate, log into the AWS Console and go to the Load Balancer section. Select the ELB and click on the Listeners tab. Click on the Change link next to your certificate name. Unfortunately you can’t simply overwrite the existing certificate with the regenerated certificate. Click Upload a new SSL Certificate. You will be presented with a box that looks like this:
ELB SSL Certificate Dialog

Create a new Certificate Name in the first box. Then copy the new private key that you created in the first step into the Private Key box. Finally copy the newly generated certificate from your registrar into the Public Key Certificate box. If your registrar provided the certificate chain bundle, you can copy that into the Certificate Chain box; this last step is optional. Click Save and the new certificate should be in use by the ELB within a few seconds.

To verify that the new certificate is being used, open your website in a browser and click the lock icon in the address field. View the certificate details and verify that the issue date is today and not the original date the certificate was issued. Here is a snippet from my updated certificate:

SSL Certificate Details
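
You can also check the dates from the command line with openssl; this is the general idea, with example.com standing in for your own hostname:

$ echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null | openssl x509 -noout -dates

The notBefore line should show today's date, matching what the browser displays.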

If you have any questions or comments, please post them below.


Quick Fix For the Heartbleed Bug

Unless you are living under a rock, you have heard all of the hysteria surrounding the Heartbleed openssl bug. Due to the nature of the bug and the possible exposure of SSL private keys, the openssl package needs to be updated and the SSL certificate needs to be regenerated. I will present the procedure that I used to patch a CentOS Linux server.

First I updated the openssl package:

# yum update openssl

Next I regenerated my SSL certificate. I needed to create a new private key and a CSR.
Generate an RSA private key:

$ openssl genrsa 2048 > private-key.pem
Generating RSA private key, 2048 bit long modulus
..................................................................................................................................................................................................................+++
.................+++
e is 65537 (0x10001)

Generate the CSR using the new private key:

$ openssl req -new -key private-key.pem -out csr.pem

I am leaving out the details that I used as they are different for each certificate. It’s important that you set the Common Name correctly. The Common Name is the Fully Qualified Domain Name (FQDN) for the certificate. If you are creating a wildcard certificate for foobar.com, then the Common Name is *.foobar.com. I now have two files in my directory, private-key.pem and csr.pem.
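
Before sending the CSR off to the registrar, you can double-check the Common Name by printing the subject back out of the CSR:

$ openssl req -noout -subject -in csr.pem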

Next I uploaded my CSR to my SSL certificate registrar. The exact details will be different between registrars. In my case, I used the User Portal for GeoTrust. There was an option to Reissue Certificate. That opened a text box for me to copy and paste my CSR. A few minutes later GeoTrust sent me an email with the new certificate. I regenerated a certificate with GoDaddy and the option was called Re-Key. Instead of emailing me the new certificate, GoDaddy made it available for download from their website.

The last step is to overwrite the existing private key and certificate with the newly created files and restart Apache. To verify that Apache is using the new SSL certificate, visit your site and click on the lock icon in the address bar. View the certificate details and verify that the issue date is today and not the original date the certificate was issued. Here is a snippet from my updated certificate:

SSL Certificate Details

Good luck and do not delay patching your systems. Please leave any comments or questions below.


Mysqldump Specific Tables From An RDS Database and Archive To S3

I was recently tasked with doing a daily backup of specific tables from an RDS database and storing that backup in a date-formatted S3 bucket. I made use of the awesome s3cmd CLI tool.

The first thing I did was manually dump the desired tables from the database to get the correct syntax.


mysqldump -h database.dcolon.org -u dbuser -pABCD1234 somedb table1 table2 table3 > dump.sql

This worked as expected. The dump.sql file contains table1, table2, and table3. Next I created a shell script and defined a number of variables. The format for the date in the S3 bucket is year/month/day. Today is 3/24/2014 so the date format for the bucket s3://net.dcolon.backups/mysql/ is:


s3://net.dcolon.backups/mysql/2014/03/24

Using the date command, I get each of the values I need and store them in variables in the script. I write the mysqldump output to local disk and verify that the process completed without an error. After copying the dump to S3, I rename the local copy, appending the date. You can also add some logic to keep a certain number of recent copies on local disk and delete everything older (see the find one-liner after the script).

Here is the complete script:


#!/bin/bash

export PATH=/bin:/usr/bin

DBHOST=db.dcolon.net
DBUSER=dbuser
DBPASSWD=ABCD1234
DATABASE=somedb
TABLES="table1 table2 table3"
YEAR=$(date +"%Y")
MONTH=$(date +"%m")
DAY=$(date +"%d")
S3BUCKET="s3://net.dcolon.backups/mysql/$YEAR/$MONTH/$DAY/"
DUMPFILE="/storage/backups/dump.sql"

mysqldump -h $DBHOST -u $DBUSER -p$DBPASSWD $DATABASE $TABLES > $DUMPFILE 

# if successful copy dump.sql to S3            
if [ $? -eq 0 ]; then
        s3cmd put $DUMPFILE $S3BUCKET
fi

mv $DUMPFILE $DUMPFILE.$YEAR$MONTH$DAY
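
As for the retention logic mentioned above, a simple approach is a find at the end of the script that deletes dated copies older than a set number of days; for example, to keep roughly a week of local dumps (adjust the path and age as needed):

find /storage/backups -name 'dump.sql.*' -mtime +7 -delete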

Note: There is an inherent security risk of storing the password in clear text in a script or configuration file. mysqldump will mask your password while the process is running so another user can’t get the password from the process list.


dcolon    4668  0.0  0.0  22816  1776 pts/3    R+   00:45   0:00 mysqldump -u root -px xxxxxxxxxxxxxxxxxx zm

This post shows how to use mysql_config_editor to generate a config file with your password encrypted. Note that this requires MySQL 5.6 or greater.
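
The general shape of that approach (my paraphrase, reusing the placeholder values from the script above and a login path name of my choosing) is to store the credentials once and then reference them by login path:

$ mysql_config_editor set --login-path=backup --host=db.dcolon.net --user=dbuser --password
$ mysqldump --login-path=backup somedb table1 table2 table3 > dump.sql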

If you have any questions, please ask below.

The Dreaded ‘Resource Has A Dependent Object’ Error

If you spend a moderate amount of time creating and modifying AWS security groups, you will inevitably encounter the “Error deleting security group sg-12345678: resource sg-12345678 has a dependent object” error message.
AWS Security Group Dependent Object Error
Trying to find the security groups that reference the group you want to delete can be an exercise in futility in the console. Instead, I make use of the CLI. I dumped all of my security groups into a text file:

aws ec2 describe-security-groups > securitygroups.txt

From there, I opened the securitygroups.txt file in vim and searched for sg-12345678. One entry is for the security group itself and all other matches are for security groups that include the group I want to delete.

It’s also possible that the group is attached to a network interface. I found the solution for this situation here.
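
A filtered describe call can also do the searching for you. Roughly, using the placeholder group ID from the error message above, the first command lists security groups whose rules reference the group and the second lists network interfaces still attached to it:

aws ec2 describe-security-groups --filters Name=ip-permission.group-id,Values=sg-12345678
aws ec2 describe-network-interfaces --filters Name=group-id,Values=sg-12345678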