Personal Security

A coworker asked me a question yesterday about what I use to protect my personal data. He knew that I was using a password vault of some sort, but was curious what I did for files and the like. What followed was a pretty lengthy discussion of the tools I use and why. Most of the reasoning behind specific tools came down to their ability to work across all the platforms I use. On a daily basis that means Windows 7, Mac, Linux, Android, and until recently iOS.

Password Safe – KeePass
I have been using KeePass as my password safe for probably the last 5 years or so. It provides a number of key features that I really wanted, including auto-type and password generation. It also does a pretty good job with websites that make you enter a username first and then send you to a second page to enter your password. The password generation tool lets me make long, very complex passwords that I don't have to remember. As a result, outside of my KeePass master password I only have to remember about 10 passwords. Compare that to the 200+ entries I currently have in KeePass and you can see the benefit.
I keep a copy of the database in SpiderOak Hive (more on SpiderOak further down) so that it syncs securely across all of my devices. That way, if I am ever using a computer that is not mine and I need to enter a password, I can pull it up on my phone and type it in manually. I also keep a copy on the PortableApps USB drive on my keychain.

Computer Backup – SpiderOak
When I started using SpiderOak a few years ago, there were only a handful of online backup providers. Carbonite was the big dog, but there were also Mozy and Backblaze. A number of things drew me to SpiderOak. First, of course, was that it worked on all my platforms, and right behind that was that you paid for space rather than per computer. With most providers you paid a fee per computer and could back up as much as you wanted, but when you are only backing up a couple of important gigs per PC, I think SpiderOak provides better value. I pay one fee annually for 100GB and can back up as many computers or devices as I want. The thing that really sealed the deal was SpiderOak's Zero Knowledge privacy (https://spideroak.com/zero-knowledge/). It essentially means that they have no way of decrypting the contents of your files once they reach their servers.

Cloud Drive Encryption – BoxCryptor Classic
Dropbox, Google Drive, and Microsoft's OneDrive provide a lot of free storage (I think I have around 100GB between them), but none of them encrypt your files beyond in-transit protection. For those services I use BoxCryptor Classic. It lets me encrypt files locally before they are synced to the cloud drive. While BoxCryptor only has Windows and Mac clients, you can use EncFS on a Linux system to decrypt and mount the drive. The free version allows you to encrypt the files of one provider, but it doesn't encrypt the file names. I use the paid version, which lets me use it with as many providers as I want as well as encrypt file names.
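
For anyone curious what that looks like on the Linux side, mounting a BoxCryptor Classic volume is just a normal EncFS mount. The paths below are made up, so point it at wherever your cloud folder actually syncs:

$ encfs ~/Dropbox/BoxCryptor.bc ~/Private
$ fusermount -u ~/Private

The first command prompts for the password and mounts the decrypted view at ~/Private; the second unmounts it when you are done.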

Local Disk/Device Encryption – TrueCrypt
That still leaves local drives and devices, and for those I use TrueCrypt. TrueCrypt lets me encrypt my system drives and portable USB drives, and provides portable containers for other types of files. Right now I only have one encrypted container on my laptop, and it contains all of my financial information (Quicken, TurboTax, budget, etc.) so that if my laptop is ever lost, whoever ends up with it doesn't get my bank information as well.
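
TrueCrypt also has a text-mode interface on Linux, so the container can be used on a box without a GUI. Something along these lines should work (the container path and mount point here are just examples):

# truecrypt -t /data/financial.tc /mnt/tc
# truecrypt -d /data/financial.tc

The first command prompts for the passphrase and mounts the container at /mnt/tc; the -d flag dismounts it.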
Posted in Security

Happy New Year

I look forward to great things in 2014. Hopefully you do too.

Happy New Year, everybody. Make it your best year yet.

Posted in Uncategorized

Saltstack and Bonding – Part 2

(Part 1 can be found here)

After I wrote my custom grains and was able to get the information that I needed (IP, netmask, gateway) to configure my bonding and bridging, my first attempt to get SaltStack to configure it was with the salt.states.network module:

bond0:
  network.managed:
    - type: bond
    - enabled: True
    - proto: dhcp
    - mode: 802.3ad
    - miimon: 100
    - require:
      - network: eth1
      - network: eth2

eth1:
  network.managed:
    - type: slave
    - proto: none
    - master: bond0

eth2:
  network.managed:
    - type: slave
    - proto: none
    - master: bond0

br0:
  network.managed:
    - enabled: True
    - type: bridge
    - ipaddr: {{ grains['bondnet_ip'] }}
    - netmask: {{ grains['bondnet_mask'] }}
    - gateway: {{ grains['bondnet_gw'] }}

This only partially worked for me. It would configure the interfaces and change the necessary files, but it would leave the system in a broken state, with the IP running on both br0 and eth0. Once I logged in to the host and restarted networking everything was fine, but I don't want to do that for every system that I build.
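
For reference, the manual fix was nothing more than bouncing the network service on the host:

# /etc/init.d/network restart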

Next I tried to do it using file.managed and Jinja templates to make the changes and restart networking:

/etc/sysconfig/network:
  file.managed:
    - source: salt://kvm/network
    - user: root
    - group: root
    - mode: 644
    - template: jinja  
    - defaults:
      hostname: {{ grains['fqdn'] }}

/etc/sysconfig/network-scripts/ifcfg-eth0:
  file.managed:
    - source: salt://kvm/ifcfg-eth0
    - user: root
    - group: root
    - mode: 644

/etc/sysconfig/network-scripts/ifcfg-eth1:
  file.managed:
    - source: salt://kvm/ifcfg-eth1
    - user: root
    - group: root
    - mode: 644

/etc/sysconfig/network-scripts/ifcfg-bond0:
  file.managed:
    - source: salt://kvm/ifcfg-bond0
    - user: root
    - group: root
    - mode: 644

/etc/sysconfig/network-scripts/ifcfg-br0:
  file.managed:
    - source: salt://kvm/ifcfg-br0
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - defaults:
      ip: {{ grains['bondnet_ip'] }}
      netmask: {{ grains['bondnet_mask'] }}
      gateway: {{ grains['bondnet_gw'] }}

/etc/init.d/network restart>/tmp/out:
  cmd.wait:
    - watch:
      - file: '/etc/sysconfig/network-scripts/ifcfg-br0'

This didn't work very well for me initially. It would restart the network after it changed ifcfg-br0, but ifcfg-br0 was being changed before the rest of the network files, so things would break here as well. To solve that I added "- order" to each of the states to make sure they were processed in the order that I wanted, and my problem was solved:

/etc/sysconfig/network:
  file.managed:
    - source: salt://kvm/network
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - order: 1
    - defaults:
      hostname: {{ grains['fqdn'] }}

/etc/sysconfig/network-scripts/ifcfg-eth0:
  file.managed:
    - source: salt://kvm/ifcfg-eth0
    - user: root
    - group: root
    - mode: 644
    - order: 2

/etc/sysconfig/network-scripts/ifcfg-eth1:
  file.managed:
    - source: salt://kvm/ifcfg-eth1
    - user: root
    - group: root
    - mode: 644
    - order: 2

/etc/sysconfig/network-scripts/ifcfg-bond0:
  file.managed:
    - source: salt://kvm/ifcfg-bond0
    - user: root
    - group: root
    - mode: 644
    - order: 3

/etc/sysconfig/network-scripts/ifcfg-br0:
  file.managed:
    - source: salt://kvm/ifcfg-br0
    - user: root
    - group: root
    - mode: 644
    - template: jinja
    - order: 4
    - defaults:
      ip: {{ grains['bondnet_ip'] }}
      netmask: {{ grains['bondnet_mask'] }}
      gateway: {{ grains['bondnet_gw'] }}

/etc/init.d/network restart>/tmp/out:
  cmd.wait:
    - watch:
      - file: '/etc/sysconfig/network-scripts/ifcfg-br0'

Once I added the "- order" lines, everything worked exactly the way that I wanted. My KVM server was built and ready to be added to CloudStack with the proper network configuration.
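
Since a mistake in these files can knock a host off the network, it is worth doing a dry run before applying a state like this for real; the target here is just an example:

# salt 'kvm*' state.highstate test=True
# salt 'kvm*' state.highstate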

Posted in SaltStack

Saltstack and Bonding – Part 1

For the last week or so I have been working on creating a SaLt State (SLS) file to build my KVM servers. Most of the SLS file was straightforward: adding a custom repo for CloudStack and installing the necessary packages for KVM and CloudStack. The most difficult part revolved around bonding and bridging the interfaces.

First, a little bit about my setup. I use a Kickstart server to build all of my hosts. It builds every host to the same minimalist configuration; the only additional thing it does post-installation is install SaltStack. I then use SaltStack to do all of the post-installation work: packages specific to a server type, configuration files that are specific to an environment, and so on.

My SLS file is for all KVM hosts, so I wanted it to take the IP address, netmask, and gateway already assigned to the host when it was kickstarted and then reconfigure the server using the same information. The first challenge that I faced was getting that information.

My first attempt was to set a variable:

- defaults:
  ip: {{ salt['network.interfaces']()['eth0']['inet'][0]['address'] }}

This worked for the initial build, but all future state.highstate runs would fail because eth0 no longer had an IP address assigned; it was now part of the bridge. Being new at SaltStack, I tried to find a way to do some sort of if statement to check whether eth0 had an address assigned, but I could not find one.

In any event, there wasn't a good way for me to get the subnet mask or the default gateway, so I decided to write my own grains to collect the information. I'm still learning Python, so there is probably a much better way to do this, but I was able to come up with the following:

import socket
import fcntl
import struct

# SIOCGIFNETMASK ioctl request number (35099, or 0x891b)
SIOCGIFNETMASK = 0x891b


def _netmask(iface):
    '''
    Return the netmask of the given interface via ioctl
    '''
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(
        fcntl.ioctl(s, SIOCGIFNETMASK, struct.pack('256s', iface))[20:24])


def bondnet_ip():
    '''
    Return the IP address
    '''
    # Provides:
    #   bondnet_ip
    return {'bondnet_ip': socket.gethostbyname(socket.gethostname())}


def bondnet_mask():
    '''
    Return the netmask
    '''
    # Provides:
    #   bondnet_mask
    # Try br0 first (once the bridge exists), then fall back to eth0
    # (the initial build, before the bridge has been configured)
    mask = None
    for iface in ('br0', 'eth0'):
        try:
            mask = _netmask(iface)
            break
        except IOError:
            continue
    return {'bondnet_mask': mask}


def bondnet_gw():
    '''
    Return the default gateway, read directly from /proc/net/route
    '''
    # Provides:
    #   bondnet_gw
    with open('/proc/net/route') as fh:
        for line in fh:
            fields = line.strip().split()
            # Skip anything that is not the default route (destination
            # 00000000 with the RTF_GATEWAY flag set)
            if fields[1] != '00000000' or not int(fields[3], 16) & 2:
                continue
            return {'bondnet_gw': socket.inet_ntoa(
                struct.pack('<L', int(fields[2], 16)))}

Now I can pull them in as grains and set the variables that way:

- defaults:
  ip: {{ grains['bondnet_ip'] }}
  netmask: {{ grains['bondnet_mask'] }}
  gateway: {{ grains['bondnet_gw'] }}
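
One note if you are following along: custom grains live in the _grains directory under the file roots on the master, and they have to be synced out to the minions before the states can use them. Something along these lines does it (the target is just an example):

# salt '*' saltutil.sync_grains
# salt '*' grains.item bondnet_ip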

This has gotten pretty long, so part 2 will focus on how I used the new grains that I created to build my bonded and bridged interface.

Update 11/1:  Part 2 can be found here

Posted in SaltStack

Four Levels of Maturity

There are a lot of written words on maturity models, but none of them have ever seemed to fit my vision for where an infrastructure should be. In my mind, there are 4 primary levels of maturity:

    • Ad-Hoc
    • Standardization
    • Automation
    • Orchestration

Ad-Hoc. At the Ad-Hoc level, everything is a manual process. Every system build is unique.

Standardization. When we reach standardization, everything is still a manual process, but it is a documented, repeatable process.

Automation. From standardization we move into automation. At this point, the documented processes have been used to build tools that automate system builds and management. Tools such as Kickstart, Jumpstart, CFEngine, Chef, and Salt are used to deploy and build systems to exact specifications every time.

Orchestration.  The final phase is tying all of the automation processes together.  Orchestration ties all of the various automation tools together into one process from beginning to end.

I currently fall between standardization and automation. While my server builds and deployments are mostly automated, we still have some manual processes that need to be completed, mostly revolving around IP management and server management (physical and virtual). My goal is complete orchestration.

Posted in Philosophy

Looking to Grow

As a father, Scoutmaster, and leader, one of the key messages that I have for people is to never stop learning. I've never been a big fan of the saying "you can't teach an old dog new tricks", and I have become even less of one as I have gotten older. For me personally, if you stop learning you might as well stop living. There is no real joy in being stuck in the same old routine.

Lately, my opportunity to grow has centered around automation. I have been a big CFEngine user over the last few years, but I have started to become a little frustrated with the complexity of the syntax as well as some of the changes in more recent versions. Recently I started looking into SaltStack, which so far seems extremely promising. I have also started using Python for my programming, especially around APIs.

While I still have a long way to go before I am any sort of expert, I plan on sharing my experiences here on these pages. Since my use cases are a little more complex, or at the very least rare, I have not been able to find much information on the web about how to solve them, so I will share my challenges and solutions here. Hopefully I can help somebody solve a similar problem down the road.

Let the journey begin. Each day is an opportunity to learn something new.

Posted in Site News

Configuring a RHN Satellite Server with a Third Party Cert

Before making any adjustments, I used tar to back up all of the files that I would be messing with.

# tar -cvjf /root/ssl-backup.tar.bz2 /etc/httpd/conf/ssl.* \
/var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT /etc/pki/spacewalk/jabberd/server.pem

The first thing that I wanted to do was change the hostname of my server to something more user friendly. In my environment we have a very specific host naming convention that is extremely useful for determining the location, type, and environment of a server. Those names do not always make great URLs, so I wanted to CNAME it to satellite.example.com.

To change the hostname of the satellite server, you need to use the spacewalk-hostname-rename command. Unfortunately, it checks the hostname in a couple of different ways, including /proc/sys/kernel/hostname, the hostname command, and /etc/sysconfig/network. To make the command work, I temporarily changed the hostname of the box.

# hostname satellite.example.com
# vi /etc/sysconfig/network
HOSTNAME=satellite.example.com

After the hostname change is complete, you can run the spacewalk-hostname-rename command. When you run the command it will generate new certificates for you as well. Make sure you use the correct values for the CA you plan to use.

# spacewalk-hostname-rename

Once you have completed the rename and the services have been restarted, you will need to get the CSR and upload it to your CA. The CSR is located in /root/ssl-build/satellite/server.csr. After you have processed it through your CA and have received the cert, you need to install it on the server. To do that you will need to create a package and then install it.

# rhn-ssl-tool --gen-server --set-hostname=satellite --rpm-only
# rpm -Uvh ./ssl-build/satellite/rhn-org-httpd-ssl-key-pair-satellite-1.0-2.noarch.rpm

The last thing is to set up your Root CA. Copy the Root CA into the RHN-ORG-TRUSTED-SSL-CERT file in /var/www/html/pub, /usr/share/rhn, and /root/ssl-build. Once you have copied the file over, you can rebuild the SSL package for the hosts and copy it to the pub directory. Once you have created the RPM, you need to add the cert to the local database with the rhn-ssl-dbstore command.

# rhn-ssl-tool --gen-ca --rpm-only
# cp rhn-org-trusted-ssl-cert-1.0-1.noarch.rpm /var/www/html/pub/
# rhn-ssl-dbstore --ca-cert=/var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT

If your cert has an intermediate cert in addition to a root cert, you can chain them by combining both into the RHN-ORG-TRUSTED-SSL-CERT file, adding the intermediate followed by the root.
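
Assuming your CA handed you separate intermediate and root files (the file names here are made up), that is just a concatenation in the right order:

# cat intermediate.crt root.crt > /var/www/html/pub/RHN-ORG-TRUSTED-SSL-CERT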

Posted in Linux, RHEL

CFEngine 3 Access Controls

Yesterday I installed CFEngine 3 Enterprise in my home lab to start checking out how it differs from the community edition that we use at work. The installation was extremely easy, and I like how CFEngine uses itself to install and configure its dependencies.

The first promise that I added was one to create my user account and copy my .bashrc and .ssh/authorized_keys files to all my Linux boxes. The account creation worked just fine, but the copying of files kept failing.

At work, we use a separate directory outside of the normal CFEngine structure to store all of the files that we copy over to servers. Inside, the directory is structured like a normal Linux file system. For example, the generic httpd.conf file that we copy to all our servers would be in /cfrepo/etc/httpd/httpd.conf. For files that only go to specific servers or classes of servers, we do the same thing, only we add an extension, such as /cfrepo/etc/ntp.conf.colo1.

While this worked without any issues in the community edition, it didn't work in the Enterprise edition. After increasing the debugging level on both the server and the client, I determined it had something to do with access controls.

After doing some research, I figured out that I needed to update /var/cfengine/masterfiles/controls/cf_serverd.cf and add my directory to the access_rules bundle. The code looks like this:

"/cfrepo"
handle => "cfengine_dir_access_policy",
comment => "Grant Access to the cfengine repository",
admit => { "192\..*"};

The first line is the directory that you want to make available. It is followed by the handle and the comment. Finally, the admit line grants access to anybody on the 192 network.

Once I made the change, the files copied over as expected. I need to do a little more research on how access controls work in the commercial version, but for now I am happy.

Posted in CFEngine, Linux

CFEngine

I have been working with CFEngine for a couple of years now. Up until about a year ago we were strictly running CFEngine 2, using it primarily to enforce compliance with our security policies. We had kicked around using it for more, but with everything else we had to do we never really got around to it.

We started our virtualization efforts about a year ago, and decided since we were going to be rebuilding a lot of systems and retiring many of our old physical boxes, it would be a great time to both move to CFEngine 3 as well as start using it for more than just enforcing some security policies.

We rebuilt our kickstart environment to use only one profile for every server that we build. Rather than using finish scripts to customize each host for its purpose (apache, java app server, db, etc.), we started putting that work into CFEngine. Account creation, package installation, and management of configuration files are all a part of our environment.

The key to our success is our host naming convention. Our convention is nine characters: six that describe the server, followed by a three-digit number. A typical host name might look like this: c0htpr001. The first two characters tell the type and location of the server; in this case, it is a physical server in colo-0 (we use v0 for a virtual server in colo-0). The next two characters describe the type of server, such as an httpd server or a db server. The final two characters indicate the environment it is in: qa, staging, production, etc. The numbers are used to iterate servers that do the same thing.
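
To make that concrete, here is the example name split into its pieces with bash (purely illustrative):

$ host=c0htpr001
$ echo "${host:0:2} ${host:2:2} ${host:4:2} ${host:6}"
c0 ht pr 001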

This convention allows us to create classes in CFEngine to deploy specific files to the correct servers. If our qa server configuration is slightly different from prod, we just create a class called qa_servers. When we combine it with the type of server, such as ht_servers.qa_servers, we can kickstart a new server and let CFEngine configure it for us.

We have come a long way from our CFEngine 2 days. We still have a lot of work to do and a few more challenges to figure out, but we are definitely on the right path.

Posted in CFEngine, Linux, Operations

Switching Distros

Over the course of the last 15 years, I have run many different distros of Linux. My first exposure to Linux was back in 1998, when my buddy and I ordered a couple of different distros from cheapbytes.com, including Red Hat and Caldera. Back then installation was difficult and I was doing it a lot, so I gravitated towards Caldera since it had a game you could play during installation (Pac-Man in 1.2 and Tetris in 2.3, I think).

I have mostly run distros on the Red Hat side of the tree, including Red Hat, Fedora, Mandrake, and openSUSE. I favored Mandrake for a long time, until I took my first pure Linux administrator position (I was primarily a Solaris administrator before that) and the corporation favored SUSE.

I switched my laptop distro to openSUSE and was immediately impressed. The YaST configuration tools are far superior to anything the other distros were doing. Manual configuration was a little trickier, but once I got the hang of how things were done it was pretty straightforward. I became a bit of an openSUSE evangelist, at least with my co-workers.

That lasted until a few weeks ago, when I made the switch to Ubuntu for all my Linux desktop systems. The reason? Encryption. At the risk of sounding a little paranoid, I am uncomfortable with some of the recent pushes by government officials and law enforcement to have more access to data. If my elected officials don't seem to feel that I deserve my privacy, I figured I had better take matters into my own hands.

The switch has been pretty seamless, with the biggest learning curve centered around package management and installation. I am primarily a command-line guy, so figuring out dpkg and apt-get as opposed to yum, zypper, or rpm took a little bit of time, mainly in figuring out the proper switches. I did have to switch from the default Unity desktop to KDE, which is my preferred desktop.

I haven’t run into any issues that I haven’t been able to solve, though there were a few that took a little bit of research. So far, I am pretty happy with the switch.

Posted in Linux