W3tTr3y's blog

A personal technology-focused blog

Encryption Overview

TLS Overview

While any in-depth discussion of certificates, let alone asymmetric encryption, is far beyond the scope of this page, I am constantly surprised by how little people understand about certificates. While I wouldn't consider myself an expert, I understand the basics and can help you understand enough to make good decisions. What's truly scary is that this knowledge is enough to get you called a crypto-geek in many circles.

Symmetric Encryption

Symmetric encryption can best be described as having a shared secret: if you know that secret, then you can both encrypt and decrypt a message.

Encryption
plain text message + secret  ->  cipher text

Decryption
cipher text        + secret  ->  plain text message

While symmetric encryption is by far the easiest to understand, the problem is that anyone who knows the secret can both encrypt and decrypt: a party that intercepts the messages and knows the secret can alter the communications, and securely sharing the secret in the first place is hard. On the positive side, symmetric encryption is fast, so in practice asymmetric encryption is used to protect the secrets and symmetric encryption is used to protect the actual data.
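To make this concrete, here's a quick symmetric round trip with the openssl command line (the filenames and passphrase are just placeholders):

# Encrypt with a shared secret
openssl enc -aes-256-cbc -salt -in message.txt -out message.enc -pass pass:sharedsecret

# Decrypt with the identical secret
openssl enc -d -aes-256-cbc -in message.enc -out message.txt -pass pass:sharedsecret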

Asymmetric Encryption

For asymmetric encryption, a key pair is generated. This key pair has a really interesting property: anything encrypted with one half is decryptable only with the other half. More interestingly, even if you have a cipher text and the key that was used to encrypt it, you cannot decrypt the message without the other key. When a key pair is generated, one half is designated public while the other is designated private.

Encryption
plain text message + public key   ->  cipher text

Decryption
cipher text        + private key  ->  plain text message

Note: Alternatively, you can use the private key in the encryption step, but then you must use the public key during the decryption step.
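To see that in action, here's a sketch with openssl (2048-bit RSA; the filenames are placeholders):

# Generate a key pair and extract the public half
openssl genrsa -out private.pem 2048
openssl rsa -in private.pem -pubout -out public.pem

# Encrypt with the public key; only the private key can decrypt
openssl rsautl -encrypt -pubin -inkey public.pem -in message.txt -out message.enc
openssl rsautl -decrypt -inkey private.pem -in message.enc -out message.txt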

Certificate Creation

For the purposes of this conversation, we’ll assume you are attempting to create a certificate signed by a trusted CA.

Process Overview

  1. Generate an asymmetric key pair
  2. Generate a certificate signing request (CSR)
  3. Submit the CSR to the certificate authority (CA)
  4. Obtain the certificate from the CA
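For example, steps 1 and 2 might look like this with openssl (the key size and subject are illustrative):

# Generate the key pair, then a CSR containing the public half
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=www.example.com"

server.csr is what you submit in step 3; the signed certificate you get back in step 4 is what you install.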

Splunk: One SSL Certificate to Rule Them All

Background

In most Splunk setups you have 2-3 SSL certificates in use:

  1. $SPLUNK_HOME/etc/auth/splunkweb/.pem for SplunkWeb, typically port 8443
  2. $SPLUNK_HOME/etc/auth/server.pem for Splunk's management port, typically port 8089
  3. A certificate for the forwarding port, typically port 9997

Typically it doesn't matter that there are so many certificates; as long as the forwarders know what certificate to expect when connecting to an indexer on 9997, and Splunk internally can validate the cert on 8089, there is no reason for those certificates to be signed by a 3rd party CA.

For the SplunkWeb certificate, a user will be connecting with a browser; if you have a private CA whose certificate is installed in all of your company's browsers, then great, use it. For our purposes, we have standardized on a widely trusted 3rd party CA, so we have a certificate signed by them.

Desire for Fewer Certificates

Now comes the fun part: I'm managing two SSL certs on each of our Splunk infrastructure servers: the SplunkWeb and splunkd certs on the search heads, and the data (9997) and splunkd certs on the indexers. Being the lazy person that I am, I'd love to get that down to one cert to make renewals easier.

Another factor pushing us toward a 3rd party CA is that our risk people want every certificate signed by one. There's nothing magical about a 3rd party CA, so that requirement doesn't increase security in any way, but it means I have to fill out an exception for every self-signed or internally signed certificate; switching to a 3rd party signed certificate makes my life easier.

Combine key and cert into one

In $SPLUNK_HOME/etc/auth/splunkweb, combine the key and certificate into one file (e.g. cd /opt/splunk/etc/auth/splunkweb && cat server.key server.pem > ../server.pem )

where

  • splunkweb/server.key is the private key
  • splunkweb/server.pem is the certificate chain (certificate and any intermediates)
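To sanity-check the combined file, you can have openssl print the certificate portion (openssl reads the first certificate in the file):

openssl x509 -in /opt/splunk/etc/auth/server.pem -noout -subject -enddate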

Then in web.conf update:

  1. privKeyPath = etc/auth/server.pem
  2. caCertPath = etc/auth/server.pem
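Put together, the relevant web.conf stanza might look like this (a sketch, assuming you override in $SPLUNK_HOME/etc/system/local/web.conf and already have SSL enabled):

[settings]
enableSplunkWebSSL = true
privKeyPath = etc/auth/server.pem
caCertPath = etc/auth/server.pem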

Splunk DMC: No PDF Export Option

We have our own dashboards that predate the DMC, but the memory one broke. While I typically prefer to use sar, since its output works across our systems, this time I used the DMC and noticed that there was no export-to-PDF button.

Aside: The quick workaround is to append &hideEdit=false to the dashboard URL.

I made a feature request (copied below to jog my memory):

It would be great if you didn’t hide the edit controls in the DMC.

My specific use case was that I wanted to export a PDF of a dashboard to show support how upgrading caused our deployment server / license master to use significantly more memory. I don't think it's unreasonable for people to want to export copies of those dashboards for reference when having discussions like this.

At a larger level, I could see it being useful to have the option to show/hide the edit controls separately from the print/export buttons. If you want to take that as a second feature request (assuming that's a different team), that's fine. Within the context of the DMC, I think it would be great to allow people to edit, as the DMC could be a built-in example of how powerful dashboards can be and help serve as a reference.

SSH Config: Dynamically Appending Domain (Host -> FQDN via .ssh/config)

I use the ssh config file to set a number of options; for several years now I've known there had to be a way to dynamically add a domain name to a hostname, but I hadn't been able to figure out how to do it.

The TL;DR version: the %h token holds the host value, so HostName %h.example.com will append .example.com to a matched host.

For example, the username configured on my laptop is different from the one on our servers, so I have rules in place that automatically set my user.

Host db99
    User jdoe

In reality, we have a number of servers, so I have a rule similar to:

Host db? log?? web?
    User jdoe

My problem has been a bad habit: while I'm at the office, I just use hostnames (e.g. ssh db1), but when I VPN in from home, I must use fully qualified domain names (e.g. ssh db1.example.com). You can use the HostName setting to set the fully qualified domain name, but the only way I could find was to make a setting per server. For example:

Host db1
    User jdoe
    HostName db1.example.com

Host db2
    User jdoe
    HostName db2.example.com

The Host alias and the HostName need not match:

Host fred
    User jdoe
    HostName db99.example.com

As you can probably guess, that works, but it really doesn't scale. I wrote a bash script to loop over all of our servers and generate the configuration, so it's not completely unmanageable, but I was convinced there was a better way. Unfortunately my searches turned up blank, so I've suffered in pain for years.

Today I just happened to come across an askubuntu post (https://askubuntu.com/questions/281988/can-i-configure-ubuntu-to-append-a-domain-to-the-end-of-an-ssh-hostname-request/760686#760686?newreg=14a526c965824d93854059e1ed7e8988) asking this exact question, and the answer is so simple: %h expands to the hostname.

Thus I can go back to my previous entry, just with one line added:

Host db? log?? web?
    User jdoe
    HostName %h.example.com

As a slight aside, in case I type the fully qualified domain name, I also have:

Host *.example.com
    User jdoe

I feel really dumb that it's 2016 and I just found this setting, but hopefully this helps others in a similar situation.

Cron Jobs Not Executing

Last week we started getting some interesting error messages on the servers I manage:

Dec  1 07:32:28 hostexample kernel: type=1101 audit(1448977801.109:10043532): pid=28074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:crond_t:s0-s0:c0.c1023 msg='op=PAM:accounting grantors=? acct="root" exe="/usr/sbin/crond" hostname=? addr=? terminal=cron res=failed'

Background

My first reaction was to assume it was due to some of the system hardening I have been implementing, although I was a bit confused since the last change was nine days before we started getting this error message. Since I'm implementing the changes via Puppet, I was concerned not just that the most recent changes were at fault, but that something implemented earlier might be working (or breaking) inconsistently; the nine-day gap since the last Puppet update was at least one source of comfort.

Root’s Password Expired

While the leads I'd found online weren't incorrect that PAM was causing cron to fail, the underlying issue was that root's password had expired.

# chage -l root
Last password change                               : Aug 26, 2015
Password expires                                   : Nov 24, 2015
Password inactive                                  : never
Account expires                                    : never
Minimum number of days between password change     : 7
Maximum number of days between password change     : 90
Number of days of warning before password expires  : 7

We only had the distribution's stock cron jobs running, and they were all configured to run as root. If you're lucky, you'll have a more diverse environment that helps narrow it down to one specific account.

If an expired password isn't your issue, I found some other interesting links suggesting adding "+ : root : cron crond" to /etc/security/access.conf (http://linuxfollies.blogspot.com/2014/06/root-cron-jobs-and-etcsecurityaccessconf.html), adjusting /etc/pam.d/system-auth (TODO: find link), or editing /etc/cron.allow (although /etc/cron.allow won't block root).

Immediate Action

After confirming that this was the issue, I pushed root's password expiration date way into the future. This prevents the password from expiring again, which is what broke cron.
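Concretely, chage can do this (a sketch; -1 disables password aging for the account entirely):

# Stop root's password from expiring, then verify
chage -M -1 root
chage -l root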

I also configured our monitoring system to check once a day whether root's password is nearing expiration. This will provide a safety net in case things get misconfigured again. TODO: add a link to a new article about Nagios monitoring

Planned Actions

Our current security requirements dictate that all passwords expire; at a high level that's not a bad policy. The problem is that the actual auditing looks at two things: PASS_MAX_DAYS in /etc/login.defs and the value in /etc/shadow (TODO: confirm /etc/shadow vs /etc/passwd)

Meeting Technical Requirements

To meet the technical requirements, I want to continue to allow our configuration management system to configure root's password to expire. This way we will pass the audit.

Meeting High Level Requirements

To meet the spirit of the requirement, I want to lock the root password.
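Locking it is a one-liner (passwd -l prefixes root's hash in /etc/shadow with !, so the password can no longer be used to authenticate):

passwd -l root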

While that's not a bad policy in general, there are a number of considerations here:

  1. There are a small (minuscule) number of people with access to this system
  2. All users have sudo access
  3. We already block remote access via root

If the user is local but has forgotten their password, they can reboot the server using media they provide and reset the root password.

Thus the only situation where we are leaving someone high and dry is if they have ssh keys to remotely access the box but can't remember their own password.

My guess is we’ll deem that an acceptable risk. If not, we can always create a backdoor account where the necessary password / keys are stored in a break-the-glass style safe.


Big Data & Security

While I am biased as I work on a Big Data implementation within a security group,

Connecting to an Unconfigured iLO Card

I heard back from HPE support about an open case we have with one of our RAID controller cards; they wanted me to check the firmware version via iLO (Integrated Lights-Out), HPE's out-of-band management card. While I asked for alternatives (like checking from the operating system, since I had an open ssh connection), they insisted on this methodology. Things then took a turn for the worse: I've been waiting on our network redesign to put our out-of-band cards on the network, and HPE's iLO configuration instructions started with rebooting so I could hit F10 to enter Intelligent Provisioning. Since this was a production server with an issue I needed to resolve, having to reboot just to check a firmware version was disappointing, and I was determined to find an alternative.


CentOS 7 USB Kickstart

While I really enjoy my current employment opportunity, servers here are typically installed by students, and we have several generations of servers all installed by different people, so there can be inconsistencies. Since our current servers' distribution is not going to be supported by the vendor for much longer and we recently purchased 15 new servers, it seemed like the perfect opportunity to expand our use of configuration management (Puppet in our case) and to introduce Kickstart since we're migrating to CentOS.

While I used Kickstart with Red Hat Linux, RHEL, Fedora, and CentOS at a previous position, that was over five years ago and was on a network with DHCP, TFTP, and NFS servers. I'm currently in a security group with no DHCP, TFTP, or NFS servers, so I really wanted to find a way to do everything off of USB. Since that proved to be more challenging than I expected, I wanted to write up my solution so I could find it again and hopefully spare someone else some of the pain.

Naïve Attempts

The CentOS wiki suggests using dd to copy the ISO to the USB drive:

dd if=CentOS-7.0-1406-x86_64-Everything.iso of=/dev/sdb

Based on that, my first (very naïve) attempt was to dd the ISO over and then re-partition the drive to create a partition in the spare space to hold the kickstart files. There's still a part of me that feels there has to be a way to make this work, but I never got it to.

Progress

In February, I first posed the question to the SuperUser community, and thankfully karlacio posted an answer yesterday (April 22, 2015) which referenced gist 4b9ad551cb6c35870d7c

My first deviation was that I used labels, so my syslinux.cfg had an append line like:

append initrd=initrd.img inst.stage2=hd:LABEL=DATA:/ ks=hd:LABEL=BOOT:/ks/laptop.cfg
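For reference, the labels themselves can be set with something like the following (a sketch; the device names are illustrative and I'm assuming FAT filesystems on both partitions):

# Label the boot partition BOOT and the partition holding the ISO contents DATA
dosfslabel /dev/sdb1 BOOT
dosfslabel /dev/sdb2 DATA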

EFI Support

While that worked great for systems with BIOSes, it failed on machines with EFI. After reading up some, I knew the EFI folder would be required, since EFI firmware looks for a partition containing that folder instead of using an MBR. So I copied the EFI folder from the ISO to the BOOT partition.

Similar to the syslinux.cfg file, I had to update EFI/BOOT/grub.cfg to reference the updated location; this change was quite simple: I just changed the search line to read:

search --no-floppy --set=root -l 'BOOT'

In looking at the grub.cfg file, I noticed that it referenced files under images, so I copied the DVD's images folder over to BOOT/images. In order to have room for it, I had to start over and make the first partition 530MB. You could go larger, but since I had two 8GB USB3 thumb drives, I didn't want to waste too much space.
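Assuming the ISO contents and the BOOT partition are mounted at /mnt/iso and /mnt/boot (illustrative paths), those copies were along the lines of:

cp -r /mnt/iso/EFI /mnt/boot/
cp -r /mnt/iso/images /mnt/boot/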

I was quite frustrated when that didn't work, but I finally found mgorven's answer, which suggested creating BOOT/EFI/BOOT/startup.nsh containing one line:

BOOTX64

Conclusion

Using this method I was able to create USB-based installation media that booted on my MacBook Pro, our newer EFI-based systems, and our older BIOS-based ones.

Update – 1/9/2016 – I was able to use this method to successfully create USB-based media for RHEL 7.

Iptables

These are quick notes from trying to debug why we weren't seeing any syslog entries.

List the rules with their line numbers:

iptables -L -n --line-numbers

Delete a rule by chain and line number:

iptables -D IN_public 2

You can also delete a rule by repeating its full specification instead of its number. For example, after inserting a logging rule:

iptables -I input_ext 7 -s 128.146.106.98 -p udp --dport syslog -m limit --limit 10/min -j LOG --log-prefix "DLP: " --log-level 4

the same rule can be removed with:

iptables -D input_ext -s 128.146.106.98 -p udp --dport syslog -m limit --limit 10/min -j LOG --log-prefix "DLP: " --log-level 4
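To confirm a change took effect, the listing command from above also works per chain:

iptables -L input_ext -n --line-numbers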