June 7, 2010

Running dnsmasq behind my router

Filed under: tech — Ernest Hymel @ 10:07 am

Tonight I added dnsmasq to my Fedora 12 server, and I had to do some digging to figure out how to get the other client computers on my internal LAN to see the new DNS server.

There is an EXCELLENT setup tutorial here that gave me *almost* all I needed. I did not use the DHCP feature of dnsmasq, so I ignored all those parts. Getting things working was a breeze: just a matter of running yum install dnsmasq, then editing /etc/dnsmasq.conf and /etc/resolv.conf per the recommendations of the reference above. Here are my versions of those files:

/etc/dnsmasq.conf: All of these settings are optional and well documented in the link above, but for the record they are the only things I changed:

# Configuration file for dnsmasq.
domain-needed
bogus-priv
strict-order
interface=eth0

/etc/resolv.conf: Taken largely from the link above, but modified for my ISP:

#Allow applications on the machine hosting dnsmasq to use it too
nameserver 127.0.0.1

#Google DNS
nameserver 8.8.8.8

#OpenDNS
nameserver 208.67.222.222
nameserver 208.67.220.220

#Time Warner Cable Business Class
nameserver 24.25.5.60
nameserver 24.25.5.61
nameserver dns4.rr.com

As is, dnsmasq worked great on the server itself, and domain name lookups were substantially faster once they were cached locally (generally from 100ms down to 1ms or less). That left two problems.
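
A simple way to see the caching effect is dig’s query-time report (a sketch; dig comes from the bind-utils package, and the timings shown are illustrative):

# dig @127.0.0.1 www.example.com | grep 'Query time'
;; Query time: 103 msec
# dig @127.0.0.1 www.example.com | grep 'Query time'
;; Query time: 0 msec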

Problem 1: Every time the network interface resets, NetworkManager changes /etc/resolv.conf back to whatever the router tells it to be. To prevent this, you have to add a line (PEERDNS=no) to the ifcfg file for the network interface. In this case, my server is wired to the router as eth0 and configured in /etc/sysconfig/network-scripts/ifcfg-eth0. It has a static local IP address (192.168.0.10), and my gateway (router) is 192.168.0.1, so the relevant parts of this file are:

DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.0.10
GATEWAY=192.168.0.1
PEERDNS=no
NAME="eth0"

The important line here is PEERDNS=no. Problem 1 solved.

Problem 2: The bigger problem was how to get my clients to use this. I have a mix of Windows clients, legacy equipment, and Linux workstations already configured to connect to my network. I actually have two wireless routers, one running 802.11n (I’ll call that one Router 1) and another running 802.11b (Router 2) for my legacy equipment. The question is: how do my routers tell the clients to use my dnsmasq server?

The answer is that it depends on the router and on the client. In my case, Router 1 is a D-Link DIR-655. The setting I needed to change on this router was to uncheck “Enable DNS Relay” under “Setup” -> “Network Settings” -> “Router Settings”. Leaving this enabled makes DNS a pass-through to my ISP’s DNS, the very thing I was trying to avoid. I also set the Primary DNS Server setting to the IP of my dnsmasq server (192.168.0.10). As a backup, I set the Secondary DNS Server to point to OpenDNS (208.67.222.222).

For Router 2, a Linksys WRT54GS, there is no similar setting, but instead I changed the Primary DNS to the IP address of Router 1 (192.168.0.1).

I need to experiment a little more on the client side, but on my wife’s Windows laptop it looks like I have to tell Windows to use a specific DNS server, which I manually configured to point to my dnsmasq server (192.168.0.10). My Fedora laptop had no problems picking that information up from the router during the DHCP process.

Hopefully this helps someone!

March 18, 2009

Checking the need to run e2fsck

Filed under: tech — Ernest Hymel @ 12:04 pm

I back up regularly, four times daily, with the fantastic utility rsnapshot, and between runs I keep the backup disk safe by remounting it read-only. How I remount programmatically will be the subject of another post; in short, the backup disk has to be in read-write mode during the backup, but otherwise I keep it read-only so none of my backups get accidentally overwritten.

A downside is that after a while, mount starts complaining about the disk having been remounted too many times without the filesystem being checked by e2fsck. It would be easy to set the number of mounts between checks to a very high value using tune2fs (tune2fs -c max-mount-counts), but eventually I would run into the same problem. Besides, there is probably some logic behind the default max-mount-count being set to something low, like 39.

So, the magic of bash scripting and cron jobs can help out.

First, the script needs to figure out where my backup disk is mounted. In my /etc/fstab file, I use a mount point of /.snapshots. Here’s the corresponding line:

UUID=0dc5c715-96aa-4299-aec4-0a226da191c9 /.snapshots             ext3    ro,noauto,noexec        0 0

So, if I run /bin/mount from the command line, I can figure out the device name of my backup disk:

# /bin/mount |grep .snapshots
/dev/sdc1 on /.snapshots type ext3 (ro,noexec)

See the final script below for teasing out the device name from this line using sed.

Next, dumpe2fs gives us lots of information about a device’s filesystem, including how many times it has been mounted (the mount count) as well as the maximum mount count set by tune2fs (or set by default). See the script below for details.
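
For reference, the two lines the script pulls those numbers from look roughly like this (illustrative output; your counts will differ):

# /sbin/dumpe2fs -h /dev/sdc1 2>/dev/null | grep -i 'mount count'
Mount count:              41
Maximum mount count:      39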

Now, the script needs to get the work done by running e2fsck, but we need to be careful: e2fsck should only be run on UNmounted devices, so if we can’t unmount the device, we shouldn’t run e2fsck. Also, I don’t want to run this script in the middle of a backup session. Fortunately, rsnapshot lets us know if that’s the case by writing its pid to a file, /var/run/rsnapshot.pid, while it’s running. We can also take advantage of this by writing our own rsnapshot.pid file so that a backup session doesn’t start while e2fsck is running.

Finally, I want this to run from cron so I don’t have to think about it. Unfortunately (or perhaps fortunately), e2fsck won’t run without an interactive session; it won’t run unattended from a script. So I need my cron job to report to me (through email) when e2fsck needs to be run from the command line. The script below therefore accepts a command-line option (-reportonly) for exactly that. My /etc/cron.d/backup-server looks something like this:

MAILTO=root

12 */4 * * * root /bin/nice /usr/bin/rsnapshot -v hourly
40 0 * * * root /usr/local/bin/fsck_backup_disk -reportonly

This runs my backup routine every four hours at 12 minutes past the hour, and the script below (which I named fsck_backup_disk) runs daily at 40 minutes after midnight. Cron then notifies me through email if I need to go to the command line and run fsck_backup_disk manually.

Here’s my full /usr/local/bin/fsck_backup_disk:

#! /bin/bash

# Determine whether the backup disk needs e2fsck; run it, or just report with -reportonly.

backupDisk=`/bin/mount |/bin/grep snapshot |/bin/sed 's/\(\/dev\/sd[a-h][1-3]\).*/\1/'`

mountCount=`/sbin/dumpe2fs -h $backupDisk 2> /dev/null |/bin/grep "Mount count" |/bin/sed 's/^.* \([0-9]*\)/\1/'`

maxMountCount=`/sbin/dumpe2fs -h $backupDisk 2> /dev/null |/bin/grep "Maximum mount count" |/bin/sed 's/^.* \([0-9]*\)/\1/'`

# If mountCount > maxMountCount then we will need to run e2fsck
# but not on a mounted filesystem! We will need to umount, but make sure it's not in use first.

# If this script is run from cron, pass the option "-reportonly" since e2fsck won't run without the command line.

if [ -z "$1" ]; then
        /bin/echo -n "The backup disk ($backupDisk) has been mounted $mountCount out of maximum $maxMountCount times. ";
        if [ $mountCount -gt $maxMountCount ]; then
                # check whether a backup is running.
                if [ ! -f "/var/run/rsnapshot.pid" ]; then
                        /bin/echo -e "Checking backup disk file system.\n";
                        /bin/touch /var/run/rsnapshot.pid;
                        /bin/umount $backupDisk && /sbin/e2fsck $backupDisk
                        /bin/mount $backupDisk;
                        /bin/rm -f /var/run/rsnapshot.pid;
                fi
        else
                /bin/echo "No need to check backup disk file system.";
        fi
elif [ "$1" == '-reportonly' ]; then
        if [ $mountCount -gt $maxMountCount ]; then
                /bin/echo -n "The backup disk ($backupDisk) has been mounted $mountCount out of maximum $maxMountCount times. "
                /bin/echo "Please run /usr/local/bin/fsck_backup_disk.";
        fi
fi
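
Run by hand, the two modes behave something like this (output illustrative, matching the echo statements above):

# /usr/local/bin/fsck_backup_disk -reportonly
The backup disk (/dev/sdc1) has been mounted 41 out of maximum 39 times. Please run /usr/local/bin/fsck_backup_disk.
# /usr/local/bin/fsck_backup_disk
The backup disk (/dev/sdc1) has been mounted 41 out of maximum 39 times. Checking backup disk file system.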

December 25, 2008

Bayes scores stopped appearing in amavis email

Filed under: tech — Ernest Hymel @ 9:23 pm

Out of the blue today my email started filling with spam that I normally don’t see. A quick look at the headers showed a notable lack of a bayes score from spamassassin. Hmm.

A look in /etc/mail/spamassassin/local.cf showed no change, with the option to use Bayes still turned on.

use_bayes 1
bayes_path /var/spool/amavisd/.spamassassin/bayes

I took a look at the directory mentioned in the second config line and saw that the files bayes_journal and bayes_toks were both owned by root instead of my amavis user. I checked my backups and, sure enough, they should be owned by amavis (user and group). I have no idea how the change happened, but nevertheless this command:

chown amavis.amavis /var/spool/amavisd/.spamassassin/*

has fixed the problem.
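
For future reference, a quick way to check that the ownership is still correct (the path comes from the bayes_path setting above):

# ls -l /var/spool/amavisd/.spamassassin/ | grep bayes

Both bayes_journal and bayes_toks should list amavis as the user and the group.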

August 1, 2008

Changing passwords in squirrelmail using change_passwd plugin and poppassd for Fedora

Filed under: tech — Ernest Hymel @ 10:09 am

One of my clients uses squirrelmail for their email. Today I added the change_passwd plugin, which relies on poppassd on the backend to make it work. If anyone has a better way, I’d love to hear about it.

After downloading and installing the plugin, the first clue that I had more to do came when I tried to change a password from within squirrelmail and got the message Error: Connection refused (111). Hmmm. The config file for the plugin was set to use poppassd on the backend, and a Google search suggested the obvious: make sure poppassd is installed and listening on port 106.

I followed the instructions on the poppassd site, first downloading and then compiling the source as directed. So far so good. To get poppassd to listen on port 106, I needed xinetd, so I ran yum install xinetd and then created a poppassd config file for xinetd… here’s my /etc/xinetd.d/poppassd:

service poppassd
{
        disable = no
        socket_type             = stream
        protocol                = tcp
        wait                    = no
        user                    = root
        server                  = /usr/local/bin/poppassd
        only_from               = 127.0.0.1
        log_on_success  += HOST DURATION
}

Then /etc/init.d/xinetd start had me up and running, as confirmed with netstat:

# netstat -an | grep 106
tcp 0 0 :::106 :::* LISTEN
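
To confirm the daemon actually answers, you can poke port 106 by hand (a rough sketch; the exact greeting string varies by poppassd version, but any 200 response means the xinetd wiring works):

# telnet 127.0.0.1 106
Trying 127.0.0.1...
Connected to 127.0.0.1.
200 poppassd hello, who are you?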

July 4, 2008

Details about brute force ssh attack

Filed under: tech — Ernest Hymel @ 1:22 pm

I use a swatch-based approach to monitoring my /var/log/secure* log files for brute force attacks on my ssh server. Today I was curious about which usernames were being used to try to get into the system.

This command tells me what I want:

# cat /var/log/secure* | cut -d " " -f7-12 |grep Failed |cut -d " " -f6 |sort |uniq -c

Output shows this:

      5 admin
      1 alias
      1 fluffy
      2 guest
      1 recruit
     15 root
      1 sales
      1 staff
      3 test

As you can see, ‘root’ dominates the list. Naturally, my ssh config file (/etc/ssh/sshd_config) does not allow root logins. And for good reason!
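
For anyone who hasn’t locked this down yet, the relevant sshd_config directives look something like this (a minimal sketch; restart sshd after editing):

# /etc/ssh/sshd_config (excerpt)
PermitRootLogin no      # refuse root logins entirely
MaxAuthTries 3          # limit password guesses per connection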

May 28, 2008

Using squid, squidGuard, havp, and ramdisk as an antivirus proxy & internet filter

Filed under: tech — Ernest Hymel @ 1:23 pm

For a couple of years I have used havp in my home. Since my web/email server is in my home and already running clamav as my antivirus, once I learned that I could route all my internet traffic through the havp proxy, it was a no-brainer to add this as another layer of protection for my home computers. Setup was straightforward: I just downloaded it, ran the usual configure/make/make install, and it worked. The only tricky part was setting up mandatory locking on the file system. Initially I had a spare hard drive in the server that I simply remounted with mandatory locking, and all was fine.

Then a few things happened around the same time. First, I needed that spare hard drive to go into a refurbished computer that went to my father-in-law. Time to use a ramdisk for my havp scanning filesystem. Next, my son (now 10) started using the web more. Time for a filter.

This post documents how I set up squid with squidGuard as a content filter and havp as an antivirus proxy, with a ramdisk as the havp scanning filesystem. Getting the ramdisk to mount with each reboot took some work, so here’s what I did.

First, the ramdisk. Hopefully someone can correct me, but I found that I could not simply put a line in my /etc/fstab for the ramdisk and have it mounted on each reboot. Reboots are pretty rare, but that’s the problem… they are so rare that when they do happen, I have to re-learn (via Google) how to create the ramdisk properly, and then, when starting havp fails, I have to remember to reset the permissions on the mounted ramdisk, and so on. Time for a script. Here’s mine, /usr/local/bin/mount-havp-ramdisk.sh:

#! /bin/bash
# HAVP requires a filesystem with mandatory locks.
# I use a ramdisk for the filesystem, which must be created
# before use by HAVP.
# The script is called from the /etc/init.d/havp startup script,
# and verifies that the ramdisk exists and is mounted, and if not
# it creates it and sets proper permissions.

# Set some variables
RAMDISK=/dev/ram0
MOUNTPOINT=/var/tmp/havp
HAVPUSER=havp

#
# If the ramdisk already exists and is mounted, then there is no need to continue.
#
MP="`/bin/mount |/bin/grep $RAMDISK`"
if [ "$MP" != "" ]; then
        # ramdisk is mounted; exit with success.
        exit 0;
fi

#
# Since ramdisk not mounted, we won't assume it exists.
# First we'll create the ramdisk, then mount it with mandatory locking
# and finally set permissions
#
/sbin/mke2fs -q -m 0 $RAMDISK && \
        /bin/mount -o mand $RAMDISK $MOUNTPOINT && \
        /bin/chown $HAVPUSER:root $MOUNTPOINT && \
        /bin/chmod 0750 $MOUNTPOINT
exit $?
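
Running the script by hand is an easy way to check it (a sketch; the mount output line is illustrative, and mke2fs creates an ext2 filesystem by default):

# /usr/local/bin/mount-havp-ramdisk.sh && echo "ramdisk ready"
ramdisk ready
# /bin/mount | /bin/grep ram0
/dev/ram0 on /var/tmp/havp type ext2 (rw,mand)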

As noted in the script, this gets called from the startup script for havp (/etc/init.d/havp). I made a few modifications to the havp startup script provided by its author. First, within the start section, I added a call to my mounting script. Second, I made the startup script compatible with chkconfig on my Fedora server, for easy management of starting and stopping havp across reboots. Finally, the init.d script provided by the author doesn’t really have a working “status” function, so I hacked around until it worked the way I wanted. Here’s my /etc/init.d/havp:

#!/bin/sh
#
# Provides: havp
# chkconfig: - 87 26
# pidfile: /var/run/havp/havp.pid
# config: /usr/local/etc/havp/havp.config
# Short-Description: starting and stopping HTTP AntiVirus Proxy
# Description: HAVP provides a proxy through which HTTP requests are \
# routed with antivirus on the response sent back to the client
#
####
# This init-script tries to be LSB conform but platform independent.
#
# Therefore check the following two variables to fit to your requests:
# HAVP_BIN HAVP_CONFIG PIDFILE
# Any configuration of HAVP is done in havp.config
# Type havp --help for help and read havp.config you should have received.

HAVP_BIN=/usr/local/sbin/havp
HAVP_CONFIG=/usr/local/etc/havp/havp.config
PIDFILE=/var/run/havp/havp.pid
HAVP=havp

# Return values acc. to LSB for all commands but status:
# 1       generic or unspecified error (current practice)
# 2       invalid or excess argument(s)
# 3       unimplemented feature (for example, "reload")
# 4       user had insufficient privilege
# 5       program is not installed
# 6       program is not configured
# 7       program is not running
# 8-99    reserved for future LSB use
# 100-149 reserved for distribution use
# 150-199 reserved for application use
# 200-254 reserved
# Note that starting an already running service, stopping
# or restarting a not-running service as well as the restart
# with force-reload (in case signaling is not supported) are
# considered a success.

reload_havp()
{
        echo "Reloading HAVP ..."
        PID="`cat $PIDFILE`"
        if [ "$PID" != "" ]; then
                kill -HUP "$PID" >/dev/null 2>&1
                if [ $? -ne 0 ]; then
                        echo "Error: HAVP not running"
                        exit 1
                fi
        else
                echo "Error: HAVP not running or PIDFILE not readable"
                exit 1
        fi
        exit 0
}

case "$1" in
        start)
                echo "Starting HAVP ..."
                # mount ramdisk
                /usr/local/bin/mount-havp-ramdisk.sh
                if [ $? -ne 0 ]; then
                        echo "Error: Could not mount ramdisk; cannot start HAVP"
                        exit 1
                fi
                if [ ! -f $HAVP_BIN ]; then
                        echo "Error: $HAVP_BIN not found"
                        exit 5
                fi
                $HAVP_BIN -c $HAVP_CONFIG
                exit $?
                ;;

        stop)
                echo "Shutting down HAVP ..."
                if [ ! -f "$PIDFILE" ]; then
                  echo "Error: HAVP not running or PIDFILE unreadable"
                  exit 1
                fi
                PID="`cat $PIDFILE`"
                if [ "$PID" != "" ]; then
                        kill -TERM "$PID" >/dev/null 2>&1
                        if [ $? -ne 0 ]; then
                                echo "Error: HAVP not running"
                                exit 1
                        fi
                else
                        echo "Error: HAVP not running or PIDFILE unreadable"
                        exit 1
                fi
                sleep 2
                exit 0
                ;;

        restart)
                echo "Shutting down HAVP ..."
                $0 stop >/dev/null 2>&1
                $0 start
                exit $?
                ;;

        reload-lists)
                reload_havp
                ;;

        force-reload)
                reload_havp
                ;;

        reload)
                reload_havp
                ;;

        status)
                PID="`cat $PIDFILE 2>/dev/null`"
                if [ "$PID" != "" ]; then
                        echo "$HAVP (pid $PID) is running..."
                        exit 0
                else
                        echo "$HAVP not running..."
                        exit 0
                fi
                ;;

        *)
                echo "Usage: $0 {start|stop|status|restart|force-reload|reload|reload-lists}"
                exit 0
                ;;
esac

The majority of this is straight from the provided sample script. With my modifications, a simple command (chkconfig havp on) makes sure that havp comes up with each reboot, and exits cleanly when the system goes down.
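
For completeness, the chkconfig/service commands involved (standard Fedora usage; the chkconfig: header in the script above is what makes this work):

# chkconfig --add havp      # register the init script
# chkconfig havp on         # start havp in the default runlevels at boot
# service havp start        # start it right now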

OK, now to get havp and squid/squidGuard to work together.

For havp, you must edit havp.config and at a minimum enable a scanner… I enabled the ClamAV Library Scanner, the preferred method for havp. See the havp.config file for details; it is pretty straightforward. Note that the scanner temp files default to /var/tmp/havp/, which is the mount point for my ramdisk. The havp user (which I had to create) must have write permission on this directory; my mount-havp-ramdisk.sh script takes care of those permissions. Also note that the default port for havp is 8181, meaning you could run an antivirus-only proxy off this port without involving squid at all.
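
For reference, the handful of havp.config lines involved look roughly like this. The directive names here are recalled from the sample havp.config and should be double-checked against your version; the values match the setup described in this post:

USER havp
GROUP havp
PORT 8181
TEMPDIR /var/tmp/havp
ENABLECLAMLIB true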

For squid & squidGuard, it’s a little more complicated. I based my setup on notes from the “Ideas” page for havp, which describes a sandwich setup:

Squid -> HAVP -> Squid

The rationale is that content comes into squid, which acts as a local cache for faster retrieval the next time it is requested. At the same time, squid is configured to pass everything through squidGuard, which acts as a content filter. Squid then passes the content to HAVP for virus scanning, and the content goes back through squid, which serves it from the local cache (instead of retrieving it again from the internet) for performance reasons. If at any point there is a problem (an inappropriate page flagged by the content filter squidGuard, or a virus flagged by HAVP), the content does not continue through, and an appropriate message is sent to the requesting user.

The “ideas” page from the HAVP site lists a sample configuration, which must be tweaked for the latest version of squid. Here’s my /etc/squid/squid.conf:

##################################################
#
# /etc/squid/squid.conf
#
# Sandwich config for HAVP
# From http://server-side.de/ideas.htm
#

# bind port for users
http_port 3128

# Disabling icp
icp_port 0

# scanning through HAVP
cache_peer localhost parent 8181 0 no-query no-digest no-netdb-exchange default

# Memory usage values
cache_mem 64 MB
maximum_object_size 65536 KB
memory_pools off

# 4 GB store on disk
cache_dir aufs /var/spool/squid 4096 16 256

# no store log
cache_store_log none

# Passive FTP off
ftp_passive off

# no X-Forwarded-For header
forwarded_for off

# Speed up logging
#buffered_logs on

# no logfile entry stripping
strip_query_terms off

# Speed, speed, speed
pipeline_prefetch on
half_closed_clients off
shutdown_lifetime 1 second

# don't query neighbour at all
hierarchy_stoplist cgi-bin ?

# And now: define caching parameters
refresh_pattern ^ftp: 20160 50% 43200
refresh_pattern -i \.(jpe?g|gif|png|ico)$ 43200 100% 43200
refresh_pattern -i \.(zip|rar|arj|cab|exe)$ 43200 100% 43200
refresh_pattern windowsupdate.com/.*\.(cab|exe)$ 43200 100% 43200
refresh_pattern download.microsoft.com/.*\.(cab|exe)$ 43200 100% 43200
refresh_pattern -i \.(cgi|asp|php|fcgi)$ 0 20% 60
refresh_pattern . 20160 50% 43200

#
# Access ACLs
#
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
http_access deny to_localhost

acl SSL_ports port 443
acl Safe_ports port 80-81       # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443         # https
acl Safe_ports port 1025-65535  # unregistered ports
acl CONNECT method CONNECT
acl QUERY urlpath_regex cgi-bin \?
acl HTTP proto HTTP

# Do not scan the following domains
acl noscan urlpath_regex -i \.(jpe?g|gif|png|ico)$

http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports

acl our_networks src 192.168.0.0/24 24.173.221.168
http_access allow our_networks

http_access allow localhost
http_access deny all

# For sandwich configuration we have to disable the "Via" header or we
# get a "forwarding loop".
request_header_access Via deny all
reply_header_access Via deny all

# Do not cache requests from localhost, SSL-encrypted or dynamic content.
no_cache deny QUERY
no_cache deny localhost
no_cache deny CONNECT
no_cache allow all

# Do not forward parent requests from localhost (loop-prevention) or
# to "noscan"-domains or SSL-encrypted requests to parent.
always_direct allow localhost
always_direct allow CONNECT
always_direct allow noscan
always_direct deny HTTP

never_direct deny localhost
never_direct deny CONNECT
never_direct deny noscan
never_direct allow HTTP

# Extras not in havp sample
access_log /var/log/squid/access.log squid
acl apache rep_header Server ^Apache

# Direct through squidGuard
url_rewrite_program /usr/bin/squidGuard
url_rewrite_children 8

The last section of this config file directs requests through squidGuard. Setting up squidGuard takes a little work in itself, but it is fairly well documented on the squidGuard site’s install pages.
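
To give an idea of what that setup involves, a bare-bones squidGuard.conf might look something like this (the list names and redirect URL are hypothetical; the real blacklists come from the squidGuard site):

dbhome /var/lib/squidguard/db
logdir /var/log/squidguard

dest blocked {
        domainlist blocked/domains
        urllist    blocked/urls
}

acl {
        default {
                pass !blocked all
                redirect http://192.168.0.10/blocked.html
        }
}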

By the way, I installed squid & squidGuard through yum, as in yum install squid squidGuard. That takes care of making sure those daemons get started at boot time. With my modifications above, havp, along with the ramdisk for its temporary scanning files, also starts at boot time.

May 7, 2008

Multiple secure email sites with a single IP address with apache

Filed under: tech — Ernest Hymel @ 2:58 pm

(UPDATE 2/25/09: See the end of this article for an update to get this working with mod_gnutls-0.5.4)

I host several websites on my server, and several of my clients require email as well. Obviously this can easily be achieved with non-secure virtual hosts using the apache config file option VirtualHost. For email sites, though, a secure (https) connection is more desirable. The problem is that out of the box, apache does not allow for multiple domain names on a single IP address over a secure connection.

In the beginning (a couple of years ago) I had a total of 2 email sites running, one using horde and a second using squirrelmail. I only have a single IP address, so I configured 2 different ssl ports through apache (443 and 444). Redirection through apache got users to the right spot, so that different clients were routed to the correct site & port, along the lines of https://email.site1.com (equivalent to https://email.site1.com:443) and https://email.site2.com:444. This worked well enough.

The trouble started when another client wanted a secure connection. I could find another port to configure, but what about the next time? Besides, things get messy when port numbers start showing up in the URL. I could get another IP address, but if there’s a cheaper way that’s easier to manage, then … you know the drill. What I needed was name-based virtual hosts over an SSL connection.

Enter the wonderful article by George Notaras. This is exactly what is needed. Here I want to document how I got things working on my machine (Fedora 8).

The workhorse to get this running is an apache module from OutOfOrder.cc called mod_gnutls. mod_gnutls depends on a Gnu library, GnuTLS, which in turn depends on libgcrypt (link available from GnuTLS download page, see previous link). mod_gnutls provides SNI (Server Name Indication), a TLS extension which makes the configuration of SSL-enabled name-based virtual hosts possible.

I followed all the directions in the article referenced above by George Notaras and found that everything worked very well. Thanks, George, for the wonderful article.

There were a few problems, though: I frequently had error messages in my apache error logs complaining about the TLS connection. The article from George covers mod_gnutls version 0.2.0, which was released in April 2005. I recently noticed that, after a long lag, development on mod_gnutls has restarted, with regular updates appearing on the outoforder.cc website since November 2007. The stable version is now up to 0.4.3, and the development version is 0.5.1. Here is what I did to get things working.

Let’s work backwards to get components in place.

libgcrypt
Download the most recent version here, currently version 1.4.1. Configure, compile, and install per the README file in the downloaded package:
# cd /usr/local/src
# wget ftp://ftp.gnupg.org/gcrypt/libgcrypt/libgcrypt-1.4.1.tar.gz
# tar xzf libgcrypt-1.4.1.tar.gz
# cd libgcrypt-1.4.1
# ./configure
# make && make check && make install

gnutls
Download most recent (development) version here, currently version 2.3.8. Repeat configure & install as above:
# cd /usr/local/src
# wget ftp://ftp.gnutls.org/pub/gnutls/devel/gnutls-2.3.8.tar.bz2
# tar xjf gnutls-2.3.8.tar.bz2
# cd gnutls-2.3.8
# ./configure
# make && make check && make install
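
One extra step worth mentioning: since this installs into /usr/local/lib, the runtime linker may need to be told about that directory (standard ldconfig housekeeping; skip it if /usr/local/lib is already in your ld.so.conf):

# echo /usr/local/lib > /etc/ld.so.conf.d/local.conf
# /sbin/ldconfig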

mod_gnutls
Now for the star… download from here, most recent stable version is 0.4.3. Configure and make this, but don’t install.
# cd /usr/local/src
# wget http://www.outoforder.cc/downloads/mod_gnutls/mod_gnutls-0.4.3.tar.bz2
# tar xjf mod_gnutls-0.4.3.tar.bz2
# cd mod_gnutls-0.4.3
# ./configure
# make && make check

OK, so I don’t really know what happens if you install, but following the original directions of George Notaras, I chose not to run make install. In my testing, nothing we’ve done so far actually affects apache, and I wanted an easy way to undo the installation step. Here’s what I did.

From within the directory where you just ran make for mod_gnutls, copy the one file you need (libmod_gnutls.so) to your httpd modules directory (usually /usr/lib/httpd/modules). I wanted to keep track of which version I was using, so I renamed the file with a version number while copying it to the modules directory and then created a soft link:

# cp -a /usr/local/src/mod_gnutls-0.4.3/src/.libs/libmod_gnutls.so /usr/lib/httpd/modules/libmod_gnutls.so.0.4.3
# cd /usr/lib/httpd/modules
# ln -s libmod_gnutls.so.0.4.3 libmod_gnutls.so

Almost there. Per the suggestion by George Notaras, I want to follow the naming convention of other apache modules, so I created one more soft link:

# ln -s libmod_gnutls.so mod_gnutls.so

Now to activate this new module. The documentation for mod_gnutls is very good, and I highly recommend having a look here. The relevant portion of my vhosts file (/etc/httpd/conf.d/vhosts-email.conf) is below. The LoadModule line loads our module, mod_gnutls.so (which is a soft link to libmod_gnutls.so, which in turn links to libmod_gnutls.so.0.4.3).

Listen 443
LoadModule gnutls_module modules/mod_gnutls.so
GnuTLSCacheTimeout 500

NameVirtualHost *:443

<VirtualHost *:443>
# Horde
ServerName horde.server1.com
ServerAlias mail.server1.com
GnuTLSEnable on
GnuTLSPriorities PERFORMANCE
GnuTLSCertificateFile /etc/httpd/ssl-keys/server.crt
GnuTLSKeyFile /etc/httpd/ssl-keys/server.pem
DocumentRoot /var/www/sites.email/horde
ErrorLog "| /usr/sbin/cronolog /var/log/httpd/horde-%Y-%m-error_log"
CustomLog "| /usr/sbin/cronolog /var/log/httpd/horde-%Y-%m-access_log" common

<Directory "/var/www/sites.email/horde">
allow from all
Options +Indexes
</Directory>
</VirtualHost>

<VirtualHost *:443>
# Squirrelmail
DocumentRoot /var/www/sites.email/squirrelmail
ServerName webmail.server2.com
ServerAlias email.server2.com
GnuTLSEnable on
GnuTLSPriorities PERFORMANCE
GnuTLSCertificateFile /etc/httpd/ssl-keys/server.crt
GnuTLSKeyFile /etc/httpd/ssl-keys/server.pem
ErrorLog "| /usr/sbin/cronolog /var/log/httpd/squirrelmail-%Y-%m-error_log"
CustomLog "| /usr/sbin/cronolog /var/log/httpd/squirrelmail-%Y-%m-access_log" common

<Directory "/var/www/sites.email/squirrelmail">
allow from all
Options +Indexes
</Directory>
</VirtualHost>

Note that I only have one port configured (443), with both sites running on it. A quick restart of apache, and I have both sites running on port 443 as secure, name-based virtual hosts on my single IP address. Very slick!
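
A quick way to double-check that each name really gets its own TLS virtual host is to connect with gnutls-cli (part of GnuTLS). Assuming the client version in use sends the server name, each connection should come back with the certificate and content for the name requested:

# gnutls-cli -p 443 horde.server1.com
# gnutls-cli -p 443 webmail.server2.com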

UPDATE: The three components listed above (mod_gnutls, libgcrypt, and gnutls) are continuously updated. I had some problems getting mod_gnutls 0.5.x to compile; here’s how I fixed the problem.

The process and order is mostly the same as above.

1. libgcrypt is now on version 1.4.4 and was easily updated by yum update. No need to compile for the latest version, although it does compile and install easily as per the above instructions.

2. gnutls is now on version 2.7.5, and this compiles as per the instructions above.

3. The latest mod_gnutls was a problem to configure. When I downloaded and tried to ./configure the current 0.5.4 version, I got an error complaining about the gnutls version: pkg-config was reporting the old 2.4.2 rather than the more recent 2.7.5 I had just installed. Looking into this showed that I had two different pkg-config files, /usr/lib/pkgconfig/gnutls.pc and /usr/local/lib/pkgconfig/gnutls.pc. I wanted the one in /usr/local…, but Google searches didn’t explain very well how to make that happen, at least not to the point that I could get it to work.

So here’s what I did:

# mv /usr/lib/pkgconfig/gnutls.pc /usr/lib/pkgconfig/gnutls.pc.2.4.2
# mv /usr/lib/pkgconfig/gnutls-extra.pc /usr/lib/pkgconfig/gnutls-extra.pc.2.4.2

Trying to run ./configure at this point complains that it can’t find the pkg-config file for gnutls, and suggests setting the environment variable PKG_CONFIG_PATH:

# export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

Now ./configure runs smoothly! From there, I followed the directions as above (remember don’t make install!) and all is great!

January 18, 2008

NX Server for remote desktop access to Fedora 8 (update: Now on Fedora 10)

Filed under: tech — Ernest Hymel @ 9:40 am

I’ve been writing code in the wonderful Aptana Studio, including projects in Ruby on Rails. Because of the way Aptana is able to generate code on the fly during development, it’s a lot easier to develop on the target server than it is to develop on my Windows Vista laptop and transfer (or sync) the pages repeatedly. So, I thought I’d see if there was an easy way to remotely access the desktop of my Fedora box.

VNC has been around a long time, and I’ve used it for years in the Windows world, but its performance is not so great, and setting it up to run over ssh is not much fun.

Searching around a bit, I found freenx, a GPL implementation of NX by NoMachine. Assuming you already have sshd running on the server, a great set of installation instructions is here. Basically, you use yum to install two packages on the server, install a client program on the clients, sync the ssh keys, and you’re done. It works great!

One minor problem is that there is an updated version of the NX Server available (as of this writing, version 3.1.0-2 [update: now up to version 3.3.0-15 as of 1/29/09]). According to the changelogs, there are a number of fixes over the version 2.1.0 installed by yum. So I decided to see how difficult it would be to install the free version of nxserver 3.1.0 on my Fedora server. Details are below. In summary: installation was trivial, and performance is remarkably better, with far fewer of the quirks seen in the previous version. Here’s what I did.

  1. Uninstall nx and freenx previously installed through yum.
    > yum remove nx freenx
  2. The previous install of nx/freenx created a user ‘nx’, and the ‘yum remove’ command above failed to delete that user, so I manually removed user ‘nx’.
  3. Installation of NoMachine’s NX server involves installing 3 packages that depend on each other to function. From this page, I chose the rpm packages.
  4. Once downloaded, install as usual per rpm:
    > rpm -iv nxclient-3.1.0-2.i386.rpm
    > rpm -iv nxnode-3.1.0-3.i386.rpm
    > rpm -iv nxserver-3.1.0-2.i386.rpm
  5. Now run the nxserver install script.
    > /usr/NX/scripts/setup/nxserver --install
  6. On your client machine, you should download the nx client from this page appropriate for your platform. Once installed, start the client and configure your connection (ip address, username, etc). More details about the setup, as well as troubleshooting tips, are available on the page I referenced above (here).
  7. In order to connect from the client machine, the ssh keys must match between client and server. The easiest approach is to generate the key on the server. On the server, issue this command:
    > /usr/NX/bin/nxserver --keygen
    The resulting ssh key is stored in /usr/NX/share/keys/default.id_dsa.key. Using your favorite tool, copy the contents of this file to your clipboard. Back on your client machine,  within the configuration window, click “Key” on the first tab. Paste the key from your clipboard here.
  8. Done!! You should be able to start a new session and you’re rolling!

November 24, 2007

NSLU2 and Debian/NSLU2

Filed under: tech — Ernest Hymel @ 11:45 am

These are the steps I used to get my new NSLU2 running Debian/NSLU2 and then backing up my Fedora Core 7 (soon to be FC8) box.

  1. Unpack the new NSLU2 (slug) and Seagate 250GB external drive. Plug in. Change networking on the slug to a static IP address on my network. (Much to my surprise, there is an issue with the Seagate FreeAgent external drive under Linux. See here for details about the solution.)
  2. Make sure that I have redboot access in case it’s needed.
  3. Ready the drive following instructions here.
  4. Download and re-flash (using this tool) slug with Debian/NSLU2
    This is done in a couple of steps, as shown here.
    1. Move tar file of base to the drive and unpack.
    2. Flash the “etch” binary firmware.
  5. Reboot, should now have working Debian on NSLU2.
  6. By default, networking uses DHCP to get its IP address.
    edit /etc/hostname to rename the system
    edit /etc/resolv.conf
    edit /etc/network/interfaces, adding:
    iface eth0 inet static
      address 192.168.0.4
      netmask 255.255.255.0
      broadcast 192.168.0.255
      network 192.168.0.0
      gateway 192.168.0.1
  7. Follow the “what to do now” section of the page here.
  8. Do some maintenance:
    apt-get install ntp-simple ntpdate
    /usr/sbin/ntpdate -s
    /sbin/hwclock --adjust
    /sbin/hwclock --systohc
  9. Enable login via ssh without password using instructions here
  10. Install necessary packages for rsync and nfs:
    apt-get install rsync nfs-kernel-server
  11. Set rsync to run in daemon mode by editing the appropriate line in /etc/default/rsync (see the snippet after this list)
  12. Create /etc/rsyncd.conf, make it look something like this:
    uid = root
    gid = root
    use chroot = yes
    max connections = 1
    pid file = /var/run/rsyncd.pid
    read only = no
    hosts allow = 192.168.0.10
    hosts deny = *
    dont compress *.tgz *.gz *.bz2 *.iso *.jpg *.jpeg *.tif *.tiff *.
  13. Edit /etc/exports to allow nfs sharing of backup destination directory (/home/backup) by adding a line like this:
    /home 192.168.0.0/255.255.255.0(rw,sync,no_root_squash)

    Then make sure there is a corresponding line in /etc/fstab on the server to be backed up:
    192.168.0.4:/home       /mnt/snapshot         nfs     defaults        0 0
  14. Make sure a copy of local_cpio.sh exists in the backup destination directory on the slug
  15. Run the snapshot-rotate and snapshot-make scripts from the machine to be backed up.
  16. Done!
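
As mentioned in step 11, enabling the rsync daemon on Debian is a one-line change in /etc/default/rsync (the variable name comes from Debian’s stock file; double-check your copy):

RSYNC_ENABLE=true

After that, /etc/init.d/rsync start (or a reboot) brings up the daemon using the /etc/rsyncd.conf from step 12.
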
August 20, 2007

Flex

Filed under: tech — Ernest Hymel @ 10:53 pm

I’ve recently learned about Adobe’s Flex. If you accidentally stumbled across this page, there’s probably not much useful information here, just my collection of links that I’ll add to periodically as a bookmark.

First, an overview of Flash and Flex from an architect of Adobe’s Flex is here. I’ll admit, I had no idea Flash had grown so big. The benefits of a cross-platform development system are many…

There’s a cool web-based word processor called ‘buzzword’— home is here — with a cool demo here.

Lots of support and ideas are found on flex.org.