Cats and Code » devops – by Oliver Gorwits (http://blog.gorwits.me.uk)

Deploying mod_spnego (22 April 2012)

SPNEGO is a negotiated authentication mechanism for HTTP which can be used to take advantage of Kerberos credentials for web site login (an alternative to simple username/password, or client digital certificates).

The reference implementation for Apache, mod_spnego, can be downloaded from SourceForge and contains straightforward build instructions. It’s also possible to use Stanford WebAuth in SPNEGO mode.

To build the module you need development libraries for the following (I’ve added the SLES package names, for reference):

  • openssl (libopenssl-devel)
  • krb5 (krb5-devel, krb5-devel-32bit)
  • apache (apache2-devel)

Follow the instructions in the module source. On SLES, be sure to run the apxs command as root, because it installs the module immediately after compilation.
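
The precise build steps live in the module's own README, but they boil down to an apxs compile-and-install. The following is only a hedged sketch – the source file name and Kerberos library flags are assumptions, so defer to the bundled instructions:

# compile the module, link against GSSAPI/Kerberos, and install it (run as root on SLES)
apxs2 -c -i -l gssapi_krb5 -l krb5 mod_spnego.c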

Enabling the module is again quite straightforward:

    Krb5AuthEachReq Off
    <Directory "/foo/bar/quux">
        AllowOverride AuthConfig
        Krb5KeyTabFile /etc/apache2/HTTP.keytab
        Krb5ServiceName HTTP
        AuthType SPNEGO
        Require valid-user
    </Directory>

You’ll need to install a keytab for the HTTP service principal. The method differs depending on the type of KDC you have, but for Windows AD this would be:

net ads -U 'username@realm%password' keytab add HTTP
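
Whichever type of KDC you have, it's worth confirming that the keytab really does contain the HTTP principal before pointing Apache at it:

# list the principals (and encryption types) stored in the keytab
klist -ke /etc/apache2/HTTP.keytab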

As verification I wrote a simple Perl CGI script to echo back $ENV{REMOTE_USER} which emitted user@REALM, as expected.
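
The original script was Perl, but any CGI that echoes the variable will do; an equivalent shell sketch:

#!/bin/sh
# minimal CGI: print the principal that mod_spnego placed in REMOTE_USER
echo "Content-Type: text/plain"
echo ""
echo "REMOTE_USER=${REMOTE_USER}"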

Sadly, when testing this out I found that SPNEGO is not enabled by default in all browsers (Google Chrome, for example). A managed desktop seems the only way to ensure users have both Kerberos credentials and a browser started with the correct features enabled; otherwise it's simply too much work to expect of them.
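
For the record, Chrome can be persuaded to attempt SPNEGO against chosen servers with a command-line switch; a hedged example, as the switch name and policy mechanism have varied between releases:

# allow Kerberos/SPNEGO negotiation with web servers under example.org
google-chrome --auth-server-whitelist="*.example.org"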

Virtual Machine on Mythbuntu (4 January 2012)

I have a Linux box running the excellent Mythbuntu (Ubuntu-based) distribution, headless (that is, without a monitor). Quite a lot of the time it's sat around doing nothing (and even during recording or playback the CPU is idle).

For some side-projects I wanted a clean Linux installation to mess about with. It seemed a good idea to run virtual machines and make the most of existing hardware; what surprised me was just how easy this turned out to be :-)

The Ubuntu documentation for KVM is excellent, I must say, but I fancied distilling things further and blogging here, as I typically do to record most of my technical adventures. I’m not going to bother with any of the GUI VM builder tools or even the Q&A install script, but simply specify the VM config fully, up front.

Optionally, check whether your CPU has virtualization extensions – any fairly recent desktop chip should do. On Ubuntu there’s a command called kvm-ok, or you can poke /proc/cpuinfo:

# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

# egrep -q '(vmx|svm)' /proc/cpuinfo && echo 'good to go!'
good to go!

First up install the KVM software:

# apt-get install qemu-kvm virtinst

This will pull in all the necessary packages. On other platforms it should be similar, but the virtinst package is often renamed (e.g. virt-install or vm-install).

Before getting stuck in to KVM we need to reconfigure the system’s network adapter to be a bridge. I prefer to set a static IP for servers on my home LAN and use the /etc/network/interfaces file for configuration:

# cat > /etc/network/interfaces
auto lo eth0 br0
iface lo inet loopback
iface eth0 inet manual
iface br0 inet static
    address <IP-ADDRESS>
    network <NETWORK-ADDRESS>
    netmask <NETMASK>
    broadcast <BROADCAST>
    gateway <GATEWAY>
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    post-up ip link set br0 address <MAC-ADDRESS>

(hit ctrl-D)

Obviously, fill in the blanks for your own system’s IP and MAC address details. Next we can blow away Ubuntu’s network mangler daemon and poke the KVM service into life:

# apt-get --purge remove network-manager
# /etc/init.d/networking restart
# service libvirt-bin start

Now find somewhere on your disk for the VMs and a little script to live, and create a directory. I named mine /opt/vm. In there, try starting with this little shell script:

#!/bin/bash
virt-install --name=sandbox --ram=512 --vcpus=2 --os-type=linux \
  --autostart --disk=path=/opt/vm/sandbox.img,size=50 \
  --graphics=vnc,listen=0.0.0.0,port=5900 --noautoconsole \
  --cdrom=/opt/vm/mythbuntu-11.10-desktop-i386.iso

Walking through the above, it should be clear we’re creating a new VM called sandbox (this is the name KVM knows it by, not a hostname), with 512MB RAM, two virtual CPUs, a Linux-friendly boot environment, and 50GB (sparse) disk. The VM will be automatically booted by the KVM service when its host system boots. The last line specifies an installation CD image from which the new VM will boot.

For the graphics configuration I’ve asked for a headless system with the console being offered up via a VNC port on the host server. Note that the listen=0.0.0.0 is essential to connect remotely (e.g. over your home LAN) to the console because otherwise the VNC port is simply bound to the loopback interface.

Running the above will bring the new VM into life:

# ./sandbox.sh

Starting install...
Creating storage file sandbox.img                      |  50 GB     00:00
Creating domain...                                     |    0 B     00:01
Domain installation still in progress. You can reconnect to
the console to complete the installation process.

What KVM means by “installation still in progress” is that it knows this system is installing from the boot CD, so you should go right ahead and fire up VNC and connect to the console (port 5900 on the host server) to complete the process.
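
Any VNC client will do; for example, from another machine on the LAN (display :0 maps to TCP port 5900, and the hostname below is a placeholder for your own host server):

# connect to the sandbox VM's console on the KVM host server
vncviewer kvm-host.example.com:0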

You’ll find that KVM saved the sandbox VM configuration in XML format in the /etc/libvirt/qemu directory, so that’s where to go to tweak the settings. Good documentation is available at the KVM website.

Be aware, however, that because KVM assumed the attached CD ISO was only needed for initial install, it’s not featured in the saved config as a permanent connection. You can, of course, remedy this (check out the virt-install man page for starters).
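
One way to offer the ISO up again later is through virsh; a sketch only, with the target device name being an assumption:

# attach the install ISO to the running sandbox domain as a read-only CD-ROM
virsh attach-disk sandbox /opt/vm/mythbuntu-11.10-desktop-i386.iso hdc --type cdrom --mode readonly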

To finish off, here’s how to manage the lifecycle (start, restart, blow away, etc) of the VM. Use the virsh utility which can either be run with a single instruction or with no parameters for an interactive CLI:

# virsh
Welcome to virsh, the virtualization interactive terminal.
virsh # list
 Id Name                 State
----------------------------------
 10 sandbox              running

virsh # destroy
error: command 'destroy' requires <domain> option
virsh # destroy sandbox
Domain sandbox destroyed

virsh # create sandbox
error: Failed to open file 'sandbox': No such file or directory

virsh # create sandbox.xml
Domain sandbox created from sandbox.xml

virsh # list
 Id Name                 State
----------------------------------
 11 sandbox              running

Try the help command, and note that the VM’s XML settings file may need updating if you change things (see dumpxml).
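
A typical round-trip for tweaking a VM definition looks like this:

# capture the current definition, edit it, then load the changes back in
virsh dumpxml sandbox > sandbox.xml
vi sandbox.xml
virsh define sandbox.xml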

I hope this is a useful and quick tutorial for KVM on Ubuntu… Good Luck!

Painless MythTV Channel Configuration (10 November 2011)

MythTV – a brilliant homebrew digital video recorder system. Killer features include playing content over the LAN at home, scheduling recordings via the web, and generally poking it to integrate with all kinds of devices (e.g. see my previous posts on H.264 transcoding). Even better, Mythbuntu makes installation a doddle.

However the most hated part for me is configuring TV sources and channels – digital terrestrial via an aerial, and digital via satellite. MythTV’s built-in scanner works at best intermittently (for me), and when it does, comes up with 1,000 shopping and adult channels which drown out the 20 or so I’m really interested in.

Then there’s TV listings. All credit to the folks working on XMLTV and the Radio Times listings grabber – that’s some impressive work. But stitching it into MythTV usually ends up with hand-editing the database to insert XMLTV IDs. User friendly? I think not.

Partly this is because these tools are used internationally and nothing is standardised between countries. Even in the UK there are three ways to get TV listings (EIT over the air, Bleb, and Radio Times).

Finally I snapped, and wrote a Perl program to do all this work. It feels so nice now to have a simple, lightweight, repeatable process to configure sources and channels. That’s what good automation is all about.

The code will only work in the UK, but might be a starting point for those elsewhere. It configures XMLTV IDs, but that doesn’t mean you have to use the Radio Times grabber. You still have to go through MythTV’s setup program to tell it about tuner cards (before running the import program) but that’s not hard work.

The code and instructions are hosted on GitHub. Let me know if you use it, and how you get on. Don’t forget to back up your database (using MythTV’s mythconverg_* scripts) before starting!
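
On a Mythbuntu box the backup is usually a one-liner; a hedged example, since the script name and options can vary between MythTV releases:

# dump the mythconverg database to the configured backup directory
mythconverg_backup.pl --verbose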

Hosting the AutoCRUD Demo (19 October 2011)

In my previous entry here (syndicated from blogs.perl.org), I linked at the end to a demo Catalyst::Plugin::AutoCRUD application running on DotCloud. I'm much happier with this than running something on my own personal server, and here are the notes on its setup.

For those unfamiliar, DotCloud is a Platform as a Service (PaaS) offering a freemium model. I’m grateful to them for this as the free account provides all I need for my demo.

First, I followed to the letter Phillip Smith’s comprehensive guide on deploying a Perl Catalyst application to the DotCloud service. Next I customised the basic application created in the guide to use AutoCRUD:

  • removed the Root controller
  • added two Models and their supporting DBIx::Class Result classes
  • set basepath in the configuration
  • installed an hourly cron job (sketched below) to:
    • restore the SQLite databases
    • restart the web service (supervisorctl restart uwsgi)
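
The cron job itself is nothing special; a sketch only, with illustrative paths rather than the real DotCloud layout:

# hourly: reset the demo SQLite databases and bounce the uwsgi-hosted app
0 * * * * cp /home/dotcloud/db-backup/*.db /home/dotcloud/current/db/ && supervisorctl restart uwsgi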

Next I wanted a tidier-looking domain for the demo, so I purchased autocrud.pl through NETIM. My plan is to have demo.autocrud.pl pointing to the DotCloud instance, and sometime in the future to have autocrud.pl be used for a secret feature I'm still working on. Sadly NETIM only offers HTTP redirects from subdomains, so I delegated hosting of the DNS to ClouDNS.

ClouDNS is another freemium service, again where the free part provides just what I need. They offer not only a bit of a smarter interface than NETIM for DNS zone management, but also HTTP redirects from the zone apex.

I do of course know that nothing lasts forever, particularly with freemium services, and I’m grateful for what’s available because it works very well (I’ve added promotional icons for ClouDNS and DotCloud to the demo site).

The end result of this is that I now have the AutoCRUD demo safely hosted on DotCloud with a friendly URL to pass out in documentation or blog posts :-)

Is it silly that tmux is fun? (15 August 2011)

No, I don't think it's a bad thing to get a zing of excitement when you find a new tool that improves your life. Maybe you know what I mean – that feeling of happiness at saving time, remembering more easily how to do things, and satisfaction with a new workflow.

Recently I migrated from the venerable screen to tmux, and whilst it's one of those changes where the old tool had no real show-stopping problems, tmux immediately feels much cleaner and better thought through.

I’ll leave you to read the docs and list of features yourself, but please do check this tool out if you’re an avid screen user. I’ve already got many more tmux sessions/windows/panes open than I ever felt comfortable with in screen, saving me a lot of time and effort when working remotely.

Smokeping+lighttpd+TCPPing on Debian/Ubuntu (11 August 2011)

Some notes on getting Smokeping to work on Debian/Ubuntu using the lighttpd web server, and the TCPPing check.

Install the lighttpd package first, as then the subsequent smokeping package installation will notice that it doesn’t require the Apache web server. However, Smokeping doesn’t auto-configure for lighttpd so a couple of commands are necessary:

# lighttpd-enable-mod cgi
# /etc/init.d/lighttpd force-reload
# ln -s /usr/share/smokeping/www /var/www/smokeping

Visiting your web server’s base URL should show a lighttpd help page, and visiting the /cgi-bin/smokeping.cgi path should show the Smokeping home page with its logo images working.

Install the TCPPing script by downloading from http://www.vdberg.org/~richard/tcpping and saving to somewhere like /usr/local/bin/tcpping (setting execute bit, also). Obviously, use this path in your Smokeping Probe configuration:

+ TCPPing

binary = /usr/local/bin/tcpping
forks = 10
offset = random
# can be overridden in Targets
pings = 5
port = 21
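
For completeness, the download step itself is just the following (assuming wget is installed and /usr/local/bin is the chosen location):

# fetch the TCPPing wrapper script and make it executable
wget -O /usr/local/bin/tcpping http://www.vdberg.org/~richard/tcpping
chmod +x /usr/local/bin/tcpping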

For the TCPPing check, make sure you have the standalone tcptraceroute package installed. You might find an existing /usr/sbin/tcptraceroute command is available, but this is from the traceroute package and won’t work with the TCPPing script.
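
On Debian/Ubuntu that’s simply:

# install the standalone tcptraceroute that the TCPPing script expects
apt-get install tcptraceroute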

The Limoncelli Test (28 July 2011)

Over at the excellent Everything Sysadmin blog is a simple test which can be applied to your sysadmin team to assess its productivity and quality of service. It’s quite straightforward – just 32 things a good quality team ought to be doing, with a few identified as must-have items.

Of course I’m not going to say anything about my current workplace, but I thought it would be interesting to assess my previous team as of October 2010, when I left. I’m incredibly proud of the work we did, and of both our efficiency and effectiveness in delivering services with limited resources. That’s reflected in the score of (drumroll…) 31 out of 32!

If you have a Sysadmin team, or work in one, why not quickly run through the test for yourself?

Migrate SourceForge CVS repository to git (22 June 2011)

Updated to include promoting and pushing tags.

I recently had need to migrate some SourceForge CVS repositories to git. I’ll admit I’m no git expert, so I Googled around for advice on the process. What I ended up doing was sufficiently distinct from any other guide that I feel it’s worth recording the process here.

The SourceForge wiki page on git is a good start. It explains that you should log into the Project’s Admin page, go to Features, and tick to enable git. Although it’s not made clear, there’s no problem having both CVS and git enabled concurrently.

Enabling git for the first time will initialize a bare git repository for your project. You can have multiple repositories; the first is named the same as the project itself. If you screw things up, it’s OK to delete the repository (via an SSH login) and initialize a new one.

Just like the SourceForge documentation, I’ll use USERNAME, PROJECTNAME and REPONAME within commands. As just mentioned, the initial configuration is that the latter two are equal, until you progress to additional git repositories.

Let’s begin by grabbing a copy of the CVS repository with complete history, using the rsync utility. When you rsync, there will be a directory containing CVSROOT (which can be ignored) and one subdirectory per module:

mkdir cvs && cd cvs
rsync -av rsync://PROJECTNAME.cvs.sourceforge.net/cvsroot/PROJECTNAME/* .

Grab the latest cvs2git code and copy the default options file. Change the run_options.set_project setting to point to your project’s module subdirectory:

svn export --username=guest http://cvs2svn.tigris.org/svn/cvs2svn/trunk cvs2svn-trunk
cp cvs2svn-trunk/cvs2git-example.options cvs2git.options
vi cvs2git.options
# edit the string after run_options.set_project, to mention cvs/PROJECTNAME

Also in the options file, set the committer name mappings in the author_transforms settings. This is needed because CVS logs only show usernames but git commit logs show human name and email – a mapping can be used during import to create a sensible git history.

vi cvs2git.options
# read the comments above author_transforms and make changes

But how do you know which CVS usernames need mapping? One solution is to run through this export and git import without a mapping, then run git shortlog -se to dump the committers. Blow the new git repo away, and re-import after configuring cvs2git author_transforms.

The cvs2git utility works by generating the input files used by git’s fast-import command:

cvs2svn-trunk/cvs2git --options=cvs2git.options --fallback-encoding utf-8
git clone ssh://USERNAME@PROJECTNAME.git.sourceforge.net/gitroot/PROJECTNAME/REPONAME
cd REPONAME
cat ../cvs2svn-tmp/git-{blob,dump}.dat | git fast-import
git reset --hard

At this point, if you’re going to continue using this new git repository for work, remember to set your user.name, user.email and color.ui options.
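
For example:

# identity used for any commits and for the tag promotion below
git config user.name "Firstname Lastname"
git config user.email "me@example.com"
git config color.ui auto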

Now you’re ready to push the repo back to SourceForge. I did verify for myself that disabling so-called developer access to the repo in the SourceForge Project Member settings page does in fact prevent write access, as expected.

git push origin master

If you had tags on the CVS repo (git tag -l), they’ll have been imported as lightweight tags. Best practice is always to use annotated tags, so this short script will promote them for you:

git config user.name "Firstname Lastname"
git config user.email "me@example.com"
git tag -l | while read ver;
  do git checkout $ver;
  git tag -d $ver;
  GIT_COMMITTER_DATE="$(git show --format=%aD | head -1)" git tag -a $ver -m "prep for $ver release";
  done
git checkout master

Verify the tags are as you want, using something like:

git tag -l | while read tag; do git show $tag | head -3; echo; done

And then push them to the repository with:

git push --tags

Something you might want to do is set a post-commit email hook. For this you SSH to SourceForge, and if you have multiple projects remember to connect to the right one!

ssh -t USER,PROJECT@shell.sourceforge.net create
cd /home/scm_git/P/PR/PROJECTNAME

Download the post-receive-email script and place it in the hooks subdirectory; make it executable. Also set the permissions to have group-write, so your project colleagues can alter it if required. Set the necessary git options to allow the script to email someone after a commit. Season to taste.

curl -L http://tinyurl.com/git-post-commit-email > hooks/post-receive
chmod +x hooks/post-receive
chmod g+w hooks/post-receive
git config hooks.emailprefix "[git push]"
git config hooks.emailmaxlines 500
git config hooks.envelopesender noreply@sourceforge.net
git config hooks.showrev "t=%s; printf 'http://PROJECTNAME.git.sourceforge.net/git/gitweb.cgi?p=PROJECTNAME/REPONAME;a=commitdiff;h=%%s' ; echo;echo; git show -C ; echo"
git config hooks.mailinglist PROJECTNAME-COMMITS@lists.sourceforge.net

Remember to subscribe noreply@sourceforge.net to your announce list, if needed. Finally, set a friendly description on the repository for use by the git web-based repo browser:

echo 'PROJECTNAME git repository' > description

One other thing I did was enable an SSH key on my SourceForge account, as this makes life with SSH-based git much smoother :-) If you have the need to create additional git repositories, or even to replace the one created automatically, then it’s just a case of issuing the git command:

cd /home/scm_git/P/PR/PROJECTNAME
git --git-dir=REPONAME init --shared=all --bare

Good luck with your own migrations, and happy coding!

A Strategy for Opsview Keywords (20 May 2011)

At my previous employer, and recently at my current one, I’ve been responsible for migration to an Opsview-based monitoring system. Opsview is an evolution of Nagios which brings a multitude of benefits. I encourage you to check it out.

Since the 3.11.3 release, keywords have been put front and centre of the system’s administration, so I want to present here what I’ve been working on as a strategy for their configuration. Keywords can support three core parts of the Opsview system:

  1. The Viewport (a traffic-lights status overview)
  2. User access controls (what services/hosts can be acknowledged, etc)
  3. Notifications (what you receive emails about)

Most important…

My first bit of advice: do not ever set keywords when provisioning a new Host or Service Check. This is because on those screens you can’t see the complete context of the keywords, and it’s far too easy to create useless duplication. You should instead associate keywords with hosts and services from the Configuration/Keywords screen.

Naming Convention

Okay, let’s go to that screen now, and talk about our naming convention. Yes, there needs to be one, so that you can look at a keyword in another part of Opsview and have a rough idea what it might be associated with. Here’s the template I use, and some examples:

<type>-[<owner>-]<thing>

device-ups
server-linux
service-smtpmsa
service-nss-ntpstratum3

Let’s say you have a Linux server running an SMTP message submission service and an NTP Stratum 3 service. I would create one keyword for the underlying operating system (CPU, memory, disk, etc), named “server-linux“. I’d create another for the SMTP service as “service-smtpmsa” and another for the NTP as “service-ntpstratum3“. If your Opsview is shared between a number of teams, it might also be useful to insert the managing team for that service in the name, as I’ve done with NSS, above. The type “device” tends to be reserved for appliances which fulfil one function, so you don’t need to separate out their server/service nature.

With this in place, if the UNIX Systems team manages the server and OS, and another team manages the applications stack on the box, we’ve got keywords for each, allowing easy and fine grained visibility controls. When creating the keywords, you should go into the Objects tab and associate it with the appropriate hosts and service checks. I find this much more straightforward than using the Keywords field on the actual host and service check configuration pages.

Viewport

Let’s look at each of the three cornerstone uses I mentioned above, in turn. First is the Viewport. Well, that’s easy enough to enable for a keyword by toggling the radio button and assigning a sensible description (such as ”Email Message Submission Service” for “service-smtpmsa“). Which users can see which items in their own viewport is configured in the role (Advanced/Roles) associated to that user. I’d clone off one new role per user, and go to the Objects tab, remove all Host Groups or Service Groups and select only some Keywords. Job done – the user now sees those items in their viewport.

Actions

Next up is the ability for a user to acknowledge, or mark as down, an item. In fact it’s done in the same way as the viewport, that is, through a role. That’s because roles contain, on the Access tab, the VIEWPORTACCESS item for viewports and the ACTIONSOME/NOTIFYSOME items for actioning alerts. Because it’s currently only possible for a user to have one role, you cannot easily separate these rights for different keywords – a real pity. But I have no doubt multiple roles will come along, just like multiple notification profiles.

Notifications

Which brings us to the final item. Again I’d create a new notification profile for each user, so that it’s possible to opt them in or out of any service notifications. Using keywords makes things simple – are you just managing the underlying OS? Then you can have notifications about that, and not the application stack. It doesn’t stop you seeing the app stack status in your viewport, though. Because the notification profile is associated with a user, you’ll only be offered keywords that have been authorized in their role, which is a nice touch.

And finally…

In each of these steps the naming convention has really helped, because when looking at keywords the meaning “these hosts” or “this service” will (hopefully) jump out. If I were scaling this up, I’d have it all provisioned via the Opsview API from a configuration management or inventory database, and updated nightly. This is another way naming conventions help – they are friendly to automation.

Cfengine3 on Debian Squeeze for local management (10 May 2011)

Dialling the nerd factor up to 11, I’ve decided to store configuration for my VPS server in git and manage it with Cfengine3. Joking aside, this is a sound decision: having the VCS repo makes backups simple and trustworthy, and configuration management motivates me to keep on using that repository.

On Debian Squeeze it’s a simple case of apt-get install cfengine3, with the caveat that this packaging bug meant I hacked /etc/cfengine3 to symlink from /var/lib/cfengine3/inputs.

[Edit: A colleague of mine, David, suggests that the package should link cfengine3's masterfiles to /etc, and I'm inclined to agree.]

Anyone familiar with Cfengine2 will have a good head start on the Cfengine3 configuration; however, it’s still a bit of a learning curve (but we know complex problems rarely have simple solutions). The first file read is promises.cf, which can include other files (“inputs”, in any order) and lists the promise bundles and their order of execution:

body common control {
    bundlesequence  => {
            "main"
    };

    inputs => {
        "site.cf",
        "library.cf"
    };
}

The library.cf file is simply a bunch of macros or templates. For example, the built-in copy_from command is augmented with some sane defaults and named local_copy:

body copy_from local_copy(from) {
    source  => "$(from)";
    compare => "digest";
    copy_backup => false;
}

This is then used in my site.cf file to install some cron jobs:

bundle agent main {
    vars:
        "repo" string => "/path/to/git/repo";

    files:
        "/etc/cron.d"
            handle => "cron_files",
            comment => "copy crontab files to /etc/cron.d",

            copy_from => local_copy("$(repo)/etc/cron.d"),
            depth_search => recurse("inf"),
            perms => p("root","444");
}

This is a trivial example, and could be made better. For example, all files in the target directory have their permissions changed (via the “p” macro), whereas it makes sense only to set permissions on the files we copy, not on any that already exist.

Hopefully this post shows that Cfengine3 configuration isn’t that hairy, and once the principles are installed in your head it’s a case of browsing the reference manual and building up promise libraries.

Postscript

I’d like to note that the Cfengine3 configuration mini-language could be better designed. Some statements are terminated by semicolons (as in the body above); others are separated by commas but still semicolon-terminated (as in the bundle); and braced sections are inconsistently semicolon-terminated. This leads to awkward syntax errors when writing new promises :-(

Furthermore, I feel the language would benefit from some noise keywords, for example:

body copy_from local_copy(from) {

versus

body copy_from as local_copy(from) {

The latter makes it slightly more clear which is the base primitive and which the new macro name. I’m a great fan of the use of syntactic sugar, in moderation, and intuitive configuration mini-languages.
