Cats and Code » linux – http://blog.gorwits.me.uk – by Oliver Gorwits

Internet accessible cats – part 2
Thu, 03 Jan 2013

So far so good for access to the new Cat Cam: from within the house we can view video from the cats’ shed, yet the camera is safely on its own DMZ.

In this final post I’ll show how I made the camera video feed available on the Internet.

One thing I wanted from the outset was for Internet clients not to make direct connections to the camera itself. I was a little worried about the ability of the camera’s web server and CPU to cope with multiple clients, and also about the security implications of direct access. A second requirement was multi-platform access – that is, desktop and iOS – which potentially means different streaming video formats.

We have one Linux server in the house, which is used for many different things and runs virtual machines. My back-of-an-envelope plan: a VM on that server joins the camera’s DMZ, pulls the video feed, transcodes it, and serves the results to Internet clients via Apache.

First step was to create the VM, but remember that the camera feed is in a DMZ using a VLAN, so the VM must live there too. In KVM you can either send all traffic to the guest system and let it process the VLAN tags, or separate the tagged VLAN traffic in the host system so the guest is dumb and just sees untagged frames. Clearly the latter is preferable: were the guest to suffer attack from the Internet, it ought not to be able to put traffic onto the house workstation network. The guest is completely within the DMZ.
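As an aside, here’s roughly what the host-side separation can look like in /etc/network/interfaces – a minimal sketch only, assuming the DMZ is VLAN 30 (the camera’s 172.16.30.x addressing suggests that, but the ID is my guess) and that the vlan and bridge-utils packages are installed:

auto eth0.30
iface eth0.30 inet manual

auto br30
iface br30 inet manual
    # no IP address here: the host itself stays off the DMZ
    bridge_ports eth0.30
    bridge_stp off
    bridge_fd 0

The guest’s virtual NIC attaches to br30, so it only ever sees untagged DMZ frames.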

With that done and a basic Ubuntu system installed, I started work on Apache and VLC (the Swiss Army Chainsaw of video processing). First up, VLC…

Luckily the camera’s video feed comes in MJPEG format with a discoverable URL. The idea is to take this feed, duplicate it, and transcode the respective feeds into something suitable for a desktop browser and for iOS. As a bonus, I’ll timestamp the video to make it easy to tell if the transcoder has crashed (the timestamp would be wrong). After a lot of reading online about how to configure VLC I came up with the following monstrosity:

/usr/bin/cvlc -I dummy http://guest:guest@172.16.30.10:8888/videostream.cgi?rate=0
  --sout='#duplicate{

    dst="transcode{
      width=320,height=240,fps=25,vcodec=h264,vb=256,acodec=none,
      venc=x264{profile=baseline,level=30,keyint=30,ref=1},
      sfilter=marq{marquee=\"[%Y-%m-%d %H:%M:%S]\",position=8,size=18}
    }:std{access=livehttp{
        seglen=10,delsegs=true,numsegs=5,
        index=/var/www/streaming/cats.m3u8,
        index-url=/streaming/cats-########.ts},
      mux=ts{use-key-frames},
      dst=/var/www/streaming/cats-########.ts}",

    dst="transcode{
      width=640,height=480,fps=25,vcodec=theo,vb=512,acodec=none,
      sfilter=marq{marquee=\"[%Y-%m-%d %H:%M:%S]\",position=8,size=18}
    }:http{mux=ogg,dst=127.0.0.1:8081/catcam.ogg}"

  }'

Of the two transcodes (“dst=”), the second is more straightforward. It creates an Ogg format stream using the Theora video codec, which modern browsers should be able to cope with. This is a video stream being served from VLC’s built-in web server, so I’ll need to proxy it via Apache. The configuration also applies a filter (“sfilter=”) to add a timestamp on the video stream.

The first transcode uses the new HTTP Live Streaming support in VLC. This is a rather elegant specification from Apple (which is why I selected it for the iOS clients) for simple and efficient delivery of streaming video. It creates a set of files and assumes you have a web server to serve them. Each file contains a few seconds of video, and the client retrieves them and plays one after another. The “########” templates an incrementing number into each segment filename. Again, the timestamp is added to the video stream.

CPU load for the above runs at about 60% (in the VM) on the dual-core Athlon X2 245e processor. I wrapped the above in an Upstart init file, and just in case VLC gets its knickers in a twist, I added a cron job to periodically stop and start the service.
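For illustration, the wrapper can be as simple as this – a sketch rather than my exact files, with the job name, script path and restart time all invented:

# /etc/init/catcam.conf
description "Cat Cam VLC transcoder"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
exec /usr/local/bin/catcam-vlc.sh    # script containing the cvlc command above

# /etc/cron.d/catcam – bounce the transcoder nightly
0 4 * * * root service catcam restart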

Now on to Apache. It needs to proxy the Ogg stream, serve the Live Streaming files, and prevent any other access to the web server:

# redirect any non-cat requests to the cat index.html
RewriteEngine on
RewriteCond %{REQUEST_URI} !^/streaming/cats.*
RewriteCond %{REQUEST_URI} !^/stream/catcam.ogg$
RewriteCond %{REQUEST_URI} !^/index.html$
RewriteRule ^(.*) http://%{HTTP_HOST}/index.html [R,L]

ProxyReceiveBufferSize 16384
# reverse proxy only – 'On' here would create an open proxy
ProxyRequests Off
ProxyVia On
ProxyPreserveHost On

<Proxy *>
    Order deny,allow
    Allow from all
</Proxy>

# VLC server stream
ProxyPass /stream/catcam.ogg http://localhost:8081/catcam.ogg
ProxyPassReverse /stream/catcam.ogg http://localhost:8081/catcam.ogg

Last but not least for this server, we need a web page which offers up the two video streams. This uses an HTML5 video tag:

<!DOCTYPE html>
<html>
    <head>
        <title>Cat Cam</title>
        <meta http-equiv="content-type" content="text/html; charset=ISO-8859-1">
        <META HTTP-EQUIV="Pragma" CONTENT="no-cache">
    </head>
    <body>
        <h1>Cat Cam</h1>
        <video id="video" autoplay="autoplay">
            <source src="/streaming/cats.m3u8">
            <source src="/stream/catcam.ogg" type="video/ogg; codecs=theora">
            Your browser doesn't appear to support the HTML5 <code>&lt;video&gt;</code> element.
        </video>
    </body>
</html>

All that remains is to enable a NAT rule and firewall pinhole on the home router for the web server (which is, of course, in the DMZ network connected directly to the router).
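If your router is a Linux box running iptables, the NAT rule and pinhole might look like this – purely illustrative, with the WAN interface name and the server’s DMZ address made up:

iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 80 \
  -j DNAT --to-destination 172.16.30.20:80
iptables -A FORWARD -i ppp0 -p tcp -d 172.16.30.20 --dport 80 -j ACCEPT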

Let’s see the end result, taken on my iPhone this evening, also demonstrating the automatically activated night vision mode (video embedded in the original post).

It’s nice to be able to check in on the wee beasties when I’m out at work. Other than a lot of reading about VLC, it wasn’t particularly difficult to do, and I think the end result is really quite good.

Virtual Machine on Mythbuntu
Wed, 04 Jan 2012

I have a Linux box running the excellent Mythbuntu (Ubuntu-based) distribution, headless (that is, without a monitor). Quite a lot of the time it’s sat around doing nothing (and even during recording or playback the CPU is idle).

For some side-projects I wanted a clean Linux installation to mess about with. It seemed a good idea to run virtual machines and make the most of existing hardware; what surprised me was just how easy this turned out to be :-)

The Ubuntu documentation for KVM is excellent, I must say, but I fancied distilling things further and blogging here, as I typically do to record most of my technical adventures. I’m not going to bother with any of the GUI VM builder tools or even the Q&A install script, but simply specify the VM config fully, up front.

Optionally, check whether your CPU has virtualization extensions – any fairly recent desktop chip should do. On Ubuntu there’s a command called kvm-ok, or you can poke /proc/cpuinfo:

# kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

# egrep -q '(vmx|svm)' /proc/cpuinfo && echo 'good to go!'
good to go!

First up install the KVM software:

# apt-get install qemu-kvm virtinst

This will pull in all the necessary packages. On other platforms it should be similar, but the virtinst package is often renamed (e.g. virt-install or vm-install).

Before getting stuck in to KVM we need to reconfigure the system’s network adapter to be a bridge. I prefer to set a static IP for servers on my home LAN and use the /etc/network/interfaces file for configuration:

# cat > /etc/network/interfaces
auto lo eth0 br0
iface lo inet loopback
iface eth0 inet manual
iface br0 inet static
    address <IP-ADDRESS>
    network <NETWORK-ADDRESS>
    netmask <NETMASK>
    broadcast <BROADCAST>
    gateway <GATEWAY>
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0
    bridge_maxwait 0
    post-up ip link set br0 address <MAC-ADDRESS>

(hit ctrl-D)

Obviously, fill in the blanks for your own system’s IP and MAC address details. Next we can blow away Ubuntu’s network mangler daemon and poke the KVM service into life:

# apt-get --purge remove network-manager
# /etc/init.d/networking restart
# service libvirt-bin start

Now find somewhere on your disk for the VMs and a little script to live, and create a directory. I named mine /opt/vm. In there, try starting with this little shell script:

#!/bin/bash
virt-install --name=sandbox --ram=512 --vcpus=2 --os-type=linux \
  --autostart --disk=path=/opt/vm/sandbox.img,size=50 \
  --graphics=vnc,listen=0.0.0.0,port=5900 --noautoconsole \
  --cdrom=/opt/vm/mythbuntu-11.10-desktop-i386.iso

Walking through the above, it should be clear we’re creating a new VM called sandbox (this is the name KVM knows it by, not a hostname), with 512MB RAM, two virtual CPUs, a Linux-friendly boot environment, and 50GB (sparse) disk. The VM will be automatically booted by the KVM service when its host system boots. The last line specifies an installation CD image from which the new VM will boot.

For the graphics configuration I’ve asked for a headless system with the console being offered up via a VNC port on the host server. Note that the listen=0.0.0.0 is essential to connect remotely (e.g. over your home LAN) to the console because otherwise the VNC port is simply bound to the loopback interface.
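For example, from another machine on the LAN (hostname assumed):

vncviewer mythbox:0    # VNC display 0 = TCP port 5900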

Running the above will bring the new VM into life:

# ./sandbox.sh

Starting install...
Creating storage file sandbox.img                      |  50 GB     00:00
Creating domain...                                     |    0 B     00:01
Domain installation still in progress. You can reconnect to
the console to complete the installation process.

What KVM means by “installation still in progress” is that it knows this system is installing from the boot CD, so you should go right ahead and fire up VNC and connect to the console (port 5900 on the host server) to complete the process.

You’ll find that KVM saved the sandbox VM configuration in XML format in the /etc/libvirt/qemu directory, so that’s where to go to tweak the settings. Good documentation is available at the KVM website.

Be aware, however, that because KVM assumed the attached CD ISO was only needed for initial install, it’s not featured in the saved config as a permanent connection. You can, of course, remedy this (check out the virt-install man page for starters).
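One way to remedy it is to add a cdrom stanza to the saved domain XML – a sketch, with the device names assumed:

<disk type='file' device='cdrom'>
  <source file='/opt/vm/mythbuntu-11.10-desktop-i386.iso'/>
  <target dev='hdc' bus='ide'/>
  <readonly/>
</disk>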

To finish off, here’s how to manage the lifecycle (start, restart, blow away, etc) of the VM. Use the virsh utility which can either be run with a single instruction or with no parameters for an interactive CLI:

# virsh
Welcome to virsh, the virtualization interactive terminal.
virsh # list
 Id Name                 State
----------------------------------
 10 sandbox              running

virsh # destroy
error: command 'destroy' requires <domain> option
virsh # destroy sandbox
Domain sandbox destroyed

virsh # create sandbox
error: Failed to open file 'sandbox': No such file or directory

virsh # create sandbox.xml
Domain sandbox created from sandbox.xml

virsh # list
 Id Name                 State
----------------------------------
 11 sandbox              running

Try the help command, and note that the VM’s XML settings file may need updating if you change things (see dumpxml).
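The usual round-trip looks like this (virsh also has an edit subcommand that wraps the same steps in $EDITOR):

virsh dumpxml sandbox > sandbox.xml    # capture the current settings
# edit sandbox.xml to taste, then load the new definition:
virsh define sandbox.xml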

I hope this is a useful and quick tutorial for KVM on Ubuntu… Good Luck!

Hacking # in OS X
Wed, 30 Nov 2011

To get a # sign on an Apple keyboard you use the Option (or Alt) key + 3. This seems terribly klunky to me, and # is of course used quite a bit in programming and sysadmin work.

This hack remaps another key on the keyboard to produce the # character. I chose the funny squiggle that’s to the left of the number 1 key (§). This is the Section sign, used in document formatting. Just create a file at ~/Library/KeyBindings/DefaultKeyBinding.dict which contains the following:

{
    /* this will make all § turn into # */
    "\UA7" = ("insertText:", "#");
}

Any app that uses Apple’s Cocoa interface widgets for text input will pick this up after being restarted. There are some that don’t (perhaps TextMate? I haven’t checked that one, so if you know, please comment).

A lot more information about this is available at this excellent page on the Cocoa Text System, including some other neat hacks. Enjoy!

Is it silly that tmux is fun?
Mon, 15 Aug 2011

No, I don’t think it’s a bad thing to get a zing of excitement when you find a new tool that improves your life. Maybe you know what I mean – that feeling of happiness at saving time, remembering more easily how to do things, and satisfaction with a new workflow.

Recently I migrated from the venerable screen to tmux, and whilst it’s one of those changes where the old tool had no real show-stopping problems, tmux immediately feels cleaner and better thought through.

I’ll leave you to read the docs and list of features yourself, but please do check this tool out if you’re an avid screen user. I’ve already got many more tmux sessions/windows/panes open than I ever felt comfortable with in screen, saving me a lot of time and effort when working remotely.

The Limoncelli Test
Thu, 28 Jul 2011

Over at the excellent Everything Sysadmin blog is a simple test which can be applied to your Sysadmin team to assess its productivity and quality of service. It’s quite straightforward – just 32 things a good quality team ought to be doing, with a few identified as must-have items.

Of course I’m not going to say anything about my current workplace, but I thought it would be interesting to assess my previous team as of October 2010, when I left. I’m incredibly proud of the work we did, and of our efficiency and effectiveness in delivering services with limited resources. That’s reflected in the score of (drumroll…) 31 out of 32!

If you have a Sysadmin team, or work in one, why not quickly run through the test for yourself?

Migrate SourceForge CVS repository to git
Wed, 22 Jun 2011

Updated to include promoting and pushing tags.

I recently had need to migrate some SourceForge CVS repositories to git. I’ll admit I’m no git expert, so I Googled around for advice on the process. What I ended up doing was sufficiently distinct from any other guide that I feel it worth recording the process here.

The SourceForge wiki page on git is a good start. It explains that you should log into the Project’s Admin page, go to Features, and tick to enable git. Although it’s not made clear, there’s no problem having both CVS and git enabled concurrently.

Enabling git for the first time will initialize a bare git repository for your project. You can have multiple repositories; the first is named the same as the project itself. If you screw things up, it’s OK to delete the repository (via an SSH login) and initialize a new one.

Just like the SourceForge documentation, I’ll use USERNAME, PROJECTNAME and REPONAME within commands. As just mentioned, the initial configuration is that the latter two are equal, until you progress to additional git repositories.

Let’s begin by grabbing a copy of the CVS repository with complete history, using the rsync utility. When you rsync, there will be a directory containing CVSROOT (which can be ignored) and one subdirectory per module:

mkdir cvs && cd cvs
rsync -av rsync://PROJECTNAME.cvs.sourceforge.net/cvsroot/PROJECTNAME/* .

Grab the latest cvs2git code and copy the default options file. Change the run_options.set_project setting to point to your project’s module subdirectory:

svn export --username=guest http://cvs2svn.tigris.org/svn/cvs2svn/trunk cvs2svn-trunk
cp cvs2svn-trunk/cvs2git-example.options cvs2git.options
vi cvs2git.options
# edit the string after run_options.set_project, to mention cvs/PROJECTNAME

Also in the options file, set the committer name mappings in the author_transforms settings. This is needed because CVS logs only show usernames but git commit logs show human name and email – a mapping can be used during import to create a sensible git history.

vi cvs2git.options
# read the comments above author_transforms and make changes

But how do you know which CVS usernames need mapping? One solution is to run through this export and git import without a mapping, then run git shortlog -se to dump the committers. Blow the new git repo away, and re-import after configuring cvs2git author_transforms.
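The output makes unmapped committers obvious (the usernames here are invented):

$ git shortlog -se
    42  jbloggs <jbloggs>
     7  asmith <asmith>

Each CVS username then gets an author_transforms entry along the lines of 'jbloggs' : ('Joe Bloggs', 'joe@example.com').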

The cvs2git utility works by generating the input files used by git’s fast-import command:

cvs2svn-trunk/cvs2git --options=cvs2git.options --fallback-encoding utf-8
git clone ssh://USERNAME@PROJECTNAME.git.sourceforge.net/gitroot/PROJECTNAME/REPONAME
cd REPONAME
cat ../cvs2svn-tmp/git-{blob,dump}.dat | git fast-import
git reset --hard

At this point, if you’re going to continue using this new git repository for work, remember to set your user.name, user.email and color.ui options.

Now you’re ready to push the repo back to SourceForge. I did test myself that disabling so-called developer access to the repo in the SourceForge Project Member settings page does in fact prevent write access, as expected.

git push origin master

If you had tags on the CVS repo (git tag -l), they’ll have been imported as lightweight tags. Best practice is always to use annotated tags, so this short script will promote them for you:

git config user.name "Firstname Lastname"
git config user.email "me@example.com"
git tag -l | while read ver;
  do git checkout $ver;
  git tag -d $ver;
  GIT_COMMITTER_DATE="$(git show --format=%aD | head -1)" git tag -a $ver -m "prep for $ver release";
  done
git checkout master

Verify the tags are as you want, using something like:

git tag -l | while read tag; do git show $tag | head -3; echo; done

And then push them to the repository with:

git push --tags

Something you might want to do is set a post-commit email hook. For this you SSH to SourceForge, and if you have multiple projects remember to connect to the right one!

ssh -t USER,PROJECT@shell.sourceforge.net create
cd /home/scm_git/P/PR/PROJECTNAME

Download the post-receive-email script and place it in the hooks subdirectory; make it executable. Also set the permissions to have group-write, so your project colleagues can alter it if required. Set the necessary git options to allow the script to email someone after a commit. Season to taste.

curl -L http://tinyurl.com/git-post-commit-email > hooks/post-receive
chmod +x hooks/post-receive
chmod g+w hooks/post-receive
git config hooks.emailprefix "[git push]"
git config hooks.emailmaxlines 500
git config hooks.envelopesender noreply@sourceforge.net
git config hooks.showrev "t=%s; printf 'http://PROJECTNAME.git.sourceforge.net/git/gitweb.cgi?p=PROJECTNAME/REPONAME;a=commitdiff;h=%%s' ; echo;echo; git show -C ; echo"
git config hooks.mailinglist PROJECTNAME-COMMITS@lists.sourceforge.net

Remember to subscribe noreply@sourceforge.net to your announce list, if needed. Finally, set a friendly description on the repository for use by the git web-based repo browser:

echo 'PROJECTNAME git repository' > description

One other thing I did was enable an SSH key on my SourceForge account, as this makes life with SSH-based git much smoother :-) If you need to create additional git repositories, or even to replace the one created automatically, it’s just a case of issuing the git command:

cd /home/scm_git/P/PR/PROJECTNAME
git --git-dir=REPONAME init --shared=all --bare

Good luck with your own migrations, and happy coding!

A Strategy for Opsview Keywords
Fri, 20 May 2011

At my previous employer, and recently at my current one, I’ve been responsible for migration to an Opsview-based monitoring system. Opsview is an evolution of Nagios which brings a multitude of benefits. I encourage you to check it out.

Since the 3.11.3 release, keywords have been put front and centre of the system’s administration, so I want to present here what I’ve been working on as a strategy for their configuration. Keywords can support three core parts of the Opsview system:

  1. The Viewport (a traffic-lights status overview)
  2. User access controls (what services/hosts can be acknowledged, etc)
  3. Notifications (what you receive emails about)

Most important…

My first bit of advice: do not ever set the keywords when provisioning a new Host or Service Check. On those screens you can’t see the complete context of keywords, and it’s far too easy to create useless duplication. You should instead associate keywords with hosts and services from the Configuration/Keywords screen.

Naming Convention

Okay, let’s go to that screen now, and talk about our naming convention. Yes, there needs to be one, so that you can look at a keyword in another part of Opsview and have a rough idea what it might be associated with. Here’s the template I use, and some examples:

<type>-[<owner>-]<thing>

device-ups
server-linux
service-smtpmsa
service-nss-ntpstratum3

Let’s say you have a Linux server running an SMTP message submission service and an NTP Stratum 3 service. I would create one keyword for the underlying operating system (CPU, memory, disk, etc), named “server-linux“. I’d create another for the SMTP service as “service-smtpmsa” and another for the NTP as “service-ntpstratum3“. If your Opsview is shared between a number of teams, it might also be useful to insert the managing team for that service in the name, as I’ve done with NSS, above. The type “device” tends to be reserved for appliances which fulfil one function, so you don’t need to separate out their server/service nature.

With this in place, if the UNIX Systems team manages the server and OS, and another team manages the applications stack on the box, we’ve got keywords for each, allowing easy and fine grained visibility controls. When creating the keywords, you should go into the Objects tab and associate it with the appropriate hosts and service checks. I find this much more straightforward than using the Keywords field on the actual host and service check configuration pages.

Viewport

Let’s look at each of the three cornerstone uses I mentioned above, in turn. First is the Viewport. That’s easy enough to enable for a keyword by toggling the radio button and assigning a sensible description (such as “Email Message Submission Service” for “service-smtpmsa”). Which users can see which items in their own viewport is configured in the role (Advanced/Roles) associated with that user. I’d clone off one new role per user, go to the Objects tab, remove all Host Groups and Service Groups, and select only some Keywords. Job done – the user now sees those items in their viewport.

Actions

Next up is the ability for a user to acknowledge, or mark as down, an item. In fact it’s done in the same way as the viewport, that is, through a role. That’s because roles contain, on the Access tab, the VIEWPORTACCESS item for viewports and the ACTIONSOME/NOTIFYSOME items for actioning alerts. Because it’s currently only possible for a user to have one role, you cannot easily separate these rights for different keywords – a real pity. But I have no doubt multiple roles will come along, just like multiple notification profiles.

Notifications

Which brings us to the final item. Again I’d create a new notification profile for each user, so that it’s possible to opt them in or out of any service notifications. Using keywords makes things simple – are you just managing the underlying OS? Then you can have notifications about that, and not the application stack. It doesn’t stop you seeing the app stack status in your viewport, though. Because the notification profile is associated with a user, you’ll only be offered keywords that have been authorized in their role, which is a nice touch.

And finally…

In each of these steps the naming convention has really helped, because when looking at keywords the meaning “these hosts” or “this service” will (hopefully) jump out. If I were scaling this up, I’d have it all provisioned via the Opsview API from a configuration management or inventory database, and updated nightly. This is another way naming conventions help – they are friendly to automation.

Cfengine3 on Debian Squeeze for local management
Tue, 10 May 2011

Dialling the nerd factor up to 11, I’ve decided to store configuration for my VPS server in git and manage it with Cfengine3. Joking aside, this is a sound decision: having the VCS repo makes backups simple and trustworthy, and configuration management motivates me to keep on using that repository.

On Debian Squeeze it’s a simple case of apt-get install cfengine3, with the caveat that this packaging bug meant I had to hack /etc/cfengine3 to symlink from /var/lib/cfengine3/inputs.

[Edit: A colleague of mine, David, suggests that the package should link cfengine3's masterfiles to /etc, and I'm inclined to agree.]

Anyone familiar with Cfengine2 will have a good head start on the Cfengine3 configuration; however, it’s still a bit of a learning curve (but we know complex problems rarely have simple solutions). The first file read is promises.cf, which can include other files (“inputs”, in any order) and lists the promise bundles and their order of execution:

body common control {
    bundlesequence  => {
            "main"
    };

    inputs => {
        "site.cf",
        "library.cf"
    };
}

The library.cf file is simply a bunch of macros or templates. For example, the built-in copy_from command is augmented with some sane defaults and named local_copy:

body copy_from local_copy(from) {
    source  => "$(from)";
    compare => "digest";
    copy_backup => false;
}

This is then used in my site.cf file to install some cron jobs:

bundle agent main {
    vars:
        "repo" string => "/path/to/git/repo";

    files:
        "/etc/cron.d"
            handle => "cron_files",
            comment => "copy crontab files to /etc/cron.d",

            copy_from => local_copy("$(repo)/etc/cron.d"),
            depth_search => recurse("inf"),
            perms => p("root","444");
}

This is a trivial example, and could be made better. For instance, all files in the target directory have their permissions set (via the “p” macro), whereas it would make sense to set only the files we copy, not any that already exist.
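The recurse and p bodies used in site.cf live in my library.cf too; I haven’t shown them above, but reconstructed along the lines of Cfengine’s standard library they would look something like this:

body depth_search recurse(d) {
    depth => "$(d)";
}

body perms p(user,mode) {
    owners => { "$(user)" };
    mode   => "$(mode)";
}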

Hopefully this post shows that Cfengine3 configuration isn’t that hairy, and once the principles are installed in your head it’s a case of browsing the reference manual and building up promise libraries.

Postscript

I’d like to note that the Cfengine3 configuration mini-language could be better designed. Some statements are terminated by semicolons, as in the body above; others are separated by commas but still semicolon-terminated, as in the bundle; and braced sections are inconsistently semicolon-terminated. This leads to awkward syntax errors when writing new promises :-(

Furthermore, I feel the language would benefit from some noise keywords, for example:

body copy_from local_copy(from) {

versus

body copy_from as local_copy(from) {

The latter makes it slightly more clear which is the base primitive and which the new macro name. I’m a great fan of the use of syntactic sugar, in moderation, and intuitive configuration mini-languages.

Starting irssi in screen at reboot
Sun, 27 Mar 2011

Another short aide-mémoire. My IRC client is irssi, running in a screen session on my Linux server in London. I connect to it via SSH, but if the server restarts I want this all set up automatically. Fire up crontab -e and add the following:

@reboot /usr/bin/screen -dmUS irc /usr/bin/irssi

A brief rundown of the options:

  • @reboot : run this cron job once, when the cron daemon starts after a system reboot
  • -dm : start screen in ‘detached’ mode
  • -U : run screen in UTF-8 mode (see my other post)
  • -S irc : give the screen session a friendly name, for reattaching using -r

This requires that the irssi configuration at least have all the channel and server configuration and is set to auto-connect, of course.
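Reattaching later, from an SSH login, is then just:

screen -U -r irc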

Fixing character encoding with irssi and screen
Sun, 27 Mar 2011

Yes, this old chestnut. Time and again I forget the steps to get a new (Debian-ish) system working properly such that I can ssh in and reattach to a screen’d irssi session and have character encoding work properly. That is, I can enter a £ sign and it doesn’t come out like I just swore at the channel.

Probably one of the simplest guides is by Salvatore Iovene, from which I summarise the following steps:

  1. sudo dpkg-reconfigure locales
  2. start screen with -U
  3. Add some options to the irssi configuration
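For the first step, it’s worth confirming the shell environment really did end up UTF-8 (en_GB.UTF-8 here is just an example):

locale            # LANG and LC_CTYPE should report a UTF-8 locale
echo $LANG        # e.g. en_GB.UTF-8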

For that last part, I think I worked out which stanza the config should go in:

settings = {
  core = {
    recode = "yes";
    recode_autodetect_utf8 = "yes";
    recode_fallback = "UTF-8";
    recode_out_default_charset = "UTF-8";
    recode_transliterate = "yes";
  };
  "fe-common/core" = {
    term_charset = "UTF-8";
  };
};