PerlStalker’s SysAdmin Notes

Notes from the life of a systems administrator

System Center Data Protection Manager 2012 R2 Always Crashing

DPM is great for backing up Microsoft stuff but I ran into something really, really odd. In short, the msdpm service kept crashing. It took a while but I was eventually able to track down which object was causing the crash by trying each failed sync one at a time. (DPM troubleshooting step 1.) For the record, it was the system protection for a Windows Server 2012 R2 server but I’m not convinced that that matters. Unfortunately, the event log on the DPM server was nearly useless.

Log Name:      Application
Source:        MSDPM
Date:          6/1/2016 2:55:45 PM
Event ID:      999
Task Category: None
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      srvdpm2.ad.adams.edu
Description:
The description for Event ID 999 from source MSDPM cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.

If the event originated on another computer, the display information had to be saved with the event.

The following information was included with the event: 

An unexpected error caused a failure for process 'msdpm'.  Restart the DPM process 'msdpm'.

Problem Details:
<FatalServiceError><__System><ID>19</ID><Seq>6871</Seq><TimeCreated>6/1/2016 8:55:45 PM</TimeCreated><Source>DpmThreadPool.cs</Source><Line>163</Line><HasError>True</HasError></__System><ExceptionType>FormatException</ExceptionType><ExceptionMessage>Input string was not in a correct format.</ExceptionMessage><ExceptionDetails>System.FormatException: Input string was not in a correct format.
   at System.Text.StringBuilder.AppendFormat(IFormatProvider provider, String format, Object[] args)
   at System.String.Format(IFormatProvider provider, String format, Object[] args)
   at Microsoft.Internal.EnterpriseStorage.Dls.Trace.TraceProvider.Trace(TraceFlag flag, String fileName, Int32 fileLine, Guid* taskId, Boolean taskIdSpecified, String formatString, Object[] args)
   at Microsoft.Internal.EnterpriseStorage.Dls.WriterHelper.SystemStateWriterHelper.RenameBMRReplicaFolderIfNeeded(String roFileSpec)
   at Microsoft.Internal.EnterpriseStorage.Dls.WriterHelper.SystemStateWriterHelper.ValidateROListOnPreBackupSuccess(Message msg, RADataSourceStatusType raDatasourceStatus, Guid volumeBitmapId, List`1&amp; missingVolumesList, ReplicaDataset&amp; lastFullReplicaDataset, ROListType&amp; roList)
   at Microsoft.Internal.EnterpriseStorage.Dls.Prm.ReplicaPreBackupBlock.ValidateROList(Message msg, RADataSourceStatusType raDatasourceStatus, Guid datasetId)
   at Microsoft.Internal.EnterpriseStorage.Dls.Prm.ReplicaPreBackupBlock.RAPreBackupSuccess(Message msg)
   at Microsoft.Internal.EnterpriseStorage.Dls.TaskExecutor.Fsm.Engine.ChangeState(Message msg)
   at Microsoft.Internal.EnterpriseStorage.Dls.TaskExecutor.TaskInstance.Process(Object dummy)
   at Microsoft.Internal.EnterpriseStorage.Dls.TaskExecutor.FsmThreadFunction.Function(Object taskThreadContextObj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()</ExceptionDetails></FatalServiceError>


the message resource is present but the message is not found in the string/message table

Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="MSDPM" />
    <EventID Qualifiers="0">999</EventID>
    <Level>2</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2016-06-01T20:55:45.000000000Z" />
    <EventRecordID>44561</EventRecordID>
    <Channel>Application</Channel>
    <Computer>srvdpm2.ad.adams.edu</Computer>
    <Security />
  </System>
  <EventData>
    <Data>An unexpected error caused a failure for process 'msdpm'.  Restart the DPM process 'msdpm'.

Problem Details:
&lt;FatalServiceError&gt;&lt;__System&gt;&lt;ID&gt;19&lt;/ID&gt;&lt;Seq&gt;6871&lt;/Seq&gt;&lt;TimeCreated&gt;6/1/2016 8:55:45 PM&lt;/TimeCreated&gt;&lt;Source&gt;DpmThreadPool.cs&lt;/Source&gt;&lt;Line&gt;163&lt;/Line&gt;&lt;HasError&gt;True&lt;/HasError&gt;&lt;/__System&gt;&lt;ExceptionType&gt;FormatException&lt;/ExceptionType&gt;&lt;ExceptionMessage&gt;Input string was not in a correct format.&lt;/ExceptionMessage&gt;&lt;ExceptionDetails&gt;System.FormatException: Input string was not in a correct format.
   at System.Text.StringBuilder.AppendFormat(IFormatProvider provider, String format, Object[] args)
   at System.String.Format(IFormatProvider provider, String format, Object[] args)
   at Microsoft.Internal.EnterpriseStorage.Dls.Trace.TraceProvider.Trace(TraceFlag flag, String fileName, Int32 fileLine, Guid* taskId, Boolean taskIdSpecified, String formatString, Object[] args)
   at Microsoft.Internal.EnterpriseStorage.Dls.WriterHelper.SystemStateWriterHelper.RenameBMRReplicaFolderIfNeeded(String roFileSpec)
   at Microsoft.Internal.EnterpriseStorage.Dls.WriterHelper.SystemStateWriterHelper.ValidateROListOnPreBackupSuccess(Message msg, RADataSourceStatusType raDatasourceStatus, Guid volumeBitmapId, List`1&amp;amp; missingVolumesList, ReplicaDataset&amp;amp; lastFullReplicaDataset, ROListType&amp;amp; roList)
   at Microsoft.Internal.EnterpriseStorage.Dls.Prm.ReplicaPreBackupBlock.ValidateROList(Message msg, RADataSourceStatusType raDatasourceStatus, Guid datasetId)
   at Microsoft.Internal.EnterpriseStorage.Dls.Prm.ReplicaPreBackupBlock.RAPreBackupSuccess(Message msg)
   at Microsoft.Internal.EnterpriseStorage.Dls.TaskExecutor.Fsm.Engine.ChangeState(Message msg)
   at Microsoft.Internal.EnterpriseStorage.Dls.TaskExecutor.TaskInstance.Process(Object dummy)
   at Microsoft.Internal.EnterpriseStorage.Dls.TaskExecutor.FsmThreadFunction.Function(Object taskThreadContextObj)
   at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.ExecutionContext.Run(ExecutionContext executionContext, ContextCallback callback, Object state, Boolean preserveSyncCtx)
   at System.Threading.QueueUserWorkItemCallback.System.Threading.IThreadPoolWorkItem.ExecuteWorkItem()
   at System.Threading.ThreadPoolWorkQueue.Dispatch()&lt;/ExceptionDetails&gt;&lt;/FatalServiceError&gt;
</Data>
  </EventData>
</Event>

As you can see, it’s super helpful. And by super helpful, I mean totally unhelpful. So, on to DPM troubleshooting step number 2: remove and re-add the system protection to the protection group. Unfortunately, that didn’t help.

There’s another log entry, about two entries up from the one I posted, that tries to be helpful.

Log Name:      Application
Source:        Windows Error Reporting
Date:          6/1/2016 2:55:48 PM
Event ID:      1001
Task Category: None
Level:         Information
Keywords:      Classic
User:          N/A
Computer:      srvdpm2.ad.adams.edu
Description:
Fault bucket , type 0
Event Name: DPMException
Response: Not available
Cab Id: 0

Problem signature:
P1: msdpm
P2: 4.2.1205.0
P3: msdpm.exe
P4: 4.2.1205.0
P5: System.FormatException
P6: System.Text.StringBuilder.AppendFormat
P7: 9F6A23D0
P8: 
P9: 
P10: 

Attached files:
C:\Windows\Temp\tmp541A.xml
C:\Program Files\Microsoft System Center 2012 R2\DPM\DPM\Temp\MSDPMCurr.errlog.2016-06-01_20-55-45.Crash

These files may be available here:


Analysis symbol: 
Rechecking for solution: 0
Report Id: 36bbe16a-283b-11e6-80c9-2c600c6007a0
Report Status: 262144
Hashed bucket: 
Event Xml:
<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System>
    <Provider Name="Windows Error Reporting" />
    <EventID Qualifiers="0">1001</EventID>
    <Level>4</Level>
    <Task>0</Task>
    <Keywords>0x80000000000000</Keywords>
    <TimeCreated SystemTime="2016-06-01T20:55:48.000000000Z" />
    <EventRecordID>44563</EventRecordID>
    <Channel>Application</Channel>
    <Computer>srvdpm2.ad.adams.edu</Computer>
    <Security />
  </System>
  <EventData>
    <Data>
    </Data>
    <Data>0</Data>
    <Data>DPMException</Data>
    <Data>Not available</Data>
    <Data>0</Data>
    <Data>msdpm</Data>
    <Data>4.2.1205.0</Data>
    <Data>msdpm.exe</Data>
    <Data>4.2.1205.0</Data>
    <Data>System.FormatException</Data>
    <Data>System.Text.StringBuilder.AppendFormat</Data>
    <Data>9F6A23D0</Data>
    <Data>
    </Data>
    <Data>
    </Data>
    <Data>
    </Data>
    <Data>
C:\Windows\Temp\tmp541A.xml
C:\Program Files\Microsoft System Center 2012 R2\DPM\DPM\Temp\MSDPMCurr.errlog.2016-06-01_20-55-45.Crash</Data>
    <Data>
    </Data>
    <Data>
    </Data>
    <Data>0</Data>
    <Data>36bbe16a-283b-11e6-80c9-2c600c6007a0</Data>
    <Data>262144</Data>
    <Data>
    </Data>
  </EventData>
</Event>

The important bit there is the .Crash file. Well, sort of. I jumped to the end of the file as I would normally do with a log file and I saw this.

180C    280C    06/01   20:55:45.690    09  everettexception.cpp(761)       8AE83798-7A85-466A-80CA-22CA66582965    CRITICAL    Exception Message = Input string was not in a correct format. of type System.FormatException, process will terminate after generating dump

That’s what’s killing the service but it’s not super helpful. I spent a couple of hours running that down but it was a false trail. The real culprit was this message further up in the log.

180C    280C    06/01   20:54:55.393    09  AppAssert.cs(130)       8AE83798-7A85-466A-80CA-22CA66582965    WARNING value of non-nullable parameter @RecoverableObjectMachineName is null

That’s interesting. Even more interesting is the fact that Google gave me zero results when I searched for “value of non-nullable parameter @RecoverableObjectMachineName is null”. Now I’m getting somewhere … or not.

Actually, that provided a bit of a hint. For some reason, the machine name is not being set on the object. As you may be able to tell, DPM kinda expects the machine name to be set.

The solution was DPM troubleshooting step number 3: remove all of the protected items and reinstall the agent. I may or may not have sacrificed a rubber chicken at this point.

I added the system protection items back into DPM and waited. Actually, as it was the end of the day, I went home and played Star Wars Battlefront. When I came back in the morning everything was still up and running. No crashes and a successful backup. I call that a win!

SCCM Updates and Powershell

Microsoft is doing cool things with what was originally Server Core and is now the base install of Windows Server. Combine that with Powershell remoting and there’s a lot of power available from the command line. Unfortunately, it’s surprisingly difficult to tell if updates are available and to trigger their installation.

If you’re not using SCCM, you can run sconfig.exe and select option 6 to manage your updates but packages and applications pushed through SCCM don’t show up there.

Now, SCCM has reports so that you can see what’s pending but sometimes it’s nice to be able to see what the server itself sees as pending and to make sure that it’s getting your planned updates. Fortunately, it’s possible to pull that information out of WMI and CIM. The cool thing about powershell is that it’s really easy to pipe this update information into other tools, if needed.

Without further ado, here are a couple of one-liners for playing with SCCM.

List updates: gcim -namespace root\ccm\clientsdk -query 'Select * from CCM_SoftwareUpdate'

Easy, huh? Remember, powershell’s pipeline is pretty powerful. If all you need is a count of updates, you can do something like this: (gcim -namespace root\ccm\clientsdk -query 'Select * from CCM_SoftwareUpdate' | measure-object).Count

You can also assign that list of updates to a variable like so: $updates = gcim -namespace root\ccm\clientsdk -query 'Select * from CCM_SoftwareUpdate'. Now you can use $updates in other commands without having to query CIM again.
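
If you want to poke at the individual updates, $updates behaves like any other collection of objects in the pipeline. Here’s a minimal sketch of the idea; Name is a property I’d expect to find on CCM_SoftwareUpdate, but run Get-Member against your own results to see exactly what your client exposes.

# Reusing the $updates variable from the query above
$updates | Select-Object -ExpandProperty Name    # just the update names
$updates | Get-Member -MemberType Property       # list the properties available on your client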

Now that you’ve seen the list, you might want to tell the server to install anything that has a deadline set. iwmi -namespace root\ccm\clientsdk -Class CCM_SoftwareUpdatesManager -name InstallUpdates([System.Management.ManagementObject[]](gwmi -namespace root\ccm\clientsdk -query 'Select * from CCM_SoftwareUpdate'))

Now that the updates are installed, you can check to see if the server needs a reboot by running (icim -namespace root\ccm\clientsdk -ClassName CCM_ClientUtilities -Name DetermineIfRebootPending).RebootPending.
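
These one-liners also combine nicely with the Powershell remoting I mentioned at the top of this post. Here’s a minimal sketch of checking the pending update count on a few servers at once; the host names are placeholders and it assumes remoting (WinRM) is already enabled on them. Each result comes back tagged with PSComputerName so you can tell which server it came from.

# Count pending SCCM updates on several servers in one shot
Invoke-Command -ComputerName srv1, srv2, srv3 -ScriptBlock {
    (gcim -namespace root\ccm\clientsdk -query 'Select * from CCM_SoftwareUpdate' | measure-object).Count
}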

If you’re RDP’d into a host, you can open the Software Center by running c:\windows\ccm\scclient.exe.

One last thing, I have a plugin for salt that can run a lot of these things for you across many hosts. It needs some work to fit better into the salt way of doing things. Contributions welcome.

Switching to Docker and CoreOS

I learned about Docker over the summer at ApacheCon in Denver. While Docker, itself, wasn’t on the program, it came up several times as various people were talking about PaaS systems. Once I started to dig into it, I understood why people were so excited. After playing with it more on my own, I was hooked. I decided that I wanted to move this site to Docker.

In this post I’ll tell you a bit about what I did, how I did it and why. What I’m not going to do is explain the full workings of Docker. If you want that, check out the Docker documentation.

What is Docker?

Solomon Hykes, on the Docker site, describes Docker thusly, “Docker is an open platform for developers and sysadmins to build, ship, and run distributed applications.” In other words, it’s a really convenient way to wrap up an application and everything it needs in one nice neat little package and run everything from within that package.

One of the things I like about it is that individual pieces can be isolated. For example, I write a lot of Perl and, as much as I like it, doing a lot with CPAN on a server can make a real mess. That’s especially true if you have multiple Perl apps using lots of different libraries. Keeping all of that up to date and making sure a needed upgrade for one app doesn’t break another is time consuming and, frankly, not very fun. Docker allows you to keep each piece as separate from the others as you want.

On top of all of that goodness, Docker containers can be reliably replicated. I know that if I build a container and fully test it out on my laptop, it will work exactly the same when it’s deployed. That consistency is great when it comes to the whole DevOps thing.

Converting to Docker

Originally, this site was running on Nginx on an Ubuntu server in my living room. The web server worked fine (it’s a simple site after all) but the apps would sometimes freak out.

I use Octopress to generate the site and I have a couple of Perl scripts that do other things for me. That worked alright but Octopress is written in Ruby and the gem system is even more fussy than the CPAN. I don’t know how many times updates broke because something changed with a Ruby gem. Even worse, when I checked out the site from git on another box, I had some sort of problem getting the deploy step to work.

I had a great opportunity because I needed to move my site off of my home server. I decided to set up shop at Digital Ocean. DO is a great place to run your own virtual private server and they make it very easy to run certain applications like Wordpress and, more importantly for me, Docker out of the box. Their Docker application installs Docker on Ubuntu and is all ready to host Docker containers. Docker installs easily on Ubuntu even without their app but, hey, I’m all for making things easier on myself.

The first thing I needed to do was break down my site into pieces to Dockerize. The current Docker best practice is to have each container do a single task. In this case, it was pretty simple to pick out those tasks. I would need four containers. The first container I’d need is nginx. Number two was for Octopress and three and four were for my Perl scripts.

It took a little bit of working with Dockerfiles to make sure all of the needed libraries were installed for each app but that wasn’t hard. The real head scratcher was Octopress. You see, Octopress is designed to deploy the generated pages via rsync to a remote server. The rsync part is fine but I wasn’t going to run ssh or an rsync server in the nginx container just to publish updates. I had to hack on Octopress a little to allow it to publish to a local directory and I was golden.

Now, let’s dig into the containers.

Nginx

This is the easiest of the bunch. On my first pass, I created a Dockerfile which used the official nginx repository from the Docker Hub Registry. The only thing it changed was the location of the document root to match what was on my server. It turns out that that was a bad idea. It was easier to use the nginx repo unchanged and change apps to look to the new document root. It’s one less container for me to maintain and, thanks to other magic I’ll get to when I talk about CoreOS, it’s automatically updated.

I run the container with the following command: /usr/bin/docker run --rm --name perlstalker_web-server -v /var/www:/usr/share/nginx/html -p 80:80 nginx. There’s one piece of special magic in there: I map /var/www to the default nginx doc root /usr/share/nginx/html. This keeps the site data persistent even though the container is deleted after every run and provides a nice hook for the other containers.

Octopress

One of the first things I did to prep for this move (after I fixed my rsync issue) was to move my repo up to github. Now I had an easy way to get my site onto the server. The next step was to build the container.

Below is the Dockerfile.

Octopress Dockerfile (Dockerfile on GitHub)
FROM ubuntu:trusty
MAINTAINER Randall Smith <perlstalker@gmail.com>

RUN apt-get update

RUN apt-get install -y git ruby ruby-dev gems rbenv ruby-redcloth build-essential python-pygments nodejs

WORKDIR /usr/local/src
RUN git clone https://github.com/PerlStalker/perlstalker.vuser.org.git perlstalker.vuser.org
WORKDIR perlstalker.vuser.org
RUN gem install bundler
RUN rbenv rehash
RUN bundle install

ENTRYPOINT git pull && rake generate && rake deploy

When you run docker build against this Dockerfile, it installs all of the necessary requirements, clones the site from github and then finishes the install. Once that’s complete, running the container will pull the latest updates from github, generate the static pages and deploy the site into the doc root.

The cool thing is that this container can be built once and run as often as required. (I know. I’m easily amused.) The running containers can even be removed on completion (with the --rm flag to docker run) and re-run.

The other trick is to mount the document root from the nginx container so that the generated files from Octopress get put in the right place. Use the --volumes-from flag like so: /usr/bin/docker run --rm --name perlstalker_deploy --volumes-from perlstalker_web-server perlstalker/sysadmin-deploy.

Scriptures Feed

One of the scripts I use on my site generates an RDF feed for my daily scripture study. The code, including the Dockerfile, is up on github. I’m not going to go into details. You can check out the Dockerfile for yourself. Again, the trick is that it mounts the doc root from the nginx container.

CoreOS

I ran on Ubuntu for a while but ran into an annoying problem when it came to updates. You see, Digital Ocean uses an external kernel when starting Ubuntu VMs. It’s great, in one way, because it’s really fast to start up. On the other hand, it causes no end of problems when working with Docker. I frequently ran into issues where I would forget to change the kernel I was booting with to match what was just installed by apt. Sometimes the VM wouldn’t boot, other times docker refused to start.

The other catch is that, to be honest, I got really tired of applying patches to an OS that isn’t really doing anything. All of the fun stuff happens in Docker. I didn’t exactly want to have to worry about Ubuntu.

Fortunately, Digital Ocean rolled out the option to create CoreOS VMs. I decided to get in on that action despite the beta status.

The big problem in converting to CoreOS was that I had to learn systemd. That was annoying but wasn’t too bad. I’d like to share a couple of the units to show the magic.

The first is the main web server.

perlstalker.service
[Unit]
Description=perlstalker.vuser.org web server
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/mkdir /var/www
ExecStartPre=/usr/bin/docker pull nginx
ExecStart=/usr/bin/docker run --rm --name perlstalker_web-server -v /var/www:/usr/share/nginx/html -p 80:80 nginx

[Install]
WantedBy=multi-user.target

I want to draw your attention to the ExecStartPre lines. The first one creates the persistent storage for the web site pages. The - prefix tells systemd to ignore errors such as the directory already existing.

It’s the second one that’s interesting. Every time the service restarts, it pulls an updated nginx image. That means that every time my server reboots or the service is restarted, I get a fully updated, patched and, theoretically, secure web server.

The next big piece is the Octopress deployment.

deploy.service
[Unit]
Description=Generate and deploy site
After=perlstalker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
RemainAfterExit=no
Type=simple
ExecStartPre=/usr/bin/docker pull perlstalker/sysadmin-deploy
ExecStart=/usr/bin/docker run --rm --name perlstalker_deploy --volumes-from perlstalker_web-server perlstalker/sysadmin-deploy

Now, every time I run systemctl start deploy, the Octopress container regenerates and deploys the site.

I want to show you two last systemd units which trigger the update of my scriptures feed, in part because I want to remember the crazy way systemd replaces cron.

Every cron-style job needs two units. One is the .service file which defines what actually happens.

scriptures.service
[Unit]
Description=Generate scriptures feed
After=perlstalker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
RemainAfterExit=no
Type=simple
ExecStartPre=/usr/bin/docker pull perlstalker/scripture-feed
ExecStart=/usr/bin/docker run --rm --name perlstalker_scriptures --volumes-from perlstalker_web-server perlstalker/scripture-feed

The second one is a .timer file which sets the schedule.

scriptures.timer
[Unit]
Description=Generate scriptures feed
        
[Timer]
OnBootSec=25min
OnCalendar=*-*-* 04:30:00
Persistent=true
        
[Install]
WantedBy=timers.target

Make sure that you run systemctl enable scriptures.timer and systemctl start scriptures.timer. I forgot to do that then wondered why my feed didn’t update. :-)

I want to make a point here that may have been lost in my digression into systemd mazes. I didn’t have to change any of my containers. I simply plugged them into systemd on CoreOS and my site was flying again. If, at some point, I decide to move to some other service such as GCE or EC2, I can drop my containers in, easy peasy.

Anyway, the point of this little screed is to show how a few building blocks or, shall I say, containers, can be stacked together to build whatever you need. Even if it’s something as trivial as my little site.

Endnotes With Org-mode

I was recently writing an internal peer review for work. Because I’m a happy emacs user, I wrote the peer review in org-mode and exported it to PDF using org-latex-export-to-pdf. The problem was that our internal format requires that I use endnotes and emacs exports my footnotes as, well, footnotes. So, here’s the quick and dirty on how I got the exporter to give me endnotes.

First of all, you need to tell LaTeX that you want to use endnotes. I put it at the beginning of my org file with the rest of the boilerplate.

#+LaTeX_HEADER: \usepackage{endnotes}
#+LaTeX_HEADER: \let\footnote=\endnote

The first line loads the endnotes package and the second says that you want it to treat your footnotes as if they were endnotes.

Then, at the end of your document, drop this block.

* Endnotes
#+LaTeX: \begingroup
#+LaTeX: \parindent 0pt
#+LaTeX: \parskip 2ex
#+LaTeX: \def\enotesize{\normalsize}
#+LaTeX: \theendnotes
#+LaTeX: \endgroup

That works great except that, by default, LaTeX will add a second heading called “Notes”. I’m using my own heading so that it shows up nicely in the table of contents. It’s possible to make the default heading go away with this line.

#+LaTeX_HEADER: \renewcommand{\notesname}{}

Unfortunately, it leaves the extra space where the heading would have been. I’m sure there’s a way to get rid of that but I didn’t take the time to figure it out. If you know, drop a comment below.

Importing iCal Into Org-mode

I’ve been using emacs and org-mode for some time to manage my tasks. Org-mode has a great feature which shows an agenda view including upcoming scheduled items and deadlines. One of the things that was missing was the ability to view my calendar (which is in Google Calendar) in the agenda.

There are a couple of ways of dealing with syncing the calendar data. One of the ways I tried was org-caldav. It kind of worked. Sort of. It did import the calendar but it failed spectacularly with repeating events set in Google by me or others. Since most of the things on my calendar are repeating events, this was a problem.

Alright, so org-caldav didn’t work for me. I could have looked for something else that did two-way sync but, in the end, it wasn’t that important to me. So, I worked up a way to pull the iCal feed from Google and convert it into an org-mode file.

The first piece is a pretty simple script that pulls down the iCal files and pumps them through the translation script. I run this from cron every ten minutes.

fetch-calendars.pl
#!/usr/bin/env perl
use warnings;
use strict;

my $debug = 0;

my $wget = "/usr/bin/wget";
my $ical2org = "$ENV{HOME}/bin/ical2org.pl";

my $base_dir = "$ENV{HOME}/.calendars";
my $org_dir  = "$ENV{HOME}/org/calendars";

my %calendars = (
    'home' => 'http://www.google.com/calendar/ical/...',
    'work' => 'https://www.google.com/calendar/ical/...',
    );

chdir $base_dir;
# acad and ooo don't work
my @cals = qw(home work);

foreach my $cal (@cals) {
    next unless $calendars{$cal};

    my $cmd = "$wget -q -O $cal.ics.new $calendars{$cal} && mv $cal.ics.new $cal.ics";
    print STDERR "$cmd\n" if $debug;
    system $cmd;

    next unless -r "$cal.ics";

    $cmd = "$ical2org -c $cal < $base_dir/$cal.ics > $org_dir/$cal.org.new";
    print STDERR "$cmd\n" if $debug;
    system $cmd;

    if ( -s "$org_dir/$cal.org.new" ) {
  $cmd = "cp $org_dir/$cal.org.new $org_dir/$cal.org";
  print STDERR "$cmd\n" if $debug;
  system $cmd;
    }
}

The fun part is in ical2org.pl.

ical2org.pl
#!/usr/bin/env perl
use warnings;
use strict;

use Data::ICal;
use Data::Dumper;
use DateTime::Format::ICal;

use Getopt::Long;
my $category = 'ical';

# Only sync events newer than this many weeks in the past.
# Set to 0 to sync all past events.
my $syncweeksback = 2;

GetOptions(
    'category|c=s' => \$category
);

my $cal = Data::ICal->new(data => join '', <STDIN>);

#print Dumper $cal;
my %gprops = %{ $cal->properties };

print "#+TITLE: ical entries\n";
print "#+AUTHOR: ".$gprops{'x-wr-calname'}[0]->decoded_value."\n";
print "#+EMAIL: \n";
print "#+DESCRIPTION: Converted using ical2org.pl\n";
print "#+CATEGORY: $category\n";
print "#+STARTUP: overview\n";
print "\n";

print "* COMMENT original iCal properties\n";
#print Dumper \%gprops;
print "Timezone: ", $gprops{'x-wr-timezone'}[0]->value, "\n";

foreach my $prop (values %gprops) {
    foreach my $p (@{ $prop }) {
  print $p->key, ':', $p->value, "\n";
    }
}

foreach my $entry (@{ $cal->entries }) {
    next if not $entry->isa('Data::ICal::Entry::Event');
    #print 'Entry: ', Dumper $entry;

    my %props = %{ $entry->properties };

    # skip entries with no start or end time
    next if (not $props{dtstart}[0] or not $props{dtend}[0]);

    my $dtstart = DateTime::Format::ICal->parse_datetime($props{dtstart}[0]->value);
    my $dtend   = DateTime::Format::ICal->parse_datetime($props{dtend}[0]->value);
    # Perhaps only sync some weeks back
    next if ($syncweeksback != 0
       and $dtend < DateTime->now->subtract(weeks => $syncweeksback)
       and !defined $props{rrule});

    my $duration = $dtend->subtract_datetime($dtstart);

    if (defined $props{rrule}) {
  #print "  REPEATABLE\n";
  # Bad: There may be multiple rrules but I'm ignoring them
  my $set = DateTime::Format::ICal->parse_recurrence(
      recurrence => $props{rrule}[0]->value,
      dtstart    => $dtstart,
      dtend      => DateTime->now->add(weeks => 1),
  );

  my $itr = $set->iterator;
  while (my $dt = $itr->next) {
      $dt->set_time_zone(
      $props{dtstart}[0]->parameters->{'TZID'} ||
      $gprops{'x-wr-timezone'}[0]->value
      );

      my $end = $dt->clone->add_duration($duration);
      next if ( $end < DateTime->now->subtract(weeks => $syncweeksback) );
  
      print "* ".$props{summary}[0]->decoded_value."\n";
      print '  ', org_date_range($dt, $end), "\n";
      #print $dt, "\n";
      print  "  :PROPERTIES:\n";
      printf "  :ID: %s\n", $props{uid}[0]->value;

      if (defined $props{location}) {
      printf "  :LOCATION: %s\n", $props{location}[0]->value;
      }

      if (defined $props{status}) {
      printf "  :STATUS: %s\n", $props{status}[0]->value;
      }

      print "  :END:\n";

      if ($props{description}) {
      print "\n", $props{description}[0]->decoded_value, "\n";
      }
  }
    }
    else {

  print "* ".$props{summary}[0]->decoded_value."\n";

  my $tz = $gprops{'x-wr-timezone'}[0]->value;
  $dtstart->set_time_zone($props{dtstart}[0]->parameters->{'TZID'} || $tz);
  $dtend->set_time_zone($props{dtend}[0]->parameters->{'TZID'} || $tz);

  print '  ', org_date_range($dtstart, $dtend), "\n";

  print  "  :PROPERTIES:\n";
  printf "  :ID: %s\n", $props{uid}[0]->value;

  if (defined $props{location}) {
      printf "  :LOCATION: %s\n", $props{location}[0]->value;
  }

  if (defined $props{status}) {
      printf "  :STATUS: %s\n", $props{status}[0]->value;
  }

  print "  :END:\n";

  if ($props{description}) {
      print "\n", $props{description}[0]->decoded_value, "\n";
  }

    }

#    print Dumper \%props;
}

sub org_date_range {
    my $start = shift;
    my $end = shift;

    my $str = sprintf('<%04d-%02d-%02d %s %02d:%02d>',
     $start->year,
     $start->month,
     $start->day,
     $start->day_abbr,
     $start->hour,
     $start->minute
       );
    $str .= '--';
    $str .= sprintf('<%04d-%02d-%02d %s %02d:%02d>',
     $end->year,
     $end->month,
     $end->day,
     $end->day_abbr,
     $end->hour,
     $end->minute
       );

    return $str;
}

I let Data::ICal parse the feed and DateTime::Format::ICal do the heavy lifting of parsing the date and time information from each entry. (Have I mentioned how cool the CPAN is?)

Most of the code is just reformatting the iCal entry into org-mode syntax so that emacs can pull it into the agenda.

There’s one bit of magic I’m not showing here. In my .emacs config, I have this little gem.

(add-hook 'org-mode-hook 'auto-revert-mode)

That tells emacs to automatically revert (reload) any org-mode file that changes on disk while the buffer is open. Since I drop the converted files in a directory scanned by org-mode, emacs opens each converted calendar file in a buffer when the agenda view is first run. When the files are updated by the scripts above, emacs sees the changes and reverts the buffers. Anytime I regenerate the agenda view, emacs uses the updated buffers and the view is up-to-date.

Once again, this is a one way sync. I can’t edit the generated org-mode files and see the changes reflected in the Google calendars. If I want to make changes to my calendar, I have to do it through Google’s web interface. This actually works out for the best because Google provides all of the scheduling hooks to make sure others who I’ve invited to meetings can attend. I can’t get that, easily, in emacs.

So, there you go. A relatively pain free way to pull any iCal calendar into emacs.

Update 2016-01-21: I’ve incorporated a suggestion from Anders Johansson that prevents ical2org.pl from syncing old events. I set the default to two weeks in the past. To get the old behavior set $syncweeksback to 0.

Update 2016-02-03: Thanks to gr4nchio for a fix for recurring events.

Cluster SSH With Tmux

I was working today and, as I glanced at #lopsa, I saw this little gem.

13:50 <geekosaur> tmux has a broadcast-to-all-terminals thing

Wait, what?! I had to check it out. It turns out that tmux has a window option called synchronize-panes which lets you “Duplicate input to any pane to all other panes in the same window.”

I’ve been using cluster ssh to occasionally log into a bunch of my boxes at once and run the same command on all of them at the same time. It’s really nice for troubleshooting or checking on the same thing on a bunch of servers all at once. It works pretty well but has the drawback that it depends on having X available. That’s a concern if I have to bridge through my machine at work and want to talk to a cluster.

I played around a bit and came up with a way to replicate what I was doing with cssh. The first bit is the following script which is based on this example.

#!/bin/bash
HOSTS=

if [ "$1" = 'cluster1' ]; then
    HOSTS="host1 host2 host3"
elif [ "$1" = 'cluster2' ]; then
    HOSTS="hostA hostB hostC hostD hostE hostF"
else
    exit
fi

for host in $HOSTS
do
    tmux splitw "ssh $host"
    tmux select-layout tiled
done
tmux set-window-option synchronize-panes on

Tmux can be controlled completely from the command line or from a script. This script takes a cluster name on the command line and opens an ssh session to each host in the list in a new pane. The last line is the magic. It turns on the synchronization so what gets typed in one pane is echoed to the others as well.

Now, this isn’t perfect. Unfortunately, the tiling doesn’t end up right when I use more than three or four servers. A quick C-z M-5 takes care of it but it’s annoying. (Note: I changed the prefix from the default of C-b to C-z. Adjust your thinking accordingly.)

I’ve made the following changes to ~/.tmux.conf to make this easier to use.

bind-key M-s command-prompt -p "cluster" "new-window -n %1 'tssh %1'"
bind-key M-a set-window-option synchronize-panes

The first line maps C-z M-s so that it prompts me for a cluster name then opens a new window with all of the connections.

The second line provides an easy way to toggle the synchronization on and off. That makes it nice for ad hoc cluster views. Sometimes, I’m looking at a couple of servers and I want to perform a few commands on them both to check things. A quick C-z M-a and I can issue the commands to both servers. Hitting C-z M-a turns it off again.

There you go. A quick and easy way to work on many servers all at once without the need for X.

Switching to a Standing Desk

System administrators have a fairly sedentary job. With the exception of occasionally racking or unracking servers, we’re pretty much desk bound. I’m certainly no exception.

Several months ago, I noticed that sitting all day was starting to cause me pain in the backs of my thighs. Now, I don’t know about you, but I’m not a big fan of pain, especially while I’m working. The pain would, eventually, drive me from my chair. Standing relieved the pain almost immediately but I couldn’t work standing up because my monitors were still sitting on my desk, too low to see.

I had heard of standing desks before and started to look around to see how I could cobble one together on the cheap. There was an old workbench/desk that we were pulling out of the server room that I could use to raise my work surface up so that I could stand. I talked to my boss and he suggested that I look online for something that would do the job without having a seven foot workbench sticking up over the five foot cubicle walls.

In my searching, I discovered the Ergotron WorkFit-S. At almost $400, it’s a little spendy but my boss agreed to get it for me. You can see what it looked like set up on my desk when I first got it.

It was a little strange working standing up, at first, but after a while, I got used to it. It was easy to do since I wasn’t dealing with the pain of sitting down all the time. The best thing about using the WorkFit-S is that it’s a dual sit-stand system which means that if I get tired of standing, I can simply slide it down and sit for a while.

Not long after I took the above picture, I rearranged my desk so that I had work surfaces at both standing and sitting height. Generally, all I need while I’m standing is a place to put a notepad to jot things down on while I’m working. (There is a work surface add-on for the WorkFit-S but I didn’t get it.)

On average, I stand about half of my day. Some days a little more, some days a little less. It’s all about listening to your body. When my feet start to hurt from standing, I’ll sit down. If my legs start to hurt from too much sitting, it’s back up and I’m standing.

One of the nicest things about standing is that it’s a great way to deal with “that 2:30 pm feeling”. I’ll tell ya, it’s a lot harder to doze off when you’re standing up. I’ve found that if I’m having trouble focusing or am feeling a little tired, standing helps me stay focused.

One thing I began to notice a month or so back is that the WorkFit-S is a bit short for me while I’m standing. I’m six feet tall and, lifted all the way up, the monitors were about 3-4 inches below what was comfortable. The good news is that Ergotron makes a Tall-User kit which can add something like eight inches to the height of the monitors as well as adding a bit of tilt. Set at its lowest level, the Tall-User kit added the extra height I needed. (Actually, it added a bit too much when slid all the way up so I, simply, don’t slide it all the way up.)

There was an extra benefit that I hadn’t counted on. When I’m working with someone on a problem in my cubicle, I can raise my WorkFit-S to the standing position which makes it much easier for both of us to see. Plus, it’s right at eye level if I’m up drawing things out on the whiteboard. I wouldn’t buy it just for that reason but it’s a nice plus.

Now, why did I write this on a blog that’s mostly full of tech notes and documentation? Simple. The notes on this blog are about things I’ve learned which make my job as a sysadmin easier or more enjoyable. Moving to a standing desk certainly qualifies.

Now, here a few of things that you should be aware of if you decide to go with a standing desk.

  • Be sure that you wear comfortable shoes. I’d also recommend a pad like you’ll see cashiers using in the store. You’ll last a lot longer standing up. Also, don’t be afraid to move. I tend to pace a little when I’m standing. Moving around will help your feet as well as helping your circulation.
  • If you go with a standing desk that allows you to sit down, don’t be afraid to do it. It’s all about listening to your body.
  • The first week or two using a standing desk are going to be a bit painful if you aren’t used to standing that much. I tried to go in three hour chunks when I first started. (Three hours standing and then one hour sitting.) That helped me get used to standing but I’ve cut back a bit. As I said, I’m standing about half the time now. Usually, I’ll stand for a couple of hours then sit for a couple of hours but it varies, especially if I have meetings or am working in the server room.
  • If you’re working in a cube farm rather than a private office, be aware that your monitor may be above the cube walls when you’re standing. If possible, position your monitors so that they aren’t as visible. (As sysadmins, we occasionally work with sensitive data. You don’t want to show it off to the entire office.)
  • Don’t be afraid to try it out with a few bricks and boards before spending big bucks on something.

That’s about it. I highly recommend using a standing desk. I feel a lot better, physically, if I can stand for a part of the day than I ever did after spending the whole day glued to my chair. In the end, it’s all about being comfortable and not letting your work environment have a negative impact on your health.

Have you considered using a standing desk or are you using one now? Post your experience in the comments. I’d love to hear about it.

Emacs and Tmux

Hello. My name is PerlStalker and I’m an emacs user. I love emacs and use it for nearly everything but there are a few things it’s not good at. (“Like editing,” I hear all you warped vi users cry.) Among them, and most important to me, are window management and terminals.

Let’s start with terminals. I use eshell from time to time to do quick and dirty things on the command line but I always run into weird things that don’t work like I expect. For example, there’s a little one liner I run to convert the mp4 videos of my podcast that I get from YouTube to mp3. Eshell chokes on it. The more powerful and better featured term-mode and its more friendly cousin multi-term-mode are pretty good but I still run into tools, from time to time, that break them. (To be fair, it seems to be better in emacs 24 but I haven’t played with it as much.)

My main use of terminals, however, is logging into Linux servers and making changes. I can do weird things with eshell and tramp to edit files but it’s kinda slow. If I try to edit a file on the server through an editor in term-mode, all sorts of things break.

The other thing that emacs is bad at is window management. A little terminology before I go further: emacs uses the term “frames” for what X11 and Microsoft call windows. The term windows is used by emacs when it splits a frame to display multiple buffers. Emacs can split frames horizontally and vertically all day and not have a problem. Where things get hairy is if you use something like gnus which feels like it can do whatever it wants to your window layout at any time. It’s a real pain in the neck when I have multiple buffers open, looking at different things, then hit M-x gnus to check my email and, boom, all of my buffers have been hidden in favor of whatever gnus wants to do. Not cool, emacs. Not cool.

However, something that is good at handling window layouts and shells is tmux. Tmux is similar to screen in that it provides an “always-on” session that you can access from multiple places. Where it beats screen is in its scriptability and window management. Those two features make it especially nice for what I’m going to show you. On a side note, you can apply most of this to screen with a little work but it was really easy to do in tmux.

The super secret ingredient to all of this is emacs server and emacsclient. Emacs server allows you to connect to a running instance of emacs to do things (like edit files or run elisp functions) without starting a whole new emacs instance. That makes it really fast. You can start emacs server by running M-x server-start or have it happen automatically when emacs starts by putting (server-start) in $HOME/.emacs. You can even use emacs --daemon to start emacs in the background when you log in or with the @reboot tag in cron to start it when the machine starts. The --daemon option has the fringe benefit of leaving emacs running even if you log out.

Now to how this all works with tmux. First, I need to redefine “window”. (Don’t you just love overloading definitions?) Basically, tmux only lets you see one window at a time. You can switch between them but you can never see more than one. However, you can split them into “panes” and this is where tmux shines. I’m not going to get into it here but you can see the man page to see how easy it is to create, resize and navigate between panes.

The first problem I needed to solve was to quickly and easily ssh into servers. Based on the examples in the docs, I added these lines to my $HOME/.tmux.conf.

bind-key S   command-prompt -p "host" "split-window 'ssh %1'"
bind-key C-s command-prompt -p "host" "new-window -n %1 'ssh %1'"

If I hit the prefix (C-b by default, C-z in my case) followed by C-s, tmux prompts me for a host name (which can also be user@host) and then opens a new window for the ssh session. If I do C-z S instead, it opens the ssh session in a new pane in the same window. Using a pane rather than a new window is useful when I’m checking things on multiple servers at the same time. The window or pane closes when the ssh session is finished.

Now for the fun. Here’s where emacsclient comes in. Let’s say that I want to open emacs inside tmux. By adding this magic to .tmux.conf and reloading the config I can open emacs in a new window (C-z y) or new pane (C-z C-y).

bind-key y   new-window -n "emacs"  "emacsclient -nw"
bind-key C-y split-window "emacsclient -nw"

Emacs opens extremely quickly because it’s already running. Even better, because it’s emacsclient, you can switch to any buffer that you already have open in other clients, even if you opened it in the X11 version of emacsclient.

That’s great but there are other things I want to do in emacs besides edit files. For example, suppose I want to jump into gnus to check my email.

bind-key g   new-window -n "gnus" "emacsclient -nw --eval '(gnus)'"
bind-key C-g split-window "emacsclient -nw --eval '(gnus)'"

C-z g opens gnus in a new tmux window and C-z C-g opens gnus in a new pane.

I’m using a personal convention that whatever key I bind, by itself, opens in a new window and control plus that key opens in a new pane.

If you can script it in elisp, you can make it a shortcut in tmux. I have shortcuts to open w3m …

bind-key W   new-window -n "w3m" "emacsclient -nw --eval '(w3m)'"
bind-key C-w split-window "emacsclient -nw --eval '(w3m)'"

… and I have one to open the RT command line with emacsclient set as the editor in a multi-term buffer. (The editor magic is hidden in a separate script (~/bin/rtc) to get around some restrictions with the RT command line and the EDITOR environment variable.)

(defun rtc ()
  (interactive)
  (if (get-buffer "*rtc*")
      (switch-to-buffer "*rtc*")
    (rtc-create)
    )
)

(defun rtc-create ()
  (eshell t)
  (rename-buffer "*rtc*")
  (goto-char (point-max))
  (eshell-kill-input)
  (insert "~/bin/rtc")
  (eshell-send-input)
)

Then in .tmux.conf:

bind-key C-r split-window "emacsclient -nw --eval '(rtc)'"

Now I can open my ticket list and edit tickets in a pane while I actually work on the ticket in another pane.

You could use this as an example for opening any shell app within emacs. Obviously, if you just want to bring up the app outside of emacs, you can do something magical like this …

bind-key C-m command-prompt -p "man" "split-window 'exec man %%'"

… which prompts you for a man page when you hit C-z C-m then opens it in a new pane. It’s super convenient if you want to check the docs for a tool you’re using. (I could have used emacs man- or woman-mode instead of calling man directly but this was simple and easy.)

If, for some strange reason, you would rather use vi to edit a file, you could simply replace man in the previous command with vi and change the key binding.

For even more special sauce, you could use byobu with tmux to display any number of fun widgets at the bottom of the window. I use a custom script combined with gcalcli to display the next thing I have coming up on my calendar.

The combination of tmux (+byobu) and emacsclient gives me a very efficient and very powerful way to get things done at work. If you’re an emacs user, I highly recommend looking into emacsclient even if you don’t need tmux or screen but combining the two makes for much joy and happiness.

Managing /etc/hosts With Puppet

So, here’s the situation. I have a stack of VM servers running KVM and libvirt. The hosts need to connect to a SAN for ISO storage and, potentially, VM disks. The problem is that the VM running DNS may not be up yet when the host starts. That’s a problem since I’m referencing the SAN by its host name rather than the IP address. Yes, I could change all of my configs to use the IP instead but host names are a lot easier to deal with, most of the time.

Well, I could work around the lack of DNS by putting an entry for the server in /etc/hosts but then I’d have to update it on every server if I ever change the IP address. Fortunately, puppet makes it easy. Sort of.

Puppet is a wonderful tool for managing Linux (and other *nix) servers. It’s a little weak, though, when all you want to do is add a line to a file. That’s exactly what I wanted to do with /etc/hosts.

The good news is that puppet has a hook into the tool Augeas. That makes editing the config relatively easy but plugging that into puppet is still a little messy. To make it easier, I created a class and define to tidy that up a bit.

Update [2012-09-03 Mon 07:15]: Puppet actually makes this easier than I thought. There already exists a host data type that I totally missed before. Dominic Cleal also pointed me to a module that he wrote that adds Augeas providers to some of the default data types, including host.

class hosts {
  define entry(
    $ipaddr,
    $canonical,
    $aliases = 'UNSET' # I want to make this an array
    )
    {
      augeas { "create_$title":
        context => '/files/etc/hosts',
        changes => [
                    "ins 01 after 1",
                    "set 01/ipaddr $ipaddr",
                    "set 01/canonical $canonical"
                    ],
        onlyif  => "match *[ipaddr = '$ipaddr'] size == 0"
      }

      augeas { "update_$title":
        context => '/files/etc/hosts',
        changes => [
                    "set *[ipaddr = '$ipaddr']/canonical $canonical"
                    ],
      }

      Augeas["create_$title"] -> Augeas["update_$title"]

      # It would be great if I could loop this
      if ($aliases == 'UNSET') {
        augeas { "alias_$title":
          context => '/files/etc/hosts',
          changes => [
                      "rm *[ipaddr = '$ipaddr']/alias[1]"
                      ],
        }
      }
      else {
        augeas { "alias_$title":
          context => '/files/etc/hosts',
          changes => [
                      "set *[ipaddr = '$ipaddr']/alias[1] '$aliases'"
                      ],
        }
      }
      Augeas["update_$title"] -> Augeas["alias_$title"]
    }
}

There are a couple of known issues. First, you can’t unset an entry. If you add a host with this and then decide that you no longer want it there, you can’t take it out. I don’t think it would be hard to add that feature but I haven’t.

Second, the define only lets you set the first alias. Augeas allows for multiple aliases and I could pass an array to the define but I don’t know how to loop through that list.

Anyway, here’s how you use the define within a node or class definition.

include hosts
hosts::entry { 'san':
  ipaddr    => '192.168.1.5',
  canonical => 'san.example',
  aliases   => 'san'
}

You can use as many hosts::entry blocks as you want.

Testing for “Bitness” in Configuration Manager 2012 App Deployments

We started our deployment of System Center Configuration Manager 2012 last week and I ran into an interesting problem.

One of the first apps I rolled out to test with was Strawberry Perl. I grabbed the 64-bit MSI and ran through the Create Application wizard and added the MSI to the deployment types. One quick deployment later and ConfigMgr was happily installing perl on my servers.

… Most of my servers.

You see, I still have a couple of 32-bit servers hanging around and the 64-bit MSI wouldn’t install. D’oh. So, I figured it would be easy to jump into the Requirements of the distribution and limit the package to 64-bit systems. It wasn’t. While there are options for RAM amounts, CPU speed and disk space, there’s nothing to test for CPU architecture.

"Create Requirements"

To fix this, I created a new Global Condition to test for the “bitness” of a server.

The information I’m looking for is in the AddressWidth property of the Win32_Processor class. You can see the list of properties by running gwmi Win32_Processor in powershell. If you run

gwmi -query "select * from Win32_Processor where AddressWidth = 32"

and get back a screen full of text, your system is 32 bit. If you specify the wrong value for AddressWidth, the command will exit with no output.
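
If you just want a quick sanity check on a box before building the Global Condition, the same property is easy to read directly from powershell. A minimal sketch:

# Prints 32 or 64, depending on the processor architecture the OS reports
(gwmi Win32_Processor | Select-Object -First 1).AddressWidth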

The condition properties should look something like this when you’re done.

"bitness Properties"

Once that was done, it was a simple matter of adding the check to the deployment.

"Create Requirement"

Set the Value field to 64 for 64-bit systems and 32 for 32-bit systems and you’re done.

Having said all that, I’m still not sure that I haven’t missed a setting somewhere. One would think that a test to see if an app matches the target architecture would have been a no-brainer to include. If there’s a setting I missed, please let me know because not having it just doesn’t make any sense.

Update [2012-05-01 Tue 08:25]: One of my co-workers pointed out that there is, in fact, an “easier” way.

When you add a requirement, there’s an option for Operating system. In the tree view at the bottom of the pane, you can select just the 64-bit version of the OS that you’re targeting.

"OS Requirements"

I totally missed that before. It takes a bit more clicky-clicky to use for every 64-bit or 32-bit only package you deploy but it may be more obvious to the next admin.