Can maintaining package dependencies in RPM be magic?

One of the biggest complaints I hear about packaging software is having to package up all the dependencies, and then being responsible for keeping them current. Obviously this is not the only complaint, but I’m not talking about the others at the moment.

A long time ago I had the idea for a website where you could aggregate status updates for software projects you care about, letting you grab all your updates without having to subscribe to billions of lists. I had this idea back when I was still installing things manually, because the package ecosystem was nowhere near as complete as it is today (in Debian or Fedora). Sometime after that I registered myswuf.com (my software update feed) with the intention of one day writing such a tool.

That was a long time ago, and obviously this kind of thing is an issue. So some really clever people (okay, at the very least more motivated) in Fedora came up with Anitya, a tool for monitoring releases. They even provide a public and freely consumable installation.

Anitya publishes events to fedmsg (the Fedora infrastructure message bus), which can be listened to as an unauthenticated feed.

Stanislav Ochotnický created a small proof-of-concept project called fedwatch, which listens to fedmsg and passes data from it to scripts in its run directory. My idea was to take release information and pass it to jobs in Jenkins that are set up to bump the relevant information in a spec file with rpmdev-bumpspec (>=1.8.5) and build the new version. In theory this would cover lots of releases with minimal overhead, leaving only the outlying changes to require hands-on work. I’ve put some completely unfinished code here.
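
A rough sketch of what one of those fedwatch run-directory scripts could look like; the argument layout, Jenkins URL, and job naming are all assumptions for illustration, not fedwatch’s actual interface:

    #!/bin/bash
    # Hypothetical fedwatch run-directory script: receive a release event and
    # kick a per-project Jenkins job that bumps and rebuilds the spec.
    # (Check fedwatch's docs for the actual fields it passes to its scripts.)
    project="$1"
    version="$2"

    # Trigger a parameterized Jenkins job; the job itself would run something like
    #   rpmdev-bumpspec --new "${VERSION}" ${project}.spec   (needs rpmdev-bumpspec >= 1.8.5)
    # and then submit the build.
    curl -s -X POST \
      "https://jenkins.example.com/job/rpm-${project}/buildWithParameters?VERSION=${version}"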

Kickstarting Fedora 21 Workstation with Cobbler

Quick warning: I’m writing this as more of a post-mortem. I didn’t get all the errors recorded, and I realize that will make this hard for some people to find. I am also simplifying the process a bit more than what I actually went through, so some of it might not be perfect. I’ll try to make that better, but don’t hold your breath on me re-creating all the things that failed just to update this.

Odds are that if you’ve randomly stumbled upon this post you are suffering. It’s that pain that can only be understood by someone who questions why something that has worked for decades suddenly breaks.

Now let’s be clear on something: I’m not against the Fedora.next concept. I think there is a lot of merit to it, and more power to those implementing it for having the drive and vision to do so. However, it is not fun spending several nights on a low and capped bandwidth network trying to figure out why something I’ve been doing for years doesn’t work.

So let’s step back a few weeks. My wife is opening an optometry practice, which is exciting and stressful. It is also not cheap. Bearing in mind that I have over 15 years of sysadmin/network/security/blah/blah/blah experience, I’m trying to save her a bit of money by handling her IT. As part of that I’m bringing a few basic concepts to the environment.

  • Disposal and Repeatability – I want to be able to rebuild any of the desktops or the server at the drop of a hat.  There are several tools that facilitate this like The Foreman and Cobbler.  I settled on the latter.  I’ll talk about that elsewhere.
  • Linux – I know that the medical industrial complex is not necessarily Linux friendly, but I bet I can do at least the desktops this way.  This is going to have some fun problems.
  • Security – Small offices are notorious for their lack of security. I’m going to cover user management, shared secrets, and good password policies.

But you want to know how to kickstart Fedora 21. Let’s move in that direction…

One of the first things I did was stand up a Cobbler instance and download the Fedora 21 Workstation media. Let’s import it.
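
For reference, the import attempt looked something like this (ISO path and names are examples):

    mount -o loop Fedora-Live-Workstation-x86_64-21.iso /mnt/f21-live
    cobbler import --name=f21-workstation --path=/mnt/f21-live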

If you dig into this a bit you will find that the Live Workstation media doesn’t have the bits necessary to match any of the signatures, and updating the signatures isn’t the solution. A cursory Google search shows that the recommended path for kickstarting Fedora 21 is to use the Server media with the Fedora 21 Everything repository. So we go download the Server DVD (or netinstall) and start syncing the Everything repository.
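
Roughly, that amounts to importing the Server media and adding the Everything repo. The mirror URL below is the public one; swap in a local mirror if bandwidth is an issue:

    mount -o loop Fedora-Server-DVD-x86_64-21.iso /mnt/f21-server
    cobbler import --name=f21 --path=/mnt/f21-server

    cobbler repo add --name=f21-everything \
        --mirror=http://download.fedoraproject.org/pub/fedora/linux/releases/21/Everything/x86_64/os/
    cobbler reposync --only=f21-everything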

Quick aside: what really happened in my late nights of making this work was that I found the above information, but confused the Everything repository with the Server Everything ISO. So I started with just the Everything ISO, which isn’t actually a bootable ISO with an installer. This led to weeping and gnashing of teeth. After a while of poking and prodding at this and reading poor documentation I realized there was a plain Everything repository too. Then, because of my bandwidth limitations, I had to sync the repository elsewhere and manually import it into the environment.

Awesomeness, that’s done. I then added this repo to my f21 profile and tried to kickstart. Using the simple default kickstart from Cobbler, the kickstart succeeded. However, I only ended up with a server-ish install because I hadn’t defined any custom package sets.
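
Attaching the repo to the profile is a one-liner; the profile name is whatever your Server import created (check cobbler profile list):

    cobbler profile edit --name=f21 --repos="f21-everything"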

I wanted a Fedora 21 workstation, so I looked at the available groups and started with the simple “@^Fedora Workstation” environment group. Restarted the process, and… fail. No such group. It turns out Cobbler’s reposync doesn’t grab comps.xml with its default settings. Edit /etc/cobbler/settings, add ‘-m’ to the reposync_flags setting, and then re-run the cobbler reposync.
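
The settings change is one line; keep whatever other flags your settings file already has:

    # /etc/cobbler/settings
    reposync_flags: "-l -n -d -m"

With comps available, the environment group can go straight into the kickstart’s package section:

    %packages
    @^Fedora Workstation
    %end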

I haven’t tested whether just re-running the reposync is enough; due to my constant back and forth I ended up doing a full re-sync at one point after adding the -m.

At this point it should be working. Let me know what doesn’t work and I’ll update.

Restart the flow

My good friend and Rackspace recruiter, Jill, has started a blog! Which is awesome. So we were talking and I pointed out how I’ve been slacking. I have actually written a blog post this year, and it succeeded in keeping my time between posts from reaching a year. But still, 11 months is weak.

To that end we’ve both made a schedule: me to fix my problem, and her to ramp up. Let’s see how this goes! :)

Update: and fail…. (it’s 3 months later without a single post)

Some RPM packaging thoughts and resources

Hopefully in the next two days my topic of ‘Using system level packaging’ will get picked up at our internal unconference. In preparation for that I figured I would compile a little bit of my data into a post so that I can point people to it as a reference.

First off, try reading this old blog post of mine.

Now that that diatribe is out of the way, on to some advice for developers.

Intended Audience

Did you start your project by defining the problem? As part of that definition did you consider who your target audience is?  In my opinion this is a very important part of the process related to system packaging.

Let’s say you’re working on a project that will be useful to run on OpenStack servers. If you utilize the latest Python libraries installed via pip and virtualenv, many of the systems that OpenStack runs on will not support those versions. However, if you follow the global requirements that OpenStack projects follow, it is much easier for your project to fit into that environment.

What if you have a brilliant idea to create a next-generation orchestration tool? Knowing that this would be an awesome tool for enterprises, which are typically willing to buy support even for your free open-source project, it makes sense to consider them from day one. As of 2013, many enterprise shops are running a mixture of RHEL5 and RHEL6. That means that Python 2.4 and 2.6, respectively, are the default Python versions available to you. In some cases there are additional versions available, but we’ll get to that next.

3rd Party Repositories for EL distributions

Now maybe you have decided that Red Hat Enterprise Linux and its derivatives are your target and have started gnashing your teeth.  “I don’t want to have to package all these dependencies!”  Luckily, some wonderful communities have popped up to help.  The tried and true repository, Extra Packages for Enterprise Linux (EPEL), has been around for a while and provides a nice stable extension to the base EL distribution.  Both fortunately and unfortunately, it has similar stability and version restrictions to base EL.

If you are using CentOS there are several additional repositories available for you.

Going back to the OpenStack example above, there is a fairly recent repository set that can be extremely helpful as well.  Red Hat Deployed OpenStack (RDO) provides the OpenStack global requirements, all nicely packaged up for public consumption. Here is some information about these repositories.

Software Collections

Software Collections is an extremely interesting new concept with lots of potential, stemming from the Fedora community and embraced by Red Hat. Generically speaking, the goal is to provide a consistent framework for parallel installation of different versions of software.  It supports anything from newer versions of Python and Ruby to new versions of databases like PostgreSQL and MySQL. By providing a framework, it becomes very possible for you to readily package your entire dependency stack in a way that can be installed in a standard RPMish way, but still isolated from the rest of the system!
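
As a taste of how that looks from the consumer side, here is a rough example; the python27 collection name is only illustrative and depends on which SCL repositories you have enabled:

    yum install python27                      # install the collection alongside the system python
    scl enable python27 'python --version'    # run one command against the collection
    scl enable python27 bash                  # or drop into a shell with the collection on PATH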

That’s it for now.

Creating signed debian apt repository

Update: 2014.03.19 Use freight

I recently had the pleasure(?) of having to re-setup an apt repository. It had been managed by reprepro, but apparently the concept of keeping only a single version of each package per distribution is central to reprepro’s implementation. That seemed kind of weird to me, as it did to some of the users of the repository. In an effort to resolve this and simplify our repository management I did some research and came up with a very simple solution.

I pieced this together from a couple of references, which are linked below where they come up.

We already had a GPG key, so I didn’t have to create one; I just followed the instructions in How to use GnuPG in an automated environment to generate a passwordless signing subkey using the existing key. If you don’t have a key already there are lots of resources available if you search around a bit. Here is one.

Once the subkey was created I removed the original GPG keypair for that user from the system, storing it somewhere safe, and replaced it with the subkey files I had just created. This allows us to revoke the subkey if it or the server is compromised; then we can generate a new subkey using the original key, and the consumers of the repository won’t need to change their installed key.
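
The rough shape of that subkey dance, assuming an existing key id of KEYID (the linked write-ups cover the details, including making the subkey passwordless):

    gpg --edit-key KEYID                # then: addkey -> RSA (sign only) -> save
    gpg --export-secret-subkeys KEYID > signing-subkey.gpg
    gpg --export KEYID > public.gpg

    # move the full keypair (~/.gnupg) somewhere safe, then re-import only the subkey
    gpg --delete-secret-keys KEYID
    gpg --import signing-subkey.gpg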

Next, we need to lay down two things: a directory and a config file. In my example the directory will be the root of your domain on a web server. I’m not going to go into how to set up a web server here.

The config file is for apt-ftparchive to generate the Release file. Following the instructions from the Automatic Debian Package Repository HOWTO, we generate /srv/repo.example.com/debian/Releases.conf:
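
Something along these lines works; the component name and description here are placeholders:

    APT::FTPArchive::Release {
      Origin "repo.example.com";
      Label "repo.example.com";
      Suite "debian";
      Codename "debian";
      Architectures "all";
      Components "main";
      Description "Internal apt repository";
    };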

In my repository we only have architecture-independent (“all”) packages: no source packages, nothing architecture specific. Adjust that field as necessary for your repository. We also only have one component section; I don’t think it even ends up being relevant for a repository this simple, and the same goes for Suite. What matters is that the codename matches your directory. I used debian for both, because ours is not specific to a Debian release such as “squeeze” or “wheezy”.

With these files in place I created the following script to build the repository, and saved it as /usr/local/bin/buildrepo.sh:
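
This is a sketch using the paths from this post rather than the exact original, so adjust as needed:

    #!/bin/bash
    set -e

    INCOMING=/home/bob/RELEASES
    REPO=/srv/repo.example.com/debian

    # pick up anything newly uploaded
    cp -f "${INCOMING}"/*.deb "${REPO}/" 2>/dev/null || true

    cd "${REPO}"

    # package indexes, plain and gzipped (see the note below about Ign messages)
    apt-ftparchive packages . > Packages
    gzip -c Packages > Packages.gz

    # Release file, signed with the passwordless subkey
    apt-ftparchive -c Releases.conf release . > Release
    gpg --batch --yes -abs -o Release.gpg Release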

This script assumes that the packages are being uploaded to the RELEASES directory of the user bob.

I create both Packages and Packages.gz because I didn’t like seeing the Ign message when I did an apt-get update. You could consolidate it down to one line and have just a single file.

Now for the fun part. With incron you can configure the execution of a command or script based on filesystem events. Since packages are being pushed into /home/bob/RELEASES, we want to monitor that directory. There are a large number of file event types we could trigger on, as can be seen under the inotify events section of the incrontab man page. For our purposes IN_CREATE is what I used, though I need to verify whether IN_CLOSE_WRITE might be better for larger files. To configure this I ran incrontab -e and added the following entry:
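
The format is path, event mask, then command:

    /home/bob/RELEASES IN_CREATE /usr/local/bin/buildrepo.sh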

That’s it! Now if you push a deb package into /home/bob/RELEASES the repository will get regenerated. The resulting repository can be accessed by configuring the following source on a client system:
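
Given the flat layout above (Packages and Release sitting directly under the debian/ directory), that is roughly:

    deb http://repo.example.com/ debian/

Adjust the line if you lay your repository out differently.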

Extra credit: put your public key in the /srv/repo.example.com/ directory so that users can add it to their local apt keyring; otherwise apt will complain about unverified packages.
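
Assuming you export the key as repo.pub (the file name is arbitrary), clients can then trust it with something like:

    wget -qO - http://repo.example.com/repo.pub | sudo apt-key add -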

installing puppet 3.0 + hiera + puppetdb + librarian

What we are doing

So Puppet 3.0 recently came out, with Hiera support built in. Along with this, PuppetDB 1.0 was released, which is supposed to be a very handy and very fast means of centrally storing catalogs and facts about your Puppet clients. Librarian is a project I recently ran across that helps coordinate the modules in your Puppet environment; unfortunately it’s not packaged.  I don’t usually like using Puppet Labs’ software repositories directly, but I am for this because the software isn’t in EPEL yet.

So all I’m really doing is helping lay out a proof-of-concept environment using these tools.

Software

Prerequisites

  • You have enabled Puppet Labs’ repositories.
  • You are not going to implement it this way in production.  That would be bad, m’kay?
  • You are going to notice that installing librarian as a gem completely overwrites your package-installed version, thus validating why doing this in production is bad.

Installation
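
With the Puppet Labs repos enabled this boils down to a yum transaction plus the librarian gem; the package names are the ones Puppet Labs shipped at the time, so double-check against their repo:

    yum install puppet puppet-server puppetdb puppetdb-terminus
    gem install librarian-puppet    # not packaged, hence the gem (see the warning above)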

Configuration

Reference: http://docs.puppetlabs.com/puppetdb/1/connect_puppet_master.html

  • Make sure your fqdn is resolvable. Right now we are using a single host, so I’m just using localhost, not the fqdn.
  • Populate /etc/puppet/puppetdb.conf with the following (sketched after this list)
  • Set the puppetdb server in /etc/puppet/puppet.conf (also sketched below)
  • If you are using a separate host, ensure that /etc/puppetdb/jetty.ini has the servername set to your fqdn. If it’s unpopulated, check it again after you run puppetdb-ssl-setup below.
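
Minimal sketches of the two files referenced above, for the single-host/localhost case:

    # /etc/puppet/puppetdb.conf
    [main]
    server = localhost
    port = 8081

and in /etc/puppet/puppet.conf:

    [master]
    storeconfigs = true
    storeconfigs_backend = puppetdb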

Initialization of Puppet and PuppetDB

So PuppetDB’s SSL setup is very strict. For now, just make sure that you are consistent about the hostname you use everywhere (certificates, puppet.conf, puppetdb.conf, and jetty.ini).
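
A rough order of operations, assuming the stock service names from the Puppet Labs packages:

    puppet master --verbose --no-daemonize   # run once to generate the CA and host certs, then Ctrl-C
    puppetdb-ssl-setup                       # sets up the certs PuppetDB's jetty listener needs
    service puppetdb start
    service puppetmaster start
    puppet agent --test --server localhost   # first agent run to confirm everything talks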

Adding modules using Librarian

Reference: https://github.com/rodjek/librarian-puppet/blob/master/README.md

  • First, prepare your puppet install for Librarian to control your modules directory (all of these steps are sketched after this list)
  • This will have created a Puppetfile in /etc/puppet
  • Add a Puppet Forge module to the Puppetfile
  • Add a module from a git repository to the Puppetfile
  • Tell librarian to build your modules directory
  • Check out your handy work
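
All of those steps together look roughly like this; the module names in the Puppetfile are only examples:

    cd /etc/puppet
    librarian-puppet init          # takes over modules/ and creates a Puppetfile

    # Puppetfile entries: one from the Forge, one straight from git
    cat >> Puppetfile <<'EOF'
    mod 'puppetlabs/stdlib'
    mod 'apache',
      :git => 'git://github.com/puppetlabs/puppetlabs-apache.git'
    EOF

    librarian-puppet install       # resolve and build modules/
    ls modules/                    # check out your handy work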

Configuring Hiera and preloading some data

Yeah… I still need to get to this part…

puppet via apache using passenger from epel

I put together this series of steps a few years ago, long before Passenger made its way into Fedora/EPEL, when it required piecing together write-ups from all over the place. It’s easier now, but I’ve updated it and am publishing it to my blog because someone expressed interest, and for my own use.

The goal of this set of steps is to enable serving Puppet through Apache using the Passenger module. mod_passenger is to Ruby what mod_cgi is to Perl and mod_wsgi is to Python. You would want to use this because the built-in puppetmaster server does not scale well to large numbers of puppet clients. There are other options, but the whole thing is discussed more here.

Pre-requisites

  • RHEL 6 or clone installed
  • EPEL enabled on server (preferably with epel-release RPM)
  • The knowledge to do the above without my help

Installing a Puppetmaster

  • Install puppet and other packages:
    yum install --enablerepo=epel-testing httpd mod_ssl puppet-server mod_passenger
  • Populate /etc/httpd/conf.d/puppetmaster.conf with the following block (sketched after this list). There is a sample ‘apache2.conf’ file that comes with the puppet package, but it’s never worked for me:
  • Optional
    • Set ServerName value in the VirtualHost block
    • Change the ssl cert file names from ‘puppet.pem’ to match your local environment
    • Set the correct puppet paths for ssl certificates in your environment
  • Create rack directory structure
    mkdir -p /usr/share/puppet/rack/puppetmasterd/{public,tmp}
  • Copy config.ru from the puppet source dir
    cp /usr/share/puppet/ext/rack/files/config.ru /usr/share/puppet/rack/puppetmasterd/
  • Set permissions on the previous items
    chown -R puppet: /usr/share/puppet/rack/puppetmasterd/
  • Configure /etc/puppet/puppet.conf to include the following, taking into consideration your local environment (also sketched after this list):
  • Configuring SSL the lazy way :)
    • Run puppetmasterd to build the ssl directory structure and keys
      /usr/sbin/puppetmasterd
    • Stop puppetmasterd
      killall -9 puppetmasterd
  • Add firewall rules before the reject and commit rules in your firewall definition:
    -A INPUT -m state --state NEW -m tcp -p tcp --dport 8140 -j ACCEPT
  • Restart firewall
    /etc/init.d/iptables restart
  • Restart apache
    /etc/init.d/httpd restart
  • Verify that the system is working by browsing to the admin page at https://puppetmaster:8140; if it’s working you should see:
    The environment must be purely alphanumeric, not ''
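
Sketches of the two config blocks referenced in the steps above. The virtual host is adapted from the stock Passenger example that ships with Puppet; adjust certificate file names and paths to your environment:

    # /etc/httpd/conf.d/puppetmaster.conf
    PassengerHighPerformance on
    PassengerMaxPoolSize 12
    PassengerPoolIdleTime 1500
    PassengerStatThrottleRate 120
    RackAutoDetect Off
    RailsAutoDetect Off

    Listen 8140

    <VirtualHost *:8140>
        SSLEngine on
        SSLProtocol -ALL +SSLv3 +TLSv1
        SSLCipherSuite ALL:!ADH:RC4+RSA:+HIGH:+MEDIUM:-LOW:-SSLv2:-EXP

        SSLCertificateFile      /var/lib/puppet/ssl/certs/puppet.pem
        SSLCertificateKeyFile   /var/lib/puppet/ssl/private_keys/puppet.pem
        SSLCertificateChainFile /var/lib/puppet/ssl/certs/ca.pem
        SSLCACertificateFile    /var/lib/puppet/ssl/certs/ca.pem
        SSLCARevocationFile     /var/lib/puppet/ssl/ca/ca_crl.pem
        SSLVerifyClient optional
        SSLVerifyDepth  1
        SSLOptions +StdEnvVars

        # pass the client certificate details through to puppet
        RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
        RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e

        DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
        RackBaseURI /
        <Directory /usr/share/puppet/rack/puppetmasterd/>
            Options None
            AllowOverride None
            Order allow,deny
            Allow from all
        </Directory>
    </VirtualHost>

And the puppet.conf additions, so puppet trusts the client-certificate headers Apache passes through (the certname is an example):

    [master]
        certname = puppet.example.com
        ssl_client_header = SSL_CLIENT_S_DN
        ssl_client_verify_header = SSL_CLIENT_VERIFY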

puppet tricks: debugging

Update: (2012/9/30) I came up with this around the time I was using 0.25.  Apparently now you can do something similar using the --debug switch on the client along with debug() calls. I thought the function was only part of Puppet Labs’ stdlib, but apparently it’s in base, at least in 2.7+. I’ll probably do a part 2 to this with more info, although there isn’t much more.

Update: (2012/12/20) So the debug() function from stdlib is lame. I spent a while troubleshooting my new environment not getting messages and realized that rolling back to notice() worked. I could have sworn I tested it when I posted that. I also ran into an issue where naming the fact debug is actually a bad idea, so I have updated this blog accordingly.

Update: Found this bug that talks about the facts not returning as the appropriate types.

Disclaimer: I am not a ruby programmer… so there might be “easier” or “shorter” ways to do some of the things I do with ruby, but my aim is for readability, comprehensibility by non-programmers, and consistency.

In my time playing with puppet I have had to do a few things I was not pleased with.  Mainly, I had to write several hundred lines of custom facts and functions.  Debugging was one of the biggest pains, until I found a wonderful blog post that helped me out with that.  Actually, when he helped me out with debugging I had already been to the site once, because I had run into a bug related to the actual topic of his post, “calling custom functions from inside other custom functions”.  Back to the matter at hand: when I first started working on custom functions I would leave exceptions all over my code and use them to step through the functions during debugging sessions.  While the code itself was short, this was a tedious process, as I would have to comment out each exception to move to the next one and then re-run the test.  It looked like this:
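
A trimmed-down illustration of the pattern; the function name and values here are made up, but commenting out one raise at a time to step forward is the point:

    # lookup_widget is a hypothetical custom function
    module Puppet::Parser::Functions
      newfunction(:lookup_widget, :type => :rvalue) do |args|
        widget = args[0]
        # raise Puppet::ParseError, "DEBUG 1: widget=#{widget.inspect}"
        result = "prefix-#{widget}"
        raise Puppet::ParseError, "DEBUG 2: result=#{result.inspect}"
        result
      end
    end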

Then I found function_notice, which got rid of the commenting of exceptions by allowing me to log debug statements.  So I replaced all of my exceptions with if-wrapped function_notice calls, resulting in:
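
The same hypothetical function, with the exceptions swapped out for if-wrapped function_notice calls (note the argument is a list):

    module Puppet::Parser::Functions
      newfunction(:lookup_widget, :type => :rvalue) do |args|
        debug = true   # flip to false to silence the logging
        widget = args[0]
        function_notice(["lookup_widget: widget=#{widget.inspect}"]) if debug
        result = "prefix-#{widget}"
        function_notice(["lookup_widget: result=#{result.inspect}"]) if debug
        result
      end
    end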

An important thing to remember about function_notice in a custom function is that the variable you pass to it must be a list.  I have not done anything other than send a single string inside a single-element list, so I cannot speak to its other behaviors.  The length of the code increases greatly, and I do not actually add a debug statement for everything, but overall this is a much better place to be.  However, now to enable debug I have to edit the custom functions on the puppet master, which requires a restart of the service (puppetmasterd, apache, etc.), and logs are generated for every client.  That is still a pain.  This is when I had a “supposed to be sleeping” late-night revelation: you can look up facts and variables inside your custom functions!  So I created a very simple fact, puppet_debug.rb, that looks like this:
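
Something along these lines (the fact just checks for a flag file; drop it in a module’s lib/facter/ directory so pluginsync distributes it):

    # puppet_debug.rb -- "true" when the flag file exists on the client
    Facter.add(:puppet_debug) do
      setcode do
        File.exist?('/etc/puppet/debug')
      end
    end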

What that means is that on any of my puppet clients I can enable debugging of my puppet setup by touching the file /etc/puppet/debug, and disable it by deleting that file.  To enable this in my custom functions I change how the debug variable gets defined:
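
Inside the custom function that means replacing the hard-coded flag with a lookup of the fact; remember that it comes back as a string:

    debug = false
    debug = true if lookupvar('puppet_debug') == 'true'
    function_notice(["lookup_widget: widget=#{widget.inspect}"]) if debug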

Now, this may seem like kind of an odd way to go about setting the debug value, but while the code in the custom fact works with a boolean true/false, when it is called as a fact it returns the string “true” or “false”.  Since the string “false” is true from a boolean sense, you could end up getting flooded with logs if you do a simple true/false check against the lookup result.  Thus, we default to false, since that should be our normal working mode, and only set debug to true if the fact returns the string “true”.  Now there is a custom fact providing the flag, and a custom function utilizing it to log messages on the puppet server. Yay!  But wait, there is more!  Now that you have the custom fact defined, you can utilize it inside your puppet manifests in the same way!  Let’s take a look:
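
A sketch of the manifest side; the print define and the array contents are just for illustration:

    define print {
      notify { "DEBUG: ${name}": }
    }

    class example {
      $things = ['alpha', 'beta', 'gamma']
      if $::puppet_debug == 'true' {
        print { $things: }
      }
    }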

Wait, what? Sorry, I threw a few curve balls at you. The notify resource, which is not a function, logs on the client side. Then I wrapped it in a define called print, because I was going to pass an array to it. By wrapping it in the define, it takes the array and performs the notify on each element of the array. You can read more about this on this page, under the sections What is the value of a variable? and What’s in an array?.  The article has some nice explanations of a few other things as well.

Also, if you’d rather check for $::debug than $::puppet_debug then add the following to your site.pp:

$debug = $::puppet_debug

puppet tricks: staging puppet

As I have been learning puppet @dayjob, one of the things I have been striving to deal with is order of operations.  Puppet supports a few relationship metaparameters, such as before, require, notify, and subscribe, but my classes were quickly becoming painful to define all of these in, when the reality was that there were not always hard dependencies so much as a preferred order.  After having issues with this for a while and researching other parts of puppet, I stumbled across a mention of run stages, which were added in the 2.6.0 release of puppet.  If you read through the language guide they are mentioned.  There has always been a single default stage, main, but now you can add as many as you want.  To define a stage you have to go into a manifest such as your site.pp and define the stages, like so:
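
Defining the stages themselves is just a couple of resource declarations:

    stage { 'pre': }
    stage { 'post': }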

That defines the existence of two stages: a pre stage for before main and a post stage for after main.  But I have not yet defined any ordering.  To do that we can do the following, still in site.pp:
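
The chaining arrows read naturally for this:

    Stage['pre'] -> Stage['main'] -> Stage['post']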

Thus telling puppet how to order these stages.  An alternate way would be:
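
The same ordering can be expressed on the stage declarations themselves, for example:

    stage { 'pre':  before  => Stage['main'] }
    stage { 'post': require => Stage['main'] }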

It all depends on your style. So now that we have created the alternate stages, and told puppet what the ordering of these stages is, how do we associate our classes with them?  It is fairly simple: when you are declaring a class or module you pass the stage in as a class parameter.  To do this they introduced an alternate method of “including” a class.  Before, you would use one of these two methods:
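
Roughly, using a hypothetical trio of users, base, and packages classes:

    class base {
      include packages
      # ...the rest of the universal config...
    }

    include users

    class { 'base':
      require => Class['users'],
    }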

In this, the base class requires that the users class is done before it, and then includes the packages class. It’s fairly basic. Transitioning this to stages comes out like this:
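
With stages, the ordering moves into the stage parameter instead of explicit requires (again, class names are illustrative):

    class { 'users': stage => 'pre' }
    class { 'base': }    # no stage parameter means the default main stage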

It is very similar to calling a define.  In production I ended up adding my base class to the pre stage in a lot of places, which became kind of burdensome. I knew that there were universal bits that belonged in the pre stage, and universal bits that did not. To simplify, I settled on the following:
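
The idea being that base is the only thing that has to know about stages at all; a sketch:

    class base {
      class { 'base::pre': stage => 'pre' }   # the universal early bits
      include base::main                      # the rest of the universal config
    }

Node definitions (and the group classes) then just include base as usual.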

With this setup I do not have to worry about defining the stages multiple times. I even took it further by applying the same concept to the different groups that are also applied to systems, so the universal base and the group base are both configured as in the last example. I have not tried it with the post stage, as I do not use one yet, but I would imagine it would work just as above. Here is an untested example:
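
Which would presumably look like:

    class base {
      class { 'base::pre':  stage => 'pre'  }
      include base::main
      class { 'base::post': stage => 'post' }
    }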

Maybe this seems fairly obvious to people already using stages, but it took me a bit to arrive here, so hopefully it helps you out.

 

UPDATE: Puppet Labs’ stdlib module provides a ‘deeper’ staging setup.  Here is the manifest on GitHub.

fedora 16

So… I installed Fedora 16 on the home desktop today, coming from Fedora 14.  I am running it on an HP Pavilion Elite… I would tell you what kind, but they have garbage written all over the thing, just not the model!  Suffice it to say: AMD 3-core processor, 6GB RAM, and an nVidia vga+dvi video card with two Samsung 20″ monitors attached.  My first attempt to upgrade was to use the preupgrade feature.  Ran the software, it said reboot, I did, and came right back to Fedora 14.  Meh, it failed last time so no big surprise[1]… while the preupgrade software was running I burned the Fedora 16 install DVD.  Booted to the installer, which was only on the monitor attached to the vga feed, and too big to see the buttons (yay for alt+b for back and alt+n for next).  I went through the process telling it to upgrade my current Fedora 14 to 16.  Post installation I could not get X running.  I assume this was because on Fedora 14 I was using the RPMFusion packaged nvidia drivers.  At this point I sighed and remembered why I have always believed in the concept of fresh installs.  And why not this time? There is btrfs, new userids, the new systemd, and Gnome 3.2.  Why wouldn’t I want a clean slate?

So I rebooted to the installer.  Oddly enough, this time the installer display “worked”.  I could see everything, it was just spanned over both screens, which for an installer is more annoying than helpful, BUT it was better than the first install, so… moving on.  I was able to easily add my wireless network during the install to grab updates at install time, which is great. Post install we come to the Firstboot screen, which unfortunately suffered from some screen freakiness; it was skinny and too tall for the monitor.  You can select Forward using alt+f easily enough, unless you want to provide smolt data, which requires a few trial-and-error tabs and spaces.  A short time after starting I was presented with the new and prettier GDM login prompt.  I logged in and was presented with… pretty much nothing.  But that is the point, right?  So there is a small non-descript panel across the top with Activities, the date, a few icons, and my name.  I had heard that Gnome 3.2 has Online Account integration, so I clicked on my name in the top right corner and clicked on Online Accounts.  I added my Google account, which is the only type it supports currently, and… well, nothing happened, at least not noticeably.  So I went and read about Online Account integration.  It says Contacts are integrated, so I clicked on Activities and typed a contact name in, and voila, there it was.  It says Calendar is integrated, so I clicked on the date in the top panel.  My calendar was not there, so I clicked Open Calendar. Evolution Calendar popped up and it had my Google calendars in it.  I checked the boxes and was prompted for my password, I provided it, and my calendar integrated with the top panel.  Yay.  It says Documents are integrated, and, well, I never figured that one out.

So I did a couple of my quick, normal post-install steps.  I added HotSSH, which is an amazing SSH GUI for Linux; added RPMFusion; added the printer (even easier than last time, ridiculously so); and created the wifey’s account.  At that point I decided to try to add “profile icons” to our local accounts.  First I have to say that it is silly to integrate Online Accounts and not just use my associated profile picture from that account, or at least let me choose which account’s picture to use.  Second, IT DOES NOT WORK.  I tried clicking on the blank icon and selecting an existing image to scale down, nothing happened.  I tried making a smaller image and selecting that, nothing.  I googled how to set it and found the ‘manual’ way, nothing.  I found a more specific manual way, nothing.  I rebooted just to verify, still nothing.  I even tried using one of their icons, still nothing!! So that is very annoying.

Aside from occasional sluggishness, which I am currently (and perhaps naively) attributing to the nouveau driver, it’s pretty good.  Before this I had been using Gnome 3 a bit on my laptop, so I am not completely thrown off or anything.  I only have two primary complaints with Gnome 3.  The first is the Alt+Tab behavior.  It’s not that I don’t like how they tried to improve it; their concept is decent.  But there is one significant flaw: a quick single alt+tab has historically always taken you back and forth between your current and last window, even if they were the same app.  Now it appears that this functionality comes from Alt+Esc.  Which is weird, and I only just now accidentally discovered it while typing this complaint.  I thought Alt+Esc was Activities, but that seems to be mapped to the Windows key, which is kewl.  Maybe this complaint is now void.  My second complaint is that I’ve always used the right-click menu on the desktop to open terminals, and now I’ve got to learn a new work flow.  Is that really a big deal? I guess not.

The real question now is, how will the wife take the change?

Update 1: I am sad to report that HotSSH, while installable, does not work without installing an undeclared dependency, the ‘vte’ package.  A bug was already filed.

Update 2: Got the Gnome profile images working.  I still had to do it the more manual way of placing the image at /var/lib/AccountsService/icons/${userid} and adding “Icon=/var/lib/AccountsService/icons/${userid}” in /var/lib/AccountsService/users/${userid}.  The problem was that the images were not labeled appropriately for SELinux, so a quick restorecon -R /var/lib/AccountsService fixed it.  However, this does not explain why doing it the easy way through the GUI did not work.
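
For reference, the manual fix boils down to this (the user id and image path are examples, and the Icon= line belongs under the [User] section of the per-user file):

    cp face.png /var/lib/AccountsService/icons/${userid}
    # add this line under [User] in /var/lib/AccountsService/users/${userid}:
    #   Icon=/var/lib/AccountsService/icons/${userid}
    restorecon -R /var/lib/AccountsService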