Kickstarting Fedora 21 Workstation with Cobbler

Quick warning. I’m writing this as more of a post-mortem. I didn’t get all the errors recorded, and I realize that will make this hard for some people to find. I am also simplifying the process a bit more than what I actually went through, so some of it might not be perfect. I’ll try to make that better, but don’t hold your breath on me re-creating all the things that failed just to update this.

Odds are that if you’ve randomly stumbled upon this post you are suffering.  It’s that pain that can only be understood by someone who questions why something that has worked for decades suddenly breaks.

Now let’s be clear on something: I’m not against the Fedora.next concept.  I think there is a lot of merit to it, and more power to those implementing it for having the drive and vision to do so.  However, it is not fun spending several nights on a low, capped-bandwidth network trying to figure out why something I’ve been doing for years doesn’t work.

So lets step back a few weeks.  My wife is opening an optometry practice, which is exciting and stressful.  It is also not cheap. Bearing in mind that I have over 15 years of sysadmin/network/security/blah/blah/blah experience I’m trying to save her a bit of money by handling her IT.  As part of that I’m bringing a few basic concepts to the environment.

  • Disposability and Repeatability – I want to be able to rebuild any of the desktops or the server at the drop of a hat.  There are several tools that facilitate this, like The Foreman and Cobbler.  I settled on the latter; I’ll talk about that elsewhere.
  • Linux – I know that the medical industrial complex is not necessarily Linux friendly, but I bet I can do at least the desktops this way.  This is going to have some fun problems.
  • Security – Small offices are notorious for their lack of security.  I’m going to cover user management, shared secrets, and good password policies.

But you want to know how to kickstart Fedora 21.  Let’s move in that direction…

One of the first things I established was a Cobbler instance and downloaded the Fedora 21 Workstation media. Let’s import it.
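The import attempt goes something like this (the ISO filename, mount point, and distro name here are illustrative, not what I necessarily typed):

```
# Mount the Live Workstation media and try to import it into Cobbler
mount -o loop Fedora-Live-Workstation-x86_64-21.iso /mnt/f21
cobbler import --name=f21 --arch=x86_64 --path=/mnt/f21
umount /mnt/f21
```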

If you dig into this a bit you will find that the Live Workstation media doesn’t have the bits necessary to match any of Cobbler’s signatures, and updating the signatures isn’t the solution.  A cursory Google search shows that the recommended path for kickstarting Fedora 21 is to use the Server media with the Fedora 21 Everything repository. So we go download the Server DVD (or netinstall) and start syncing the Everything repository.

Quick aside. What really happened in my late nights of making this happen was that I found the above information, but confused the Everything repository with the Server Everything ISO. So I started with just the Everything ISO, which isn’t actually a bootable ISO with an installer. This led to weeping and gnashing of teeth. After a while of poking and prodding at this and reading poor documentation I realized there was a base Everything repository too. Then, because of my bandwidth limitations, I had to sync the repository elsewhere and manually import it into the environment.

Awesomeness. That’s done. I then added this repo to my f21 profile and tried to kickstart. Using the simple default kickstart from Cobbler, the kickstart succeeded. However, I only had a server instance, due to not defining custom package sets.

I wanted a Fedora 21 workstation, so I looked at the available groups and started with the simple “@^Fedora Workstation”. Restarted the process, and… fail. No such group. It turns out Cobbler’s reposync doesn’t grab comps.xml with its default settings. Go edit /etc/cobbler/settings and add ‘-m’ to the reposync_flags setting, then re-run the Cobbler sync.
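For reference, the change is a one-line tweak; the other flags shown here are just the usual defaults, so keep whatever your settings file already has and only add the -m:

```
# /etc/cobbler/settings -- add -m so reposync also downloads comps.xml
reposync_flags: "-l -n -d -m"
```

Then refresh with `cobbler reposync`.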

I haven’t tested it with just the re-run; due to my constant back and forth, I ended up doing a full re-sync at one point after adding the -m.

At this point it should be working. Let me know what doesn’t work and I’ll update.

Restart the flow

My good friend and Rackspace recruiter, Jill, has started a blog! Which is awesome. So we were talking and I pointed out how I’ve been slacking. I have actually written a blog post this year, and it succeeded in keeping my time between posts under a year. But still, 11 months is weak.

To that end we’ve both made a schedule: me to fix my problem, and her to ramp up. Let’s see how this goes! :)

Update: and fail…. (it’s 3 months later without a single post)

Creating signed debian apt repository

Update: 2014.03.19 Use freight

I recently had the pleasure(?) of having to set up an apt repository again. It had been managed by reprepro, but apparently a single version per package is a concept central to reprepro’s implementation. That seemed kind of weird to me, as it did to some of the users of the repository. In an effort to resolve this and simplify our repository management I did some research and came up with a very simple solution.

Using the following references:

We already had a GPG key, so I didn’t have to create one; I just followed the instructions in How to use GnuPG in an automated environment to generate a passwordless signing subkey using the existing GPG key. If you don’t have a key already, there are lots of resources available if you just search around a bit. Here is one.

Once the subkey was created I removed the original GPG keypair for that user from the system, storing it somewhere safe, and replaced it with the subkey’s files I had just created. This allows us to revoke the subkey if it or the server is compromised; then we can generate a new subkey using the original key, and the consumers of the repository won’t need to change their installed key.

Next, we need to lay down two things: a directory and a config file. In my example the directory will be the root of your domain on a web server. I’m not going to go into how to set up a web server here.

The config file is for apt-ftparchive to generate the Release file. Following the instructions from Automatic Debian Package Repository HOWTO, we generate /srv/repo.example.com/debian/Releases.conf
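A plausible version of that file (the Origin, Label, Suite, and Description values are illustrative; as noted below, Codename is the one that has to match your directory):

```
APT::FTPArchive::Release {
  Origin "repo.example.com";
  Label "repo.example.com";
  Suite "stable";
  Codename "debian";
  Architectures "noarch";
  Components "main";
  Description "Internal package repository";
};
```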

In my repository we only have noarch packages: no source packages, nothing architecture-specific. Adjust that field as necessary for your repository. We also only have one component section; I don’t even think it ends up being relevant to a repository this simple, the same as Suite. What matters is that the codename matches your directory. I used debian for both, because ours is not specific to a Debian release such as “squeeze” or “wheezy”.

With these files in place I created the following script to build the repository, and saved it as /usr/local/bin/buildrepo.sh:
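A sketch of such a script, assuming the paths from this post (the exact apt-ftparchive and gpg invocations are my reconstruction, and signing relies on the passwordless subkey from earlier):

```
#!/bin/bash
# Rebuild and re-sign the repository whenever packages arrive.
INCOMING=/home/bob/RELEASES
REPO=/srv/repo.example.com/debian

# Move freshly uploaded packages into the repository tree
mv "$INCOMING"/*.deb "$REPO"/ 2>/dev/null

cd "$REPO" || exit 1

# Index the packages; both plain and gzipped files keep apt quiet
apt-ftparchive packages . > Packages
gzip -9c Packages > Packages.gz

# Build and sign the Release file
apt-ftparchive -c Releases.conf release . > Release
gpg --batch --yes -abs -o Release.gpg Release
```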

This script assumes that the packages are being uploaded to the RELEASES directory of the user bob.

I create both Packages and Packages.gz because I didn’t like seeing the Ign message when I did an apt-get update. You could consolidate it down to one line and have just a single file.

Now for the fun part. With incron you can configure the execution of a command/script based on filesystem events. Since packages are being pushed into /home/bob/RELEASES, we want to monitor that directory. There is a large number of file event types we could trigger on, as can be seen under the inotify events section of the incrontab man page. For our purposes I used IN_CREATE. I need to verify, as IN_CLOSE_WRITE might be better for larger files. To configure this I ran incrontab -e and added the following entry:
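The entry pairs the watched path, the event mask, and the command, along these lines:

```
# incrontab entry: <path> <event> <command>
/home/bob/RELEASES IN_CREATE /usr/local/bin/buildrepo.sh
```

Swap IN_CREATE for IN_CLOSE_WRITE if partially-written large files turn out to be a problem.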

That’s it! Now if you push a deb package into /home/bob/RELEASES the repository will get generated. The resulting repository can be accessed by configuring the following source on a client system.
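Something like the following, assuming the flat layout above (if you built a dists/ tree instead, the line changes accordingly):

```
# /etc/apt/sources.list.d/repo.list -- hostname is illustrative
deb http://repo.example.com/ debian/
```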

Extra credit: add your public key to the /srv/repo.example.com/ directory so that users can add it to their local apt keyring; otherwise apt will complain about the unverified signature.

installing puppet 3.0 + hiera + puppetdb + librarian

What we are doing

So Puppet 3.0 recently came out. It has Hiera support built in. Along with this, PuppetDB 1.0 was released, which is supposed to be a very handy and very fast means of centrally storing catalogs and facts about your Puppet clients. Librarian is a project I recently ran across that helps coordinate the modules in your Puppet environment; unfortunately it’s not packaged.  I don’t usually like using Puppet Labs’ software repositories directly, but am for this because the software isn’t in EPEL yet.

So all I’m really doing is helping lay out a proof-of-concept environment using these tools.

Software

Prerequisites

  • You have enabled Puppet Labs’ repositories.
  • You are not going to implement it this way in production.  That would be bad, m’kay?
  • You are going to notice that installing librarian as a gem completely overwrites your package-installed version, thus validating why doing this in production is bad.

Installation
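The installs amount to roughly the following (package names assume the Puppet Labs repos from the prerequisites; puppetdb-terminus is what lets the master talk to PuppetDB):

```
yum install puppet puppet-server puppetdb puppetdb-terminus
gem install librarian-puppet
```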

Configuration

Reference: http://docs.puppetlabs.com/puppetdb/1/connect_puppet_master.html

  • Make sure your FQDN is resolvable. Right now we are using a single host, so I’m just using localhost, not the FQDN.
  • Populate /etc/puppet/puppetdb.conf with the following
  • Set the puppetdb server in /etc/puppet/puppet.conf
  • If you are using a separate host, ensure that /etc/puppetdb/jetty.ini has the server name set to your FQDN. If it’s unpopulated, check it again after you run puppetdb-ssl-setup below.
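Per the referenced PuppetDB docs, the config fragments those bullets describe come down to something like this (localhost matches the single-host setup; port 8081 is PuppetDB’s default SSL port):

```
# /etc/puppet/puppetdb.conf
[main]
server = localhost
port = 8081

# /etc/puppet/puppet.conf
[master]
storeconfigs = true
storeconfigs_backend = puppetdb
```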

Initialization of Puppet and PuppetDB

So PuppetDB’s SSL setup is very strict. For now, just make sure that you are
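A plausible initialization sequence, per the referenced docs (single-host assumption; on a split setup you would point --server at the master’s FQDN):

```
# Generate the master's certificates, then stop it with Ctrl-C
puppet master --verbose --no-daemonize

# Derive PuppetDB's SSL files from the master's certs
puppetdb-ssl-setup

service puppetdb start

# First agent run against the local master
puppet agent --test --server localhost
```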

Adding modules using Librarian

Reference: https://github.com/rodjek/librarian-puppet/blob/master/README.md

  • First, prepare your puppet install for Librarian to control your modules directory
  • This will have created a Puppetfile in /etc/puppet
  • Add a Puppet Forge module to the Puppetfile
  • Add a module from a git repository to the Puppetfile
  • Tell librarian to build your modules directory
  • Check out your handy work
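Pulling those bullets together, the commands run roughly like this (the module names are just examples, not ones the original post necessarily used):

```
cd /etc/puppet
librarian-puppet init          # creates the Puppetfile, takes over modules/

# In the Puppetfile, e.g.:
#   mod 'puppetlabs/stdlib'
#   mod 'nginx', :git => 'https://github.com/example/puppet-nginx.git'

librarian-puppet install       # resolves and builds modules/
ls /etc/puppet/modules         # check out your handy work
```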

Configuring Hiera and preloading some data

ya.. need to get to this part..

Renaming images utilizing time-taken metadata in Linux

This is more of a reminder for myself, but I figured I’d put it here.  The wifey wanted me to fix the naming on some pictures so that they were named based on their date.  Instead of doing so manually, a quick Google search showed me that the identify program that comes with the ImageMagick package in Linux gives me access to that data on the command line.  Taking the data and some awk-fu, I threw together this quick one-liner:
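A sketch of that style of one-liner, assuming ImageMagick’s date:modify property (the awk munging is an approximation of the original):

```shell
# Print (not execute) a rename per image based on its date:modify
# property, e.g. "mv IMG_2204.JPG 20120819102342.jpg".
for f in IMG_*.JPG; do
  [ -e "$f" ] || continue   # skip the unexpanded glob if nothing matches
  printf '%s %s\n' "$(identify -format '%[date:modify]' "$f")" "$f"
done | awk '{ d=$1; gsub(/[-T:]/, "", d); sub(/\+.*/, "", d); print "mv " $2 " " d ".jpg" }'
```

Once the printed mv commands look right, pipe the whole thing into sh.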

Since I am actually pretty prone to trial and error as I make my one-liners, I prefer for the command to print my commands instead of executing them.  That makes it easier to spot errors before execution, and it’s just a simple copy-paste away from running.

I’d break this out into a more readable block, but the awk section kinda goes on and on.  Here goes a breakdown:

So given a set of files named IMG_2204.JPG, IMG_2840.JPG, IMG_3204.JPG as a source, we pull the following date:modify results (in order):

And the final output from the script is:

socializing

Anyone that knows me knows that I never had a myspace or facebook account.  While I know that my adamant refusal to establish a presence on those sites seemed like it might be a bit of an elitist attitude, it was actually more of an “oh no, not again” reaction.

I began my online life in 1992, and my primary use of the Internet was Internet Relay Chat (IRC).  IRC was, and still is, the ultimate online chat room.  There are hundreds of networks these days, and while I am sure there were more than just undernet and EFnet, those were two of the biggest when I started.  After getting me signed onto EFnet, my older sister told me to pick the name of something I was interested in and join a channel with that name preceded by a hash tag.  As a twelve year old, my priorities were simple.  No, I did not join #sex. I joined #genesis (yay Sega).  As I am sure you have guessed, it was not a video game channel, but it did act as the #genesis *badump chi* to my 5 year addiction to online chat. I met several interesting individuals in the room, and they invited me over to #dakewlchannel, and I have not spelled the word cool correctly since.  I then spent the next several years hanging out in #vampcafe on undernet.  Vampires were kewl long before Twilight.

If you have not noticed yet, Twitter’s use of the hash tag was not original.

I am not exactly sure when it happened, but I made the leap from IRC to a similar but even more addictive social network.  Multi-User Dungeons (MUD) were the precursor to MMORPGs.  Instead of graphical games that you played with a local console, they were server-hosted text games.  Lots of fun.  While I can not recall the name of the first mud I played, I do know that StuphMUD is why I do not spell stuff correctly anymore either.

As I hit my later teen years, I somehow started to develop a life outside the Internet.  This was quite the blessing considering the number of holidays I spent online with people I had never met, and most of which I never would.  Part of that life outside the Internet was an introduction to the Austin rave scene.  My primary interest in the events was learning Poi, although my interest did vary a bit over the time I was involved in the scene.  Oddly enough, this pushed me back into another form of social networking: the increasingly popular (at the time) Web Forum.  This too became a bit of an addiction.  I had a hard time staying off the forums, even if it was just reading updates.  Drama eventually drove me away.  I do not recall if it was just the site’s drama, or drama I had with people that actually affected the separation, and I do not really care because I got away from it.

Within a year or two myspace started becoming popular, and I avoided it like the plague.  Facebook came around, but was initially for colleges only, which was a great reason for me to stay away as I had never attended one.  Once it became open and more popular, I already knew it was a possible online addiction and readily stayed away.

I did, however, see at least some benefit to LinkedIn, and do have a profile there. I do not spend much time on it, but occasionally look at my feed. That is about all I do with it, though; I have not been dragged in.

Then Google went and did something.  I am still not sure how I feel about it, but they made it so that all I had to do was flip a switch and my existing Google account was a part of a new social network, Google+.  So I flipped it.  I have got to say, I love the concept of Circles.  The blatant openness of data sharing that was the default on all its predecessors was a bit much for me, but the directed and immediate control of posts to specific circles makes that significantly less intrusive.

On the other hand, I am not a fan of Google+’s name policy.  I get some of where they are coming from, and since I specifically use my Google account with my name it is less of an issue for me directly.  That does not make it right.  The Internet and free speech have always gone hand in hand.  To have potentially one of the most powerful online companies in the world decide that there is no longer a place for that is just scary.

Fortunately, the other thing that is important to remember about freedom is that it exists.  There are so many technologies available and so many ways to use them that if there needs to be a way around a repressive regime, one can be made.  The unfortunate side of that is that few of those are seen as ‘easy’ by the people that would need them, whereas the likes of Twitter and Facebook are well within the reach of the most technologically impaired.

So… Google+ is an experiment for me, dipping my toes into the modern social network experience.  As of yet I have not spent any more time on it than on LinkedIn, and am pretty comfortable with the experience.  Let us all hope that something better comes of their online profile policies.


3.0

Despite my intentions of actually using this site, I’ve yet to blog anything. To be honest, that shouldn’t surprise anyone, much less me. However, my desire to rework the 2.0 version of the site got the best of me, and I have migrated the blog into being the 3.0 version of the site. *cue cheers of excitement* All that really means is I’m too lazy to code the site any more (not that it was ever hard). One unfortunate side effect is that I haven’t determined how to make my resume available through here. On the other hand, I think I was the only one using that feature. I guess it became pointless once I made it non-indexable by search engines.

So where does that leave us? Right where we are. The site is, and will continue to be, basically just a series of handy bookmarks for myself. And maybe I’ll eventually write something along the lines of my original intentions.