Editing image metadata with exiv2

My wife just had some maternity photos taken, and they came out wonderfully. I went to import them into Shotwell so we could upload them to her Picasa web albums (sorry, no link to her pictures), and all the images imported into the year 2001 folder. I looked at the data, and while their time stamps seemed reasonably correct, their date stamps were well over 10 years off.

I use Linux for my operating system, and a few quick Google searches turned up the man page for the program exiv2. And while the man page got me there, I figured it would not hurt to provide an example (both for myself and any of you lucky people who stumble upon it).

So, first let's look at the metadata of image.jpg:
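Roughly like this; I am paraphrasing the summary output from memory, so treat the surrounding fields as approximate (the Image timestamp line is the one we care about):

```
$ exiv2 image.jpg
File name       : image.jpg
Image timestamp : 2001:03:10 03:13:05
...
```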

So it says in bold up there that this image was taken on March 10th, 2001 at 03:13:05 am (it is a 24h clock).  I am not specifically worried about the time, but the date will be nice several years from now.  So we need to adjust several things, which I am going to follow in parentheses with the cli switch the man page says we need.  What needs adjusting is the year (-Y), month (-O), day (-D), AND time (-a).  The trick with the adjustment is that it is a time shift, not a direct set.  You cannot tell it a specific date, just tell it to move the setting backwards or forwards, kinda like using the buttons on an alarm clock.  For all of the fields you can provide a negative or positive number, but the time field is in HH:MM:SS format with the minutes and seconds being optional (but if you want to change seconds you need all 3).  We want to move from March 10th, 2001 at 03:13:05am to October 15th, 2011 at about 3pmish.  To do this we are going to have to adjust forward 10 years, 7 months, 5 days, and 12 hours.  So this is how the command should look:
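Reconstructed from the switches above (image.jpg stands in for whichever photo you are fixing):

```
# shift +10 years, +7 months, +5 days, +12 hours; ad is the adjust action
exiv2 -Y 10 -O 7 -D 5 -a 12:00:00 ad image.jpg
```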

The command does not return visible output on success, which is very common for *nix applications.  But before we verify it you might be asking yourself, “self, what the heck is that ad in the middle of his command?”.  Well self, that is an action, telling exiv2 that it has to do something.  It is optional if the switches you are providing imply it.  All the switches I am using do, so my use was just overly explicit.  Anyway, as I was saying, there are other ways to verify that the command ran successfully that I won’t go into here, but what better way than to check the metadata again!
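Checking again, the timestamp line should now read something like (again, approximate output):

```
$ exiv2 image.jpg
Image timestamp : 2011:10:15 15:13:05
```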

Yay!  Now I just do a quick loop through all the pictures in the directory and I have an updated set.
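A minimal sketch of that loop, run from the directory holding the photos:

```
for img in *.jpg; do
    exiv2 -Y 10 -O 7 -D 5 -a 12 "$img"
done
```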

and voila… several dozen reasonably corrected dates in the metadata of those images.  If you noticed the command was slightly different, good for you.  For the actual loop I used the shortened time adjustment I mentioned earlier and left off the ad.

jumping out the window

So, it's time to feed the troll.  I usually do try to avoid this, but I was channeling some xkcd when I wrote this.

While my first exposure to computers was on a Commodore 64 and later with SystemV and Red Hat Linux (RHL), I started my professional career as a Windows desktop support lackey.  In that role I learned a bit more about Windows, and began maintaining an Exchange 5.5 system, later moving towards IIS and DNS management (backwards, I know).  Within a few short years I was ecstatic to return to the world of Linux sans Windows.  One of the big things for me was that the things I wanted to do were always a pain to accomplish, whether on Windows or Linux, but Linux allowed me to do them faster and more easily.

The deepest development I do is Python and some other scripting and web languages.  I have never built Linux from scratch.  I installed Gentoo once, but thought it was way too much effort.  Slackware was nice, but I have always been a fan of my first Linux, Red Hat.  My first personal server ran RHL 7.1 and was maintained with updates until they stopped coming.  In fact, said server is sitting under my desk at home powered off, but still functional.  It has not gotten an update in years, but that is because none were available.  I ran RHL 7.2 on my Toshiba Portégé for about 4 years.

I concede frustration in the early days of Fedora, because it was a rocky start, very bleeding edge, and not prone to stability.  I even strayed to spend a year or so in the arms of another, ahh Ubuntu.  Such a nice approachable mistress, but high maintenance between releases due to all the non-upstream modifications.  (Not that Fedora was better in upgrades, mind you; it didn’t support release upgrades till Fedora 9.)  I did come back to Fedora around version 9 and have been staying up to date as time allows.  I do prefer to stay completely current, but time is not always in my favor.  My current desktop is Fedora 14; I am waiting for 16’s release to introduce the wifey to Gnome 3.

I am Red Hat certified and at work I am a Red Hat Enterprise Linux (RHEL) advocate.  One of the largest reasons is that I feel it is the best way for companies that have zero interest in participating in open source to actually contribute back (by paying Red Hat to do it).  I have been utilizing RHEL since it was released as version 2.1 back in 2002.  I keep my systems updated all the time.  However, I have been singed a few times by updates.  They can be counted on one hand:

  1. Way back in the day the bind-chroot package would blow away your named.conf on an update.  But now that I think about that… it was not even RHEL.  It may have been RHL.
  2. In RHEL5 there was an openssh update that introduced dynamic tcp window scaling.  We had a phantom network issue and thus started having stalled ssh data transfers.  Not really the update’s fault.
  3. In RHEL5 they changed the tzdata package from noarch to arch-specific and you could end up with a bad old tzdata package installed.  Did not actually break anything.
  4. At one point kmod-nvidia blew up on a kernel update during the Fedora 13 time frame.  The kmod was from RPMFusion and built from nvidia’s proprietary binaries… kind of an issue waiting to happen in the first place.  I moved to the akmod (self-rebuilding kernel module) package and haven’t had an issue since.

I do have one habit that is rather mildly irritating to both myself and others: I am a big fan of playing with software that is supposed to do task X or have feature Y, but is not quite there yet.  Sometimes this is due to it being bleeding edge, other times it's just poorly maintained software, or maybe the vendor was just a liar.  I have never had this habit destroy a system; usually it just causes project timeline delays or the need to find a better solution.

So what is the point of this?  I was forwarded a link to a rant on ZDnet this last Friday (2011.10.21) entitled  Why I’ve finally had it with my Linux server and I’m moving back to Windows by David Gewirtz.

I am not going to delve too deep into his background since it is readily available on his site, but based on his advertised background this guy should be beyond my skill set in understanding how computers work.  Sadly, understanding and development skills do not a skilled administrator make.  Here is how he describes himself in the aforementioned article:

“… I’m no tech babe in the woods. I’ve been a UNIX product manager, I’ve written kernel code, and I’ve taught programming at the college level. ”

So, a quick synopsis on me.  I am a fairly competent sysadmin with a background of being in the trenches.  I took a C class in high school that used C/C++ for Dummies as the course material, and that is practically the last time I touched it.  I did list my scripting experience above.  I have never touched kernel code.  I never went to college.

Now that context has been set, here are quotes from his article followed by my responses.  I am trying to consider that his rant was written while he was angry and (hopefully) just being melodramatic, but it is difficult.

“I’ve had it with all the patched together pieces and parts that all have to be just the right versions, with just the right dependencies, compiled in just the right way, during just the right phase of the moon, with just the right number of people tilting left at just the right time. “

And what exactly are you doing?  Unless you are grabbing several repositories from random places and enabling them all at the same time, or installing everything from scratch, this should not be an issue these days.  3-5 years ago?  Maybe.  5-10 years ago?  Okay, yeah… probably.

“I’ve had it with all the different package managers. With some code distributed with one package manager and other code distributed with other package managers. With modules that can be downloaded on Ubuntu just by typing the sequence in the anemic how-to, but won’t work at all on CentOS or Fedora, because the repositories weren’t specified in just, exactly, EXACTLY, the right frickin’ order on the third Wednesday of the month. “

Okay… so apt (Debian and Ubuntu) and yum (CentOS, Fedora, RHEL) are not the same software, so their commands are a touch different.  They are also used in different distributions, so there might be different package names.  I can see where that can be annoying; it has annoyed me at times.  But this is like complaining that your Windows box and OS X box do not use the exact same programs and syntax.  There is software that exists on both of those platforms that requires different installation and execution procedures.
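For example, installing the same thing on each (the package name here is just a stand-in, and the real names sometimes differ between distributions, which is part of the point):

```
# Debian / Ubuntu
apt-get install somepackage

# Fedora / CentOS / RHEL
yum install somepackage
```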

“With builds and distros that won’t even launch into a UI until you’ve established a solid SSH connection, “

Umm… so I personally prefer my remote access to my servers over an encrypted channel, and SSH is a great medium for that.  You are a security advocate, right?  This is a server environment, right?  You need a GUI, why?  I am not against GUIs, they have their place.  However, most server components in Linux do not have a native GUI tool.  It is usually just configuration files, and sometimes a web interface.  Furthermore, if this is the administrative interface of a backup program, why would you run it on lots of machines?  There should be a central administrative interface, and RDP to Windows is a nice feature for that purpose.  Even if it is on a Linux server, if this is a centralized interface, what is wrong with just exporting the GUI over X via your SSH session?  It's secure and easy.  It requires almost no setup (install a few programs on your Windows desktop, establish the connection, run the program).  VNC, by comparison, is usually rather insecure…
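For the curious, that amounts to something like this (host and program names are made up for the example):

```
# from your desktop: open the SSH session with X11 forwarding enabled
ssh -X admin@backup-server

# then on the server: launch the GUI, which displays back on your desktop
backup-admin-console &
```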

“I’ve had it with the fact that this stuff doesn’t work reliably.”

Ahh reliability… such a subjectively quantifiable term.  I like how you do not explain how it is not reliable; you just barge right on to knowledge and understanding.  In fact the closest you come to saying anything about a lack of reliability is the update issue resulting in a crash, but this statement is completely separate from those statements in your rant.

“Oh, sure, if you work with Linux every hour of every day, if this is all you do, and all you love, if you’ve never had a date since you grew that one facial hair, if you’ve never had any other responsibility in your entire life, then you know every bit of every undocumented piece of folklore. You know which forums and which forum posters have the very long and bizarre command line that only. That. One. Guy. Knows. “

“and THAT command line sequence can be gotten by getting on just the right IRC channel, at just the right time of night, and talking just the right way, to that one incredibly self-absorbed luser who happens to know that you need to put the undocumented”

Okay… so it is my day job, and thus I do spend a significant part of every day doing the work; I will give you that.  But I have had plenty of dates (and am now happily married with a child on the way), lots of other responsibilities, and do not know lots of undocumented folklore (documented, sure).  I do not frequent forums except as the result of searches, and I do almost all of my help searches strictly at google.com.  There are times when I need more help than a search provides, and I use things like the mailing lists or IRC channels for that software.  I do not always get the help I need, but I usually get it figured out.  That being said… I rarely have those types of scenarios, even with the bleeding edge things I play with all the time.

“Can you imagine my rank naivety here? I actually said Okay to a Linux update. I know I should have known better. … But I didn’t. I figured that after all these years, Linux was finally robust enough to not rip me a new one because I just wanted to run a server and keep it up to date. Silly me! Silly, silly me!”

So unless you are pulling in packages from all kinds of non-reliable repositories, or letting manually installed software override package-installed software, this should not be an issue.  I would love to know what the root of this update issue was, because user error is the number one cause of package management problems on any of the systems in my organization.

“Sure, Linux machines can make great servers. But they require a dedicated group of Linux groupies who know all the folklore, all the secret handshakes, and where all the bodies are buried. “

“That’s how you survive with a Linux distro apparently. Once it’s installed and works, never, ever update it.”

I install machines, turn on updating, and walk away.  They run.  I really do not know why you are having such an issue and would actually love to know the truth behind your problems. It boggles my mind to the point where I felt the need to write this blog entry.

Take into consideration that I have a fairly general philosophy about running software on systems I am responsible for either installing or administering.

  1. Install software from trusted repositories.  I.e.: the distribution plus one (two is pushing it, but doable) external repository.
  2. Do not configure repositories that have conflicting packages.  (Do not turn on both DAG and EPEL.)
  3. Avoid unpackaged software; only install packaged software if you can.  If it is not packaged, can you package it?  It is not that hard, and other people benefit from your work.

With those 3 things in mind I usually have no issue with my systems.

“Oh, and one last point. Don’t go telling me I don’t know what I’m doing, because that proves my case against Linux. I know quite well what I’m doing, but not to the level that is apparently required to keep a simple LAMP machine running. ” (emphasis his)

What I love about this quote is that he attempts to deflect any possibility that he is at fault by saying that requiring entry-level junior system administrator skills is too much to ask of someone who wants to be a system administrator.

Now, that all being said: if you have had a bad experience with Linux and are done with it, then fine.  Enjoy Windows, or try to.  Just remember that your experience is not the norm.  Linux has greatly improved over the years, and from what I hear Windows is starting to approach, in updates and usability, the kind of experience I have had with Linux's updates.  Are Windows admins still waiting for SP1 or 2 before applying updates?  I wish you luck.

On a final note, it does worry me that this is the type of person advising Washington on technical issues and using such a public forum to spread FUD.  Nothing in his background suggests that he is a competent system administrator.  Product management and development? While development and system administration tend to overlap, in my experience most developers turned system administrators are more likely to have all kinds of funky behavior and configuration patterns on their systems.

git reference guide – part one

I’m still getting used to utilizing git for my version control.  The part I like most is the merge handling.  So here is another reference post for me; hopefully it will help me remember bits of my git workflow.  Mostly basics, and some I do not need to remind myself of, but it does not hurt to document.  (A rough sketch of the matching commands follows the list.)

  • Checkout repository
  • Add a file to the index
  • See current status
  • See differences between current changes and committed changes
  • Stash changes without committing them
  • Update local repository from remote
  • Commit changes in index
  • Generate a patch from local commit

    Some useful options

    • --find-renames, -M n%
    • --output-directory, -o dir
    • --numbered, -n
    • --no-numbered, -N
    • --signoff, -s
  • Directly send locally committed patch via e-mail (see man page for Gmail config)
  • Apply a patch set
  • Push changes to remote
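Roughly, per bullet (the repository URL, paths, and patch directory are placeholders; a remote named origin and a master branch are assumed):

```
git clone git://example.com/project.git       # checkout (clone) a repository
git add path/to/file                          # add a file to the index
git status                                    # see current status
git diff                                      # current changes vs committed changes
git stash                                     # stash changes without committing them
git pull                                      # update local repository from remote
git commit                                    # commit the changes in the index
git format-patch -M -o patches/ -n -s origin/master   # generate patches from local commits
git send-email patches/*.patch                # send locally committed patches via e-mail
git am patches/*.patch                        # apply a patch set
git push origin master                        # push changes to remote
```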

That is it for now, but I know there will be at least one more of these because I have not touched on branching and switching around between repositories.

Renaming images utilizing time-taken metadata in Linux

This is more of a reminder for myself, but I figured I’d put it here.  The wifey wanted me to fix the naming on some pictures so that they were named based on their date.  Instead of manually doing so, a quick search on Google showed me that the identify program that comes with the ImageMagick package in Linux will give me access to that data on the command line.  Taking the data and some awk-fu, I threw together this quick one-liner:
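A sketch of the sort of thing I mean; the exact awk mangling and naming pattern below are illustrative rather than gospel, and it prints the mv commands instead of running them:

```
for f in IMG_*.JPG; do
    identify -format '%[date:modify]\n' "$f" |
        awk -v src="$f" '{ gsub(/[^0-9]/, ""); printf "mv %s %s.jpg\n", src, substr($0, 1, 14) }'
done
```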

Since I am actually pretty prone to trial and error as I make my one-liners, I prefer for the command to print my commands out instead of executing them.  Makes it easier to spot errors before execution, and is just a simple copy-paste away from execution.

I’d break this out into a more readable block, but the awk section kinda goes on and on, so here goes a breakdown:

So given a set of files named IMG_2204.JPG, IMG_2840.JPG, and IMG_3204.JPG as a source, we pull the following date:modify results (in order):

And the final output from the script is:

All the users gather round

Linux has two classifications of accounts: system accounts and user accounts.  System accounts are delineated as any account with a UID lower than the defined UID_MIN value in the /etc/login.defs file, with the UID of 0 being reserved for the root account.  Red Hat based distributions set UID_MIN to 500, which is a deviation from the upstream project, shadow-utils, which uses 1000.  Some of these UIDs are considered to be statically allocated and others dynamically allocated.  In the upstream distribution Fedora there are currently static UIDs up to 173.  There is no clear definition of where the dynamically allocated UIDs start, but within Fedora as of version 16 and higher there is currently a plan to help define this more clearly.  One part of that plan is Fedora upping its definition of UID_MIN to the upstream 1000.  If the feature makes it in, this will still not affect RHEL until version 7 at the earliest.  I’m honestly not sure if any other distribution has a clearer definition of the usage of these, but if not, maybe that will change.
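You can check the cutoff your distribution uses straight from that file; on a Red Hat flavored box it currently reads something like:

```
$ grep ^UID_MIN /etc/login.defs
UID_MIN         500
```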

The primary use for system accounts is for any application that needs a dedicated user.  Some good examples of this are tomcat, mysql, and httpd.  One of the biggest benefits of having a designated space for system accounts is that you can define a specific UID, and have that application user have the same UID on every system.  Take for example a case where a user, such as myapp, owns millions of files on a system.  If the myapp user was created without defining that it is a system account, then myapp would get a UID in the 500+ range; we will use 502 for the sake of this example.  Now say I need to keep these files synchronized with a backup system.  However, on the backup system there were already several more users than on my production system, and so myapp was assigned the UID of 509.  What about 502?  That is assigned to gswift.  Well, now if my sync of the files preserves the file ownership, the user gswift now has ownership of all of those files, because the sync is based on the UID, not the readable mapping.  The same thing could occur if you were migrating from one server to a new one.

So, where am I going with this?  I think it is important for developers to remember that any time you are creating a user on the system for your application, it should be in the system account area.  Luckily most do, especially when they include their software in a public distribution.
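In practice that just means reaching for useradd's system flag when the package or install script creates the user (the account name and UID below are only examples):

```
# let useradd pick a free UID below UID_MIN
useradd --system --no-create-home --shell /sbin/nologin myapp

# or pin a specific static UID so it matches on every host
useradd --system --uid 142 --no-create-home --shell /sbin/nologin myapp
```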

socializing

Anyone that knows me knows that I never had a myspace or facebook account.  While I know that my adamant refusal to establish a presence on those sites seemed like it might be a bit of an elitist attitude, it was actually more of an “oh no, not again” reaction.

I began my online life in 1992, and my primary use of the Internet was Internet Relay Chat (IRC).  IRC was, and still is, the ultimate online chat room.  There are hundreds of networks these days, and while I am sure there were more than just undernet and EFnet, those were two of the biggest when I started.  After getting me signed onto EFnet, my older sister told me to pick the name of something I was interested in and join a channel with that name preceded by a hash tag.  As a twelve year old, my priorities were simple.  No, I did not join #sex.  I joined #genesis (yay Sega).  As I am sure you have guessed, it was not a video game channel, but it did act as the #genesis *badump chi* to my 5-year addiction to online chat.  I met several interesting individuals in the room, and they invited me over to #dakewlchannel, and I have not spelled the word cool correctly since.  I then spent the next several years hanging out in #vampcafe on undernet.  Vampires were kewl long before Twilight.

If you have not noticed yet, Twitter’s use of the hash tag was not original.

I am not exactly sure when it happened, but I made the leap from IRC to a similar but even more addictive social network.  Multi-User Dungeons (MUDs) were the precursor to MMORPGs.  Instead of graphical games that you played with a local console, they were server-hosted text games.  Lots of fun.  While I cannot recall the name of the first MUD I played, I do know that StuphMUD is why I do not spell stuff correctly anymore either.

As I hit my later teen years, I somehow started to develop a life outside the Internet.  This was quite the blessing considering the number of holidays I spent online with people I had never met, and most of whom I never would.  Part of that life outside the Internet was an introduction to the Austin rave scene.  My primary interest in the events was learning Poi, although my interest did vary a bit over the time I was involved in the scene.  Oddly enough, this pushed me back into another form of social networking: the increasingly popular (at the time) Web Forum.  This too became a bit of an addiction.  I had a hard time staying off the forums, even if it was just reading updates.  Drama eventually drove me away.  I do not recall if it was just the site’s drama, or drama I had with people, that actually caused the separation, and I do not really care because I got away from it.

Within a year or two myspace started becoming popular, and I avoided it like the plague.  Facebook came around, but was initially for colleges only, which was a great reason for me to stay away as I had never attended one.  Once it became open and more popular, I already knew it was a possible online addiction and readily stayed away.

I did, however, see at least some benefit to LinkedIn, and do have a profile there.  I do not spend much time on it, but occasionally look at my feed.  That is about all I do with it though; I have not been dragged in.

Then Google went and did something.  I am still not sure how I feel about it, but they made it so that all I had to do was flip a switch and my existing Google account was a part of a new social network, Google+.  So I flipped it.  I have got to say, I love the concept of Circles.  The blatant openness of data sharing that was the default on all its predecessors was a bit much for me, but the directed and immediate control of posts to specific circles makes that significantly less intrusive.

On the other hand, I am not a fan of Google+’s name policy.  I get some of where they are coming from, and since I specifically use my Google account with my name it’s less of an issue for me directly.  That does not make it right.  The Internet and Free Speech have always gone hand in hand.  To have potentially one of the most powerful online companies in the world decide that there is no longer a place for that is just scary.

Fortunately, the other thing that is important to remember about Freedom is that it exists.  There are so many technologies available and so many ways to use them that if there needs to be a way around a repressive regime, one can be made.  The unfortunate side of that is that few of those are seen as ‘easy’ by the people that would need them, whereas the likes of Twitter and Facebook are well within the reach of the most technologically impaired.

So… Google+ is an experiment for me, dipping my toes into the modern social network experience.  As of yet I have not looked at it any more than LinkedIn, and am pretty comfortable with the experience.  Let us all hope that something better comes of their online profile policies.

 

nokia connection 2011

So… it’s been a while since I’ve commented on the state of affairs with Nokia… mainly cause I’ve had very little reason to care.  However, Nokia Connection is starting shortly, and in another window I am streaming the presentation that will start it all off.  So while I wait for the presentation to begin (it has been beginning shortly for at least 20m) I figured I’d comment on the recent big rumors that have been running around: that this is when Nokia will announce a MeeGo device that it will actually sell!  Be still, my beating heart.  Within the last month and a half two videos were released, here and here.  The word is that Nokia will be discussing several new devices, including a disruptive device.  During the Elopcalypse it was said that the group responsible for MeeGo would continue working on future disruptive technologies, such as MeeGo.

There are two devices being teased currently.  One is a sleek looking touchscreen-only device, and is being referred to as the Nokia N9-01 or the Nokia Lankku (which apparently means Plank in Finnish), while the other is the one we see in the videos and is referred to as the development device and also as the Nokia N950.  I’d like to echo a sentiment I read earlier about this.  Why would only videos of the development device leak out?  You’d think they wouldn’t have been wasting money on it.  However, I could see the videos and devices they leaked as being rather old (pre-Elopcalypse) and released by people who were bitter or for some such other reason.  Despite leaks abounding, several events have come and gone where devices such as these could have been showcased, with nothing to show for it.  Nokia Connection is not normally considered one of Nokia’s biggest events (until this year), but it seems to have been selected as a big deal.

And it starts….. cue montage with descriptive verbs describing the South East Asian Pacific (SEAP) area.  Short intro by the head of Sales for the SEAP region.  Enter President and CEO of Nokia, Stephen Elop.  The first WP7 device will be later this year (2011) and in volume in 2012.  They have working Nokia WP7 phones built and working now (wish they’d have put that kind of effort into MeeGo ‘eh people?).  We did recently release a new version of Symbian, called Anna, on several devices including the newly released E6.  Now the SEAP Head of Smart Devices is going to show off Anna on the Nokia N8.  He’s got it attached to the HDMI connection and is showing off a picture he took of the crowd, and is going to upload it to his Ovi Share account.  Based on what I can see of the screen, to accomplish this you e-mail it to Ovi Share.  But that is not what he was showing off.  What he wanted to show was the portrait-mode QWERTY keyboard.  I will not comment on the sadness of the timing on that, and I would kinda like him to move on cause this isn’t why I’m watching :)  Especially since they just decided to demo Bing in the browser.  This is me really not caring.

Aside: Microsoft is not on my good side, and not because of the Nokia WP7 bit.  It is because their products are always “almost” good.  Such good ideas going to waste through poor implementation.  The recent burn for me?  Microsoft/Ford Sync.  Maybe I’ll write about that some day, but right now it would just be an unhealthy rant.

Elop is back.  They will support Symbian till at least 2016.  They will release at least 10 new Symbian devices in the next year.  This is good.  My E71x is dying, and if I cannot get an N950, I might settle on a Nokia E6.  Now he is bringing in Mary McDowell, Executive VP, Mobile Phones.  She is talking about their Next Billion consumers directive, which is focusing on Symbian Series 40 devices, such as the Touch and Type devices.  I wouldn’t mind one of these if whatever tablet secondary device I got wasn’t cellular capable.  They have shipped over 17 million C3 QWERTY phones.  They are debuting another new Dual SIM device today.  I wish they’d released one like 9 years ago when I wanted one.  Last month they announced the X1-01.  It’s a party in your pocket apparently, with 16GB of memory and the loudest and clearest speaker Nokia has ever produced.  Now they are shipping the C2-00, but will soon be shipping a C2-03.  It’s an interesting looking phone.  It’s a dual SIM slider with the Touch and Type interface.  The kewl bit about the Dual SIM?  The second SIM has an easy-change slot on the side like a microSD card.  Once again, really, it took this long for someone to do this?  I bet there is a restrictive patent for it too….  Nokia is also taking a cue from the concept of Web pages as Apps and making that a way to write apps on Symbian Series 40.

Here comes Elop.  Thanks Mary.  We are going to focus on supporting our Developers.  While Marco, Senior VP of Developer Experience, talks I’m going to take a shower, cause he is talking Mango development.  With my luck he’ll talk about the N950 development device ;)

And I’m back.  And so is Elop.  Wonder what I missed.  He’s talking about a better phone, yay for timing!, and introducing Marco (a different one), Senior VP of Design.  He just said… “We’d like to introduce you to the Nokia N9”.  He says that you are supposed to watch your users and it will help you create a better device.  People want a big screen and to be mobile.  So they set the goal of creating an all-screen phone that is easy to use.  He talks about the elegance of the glass, leading into the fact that instead of a home button, any time you need to go there you just swipe from one side of the screen to the other.  He’s referring to the primary screens by their numeric place.  So the First is the standard application list.  The Second is a unified event/notification screen.  The Third is running applications.  It’s carousel-esque in that you can just keep swiping one direction to get back to the start.  Any time you close an application it returns you to the previous primary screen you were on.  Now he is waxing eloquent about the shape of the device.  I think I’ve discussed before that I’m not a touchscreen-only guy.  So I’m probably spent.  Oooh… talk about bringing up old news.  He just made a reference to the iPhone Grip of Death.  I’m confused about the “large” screen bit helping displace the home button.  There is plenty of room for a button with that large of a bezel! :)  And now he just pulled it out and is going to put it up on the screen.  I never heard him say it was MeeGo, but I’m guessing it is.

Anyways… it’s not really something I want, sadly, but such is life.  I’m sure I can read the rest of the event later.