Upgrades – as much fun as a barrel of rotten fish

Having neglected my aging personal desktop for some time, I decided it was time to do some upgrades and make it into something I’d enjoy developing on again. Not that I’ve made much time for that lately, but that’s just the point, you know?

First order of business: get dual monitors working again on Linux. My desktop has PCI video boards added to supplement the onboard video. I’ve had as many as three monitors hooked up that way, but decided in the end that’s too much. Two is perfect. But the PCI boards aren’t initialized until quite late in the boot process. This worked fine until Fedora 9, at which point some major X11 change was made and this configuration would simply crash no matter what I did. I didn’t have the time to debug it. Staying on Fedora 8 got less appealing as time wore on, and sadly, I mostly left the machine on the Windows boot (which of course has no problem using both monitors) to just browse the web.

Solution: get a fancy PCIe dual-head video card and just use that. Actually, getting this to work required me to do a little digging in the BIOS. There are a number of settings for what to initialize first (onboard, PCI, or PCIe), and had I poked at it enough, I might conceivably have gotten the PCI boards working. Or maybe I tried that before to no avail, don’t remember. Anyway, by initializing the PCIe board first at boot, I have dual monitors under Fedora 17.

Yes, 17! Well, it’s not technically released until Tuesday, but it’s close enough.

Next problem: adding more RAM. After reviewing what my motherboard can handle, I added two 4GB DDR2 modules to bring my RAM up to 10 GB. That part was easy. I’m gonna need that RAM, because I’m all about emulation (Android) and virtualization (VMs!) these days.

Installing Fedora 17 was a little tricky. This computer has seemingly random boot issues. For one thing, it has always refused to boot from USB. That would be the preferred way to try a live distro on it. Burning CDs… so archaic, but at least it works. I saw some BIOS settings related to that, and while they seemed correct, perhaps I should fiddle with them some more. But a live CD worked for now. The other boot issue is that my keyboard is often (but not always!) disabled at the GRUB menu. Today I found BIOS settings for that too. Honestly, who would ever WANT to disable their USB keyboard at boot? And why did it work sometimes? Well, whatever. If only that were the last of the tricks my motherboard had for me.

Having installed F17 and fiddled around a bit, it was time to try out virtualization. I wanted to give the OpenShift LiveCD a try, so I started up virt-manager. It crashed. I tried VirtualBox instead. It wouldn’t run either. The problem: I didn’t have kernel sources and headers (kernel-devel and kernel-headers) matching my running kernel, which are needed to build the kernel modules for virtualization. And when I looked at what was available in the yum repos, there simply weren’t any matches – no available kernel version lined up with any available version of the headers/sources. I would basically have had to build my kernel from source to get a match.
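For anyone hitting the same wall, the sanity check is quick (assuming Fedora’s stock yum tooling; version numbers below are illustrative):

# compare the running kernel to what the repos offer
uname -r                              # e.g. 3.3.4-5.fc17.x86_64
yum list kernel-devel kernel-headers  # these versions need to match the above
# if a match exists, this pulls exactly the right packages
sudo yum install "kernel-devel-$(uname -r)" "kernel-headers-$(uname -r)"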

I hoped that situation would resolve itself, and when I looked today, it had. New kernel-devel available to match the kernel. Onward and upward! I tried virt-manager. It crashed. I tried VirtualBox. It gave me an error: “AMD-V is disabled in the BIOS. (VERR_SVM_DISABLED)”. Grrr! So I looked up what this is all about. There are two levels of problem.

First, my Athlon 64 does support AMD-V (the flag in /proc/cpuinfo is “svm”). However, this capability can be disabled by the motherboard, and on mine, the Gigabyte GA-M61P-S3, it is disabled with no option in the BIOS to configure it. Why? Gigabyte has offered no explanation. Five years ago, someone ran into this exact same problem using Xen, and the only solution he found was downgrading the BIOS. Downgrade? Yes. The ability to disable this feature was introduced in later CPU steppings, and taken advantage of in later versions of the BIOS. I would be curious whether the problem was rectified in BIOS versions after that exchange (latest releases are 2007/10 and 2010/08). I doubt it, since the notes there don’t mention it. Also, I was idly looking at getting the “new hotness” of an Athlon 64 X2, since the desktop’s speed is now roughly on par with my crappy laptop’s, but that would require a BIOS upgrade – a move in the wrong direction as far as virtualization goes (and I’m not sure I want to deal with the extra heat). Maybe a third-party BIOS would do the trick? Maybe it’s just time for better hardware. Sadly, AMD seems to be somewhat throwing in the towel against Intel. Can’t really justify backing the underdog anymore :(
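Incidentally, the mismatch is easy to confirm from a shell (the exact dmesg wording varies by kernel version, so take this as a guide):

# the CPU itself advertises AMD-V...
grep -c svm /proc/cpuinfo    # nonzero means the flag is present
# ...but the BIOS has disabled it, which the kernel logs when the kvm module loads
dmesg | grep -i kvm          # look for something like "kvm: disabled by bios"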

Second, with AMD-V disabled, virt-manager was running into an SELinux violation trying to do non-hardware-assisted emulation. Took me a while to find that link, but it shouldn’t have been hard, given that’s exactly the error I was hitting. Too bad it’s not solved in F17. At least that one is easy enough to resolve, so I have QEMU to work with. It’s gonna be a dog, though.
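In case it helps anyone, the generic recipe for a one-off denial like this (assuming the audit and policycoreutils-python tools are installed; the module name is arbitrary) is:

# find the recent AVC denial that is blocking qemu
sudo ausearch -m avc -ts recent
# build and load a local policy module that allows it
sudo ausearch -m avc -ts recent | audit2allow -M local-qemu
sudo semodule -i local-qemu.pp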

Now, the two other major things I get to deal with: GRUB2 (introduced in F16, but I haven’t really used that release) and the migration from SysV init to systemd.

I saw GRUB2 at work when I tried Ubuntu a while back, but didn’t really recognize it at the time and ran screaming. I’m so used to just editing my GRUB menu directly that GRUB2 seems much more complicated – a bigger leap than moving from LILO years ago. But I think I can begin to see some of the benefits. Also, it seems to be pretty good at detecting the existing OSes on the system and providing boot options for them. That’s nice, since I rarely work on a single-boot system anymore. I just need to spend a little more time with guides like this one to get the hang of it.
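The core workflow I’ve picked up so far (file paths per Fedora’s grub2 packaging, as I understand it):

# the menu is generated now – edit the defaults, not grub.cfg itself
sudo vi /etc/default/grub
# then regenerate the config from the defaults and the os-prober results
sudo grub2-mkconfig -o /boot/grub2/grub.cfg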

The transition to systemd was a rude awakening too. How do I turn on sshd? How do I configure the firewall so I can VNC and ssh in? (The firewall was particularly confusing since there was an abortive attempt at replacing it with something else, which made it onto the beta but was reverted.) Fortunately it doesn’t look too difficult.
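If I’m reading the docs right, it boils down to something like this (hedging on the lokkit line, which assumes system-config-firewall is still what ships, given the revert):

# systemd replaces chkconfig/service for this
sudo systemctl enable sshd.service
sudo systemctl start sshd.service
# open up ssh and the first few VNC displays
sudo lokkit --port=22:tcp --port=5900-5903:tcp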

Wow, this was a pretty big release for Fedora!

Oh yeah. One more thing: getting desktop switching on dual monitors to work the way it used to and obviously ought to – both monitors switch when you switch desktops. I read about that design decision when Gnome 3 came out with F16, and it made no sense to me. There was some hackish tweak for getting it to work the way it should, but I can’t find it right now. That setup seemed totally unstable under F16 – gnome-shell would freeze randomly all the time. I hope this is better in F17. I’m not encouraged, though, by the fact that most of the gnome-tweak extensions I want to use seem to be broken at this time.
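If memory serves, the tweak was a gsettings override along these lines (I’m quoting the key from memory, so treat it as a guess):

# make workspaces span all monitors instead of just the primary
gsettings set org.gnome.shell.overrides workspaces-only-on-primary false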

Stuff that should just work…

Had one of those Maven/Eclipse experiences that was so infuriating, I need to record it here to make sure I can fix it easily next time.

Using STS 2.9.1 I created a “Dynamic Web Module” project. For the runtime I targeted tc Server / Tomcat 6. Then I proceeded to do some rudimentary JSP stuff, only to find that the JSTL taglibs were not resolving. It seems the expectation is that these would be provided by the container. Under JBoss they probably would be, but not under Tomcat. (I guess it makes sense… if you have multiple apps in the container, just let each one bundle the version desired – fewer container dependencies).
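For reference, bundling it per-app is one little dependency in the POM (assuming the classic javax.servlet coordinates):

<dependency>
  <groupId>javax.servlet</groupId>
  <artifactId>jstl</artifactId>
  <version>1.2</version>
</dependency>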

Fine; so I added the Maven nature and the jstl.jar dependency. At some point I did Maven &gt; Update project configuration. Suddenly the class I had defined to back a form was not found. I was also getting this really annoying project error:

Dynamic Web Module 3.0 requires Java 1.6 or newer. [...] Maven WTP Configuration Problem
One or more constraints have not been satisfied.

WTF? Of course STS/Eclipse are configured to use Java 1.6… but my project apparently isn’t. So I went and changed that, but it didn’t fix the error, and any time I updated the project config from Maven, it went back to using JRE 1.5 and my Java source files were no longer on the build path as source files.

Turns out (took longer to find than to tell about it) the Maven compiler plugin doesn’t use Eclipse settings and just imposes its own defaults, i.e. Java 5, unless otherwise configured by a POM. And since a “Dynamic Web Project” uses Servlet 3.0 it requires Java 6. Boom.

Easy to fix, though it’s annoying that I have to, and that there isn’t some Eclipse setting for this. Just add this under the top-level POM:

<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>2.1</version>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>

(Cripes, WordPress, can I get a “code” formatting/paste option already?? “Preformatted” most certainly isn’t.)

Then have Maven update project config again and 1.6 is in play. Oh, and instead of using the “src” directory for my source, I went ahead and changed to src/main/java as Maven expects, so that future “config update” will pick that up.

Fiddling around with Cloud Foundry

In my spare work time the last couple days I’ve taken another good look at Cloud Foundry. I haven’t gotten to the code behind it yet, just putting it through its paces as a user. I’ve used the public cloud as well as running the virtual appliance (Micro Cloud Foundry), and the CLI as well as the STS/Eclipse plugin. It’s really a lot easier than I expected to get up and going (even the Eclipse part). I guess that’s the whole point!

When setting up the MCF appliance, I didn’t quite cotton on to what the DNS token was for (or, you know, read the docs). Cloud Foundry will apparently set up a wildcard DNS entry for you that points to your local instance. Then you can point vmc/STS to api.{your-choice}.cloudfoundry.me and your web browser to the app URLs that come out of that, and they’ll actually resolve to the MCF VM on your local network (well, as long as you’re local too). That’s pretty cool, but I didn’t do that. I just set it up with a local domain name and added my own wildcard entry at my DD-WRT router. I had to look up how to do that – just pin the MAC address to an IP and add a line to the dnsmasq config:

## wildcard for micro cloud foundry VM
address=/.mcf.sosiouxme.lan/172.31.0.140

The only trouble was that when I booted it up, I left the NIC at its default config, which attaches it to a virtual network private to my workstation. I’d much prefer it to be available to my whole LAN, so I reconfigured it to bridge to the LAN. But then I had trouble getting MCF to accept its new address. It wasn’t clear how to do it – I don’t remember how I finally got it to work – something about offline mode. But eventually it accepted its new LAN address.

The example with the simple Ruby application is indeed simple: just install ruby and rubygems (they’re required for the CLI anyway, and instructions for that are even included!) plus the Sinatra gem, and follow the instructions.
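The whole CLI flow is just a few commands (the target below is the public cloud; for MCF you’d substitute your api.{domain} address):

gem install vmc sinatra
vmc target api.cloudfoundry.com
vmc login
vmc push    # prompts for app name, URL, memory, and services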

Rails proved to be a little more complicated, but mainly due to my setup. Rails 3.0 and 3.1 are supported. I had run “gem install rails” and gotten the latest: 3.2. It seems like this might work, except the simple app that you get with “rails new” uses CoffeeScript, which pulls in a native gem for the JS library, which can’t be bundled into the cloud app. The discussion at that link left me unclear on the remedy – remove the coffeescript gem? Wouldn’t that break stuff? Configure it to use a different JS lib via ExecJS? I wasn’t clear which, if any, of the options there wouldn’t have the same problem. Taking the path of least resistance, I removed that Rails gem and explicitly installed the most recent 3.0 instead.
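Pinning the gem was simple enough (standard rubygems version-requirement syntax):

# drop 3.2 and pin to the supported 3.0 series
gem uninstall rails
gem install rails --version "~> 3.0.0"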

This highlights one of the difficulties with a cloud platform… native code. If your code requires something that isn’t abstracted away into the platform and framework, you’re out of luck. Theoretically, you know nothing about the native host under the platform, so you can’t count on it. Just one of the prices you pay for flexibility.

Everything worked fine… except not quite? When I clicked the link to get application environment info, I didn’t get it:

Doesn’t seem to be routing that request, for some reason. It works fine if run with “rails server” naturally. Not sure what happened there, and didn’t want to mess with it just now.

Moving on to Grails and Spring MVC, I quickly set up sample apps in STS and tried them out on both the private and public instance. No problems.

The cool thing about having a local foundry, though, aside from being master of your domain, is that you can debug into your running app, which is vital if it is having a weird problem specific to the cloud environment. You just have to start the app in debug mode. The only hitch is that the Cloud Foundry servers don’t show up in the “Debug As… &gt; Debug on Server” dialog:

And the “Connect to Debugger” button didn’t show up after starting the app in debug:

So, how to actually debug in? Well, it turns out it’s simple. The debugger *is* already connected to the app. I’m just not looking at the debug perspective because I couldn’t go the “Debug as…” route. I can explicitly open it (Window > Open perspective) or just set a breakpoint in the code and hit it with the browser (which automatically requests opening that perspective). Then I’m debugging as usual:

The “Connect to Debugger” button only shows up for the app when I disconnect the debugger and need to reconnect.

As far as I can tell, the Eclipse plugin has the same capabilities as the CLI, although the path to them may not be obvious in a GUI. I did notice one little glitch that someone should fix (maybe me! in my copious spare time…) – if I open the foundry pane and there are no services, the services subpane is, of course, empty:

The glitch is that if I add a service (say, a MongoDB) it still doesn’t show up in the list, and I can’t then bind it to the application. I have to close the tab and re-open it by clicking on the server in the “Servers” pane (and go to the “Applications” tab… many levels of tab/pane here!):

You might have noticed the “caldecott” app sticking out above. That’s actually a bridge app for accessing services from the cloud directly. With the caldecott rubygem, you can open a tunnel between the foundry service (say a MySQL DB) and a port on your local host, such that clients (such as mysqldump) can directly access that service at that port (e.g. to make a backup or restore one). That will come in handy.
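The tunnel itself is only a couple of commands (the service name below is made up – vmc lists your real ones):

gem install caldecott
vmc services                  # find the provisioned service's name
vmc tunnel mydb mysqldump     # "mydb" is illustrative; opens a local port and runs the client against it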

Also, just recently Cloud Foundry enabled running arbitrary workloads on the cloud (as long as they’re on supported platforms). It’s not just about webapps anymore! Another sweet development.