OpenShift logging and metrics

Server logs aren’t usually a very exciting topic. But if you’re a sysadmin of an OpenShift Enterprise deployment with hundreds of app servers coming and going unpredictably, managing logs can get… interesting. Tools for managing logs are essential for keeping audit trails, collecting metrics, and debugging.

What’s new

Prior to OpenShift Enterprise 2.1, gear logs were simply written to log files. Simple and effective. But this is not ideal for a number of reasons:

  1. Log files take up your gear storage capacity. It is not hard at all to fill up your gear with logs and DoS yourself.
  2. Log files go away when your gear does. Particularly for scaled applications, this is an unacceptable loss of auditability.
  3. Log file locations and rotation policies are at the whim of the particular cartridge, thus inconsistent.
  4. It’s a pain for administrators to gather app server logs for analysis, especially when they’re spread across several gears on several nodes.

OSE 2.1 introduced a method to redirect component and gear logs to syslogd, which is a standard Linux service for managing logs. In the simplest configuration, you could have syslog just combine all the logs it receives into a single log file (and define rotation policy on that). But you can do much more. You can filter and send log entries to different destinations based on where they came from; you can send them to an external logging server, perhaps to be analyzed by tools like Splunk. Just by directing logs to syslog we get all this capability for free (we’re all about reusing existing tools in OpenShift).
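
For example, a couple of illustrative rsyslog rules (not from the product docs, just a sketch of the sort of thing syslog makes possible) could split OpenShift-tagged entries into their own file and forward a copy of everything to a central log host:

# /etc/rsyslog.d/openshift-example.conf -- illustrative sketch only
# send anything tagged openshift-* to a dedicated file...
:programname, startswith, "openshift" /var/log/openshift_combined.log
# ...and forward a copy of everything to a central syslog server over TCP
*.* @@logserver.example.com:514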

Where did that come from?

Well, nothing is free. Once you’ve centralized all your logging to syslogd, then you have the problem of separating entries back out again according to source so your automation and log analysis tools can distinguish the logs of different gears from each other and from other components. This must be taken into account when directing logs to syslogd; the log entries must include enough identifying information to determine where they came from down to the level of granularity you care about.

We now give instructions for directing logs to syslog for OpenShift components too; take a look at the relevant sections of the Administration Guide for all of this. Redirecting logs from OpenShift components is fairly straightforward. There are separate places to configure syslog for the broker rails application, the management console rails application, and the node platform. We don’t describe how to do this for MongoDB, ActiveMQ, or httpd, but those are standard components and should also be straightforward to configure as needed. Notably absent at this point are instructions for syslogging the httpd servers that host the broker and console rails apps; but the main items of interest in those logs are error messages from actually loading the rails apps, which (fingers crossed) shouldn’t happen.

Notice that when configuring node platform logging, there is an option to add “context”, which is to say the request ID and app/gear UUIDs where relevant. Adding the request ID allows connecting what happened on the node back to the broker API request that spawned the action on the node; previously this request ID was often shown in API error responses, but was only logged in the broker log. Logging the request ID alongside the resulting node actions in syslog now makes it a lot easier to get the whole picture of what happened with a problem request, even if the gear was destroyed after the request failed.
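
I won’t reproduce the Administration Guide here, but to give the flavor, the node side boils down to a few settings in /etc/openshift/node.conf along these lines (the option names below are from memory and purely illustrative; check the guide for the real ones):

# /etc/openshift/node.conf -- illustrative; confirm the option names against
# the Administration Guide before relying on them
PLATFORM_LOG_CLASS=SyslogLogger                 # send node platform logs to syslog
PLATFORM_LOG_CONTEXT_ENABLED=1                  # annotate entries with context
PLATFORM_LOG_CONTEXT_ATTRS=request_id,app_uuid,container_uuid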

Distinguishing gear logs

There are gear logs from two sources to be handled. First, we would like to collect the httpd access logs for the gears, which are generated by the node host httpd proxy (the “frontend”). Second, we would like to collect logs from the actual application servers running in each gear, whether they be httpd, Tomcat, MongoDB, or something else entirely.

Frontend access logs

These logs were already centralized into /var/log/httpd/openshift_log and included the app hostname as well as the backend address the request was proxied to. A single httpd option, “OpenShiftFrontendSyslogEnabled”, adds logging via “logger”, the standard utility for writing to syslog. Every entry is tagged with “openshift-node-frontend” to distinguish frontend access logs from any other httpd logs you might write.

OSE 2.1 adds the ability to look up and log the app and gear UUIDs as well. Why isn’t the hostname enough? A single application may have multiple aliases, so it is hard to automatically collate all log entries for one application. An application could also be destroyed and re-created with the same address, though it is technically a different app from OpenShift’s viewpoint. And the same application may have multiple gears, which may come and go or be moved between hosts; the backend address for a gear could even be reused by a different gear after the first has been destroyed.

In order to uniquely identify an application and its gears in the httpd logs for all time, OSE 2.1 introduces the “OpenShiftAnnotateFrontendAccessLog” option, which appends the application and gear UUIDs to each log message. The application UUID is unique to an application for all time (another app created with exactly the same name will get a different UUID) and shared by all of its gears. The gear UUID is unique to each gear; note that the UUID (Universally Unique ID) is different from the gear UID (User ID), which is just a Linux user number and may be shared with many other gears. Scale an application down and back up, and even if the re-created gear has the same UID as a previous gear, it will have a different UUID. If you move a gear between hosts, however, it retains its UUID.

If you want to automatically collect all of the frontend logs for an application from syslog, set the “OpenShiftAnnotateFrontendAccessLog” option and collect entries by application UUID. Then your httpd log entries look like this:

Jun 10 14:43:59 vm openshift-node-frontend[6746]: 192.168.122.51 php-demo.openshift.example.com - - [10/Jun/2014:14:43:59 -0400] "HEAD / HTTP/1.1" 200 - "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" (3480us) - 127.1.244.1:8080/ 53961099e659c55b08000102 53961099e659c55b08000102

The “openshift-node-frontend” tag is added to these syslog entries by logger (followed by the process ID which isn’t very useful here). The app and gear UUIDs are at the end there, after the backend address proxied to. The UUIDs will typically be equal in the frontend logs since the head gear in a scaled app gets the same UUID as the app; they would be different for secondary proxy gears in an HA app or if you directly requested any secondary gear by its DNS entry for some reason.
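
So, for example, gathering one application’s frontend entries with an rsyslog rule could be as simple as keying on the tag and the application UUID from the entry above (the destination file is arbitrary; this is just a sketch):

# illustrative rsyslog rule: collect one application's frontend access logs
if $programname == 'openshift-node-frontend' and $msg contains '53961099e659c55b08000102' then /var/log/openshift_frontend/php-demo.log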

Gear application logs

In order to centralize application logs, it was necessary to standardize cartridge logging so that all logs go through a standard mechanism that can then be centrally configured. You might think this would just be syslog, but it was also a requirement that users be able to keep their logs in their gear if desired, and getting syslog to navigate all of the permissions necessary to lay down those log files with the right ownership proved difficult. So instead, all cartridges now must log via the new utility logshifter (our first released component written in Go, as far as I know). logshifter writes logs to the gear app-root/logs directory by default, but it can also be configured (via /etc/openshift/logshifter.conf) to write to syslog. It can further be configured so that the end user can override this and have logs written to gear files again (which may save them from having to navigate whatever logging service ends up handling syslogs when all they want to do is debug their app).
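
To give an idea of what that configuration looks like, here is a sketch of /etc/openshift/logshifter.conf with gear logs going to syslog; the key names and syntax here are from memory and should be checked against the file actually shipped on the node:

# /etc/openshift/logshifter.conf -- illustrative sketch, verify against the shipped file
outputtype: syslog            # send gear logs to syslog instead of gear files
outputtypefromenviron: true   # let the gear user override the destination per gear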

Here, distinguishing which gear created each log entry requires somewhat more drastic measures. We can’t trust each gear to self-report accurately (a gear could spoof log traffic as coming from another gear, or from something else entirely). So the context information is added by syslog itself via a custom rsyslog plugin, mmopenshift. Properly configuring this plugin requires rsyslog version 7, which (to avoid conflicting with the version shipped in RHEL) is actually shipped in a separate RPM, rsyslog7. So usefully consolidating gear logs into syslog really requires replacing your entire rsyslog with the newer one. This might seem extreme, but it’s actually not too bad.

Once this is done, any logs from an application can be directed to a central location and distinguished from other applications. This time the distinguishing characteristics are placed at the front of the log entry, e.g. for the app server entry corresponding to the frontend entry above:

2014-06-10T14:43:59.891285-04:00 vm php[2988]: app=php ns=demo appUuid=53961099e659c55b08000102 gearUuid=53961099e659c55b08000102 192.168.122.51 - - [10/Jun/2014:14:43:59 -0400] "HEAD / HTTP/1.1" 200 - "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2"

The example configuration in the manual directs these to a different log file, /var/log/openshift_gears. This log traffic could be directed to /var/log/messages like the default for everything else, or sent to a different destination entirely.
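
To sketch what that routing looks like in rsyslog7’s newer syntax (the mmopenshift property name below is illustrative; the Administration Guide has the authoritative example):

# rsyslog7 configuration sketch -- illustrative only
module(load="mmopenshift")          # parse gear metadata into message properties
action(type="mmopenshift")
# anything the plugin annotated with an app UUID is a gear log; the property
# name here is illustrative, check the documented example
if $!OpenShift!AppUuid != '' then {
    action(type="omfile" file="/var/log/openshift_gears")
    stop
}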

Gear metrics

Aside from just improving log administration capabilities, one of the motivations for these changes is to enable collection of arbitrary metrics from gears (see the metrics PEP for background). As of OSE 2.1, metrics are basically just implemented as log messages that begin with “type=metric”. These can be generated in a number of ways:

  • The application itself can actively generate log messages at any time; if your application framework provides a scheduler, just have it periodically write lines to stdout beginning with “type=metric” and logshifter will bundle these messages into the rest of your gear logs for analysis.
    • Edit 2014-06-25: Note that these have a different format and tag than the watchman-generated metrics described next, which appear under the “openshift-platform” tag and aren’t processed by the mmopenshift rsyslog plugin. So you may need to do some work to have your log analyzer consider these metrics.
  • Metrics can be generated passively by the openshift-watchman service in a periodic node-wide run. This can generate metrics in several ways:
    • By default it generates standard metrics out of cgroups for every gear. These include RAM, CPU, and storage.
    • Each cartridge can indicate in its manifest that it supports metrics, in which case the bin/metrics script is executed and its output is logged as metrics. No standard cartridges shipped with OSE support metrics at this time, but custom cartridges could.
    • Each application can create a metrics action hook script in its git repo, which is executed with each watchman run and its output logged as metrics. This enables the application owner to add custom metrics per app (see the sketch after this list).
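
Here’s a minimal sketch of such an action hook. The path follows the usual action hook convention, the metric itself is made up, and the output follows the “type=metric” convention described above:

#!/bin/bash
# .openshift/action_hooks/metrics -- illustrative custom metrics hook
# Example: report the length of an application-specific work queue file
queue_file="$OPENSHIFT_DATA_DIR/work_queue"
if [ -f "$queue_file" ]; then
    count=$(wc -l < "$queue_file")
else
    count=0
fi
echo "type=metric queue.length=$count"

(Remember to chmod +x the hook before committing it, as with any action hook.)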

Note that the cartridge and action hook metrics scripts have a limited time to run, so that they can’t indefinitely block the metrics run for the rest of the gears on the node; all of this is configurable with watchman settings in node.conf. Also note that watchman-generated logs are tagged with “openshift-platform”, e.g.:

Jun 10 16:25:39 vm openshift-platform[29398]: type=metric appName=php6 gear=53961099e659c55b08000102 app=53961099e659c55b08000102 ns=demo quota.blocks.used=988 quota.blocks.limit=1048576 quota.files.used=229 quota.files.limit=80000

The example rsyslog7 and openshift-watchman configuration will route watchman-generated entries differently from application-server entries since the app UUID parameter is specified differently (“app=” vs “appUuid=”). This is all very configurable.
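
For instance, an illustrative rule to peel the watchman metric entries shown above off into their own file could key on the tag and the prefix:

# illustrative rsyslog rule: separate watchman-generated metrics
if $programname == 'openshift-platform' and $msg contains 'type=metric' then /var/log/openshift_metrics.log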

I am currently working on installer options to enable these centralized logging options as sanely as possible.

vim as IDE

I happened to learn vi as my first major text editor back in the 90s. There are many great editors, and I have no interest in proving that vim is best. It’s just what I use, and what many others use, and it’s available everywhere.

A friend recently observed “it looks like your vim does things that mine doesn’t.” Vim is almost infinitely extensible. It takes some time to incorporate everything, to be sure, and vim lacks somewhat in discoverability. But when you are working on code all day every day, it pays to invest some time in improving and learning your tools. And no matter how much you know about vim, you can always find some feature to surprise and delight you.

vim tips and tricks sites abound, so I don’t really have much to add to these:

  1. http://www.vim.org/docs.php
  2. http://vim.wikia.com/wiki/Vim_Tips_Wiki + http://vim.wikia.com/wiki/Best_Vim_Tips
  3. http://vimcasts.org/
  4. http://pragprog.com/book/dnvim/practical-vim
  5. http://learnvimscriptthehardway.stevelosh.com/ (learn how to seriously customize vim)

I spend most of my time working with git between various related repositories, mostly coding in ruby and bash. If you are doing the same thing, you might be interested in some of the plugins I’ve added to make life a little easier and have vim help as much as possible with the workflow. You really can get to the point where vim pretty much does everything you need. I’m still getting these into my fingers, but thought I’d pass them on:

  1. NERDTree – this is a handy directory plugin. vim already has a directory display; if you start up vim with a directory name, you get a directory listing. It’s not a tree, though, and it goes away once you pick a file to edit. Invoke NERDTree (I mapped “:NT” to toggle it on and off) and it keeps a directory tree structure in a vertical split on the left; choose a file and it opens in a buffer on the right. If you dismiss NERDTree and bring it back later, it comes back with the same state – same directories opened.
  2. Fugitive – Sweet git integration plugin from Tim Pope. I will never work another merge conflict without it. It does so much stuff there are five vimcasts introducing it. May also introduce you to standard vim features you never heard of, like the quickfix list.
  3. Rails.vim – another Tim Pope invention for working with Rails. The idea is to make all those TextMate users jealous (you may want some addons like SnipMate though – and see this classic post for pointers to really decking out your vim Rails IDE).

That’s just three, and that’ll keep you busy for a long time. There are plenty more (see that last link and various recommendations on StackOverflow).

vim for OpenShift and oo-ruby

One more addition – if you happen to be in my very particular line of work, you get to work with a lot of ruby files that don’t *look* like ruby files to vim, because they’re scripts that specify oo-ruby as their interpreter.

What’s oo-ruby? It’s a shim to wrap Ruby such that you get a Ruby 1.9 environment whether you are on Fedora (where 1.9 is native currently) or on RHEL (where it is provided by an SCL).

But the problem is, if the file doesn’t end in .rb, vim doesn’t know what filetype to give it, so syntax highlighting and all the other goodies that come with a known filetype don’t work. You have to help vim recognize the filetype. Create or edit ~/.vim/scripts.vim and add the following vimscript:

if did_filetype() " filetype already set..
    finish " ..don't do these checks
endif
if getline(1) =~ '^#!.*\<oo-ruby\>'
    setfiletype ruby
endif

This checks the first line of the file for “oo-ruby” somewhere after the shebang and, if present and filetype is not otherwise determined, sets filetype to ruby. Problem solved!

The OpenShift cartridge refactor: a brief introduction

If you’re watching the commit logs over at OpenShift Origin you’ll see a lot of activity around “v2” cartridges (especially a lot of “WIP” commits). For a variety of reasons we’re refactoring cartridges to make it easier to write and maintain them. We’re particularly interested in enabling those who wish to write cartridges, and part of that includes removing as much as possible from the current cartridge code that is really generic platform code and shouldn’t be boilerplate repeated in cartridges. And in general, we’re just trying to bring more sanity and remove opacity.

If you’ve fired up Origin lately you wouldn’t necessarily notice that anything has changed. The refactored cartridges are available in parallel with existing cartridges, and you have to opt in to use them. To do that, use the following command as root on a node host:

# oo-cart-version -c toggle
Node is currently in v1 mode
Switching node cartridge version
Node is currently in v2 mode

The node now works with the cartridges installed in /usr/libexec/openshift/cartridges/v2 (rather than the “v1” cartridges in /usr/libexec/openshift/cartridges – BTW these locations are likely to change, watch the RPM packaging for clues). Aside from the separate cartridge location, there are logic branches for the two formats in the node model objects, most prominently in OpenShift::ApplicationContainer (application_container.rb under the openshift-origin-node gem), which makes many of its calls against @cartridge_model, either a V1CartridgeModel or a V2CartridgeModel object depending on the format.

The logic branches are based on two things. For an existing gear, the cartridge format already present is used. For new gears, the presence of the marker file /var/lib/openshift/.settings/v2_cartridge_format is checked (which is the main thing the command above toggles); if present, v2 cartridges are used, otherwise the old ones. In this way, development and testing of v2 cartridges can continue without needing a fork or branch and without disrupting the use of v1 cartridges.

A word of warning, though: you can use gears with the v1 and v2 cartridges in parallel on the same node (toggle back and forth), but don’t try to configure an embedded cart from one format into a gear with the other. Also, do not set a different mode on different nodes in the same installation. Results of trying to mix and match that way are undefined, which is to say, probably super broken.

Let’s look around a bit.

# ls /usr/libexec/openshift/cartridges/
10gen-mms-agent-0.1 diy-0.1 jbossews-1.0 mongodb-2.2 phpmyadmin-3.4 rockmongo-1.1 zend-5.6
abstract embedded jbossews-2.0 mysql-5.1 postgresql-8.4 ruby-1.8
abstract-httpd haproxy-1.4 jenkins-1.4 nodejs-0.6 python-2.6 ruby-1.9
abstract-jboss jbossas-7 jenkins-client-1.4 perl-5.10 python-2.7 switchyard-0.6
cron-1.4 jbosseap-6.0 metrics-0.1 php-5.3 python-3.3 v2

# ls /usr/libexec/openshift/cartridges/v2
diy haproxy jbosseap jbossews jenkins jenkins-client mock mock-plugin mysql perl php python ruby

There appear to be far fewer cartridges under v2, and that’s not just because they’re not all complete yet. Notice what’s missing in v2? Version numbers. You’ll see the same thing in the cartridge source trees and package specs: there is no longer a single cartridge per version. It’s possible to support multiple different runtimes from the same cartridge. This is evident if you look in the ruby cartridge. First, there’s the cartridge manifest:

# grep Version /usr/libexec/openshift/cartridges/v2/ruby/metadata/manifest.yml
Version: '1.9'
Versions: ['1.9', '1.8']
Cartridge-Version: 0.0.1

There’s a default version if none is specified when configuring the cartridge, but there are two versions available in the same cartridge. Also notice the separate directories for version-specific implementations:

# ls /usr/libexec/openshift/cartridges/v2/ruby/versions/
1.8 1.9 shared

So rather than have completely separate cartridges for the different versions, different versions can live in the same cartridge and directly share the things they have in common, while overriding the usually-minor differences. This doesn’t mean we’re going to see ruby versions 1.9.1, 1.9.2, 1.9.3, etc. – in general you’ll only want one current version of a supported branch, such that security and bug fixes can be applied without having to migrate apps to a new version. But it means we cut down on a lot of duplication of effort for multi-versioned platforms. We can put ruby 1.8, 1.9, and 2.0 all in one cartridge and share most of the cartridge code.

You might be wondering how to specify which version you get. I’m not sure what is planned for the future, but at this time I don’t believe the logic branches for v2 cartridges have been extended to the broker. Right now, if you look in /var/log/mcollective.log for the cartridge-list action, you’ll see the node is reporting two separate Ruby cartridges just like before, which are reported back to the client, and you still request app creation with the version in the cartridge name:

$ rhc setup
...
Run 'rhc app create' to create your first application.
Do-It-Yourself                          rhc app create <app name> diy-0.1
JBoss Enterprise Application Platform   rhc app create <app name> jbosseap-6.0.1
Jenkins Server                          rhc app create <app name> jenkins-1.4
Mock Cartridge                          rhc app create <app name> mock-0.1
PHP 5.3                                 rhc app create <app name> php-5.3
Perl 5.10                               rhc app create <app name> perl-5.10
Python 2.6                              rhc app create <app name> python-2.6
Ruby                                    rhc app create <app name> ruby-1.9
Ruby                                    rhc app create <app name> ruby-1.8
Tomcat 7 (JBoss EWS 2.0)                rhc app create <app name> jbossews-2.0
$ rhc app create rb ruby-1.8
...
Application rb was created.

If you look in v2_cart_model.rb, you’ll see there’s a FIXME where the version is parsed out of the base cartridge name to handle this; the FIXME notes that the version should really be specified explicitly in an updated node command protocol. So at this time, there’s no broker-side API change to pick which version from a cartridge you want. But look for that to change when v2 carts are close to prime time.

By the way, if you’re used to looking in /var/log/mcollective.log to see broker/node interaction, that’s still there (you probably want to set loglevel = info in /etc/mcollective/server.cfg) but a lot more details about the node actions that result from these requests are now recorded in /var/log/openshift/node/platform.log (location configured in /etc/openshift/node.conf). You can watch this to see exactly how mcollective actions translate into system commands, and use this to manually test actions against developing cartridges (see also the mock cartridge and the test cases against it).

You’ll notice if you follow some cartridge actions (e.g. “restart”) through the code that the v2 format has centralized a lot of functions into a few scripts. Before, each action and hook resulted in a call to a separate script (often symlinked in from the “abstract” cartridge, which, anyone would admit, is kind of a hack):

# ls /usr/libexec/openshift/cartridges/ruby-1.8/info/{bin,hooks}

/usr/libexec/openshift/cartridges/ruby-1.8/info/bin:
app_ctl.sh build.sh post_deploy.sh ps threaddump.sh
app_ctl_stop.sh deploy_httpd_config.sh pre_build.sh sync_gears.sh
/usr/libexec/openshift/cartridges/ruby-1.8/info/hooks:
add-module deploy-httpd-proxy reload restart stop tidy
configure info remove-httpd-proxy start system-messages update-namespace
deconfigure move remove-module status threaddump

In the new format, these are just options on a few scripts:

# ls /usr/libexec/openshift/cartridges/v2/ruby/bin/
build control setup teardown

If you look at the mcollective requests and the code, you’ll see the requests haven’t changed, but the v2 code just routes them to the new scripts. For instance, “restart” is now just an option to the “control” script above.
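
That also makes it easy to poke at a cartridge by hand while developing; for example (run as the gear user, from the gear home directory, paths illustrative):

$ ruby/bin/control status
$ ruby/bin/control restart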

Those are just some of the changes that are in the works. The details are still evolving daily, too fast for me to keep track of frankly, but if you’re interested in what’s happening, especially if you’re interested in writing cartridges for OpenShift, you might like to dive into the existing documentation describing the new format:

https://github.com/openshift/origin-server/blob/master/node/README.writing_cartridges.md

Other documents in the same directory may or may not distinguish between v1 and v2 usage, but regardless should be useful, if sometimes out of date, reading.

Fiddling around with Cloud Foundry

In my spare work time the last couple days I’ve taken another good look at Cloud Foundry. I haven’t gotten to the code behind it yet, just putting it through its paces as a user. I’ve used the public cloud as well as running the virtual appliance (Micro Cloud Foundry), and the CLI as well as the STS/Eclipse plugin. It’s really a lot easier than I expected to get up and going (even the Eclipse part). I guess that’s the whole point!

When setting up the MCF appliance, I didn’t quite cotton on to what the DNS token was for (or, you know, read the docs). Cloud Foundry will apparently set up a wildcard DNS entry for you to point to your local instance. Then you can point vmc/STS to api.{your-choice}.cloudfoundry.me and your web browser to the app URLs that come out of that, and they’ll actually resolve to the MCF VM on your local network (well, as long as you’re local too). That’s pretty cool, but I didn’t do that. I just set it up with a local domain name and added my own wildcard entry at my DD-WRT router. I had to look up how to do that – just pin the MAC address to an IP and add a line to DNSMasq config:

## wildcard for micro cloud foundry VM
address=/.mcf.sosiouxme.lan/172.31.0.140

The only trouble was that when I booted it up, I left the NIC at default config, which attaches it to a virtual network private to my workstation. I’d much prefer it available to my whole LAN, so I reconfigured it to bridge to the LAN. But then I had trouble getting MCF to accept its new address. It wasn’t clear how to do it – I don’t remember how I finally got it to work – something about offline mode. But eventually it accepted its new LAN address.

The example with the simple Ruby application is indeed simple: just install ruby and rubygems (required for the CLI anyway and even instructions for that are included!) and the Sinatra gem, and follow the instructions.

Rails proved to be a little more complicated, but mainly due to my setup. Rails 3.0 and 3.1 are supported. I had run gem install rails and gotten the latest: 3.2. It seems like this might work, except the simple app that you get with “rails new” uses coffeescript, which pulls in a native gem for the JS library, which can’t be bundled into the cloud app. The discussion at that link left me unclear how to remedy this – remove the coffeescript gem? Wouldn’t that break stuff? Configure it to use a different JS lib via ExecJS? I wasn’t clear which, if any, of the options there wouldn’t have the same problem. Taking the path of least resistance, I removed that rails gem and explicitly installed the most recent 3.0 instead.

This highlights one of the difficulties with a cloud platform… native code. If your code requires something that isn’t abstracted away into the platform and framework, you’re out of luck. Theoretically, you know nothing about the native host under the platform, so you can’t count on it. Just one of the prices you pay for flexibility.

Everything worked fine… except not quite? When I clicked the link to get application environment info, I didn’t get it.

Doesn’t seem to be routing that request, for some reason. It works fine if run with “rails server” naturally. Not sure what happened there, and didn’t want to mess with it just now.

Moving on to Grails and Spring MVC, I quickly set up sample apps in STS and tried them out on both the private and public instance. No problems.

The cool thing about having a local foundry, though, aside from being master of your domain, is that you can debug into your running app, which is vital if it is having a weird problem specific to the cloud environment. You just have to start the app in debug mode. The only hitch here is that the Cloud Foundry servers don’t show up in the “Debug As… > Debug on Server” dialog.

And the “Connect to Debugger” button didn’t show up after starting the app in debug.

So, how to actually debug in? Well, it turns out it’s simple. The debugger *is* already connected to the app; I’m just not looking at the debug perspective because I couldn’t go the “Debug as…” route. I can explicitly open it (Window > Open Perspective) or just set a breakpoint in the code and hit it with the browser (which automatically requests opening that perspective). Then I’m debugging as usual.

The “Connect to Debugger” button only shows up for the app when I disconnect the debugger and need to reconnect.

As far as I can tell, the Eclipse plugin has the same capabilities as the CLI, although the path may not be obvious, being a GUI. I did notice one little glitch that someone should fix (maybe me! in my copious spare time…): if I open the foundry pane and there are no services, the services subpane is, of course, empty.

The glitch is that if I add a service (say, a MongoDB) it still doesn’t show up in the list, and I can’t then bind it to the application. I have to close the tab and re-open it by clicking on the server in the “Servers” pane and going to the “Applications” tab… many levels of tab/pane here!

You might also notice a “caldecott” app in the applications list. That’s actually a bridge app for accessing services from the cloud directly. With the caldecott rubygem, you can open a tunnel between a foundry service (say, a MySQL DB) and a port on your local host, such that clients (such as mysqldump) can directly access that service at that port (e.g. to make a backup or restore one). That will come in handy.
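
A quick sketch of how that goes with the CLI (the service name is whatever yours is called; vmc prints the local port it opens, 10000 by default if memory serves):

$ gem install caldecott
$ vmc tunnel mysql-demo      # pick the bound service; vmc offers to start a client too
$ mysqldump --protocol=TCP -h 127.0.0.1 -P 10000 -u <user> -p <dbname> > backup.sql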

Also, just recently Cloud Foundry enabled running arbitrary workloads on the cloud (as long as they’re on supported platforms). It’s not just about webapps anymore! Another sweet development.

the rest of the day

My G1 phone utterly locked up. I think it was in a deadlock or spin loop of some sort; it didn’t respond to any buttons, not to the power button being held down, not to the power being plugged in. I had to remove the battery and replace it to get it to wake up. This has never happened before. All this just because I wanted to see what time it was, sheesh! On the bright side, I figured out how to remove the protective case and used the opportunity to replace it with the other one I bought a while back.

I’m trying out Heroku – heard about it at a Ruby meetup last week. What a useful idea – a place to try out my Rails apps for free, and hooked in with github. Hope it sticks around. For now, I just noticed that I don’t even have Ruby installed on my Ubuntu VM – how embarrassing. Rectifying that.

Having a quick chat (before rehearsal) with Justis about the tech lunch and other things around creating a tech startup/entrepreneurial atmosphere in the Triangle area.

First step of Heroku failed – installing the heroku gem.

extconf.rb:1:in `require': no such file to load -- mkmf (LoadError)

Looks like I need another dependency according to these helpful blog entries. Easy enough, rolling. But having installed the heroku gem, the heroku command is not available in my path. Following a tip from stackoverflow I find the path is /var/lib/gems/1.8/bin/heroku so I just run that manually. Need to install libopenssl-ruby, ok. After that, the workflow goes as advertised on the heroku start page. I open the site in my browser, and I receive a lovely error about my configuration – well, I’ll figure out what that’s about later, I’m sure it’s some rails versioning thing or something I left busted. D’oh.