A summary of my life with depression

This may not seem like a technical post, but if you cross-reference it with this talk it should be clear that this is a problem developers should really be aware of. I mean, not my personal issues, but the topic of mental illness in the developer world. So in the spirit of openness and sharing with others, I present my own story.

I’m not sure when depression began; I think it was kind of a slow progression over years. I’m 41 years old and I’ve always been kind of a negative person, always looking for flaws in things and worrying about what could go wrong. I liked to think this made me a better engineer. Somewhere along the way it became a state of mind where I could only see the negative.

I started to realize I was in trouble when it became clear that I didn’t really enjoy anything any more. It’s called anhedonia. It turns out you can function for quite a while being motivated only by negatives (fear of failure, fear of letting down your family/coworkers, etc.) and through sheer determination, but it’s a really miserable way to live. It’s hard to understand if you haven’t experienced it, so it’s not much use explaining it. Without anything that drives me to say “yes!” life seems pretty damn pointless. I was frustrated and angry all the time.

My wife encouraged me to get clinical help and eventually I did – apparently sometime around September 2015, though I don’t really remember. I remember describing my depression not so much as “stuck in a pit” as “life in a dense fog”. For the next year and a few months my psychiatrist had me trying out various medications and tweaking dosages and such. Sometimes something would seem to be helping a little, but nothing really seemed to stick or make a big difference. It was discouraging to say the least. I was doing counseling, too, though I have yet to find a counselor who helps much.

In the summer of 2016 my mother was diagnosed with incurable cancer. In September 2016 my wife and I separated and I started shuttling our kids between us. Then in late November her health took a nosedive and I was left taking care of the kids alone, in addition to working full time with depression. I had always been able to deal with everything myself before, but something finally gave out in me. My job at Red Hat, which before had always been a refuge in turbulent times, became unbearable. I would spend all day staring at my screen and moving the mouse occasionally when it started to go dark because I hadn’t done anything for so long. I felt crushing guilt and shame that the one thing I had always been good at and enjoyed was now a joyless burden and I was letting everyone down.

I went on disability leave in early December. I didn’t even know you could go on disability for depression, but it was definitely disabling me, so it makes sense. My psychiatrist suggested trying a new course of treatment called TMS (Transcranial Magnetic Stimulation). In short, it uses an electromagnet to stimulate your brain, in daily treatments over the course of 6-8 weeks. I was expecting to get started with it ASAP, but it turned out I couldn’t start until January 4th.

I thought disability leave would be a relief, and it certainly was in the sense that I no longer had to feel guilty about the work I wasn’t doing (well, less guilty anyway – getting paid to do nothing really rubs me the wrong way). The downside is that it gave me a lot more time to brood over how useless I was and how I was going to lose everything and end up still depressed but in a homeless shelter, with my kids in foster care. I can look at things objectively and say that actually my situation is not that bad, and quite recoverable if I can just kick this depression thing, and there’s a good chance I can. And I’m so grateful for being able to take disability, and for my health insurance that covers all this pretty well, and for having my health otherwise. But the thing about being depressed is that you still feel hopeless, regardless of the reality of the situation.

I’m two weeks into TMS now. If anything, I feel worse, because I’m starting to develop anxiety and having more trouble sleeping. My psychiatrist said her patients often saw improvement within a week (which only made me more anxious when that week passed, worrying that I might be among the 20% or so who don’t benefit), but the TMS folks said to expect more like four weeks, so I’m trying to be patient. If it doesn’t work out, I can do genetic testing to see if that helps pinpoint a medication that will actually help. I’m trying meditation, working on gratitude, connecting with people (something I never put much effort into before), contradicting my negative thoughts, and other random things in case they might help. And exercising, which seems to help. And just keeping busy to distract myself from feeling hopeless. I don’t have a happy ending yet, but everyone tells me things will get better if I just keep trying.

I guess if there’s a silver lining, it’s that people have come out of the woodwork to tell me they understand what I’m going through because they have been there. This is so common, there should be no reason to feel shame or to avoid treatment like I did for so long. It’s made me realize that in my fierce self-sufficiency I’ve never been open to being helped, or for that matter to helping others. But it turns out that nearly everybody needs some help sometimes, and I hope that out of this experience I’ll learn to be a more decent human being than I have been so far.

2015-12-28

SSL is dead. Long live TLS.

Hmm, do you think there are any other protocols that could be resurrected with a different name? How about good old HTTP? It hasn’t been about “Hypertext” transport for a long time. I mean sure, HTML is still around, but half the time it’s being written on the fly by your Javascript app anyway, not to mention there are CSS, SVG, images, audio, video, and a host of other things being transported via HTTP. And it’s not just passively transferring files, it’s communicating complicated application responses.

Maybe we should just call it Transport Protocol, “TP” for short. Yeah, I like it!

Happy New Year!

2015-12-08

I can’t believe it’s already December.

Problem du jour: getting logs from a pod via the OpenShift API. You would think I could just look at the impl of the oc logs command, but as usual it’s too tangled a mess (or, more likely, I just need to understand how to really use the go tools I have).

oc logs first muddies the water trying to figure out what kind of resource the user wants logs for. I can hopefully ignore this since I already have the pod I created ready.

Interlude – trying to figure out what gets injected into a pod’s /etc/resolv.conf file. Because someone is getting a wildcard domain added to their search directive, and that causes everything to resolve to that domain’s IP, including e.g. github.com. I couldn’t get a useful read on which settings are relevant. I thought there was a setting for whether or not to inject the skydns nameserver; now I can’t find it. I created a pod on my devenv and it didn’t get anything injected. So I’m not sure of all the sources of input to this file.
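In the meantime, the quickest sanity check I know of is just to look at the file from inside a running pod (a hedged example; substitute your own pod name for mypod):

$ oc exec mypod -- cat /etc/resolv.conf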

WordPress used to have a button to remove the distractions and make the editor take up the whole window. What ever happened to that?

So back to getting logs. Looks like I need to store the command’s Factory somewhere in order to be able to get to the LogsForObject method. Kubernetes or OpenShift factory? I have the OpenShift factory from my command and it contains the Kubernetes factory so it’s all the same.

I got the pod running… after I remembered to actually have the diagnostic call the necessary code. Disconcerting when you run a diagnostic and get *nothing* back. Now I have the pod being created and a readCloser with the results. Reminding myself how to use a readCloser.

Pro tip: don’t try to Fscanln a reader. Create a bufio Scanner instead.

See the thing I made

I wrote an article for the Red Hat Developers Blog. I haven’t felt much like blogging this year, but there’s one thing at least. If I have articles I think would be of outside interest, I’ll probably post them over there. This blog should return to its original purpose, which was for me to blather about my frustrations and solutions in a kind of stream of consciousness.

libvirt boxen for OpenShift v3

I promise I have not been struggling with vagrant the whole time since my last post. Actually I updated the vagrant-openshift docs and made some other fixes so the whole thing is a little more sane and obvious how to use, and then went on to other stuff. Today I’m just trying to put together OpenShift v3 libvirt boxen to put up for the public next to the virtualbox ones. Should be easy, actually it probably is; my problems today all seem to be local.

It would be nice if, just once, vagrant had a little transparency. It doesn’t have a verbose mode, and never tells you where anything is or should be.

$ vagrant box list
aws-dummy-box (aws, 0)
fedora_base (libvirt, 0)
fedora_inst (libvirt, 0)
openstack-dummy-box (openstack, 0)

Ah, yeah… so… where are those defined? What images do they point to, and where were they downloaded from?

The errors are the worst. When something goes wrong, could you please tell me what you think you got from me, what you tried to do with that, and what went wrong? No.

$ vagrant up --provider=libvirt
Bringing machine 'openshiftdev' up with 'libvirt' provider...
Name `origin_openshiftdev` of domain about to create is already taken.
Please try to run `vagrant up` command again.

Just try to figure out what is specifying “origin_openshiftdev” as a domain and what to do about it. Or how to release it so I can, in fact, run vagrant up again.

$ vagrant status
Current machine states:

openshiftdev not created (libvirt)

The Libvirt domain is not created. Run `vagrant up` to create it.
$ vagrant destroy
==> openshiftdev: Domain is not created. Please run `vagrant up` first.

Part of the problem is that I have at least three semi-autonomous bits of vagrant to deal with. There’s vagrant itself, which keeps track of box definitions. There’s the Vagrantfile I’m feeding it from OpenShift Origin, which might interact with the vagrant-openshift plugin (though I don’t think so on vagrant up) but in any case defines what hosts I’m supposed to be creating. Finally, there’s the provider plugin (libvirt in this case) that has to interface with the virtualization to actually manage the hosts. If something goes wrong, I can’t even tell which part is complaining, much less why.

Enough complaining, what is going on?

The primary input to vagrant is a “box”. This is really just a tarball that contains a minimal Vagrantfile, metadata file, and the real payload, the disk image of the virtual host. The vagrant “box” is provider-specific – the metadata specifies a provider.

When you run vagrant up, the local Vagrantfile should specify which box to start with – a URL to retrieve it and the name for vagrant to import it as. The first run will download and unpack it under ~/.vagrant.d/boxes/<name>/<version>/<provider>/ (note, you can have multiple providers for the same box name/version). Subsequent runs just use that box definition. Simple enough as it goes.

vagrant up also creates a local .vagrant/ directory to keep track of “machines” (which are intended to represent actual running virtual hosts instantiated from boxes). Machines are stored under .vagrant/machines/<name>/<provider>, where the name comes from the Vagrantfile VM definition. In OpenShift’s Vagrantfile we have config.vm.define “openshiftdev”, so for the libvirt provider I could expect to see a directory .vagrant/machines/openshiftdev/libvirt once I’ve brought up a machine. (Under vbox you can define a master and several minions, which would all have different names. I hope we can do that soon with the other providers too.)
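To make that concrete, here’s roughly what an imported libvirt box looks like on my machine (contents as described above; your box names and paths may differ):

$ ls ~/.vagrant.d/boxes/fedora_inst/0/libvirt/
box.img  metadata.json  Vagrantfile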

I was planning to build a libvirt box from scratch, but then I realized there is a Vagrant plugin “vagrant-mutate” that will take an existing box and change it to another provider. Since we already have boxes defined for vbox I thought I’d just try this out to make a libvirt version of it.

$ vagrant mutate \
  https://mirror.openshift.com/pub/vagrant/boxes/openshift3/centos7_virtualbox_inst.box \
  libvirt
Downloading box centos7_virtualbox_inst from https://mirror.openshift.com/pub/vagrant/boxes/openshift3/centos7_virtualbox_inst.box
Extracting box file to a temporary directory.
Converting centos7_virtualbox_inst from virtualbox to libvirt.
 (100.00/100%)
Cleaning up temporary files.
The box centos7_virtualbox_inst (libvirt) is now ready to use.

So far, so good. Or not, because what does “ready to use” mean? Where is it? Turns out, it means said box is stored under my ~/.vagrant.d/boxes directory for use with the next vagrant up. It kept the same name with the provider embedded in it, but if I just change the name…

$ mv ~/.vagrant.d/boxes/centos7_{virtualbox_,}inst
$ vagrant box list
aws-dummy-box (aws, 0)
centos7_inst (libvirt, 0)
fedora_base (libvirt, 0)
fedora_inst (libvirt, 0)
openstack-dummy-box (openstack, 0)

… everything works out fine. So to use that with my openshift/origin Vagrantfile, I just put that name into my .vagrant-openshift.json file like so:

"libvirt": {
  "box_name": "centos7_inst"
},

Note that I don’t need to specify a box_url because the box is already local. Folks will need the box_url to access it once I publish it. So let’s vagrant up already…

$ vagrant up --provider=libvirt
Bringing machine 'openshiftdev' up with 'libvirt' provider...
/home/luke/.vagrant.d/gems/gems/fog-1.27.0/lib/fog/libvirt/requests/compute/list_volumes.rb:32:in `info': 
Call to virStorageVolGetInfo failed: Storage volume not found: 
no storage vol with matching path '/mnt/VMs/origin_openshiftdev.img'
(Libvirt::RetrieveError)

Ah. This is definitely due to some messing around on my part, because I deleted that image as I thought vagrant was saying earlier it was in the way (remember “Name `origin_openshiftdev` of domain about to create is already taken” ?). This error at least seems safe to pin on the libvirt provider, but I’m not sure what to do about it. Shouldn’t libvirt just clone the image from the vagrant box to create a new VM? How did my request to instantiate the “centos7_inst” box as “openshiftdev” get translated into looking for that particular file to exist?

I’m guessing (since grep got me nowhere) that the libvirt provider combines the directory I’m in with the machine name being requested and uses that as the VM name – and as the name of the volume backing that VM.

virsh to the rescue

I’m not really very knowledgeable about libvirt, mainly because I’ve been able to run VMs just fine using the graphical virt-manager interface and didn’t really need a lot more. I deleted that image above using virt-manager, figuring it would take care of referential integrity. Now that I’m venturing into the world of scripted VM management, I have been fiddling a little with virsh, so let’s apply that:

# virsh vol-list default
 Name                     Path 
------------------------------------------------------------------------------
[...]
 origin_openshiftdev.img  /mnt/VMs/origin_openshiftdev.img

Hmm, yes, libvirt does actually seem to expect that volume to be there. And then it’s failing trying to use it because the actual file isn’t there. So let’s nuke the volume record, wherever that may be.

# virsh vol-delete origin_openshiftdev.img default
Vol origin_openshiftdev.img deleted

And vagrant up --provider=libvirt suddenly works again.

Updating libvirt boxes

One extra note about using libvirt as a provider: as soon as you use vagrant to start a libvirt box you have downloaded, the vagrant-libvirt plugin makes a copy of the image from the box definition and uses that. The copy is made in libvirt’s default storage pool (unless you tell it otherwise… BTW, quite a few interesting options at the vagrant-libvirt README) and is named <box_name>_vagrant_box_image.img. So my box above translates to /mnt/VMs/centos7_inst_vagrant_box_image.img (I use a separate mount point for my VM storage because it’s just too easy to fill your root fs otherwise). Then when you actually create a VM, it uses a copy-on-write snapshot of that image, which seems to be named after the project and VM definition (my problem volume above, origin_openshiftdev.img). That way it’s a pretty fast, efficient startup from a consistent starting point.

Of course this could be a bit confusing if you actually want to update your vagrant box. You might download a new box definition from vagrant’s perspective, but vagrant-libvirt sees it already has a volume with the right name and keeps using that (in fact, once it has copied the volume, you may as well truncate the box.img under ~/.vagrant.d/boxes to save space; see the one-liner below). You have to nuke the libvirt volume to get it to use the updated box definition. virt-manager seems to do just as well as virsh vol-delete at this (not sure what happened before in my case). So e.g.

# virsh vol-delete centos7_inst_vagrant_box_image.img default

Then the next vagrant up with that box will use the updated box definition.
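And speaking of truncating the now-redundant box image to save space, that’s a one-liner too (path based on the box layout described earlier, so adjust for your box name):

$ truncate -s 0 ~/.vagrant.d/boxes/centos7_inst/0/libvirt/box.img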

vagrant setup

I may be an idiot, but I’ve simply never used vagrant successfully before.

“Just vagrant up and you’re ready to go!” say all the instructions. Yeah, that probably works fine with the default VirtualBox, which is available for all major desktop platforms. But I don’t want to use any more proprietary Oracle crap than I absolutely have to. I don’t even want to run VMs on my local host (all my RAM is already taken up by my browser tab habit), but if I did it would be on KVM/QEMU that’s native to Fedora. But I have access to AWS and OpenStack, so why would I even do that?

Well, if you want to use something other than the default, you have to add provider plugins. Alright, sounds easy enough.

$  vagrant plugin install vagrant-openstack-provider
Installing the 'vagrant-openstack-provider' plugin. This can take a few minutes...
Installed the plugin 'vagrant-openstack-provider (0.6.0)'!
$ vagrant plugin install vagrant-aws
Installing the 'vagrant-aws' plugin. This can take a few minutes...
Installed the plugin 'vagrant-aws (0.6.0)'!

Oh yeah, easy-peasy man! OK, let’s fire up OpenShift v3:

$ git clone https://github.com/openshift/origin
$ cd origin
$ vagrant up --provider=aws
There are errors in the configuration of this machine. Please fix
the following errors and try again:
SSH:
* `private_key_path` file must exist: PATH TO AWS KEYPAIR PRIVATE KEY

Hm, OK, must be something I need to provide. Looking through the Vagrantfile, it appears to expect an entry for AWSPrivateKeyPath in my .awscred file. I have a private key file, I can do that. Try again…

$ vagrant up --provider=aws
Bringing machine 'openshiftdev' up with 'aws' provider...
/home/luke/.vagrant.d/gems/gems/fog-aws-0.0.6/lib/fog/aws/region_methods.rb:6:in `validate_aws_region': Unknown region: "<AMI_REGION>" (ArgumentError)

Erm… right, more stuff to fill in. I don’t particularly want to edit the Vagrantfile, and not sure which AMI_REGION I should use. Surely someone on my team has specified all this somewhere? A search brings me to https://github.com/openshift/vagrant-openshift which looks like it ought to at least create me a config file that Vagrant will read. Sounds good, let’s go:

$ cd vagrant-openshift
$ bundle
Fetching git://github.com/mitchellh/vagrant.git
Fetching gem metadata from https://rubygems.org/.........
Resolving dependencies...
[.......]
Using vagrant-openshift 1.0.12 from source at .
Your bundle is complete!
Use `bundle show [gemname]` to see where a bundled gem is installed.

$ rake vagrant:install
vagrant-openshift 1.0.12 built to pkg/vagrant-openshift-1.0.12.gem.
The plugin 'vagrant-openshift' is not installed. Please install it first.
Installing the 'pkg/vagrant-openshift-1.0.12.gem' plugin. This can take a few minutes...
Installed the plugin 'vagrant-openshift (1.0.12)'!

$ cd ~/go/
[luke:/home/luke/go] $ vagrant openshift3-local-checkout -u sosiouxme
/home/luke/.rvm/rubies/ruby-1.9.3-p545/lib/ruby/site_ruby/1.9.1/rubygems/dependency.rb:298:in `to_specs': Could not find 'vagrant' (>= 0) among 218 total gem(s) (Gem::LoadError)
[...stack trace]

Whaaaaat? I’ve entirely broken vagrant now, and I have no idea how. Vagrant seems just a little more… fragile?… than I was expecting. Fine, let’s move to ruby 2.0 and define a gemset just for vagrant, such that if I hose things up again, it’s contained. (I tried ruby 2.1 first but had an error getting rubygems from rubygems.org… well, that’s not vagrant’s fault.) Wait, I can’t do that – recent vagrant versions are no longer published as a rubygem; I’m supposed to get it from my OS. I have it installed as the vagrant-1.6.5 RPM. If I try to add plugins under rvm, it complains that the vagrant *gem* isn’t installed. Which of course it isn’t… and if you do install it, it just tells you not to do that.

OK, so let’s just go with the system ruby that apparently the RPM is expecting.

$ rvm use system
Now using system ruby.
$ vagrant plugin install vagrant-aws
Installing the 'vagrant-aws' plugin. This can take a few minutes...
Installed the plugin 'vagrant-aws (0.6.0)'!
$ bundle
[...]
Using vagrant 1.7.2 from git://github.com/mitchellh/vagrant.git (at master)
 [ should I be worried the version doesn't match?]
Installing vagrant-aws 0.6.0
[...]
$ vagrant openshift3-local-checkout -u sosiouxme
Waiting for the cloning process to finish
Cloning origin ...
Cloning git@github.com:sosiouxme/origin
Cloning source-to-image ...
Cloning git@github.com:sosiouxme/source-to-image
Cloning wildfly-8-centos ...
Cloning git@github.com:sosiouxme/wildfly-8-centos
Cloning ruby-20-centos ...
Cloning git@github.com:sosiouxme/ruby-20-centos
ERROR: Repository not found.
 [yeah, I haven't cloned all the repos... do I need to???]
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Fork of repo wildfly-8-centos not found. Cloning read-only copy from upstream
ERROR: Repository not found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Fork of repo source-to-image not found. Cloning read-only copy from upstream
ERROR: Repository not found.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
Fork of repo ruby-20-centos not found. Cloning read-only copy from upstream
remote: Counting objects: 1, done.
remote: Total 1 (delta 0), reused 1 (delta 0)
Unpacking objects: 100% (1/1), done.
From https://github.com/openshift/origin
 * [new branch] master -> upstream/master
 * [new tag] v0.2 -> v0.2
OpenShift repositories cloned into /home/luke/go/src/github.com/openshift

$ cd src/github.com/openshift/origin/
$ vagrant origin-init --stage inst --os fedora lmeyer-osv3dev
Reading AWS credentials from /home/luke/.awscred
Searching for latest base AMI
Found: ami-0221586a (devenv-fedora_559)
$ vagrant up --provider=aws
Bringing machine 'openshiftdev' up with 'aws' provider...
[...]
/home/luke/.vagrant.d/gems/gems/excon-0.43.0/lib/excon/middlewares/expects.rb:6:in `response_call': The key pair 'AWS KEYPAIR NAME' does not exist (Fog::Compute::AWS::NotFound)

OK now what? I really don’t want to have to edit the Vagrantfile and deal with keeping that out of git. I hadn’t quite torn my hair out before a coworker pointed out https://github.com/openshift/vagrant-openshift#aws-credentials, which was considerably further down than I was looking. OK, so I just needed to add AWSKeyPairName to my .awscred, and…

$ vagrant up --provider=aws
Bringing machine 'openshiftdev' up with 'aws' provider...
[...]
==> openshiftdev: Machine is booted and ready for use!

Finally! And “vagrant ssh” works too! There’s not really much running yet, but I’ll figure that out later. Now what if I want to tear down that box and do something different? Let’s see…

$ vagrant -h
/home/luke/.vagrant.d/gems/gems/vagrant-share-1.1.3/lib/vagrant-share/activate.rb:8:in `rescue in <encoded>': vagrant-share can't be installed without vagrant login (RuntimeError)

Really? Ah, seems I ran into a bug and just need to upgrade vagrant by downloading it from vagrantup.com rather than Fedora. Just like I apparently did long ago and forgot about. Fun.

What would probably have been obvious to anyone who knew vagrant is that adding vagrant-openshift actually adds commands to what vagrant can handle, such as “vagrant openshift3-local-checkout” above. There are a bunch more on the help menu.

So, back to running stuff. This looks promising:

$ vagrant install-openshift3
Running ssh/sudo command 'yum install -y augeas' with timeout 600. Attempt #0
Package augeas-1.2.0-2.fc20.x86_64 already installed and latest version
[...]
$ vagrant test-openshift3
***************************************************
Running hack/test-go.sh...
[...]

It’s not obvious (to me) how to actually run openshift via vagrant. I’m guessing you just vagrant ssh in and run it from /data where everything is compiled. I was kind of hoping for more magic (like, here’s a vagrant command that sets up three clustered etcd servers and five nodes, and you just ssh in and “osc get foo” works). Also I need to try out the libvirt and openstack providers. But that’s all I have time for today…

Yep, easy-peasy!

OpenShift 3 from zero

I’ve been working on OpenShift v2 for a long time, supporting our existing customers in various ways, but it’s only fairly recently I’ve been able to take a little time to try out OpenShift v3, which as I previously noted, is a complete departure from v2. Mostly the same people working on it, informed by all of the lessons of v2 – but with totally different technology. And that’s great, because v2 spent an awful lot of time on container and orchestration technology that we’ll get with Docker and Kubernetes “for free” (there’s a price in having to collaborate to achieve our own requirements in projects also developed by others, but participation in a healthy community project should eventually bring about a large return on investment).

With totally different technology in play, trying out v3 is totally different from trying out v2. Under v2, you needed to install and configure a ton of RPMs, some built from the OpenShift Origin source (which in itself could be challenging – extensive BuildRequires – or you could get prebuilt RPMs from yum repos, but they wouldn’t be updated that often) as well as from various other sources (various dependencies like Jenkins, MongoDB, etc.) and the OS. Under v3, the hope is that components will be minimized and come from standard sources, preferably as part of the OS, with OpenShift a fairly self-contained add-on. Certainly, what is available now is not as complex as it will be once we’re talking about HA orchestration, routing, and runtime components, but the reduction in complexity specific to OpenShift is already palpable (mostly by being separated out into the Docker, etcd, and Kubernetes components that Red Hat is leveraging as a community participant rather than project owner).

As a rather fast-moving project, unhampered as yet by any semblance of production usage and the need for stability, v3 can still present a few challenges to approach. Any guide to setting it up will inevitably be obsolete quickly as changes introduce inconsistencies from any snapshot in time. And so, expect that things will be renamed (it’s kubecfg… wait, kubectl… wait, openshift cli!), that capabilities will evolve (surely we can figure out how to interact with SELinux enforcing and firewalld), and that you may need to dig around to figure out what the new reality is even when referencing relatively recent guides. (Sidebar: in this day and age, it’s hard to believe there are still blog posts about evolving technology without timestamps. Seriously? If I can’t tell what time period you’re discussing, your post may as well be misinformation.)

Getting to the starting line

This blog post is a case in point. What does it take to get going with OpenShift v3? Well, that depends on what you mean. Do you just want a running instance to poke at, or do you actually want to start building from source so you can modify it as needed?

Let’s start with running it. v3 is based on Docker as the container technology. (Docker isn’t just about running containers, though – it’s a whole infrastructure around building and distributing container images.) You need to have Docker, and Docker is based on Linux. If you’re not running Linux, you can’t run Docker directly, but you can run a fairly minimal virtual machine running Linux for the purpose, and in general I would recommend that even if you do run Linux on your desktop – best if you can set up a test system without disturbing anything else. Any way you can get your hands on a VM will do – whether running locally or in some IaaS cloud you have access to. The v3 OpenShift Origin project comes with a Vagrantfile if the Vagrant approach to provisioning a VM appeals to you, but it’s up to you. I’m not going to walk through that – it will totally depend on what you have and what you know.

But – which operating system to use, and what version/flavor? While Docker is available on most recent Linux distributions, the OpenShift layer on top of it will only be tested and developed regularly on a few Red Hat-related operating systems, so in the interests of minimizing potential problems, I’d recommend one of those:

Fedora 21

It’s free. It’s fairly cutting edge. It has a huge feature set. This is a pretty good base for testing and development – the only problem I could see with using it is that, being fairly fast-moving, it is more likely to have bugs. That and, I suppose, if you’re developing, you have to beware of using language/OS/library features that aren’t available elsewhere yet. Fedora 21 also ships Kubernetes (separately) and golang if that is relevant. You’ll want the “Server” flavor, not the “Cloud” one (yet – see below).

RHEL 7 / CentOS 7

RHEL 7 is Red Hat’s eventual target for running v3 in a supportable fashion. It’s an Enterprise OS, meaning it doesn’t change quickly and you can expect features to be stable across its (long!) lifetime, so it will tend to trail Fedora significantly. It isn’t free of charge, but the CentOS clone of it is free to use and updates are available from open yum repositories; it follows updates to RHEL pretty quickly (hopefully more so now that CentOS has Red Hat backing). Since most people can’t afford to blow an Enterprise subscription just to fool around with new technology, I’d recommend CentOS 7.

It’s important to note that Docker is included in the “Extras” channel of RHEL 7, which brings a different level of support. Extras are supported in the sense that Red Hat will fix bugs, but not in the sense that updates are backward compatible as with the rest of the OS. Since Docker is still under rapid development, this is the right place for it – Red Hat does not want to get stuck supporting essentially an early beta for ten years. (Current docker RPM is version 1.3.2.) Expect that version to get updates as needed to incorporate required features for OpenShift and other projects, and maybe for it to migrate to the non-Extras channel at some point.

To get golang or gcc-go, you need to add the “Optional” channel, which isn’t supported at all. For testing, the distinction isn’t important, but just be aware that these aren’t a supported part of the OS. Kubernetes isn’t distributed in any of these channels yet (though Fedora 21 does have an RPM for it). It’s not clear to me whether RHEL Extras will ship Kubernetes before OpenShift v3 goes GA, or if we’ll just compile in a fork of Kubernetes as we currently do. Same for etcd.

RHEL 7 Atomic / CentOS 7 Core / Fedora Cloud?

Project Atomic servers exist essentially just to host Docker containers. They won’t even let you install packages, instead managing whole-system (atomic) updates via ostree. So they’re unsuitable for any kind of development, really (you could run a container that provides development tools and libraries, but that seems rather awkward and counter to the point). It’s currently in beta and it seems unlikely to be ready to support OpenShift v3 at GA, but it’s definitely a target for deployment some time later. There’s no actual need for a Docker host to enable traditional package management, since you can just supply any software you want in containers, and OpenShift is no different – it’s a goal to be able to deliver the whole thing via containers. Interestingly enough, the beta Atomic install includes builds of Kubernetes and etcd, in somewhat older versions. It will be interesting to see where this goes, but I don’t see a lot of point to using it as a vehicle for trying OpenShift v3 just now.

Getting Docker ready

Once you have a VM running a Docker-capable Linux as above, you of course need to install Docker itself and run the service.

# yum install docker
# setenforce 0
# systemctl enable docker
# systemctl start docker

I don’t know if the “setenforce 0” is still necessary today – certainly the end goal is to have everything running under the protection of SELinux. It’s also worth noting that if you are using firewalld, you should disable it or add docker to the trusted zone in order for networking to work out.
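For the firewalld case, something like this should do it (a hedged sketch I haven’t verified on every setup; docker0 is the default bridge interface):

# firewall-cmd --permanent --zone=trusted --change-interface=docker0
# firewall-cmd --reload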

Docker in general requires root access to run, but you can also enable other users to access it by adding them to the docker group:

# usermod -aG docker <user>

(The user must log out and in to gain the new privileges; and be aware this just provides Docker privileges – you’ll still need sudo/root in order to perform other system tasks.)

Docker is set up. Now we run OpenShift v3 in one of three ways. Currently, a single binary runs all necessary services as well as providing a client to access them (all assuming running on the local host – of course things are more complicated without that assumption). It’s just a question of obtaining it and executing it.

Just run it (as a Docker container)

Using Docker, starting up an openshift instance is super easy:

$ docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --net=host --privileged \
  openshift/origin start

What’s going on here?

Well, hopefully it’s evident that you’re running a Docker container. The first time you run this, Docker will pull down the “openshift/origin” container image from the Docker Hub, which you can think of as GitHub for Docker images. This is an image that OpenShift engineers build from source periodically and upload to the Docker Hub. Presumably when it’s time for v3 to go GA, Red Hat will set up a separate authenticated Docker registry to distribute the v3 container images (at least that seems like a likely distribution mechanism – we will see) and you’ll just docker run index.docker.redhat.com/openshift or something like that instead.

Of course, OpenShift isn’t just some container running a workload – it’s actually intended to do orchestration of other containers. So it needs to run as a privileged container, meaning it actually has the ability to “break out” of the container to manage the host system. In particular, it needs a view of the host’s network and docker server, which is what the other options on the command line are about. (The “-v” option mounts things from the host filesystem into the container filesystem.) I should have mentioned that this is going to bind to a number of ports on your host — 4001, 7001, 8080… which of course will fail if there’s already anything listening there, and will be exposed to the external network if it succeeds.

The final word on the command line (“start”) is an argument to the container entry point, which is /usr/bin/openshift (just another binary sitting in a container image). If it makes you a little nervous to pull an image from the internet and run it as a privileged container, well… it should. (So build it yourself! More later.)

Since the docker run wasn’t daemonized, you’ll just see the output from pulling down the container and running it. OpenShift starts up a single binary with REST APIs available for OpenShift, Kubernetes, etcd, a Kubelet, and miscellaneous other stuff. It will run until you hit Ctrl-C, at which point the container exits. Alternatively, run docker with the “-d” option and use “docker logs” to watch the logs:

$ docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --net=host --privileged \
  openshift/origin start 
9bd1133f5e0b79e48e7dfca8a23cde06274441442e673b41e85a0b2158c1de9f
$ docker logs -f 9bd1133f5e0b79e48e7dfca8a23cde06274441442e673b41e85a0b2158c1de9f
I1229 21:31:47.229648 1 start.go:174] Starting an OpenShift all-in-one, reachable at http://172.16.4.182:8080 (etcd: http://172.16.4.182:4001)
I1229 21:31:47.229886 1 start.go:184] Node: localhost
I1229 21:31:47.229943 1 etcd.go:29] Started etcd at 172.16.4.182:4001
[...]

Bam! Just by running this privileged container, you’re ready to run through Ben Parees’s three in-depth blog posts. Well, sort of. The “openshift” binary in this image implements both client and server runtimes. In order to run “openshift” client commands you can execute another container (from the same image):

$ docker run --net=host openshift/origin cli get pods
POD CONTAINER(S) IMAGE(S) HOST LABELS STATUS

(You need --net=host so it can reach the ports on the host where the other container is listening.) Kinda clunky. Probably better to just get a shell in a container:

$ docker run -it --net=host --entrypoint=/bin/bash openshift/origin 
[root@localhost openshift]# openshift cli get pods
POD CONTAINER(S) IMAGE(S) HOST LABELS STATUS

(“-it” gets you an interactive tty, and “--entrypoint” runs a shell instead of the openshift executable.)

And then if you actually do that, you find that openshift has moved on since October, “openshift kube list pods” is now “openshift cli get pods” and the JSON deployment defined in that first blog post no longer matches the API. Ah, the fun never ends!

If the privileged Docker container running the service is stopped for any reason (Ctrl-C, docker stop, reboot…) then you can simply look it up and start it again. (Some fields omitted for brevity)

$ docker ps -a
CONTAINER ID IMAGE COMMAND STATUS
145d0692fbb1 openshift/origin:latest "/bin/bash" Up 37 hours
272f6bf4c15c openshift/origin:latest "/usr/bin/openshift Exited (2) 37 hours ago
$ docker start 272f6bf4c15c
272f6bf4c15c

If instead you start a new container with “docker run”, it will not have any of the data generated during interactions with the previous container (unless you go to pains to have them share a volume mounted at /var/lib/openshift).
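If you do want state to survive separate docker run invocations, the obvious extension of the command above is to bind-mount a host directory at that path (a sketch, not something I’ve tested extensively):

$ docker run -d \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /var/lib/openshift:/var/lib/openshift \
  --net=host --privileged \
  openshift/origin start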

Just download it

Well, if all those Docker tricks look shady to you, you can always just work with a good old-fashioned binary. Check for the latest release on github. Download it, unpack, and run it:

# wget https://github.com/openshift/origin/releases/download/v0.2/openshift-origin-v0.2-20-gfe983146fbac7f-fe98314-linux-amd64.tar.gz
# tar zfx openshift-origin-v0.2-20-gfe983146fbac7f-fe98314-linux-amd64.tar.gz
# ./openshift start &
[1] 21828
[root@localhost bin]# I1231 20:25:16.354385 21828 start.go:174] Starting an OpenShift all-in-one, reachable at http://172.16.4.182:8080 (etcd: http://172.16.4.182:4001)
[...]
# ./openshift cli get pods 
NAME IMAGE(S) HOST LABELS STATUS

This is the same thing as you got from the container, just running outside a container. It binds to the same ports and provides the same services. Currently, it stores data in subdirectories of the pwd, instead of inside the container. Pretty simple? True. But it’s not much harder to generate it yourself from source.

Just compile it

Unlike OpenShift Origin v2, v3 is pretty darn simple to compile yourself from the source. It helps that we’re not trying to build a bunch of RPMs out of it.

It’s a little confusing that “get started developing v3” instructions are currently spread (somewhat duplicated and out of sync) across the Origin project README, CONTRIBUTING, and HACKING documents. I kind of expect the latter two will merge at some point and the README will simply point to the result for those trying to work on the source code. Let’s also note that there is a docs directory for describing how things work or will work (or once worked until they changed direction). Engineers aren’t known for great documentation but we’re trying to be helpful/transparent here, and I believe the plan is for actual documentation writers to contribute to this substantially as the project matures.

The build will likely get more complicated as we get closer to a finished product, but should still remain a lot simpler than v2. There could perhaps be multiple binaries each housing a different component, or possibly we’ll continue with a single binary housing all (simply varying the invocation to provide whatever is necessary for a specific host). For now, it’s a piece of cake: compile one binary (“openshift”) from one repository.

You just need golang and git on the VM you’re working with. As mentioned previously, to do this on RHEL 7 you’ll need the “Extras” and “Optional” channels (Fedora does not have this separation):

# subscription-manager repos --enable rhel-7-server-extras-rpms \
  --enable rhel-7-server-optional-rpms

Then just install golang (and maybe some attendant stuff) and git:

# yum install -y golang git golang-vim make

Sidebar: You may wonder about using gcc-go as an alternate toolchain. Without going into great depth, some initial attempts to use gcc-go to compile Docker have had promising results. The two toolchains will tend to vary a bit but maintain “eventual parity” over time. So I’m pretty hopeful we’ll start seeing Red Hat distribute go projects compiled with the gcc-go toolchain, which brings the benefit of distributing on more architectures than just x86_64. But for now… we’ll assume golang.

So, once you’ve installed golang, you need to create a Go work directory and set up environment variables to use it (I’m assuming you’ll do development as a non-root user, although it works as well either way):

$ mkdir $HOME/go

In your ~/.bash_profile file, set a GOPATH and augment your PATH by adding to the end:

export GOPATH=$HOME/go
export PATH=$PATH:$GOPATH/bin

These set up the location that go will use by default for various actions and add the go/bin subdir to your path. You’ll need a new shell to pick up the updated variables, or you can just run the two “export” commands above at the command line. Now you’re ready to clone the github repo, compile it, and use the “openshift” binary:

$ go get github.com/openshift/origin 
$ cd $GOPATH/src/github.com/openshift/origin
$ hack/build-go.sh
++ Building go targets for linux/amd64: cmd/openshift
++ Placing binaries
$ sudo _output/local/go/bin/openshift start
[... the usual startup output ...]

Let me just mention that if the godeps update between builds, you’ll need to clean out your deps first. You can do this with make clean in the top of the repo (assuming make is installed). All it does is rm -rf _output Godeps/_workspace/pkg so you could just do that manually. Also, plain make runs the build script above.

If you want to make execution a little easier, create the ~/go/bin directory and put a couple symlinks in it:

$ mkdir ~/go/bin
$ ln -s `pwd`/_output/local/go/bin/openshift ~/go/bin/openshift 
$ ln -s `pwd`/_output/local/go/bin/openshift ~/go/bin/osc

openshift is of course our usual command, but what’s osc? When you symlink the binary with this name, it is treated as a shortcut for openshift cli, basically the v3 analog to v2 rhc:

$ osc get pods
POD CONTAINER(S) IMAGE(S) HOST LABELS STATUS

What’s next?

So now you know what operating system to run in a test VM, and the available mechanisms (Docker, download, or build) for obtaining OpenShift v3. Hopefully that gets you to the starting line. What can you actually do with it? I’ll be exploring that further myself, but for now I’ll leave you with the CONTRIBUTING and HACKING documentation (to explain building, testing, and the road ahead) as well as Ben’s great blog posts on usage: