Back to the future (of OpenShift)

I started this blog originally for just sort of writing down random stuff I tried or discovered. It morphed over time into very rare posts along the lines of “I just spent a week figuring this out, let me write it down to save everyone else the trouble”.

Well, OpenShift Enterprise 2.2 is out the door, and that will be in maintenance mode while we work on version 3. Just when I felt like I knew something about v2, it’s time to return to being a dunce because the world has been upended for v3. So maybe it’s time to return to stumbling around and writing down what happens.

Everything old is new ag…. no, wait:

Everything new is really, really new

Approximately nothing from OpenShift v2 will survive recognizably in v3. It will be as different as systemd is from sysv, as different as Linux is from Windows, as different as solar energy is from the Hoover dam. Here’s what’s on my hit list to get up to speed on (let me know what I missed):

RHEL 7 / Atomic

OSE 2 runs on RHEL 6. About the time Fedora 20 and Ruby on Rails 4 came out, it became evident that trying to make the codebase span RHEL 6 and newer platforms was going to be way more trouble than we wanted. We gave up on that and left Origin users to run on CentOS 6 rather than try to keep including Fedora.

This brings some good things, though. Managing dependencies for OSE 2 on RHEL 6 has been a bit of a nightmare. All signs point to that going away completely for v3. As in, you might not even need yum at all. If the eventual platform we recommend is Atomic, platform updates will be whole-system, run via rpm-ostree (AKA atomic). If so, then I’ll need to know about that distribution mechanism. If not, it still looks like there will be a lot less to install and configure on the actual OS.

So:

  • rpm-ostree / atomic
  • systemd – have to understand more than just “systemctl enable foo; systemctl start foo” – how to define services, how daemons are spun off and monitored, where logs go… (see the sketch after this list)
  • firewalld – is this just a frontend to iptables?
  • btrfs?
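
To give myself a head start on the first two: Atomic applies updates as a whole tree rather than per-package, and systemd services are declared in unit files. Here’s a minimal sketch – the commands are real, but the unit is a made-up example for illustration, not anything OpenShift ships:

# rpm-ostree status    (show the booted tree and any pending upgrade)
# rpm-ostree upgrade   (stage a new tree; it takes effect on reboot)

# /etc/systemd/system/foo.service – hypothetical unit
[Unit]
Description=Example long-running service
After=network.target

[Service]
ExecStart=/usr/bin/foo
Restart=on-failure

[Install]
WantedBy=multi-user.target

# systemctl enable foo; systemctl start foo
# journalctl -u foo    (and that’s where the logs went)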

go

go is the new hotness. Ruby on Rails is old and broken. OK, not old and broken, but docker, kubernetes, etcd, and the OpenShift layer on top are all go-based. Fortunately I used C all through college… picking up go doesn’t look difficult, should be fun.

golang vs gcc-go – the former is what most are using, the latter gets us more supported platforms if it works with the codebase.

Docker

Docker will be replacing our homegrown containers. It’s a formalization of a lot of the same concepts – creating and containing processes with regards to network, file access, resource usage, etc. Some questions for me to get through:

  • How do I get files into/out of a container? Bind mounts, other kinds of mounts, …? What happens when it goes away?
  • What exactly happens with exposing container networking?
  • How does SELinux contain a Docker container?
  • How do cgroups contain a Docker container in RAM/CPU/etc?
  • How do I control what user runs the processes in a container?
  • How does UnionFS compose multiple containers?
  • How do I configure where images come from?
  • How do I figure out what went wrong after one exits and goes away?

… and a million other things.
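
Some of these, at least, I can start answering with stock docker commands. A quick sketch (image and path names made up):

$ docker run -d --name myapp -p 8080:8080 -v /srv/appdata:/data example/myimage
$ docker inspect myapp                 # exposed ports, mounts, current state
$ docker logs myapp                    # stdout/stderr, even after the container exits
$ docker cp myapp:/data/output.log .   # copy a file back out
$ docker ps -a                         # exited containers stick around until removed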

Kubernetes

Kubernetes is one orchestration layer on top of Docker. It will handle things like ensuring you have the expected number of copies of an image running across the various hosts on the cluster, and providing a proxy (aka “service”) for reaching them at a stable location.

Kubernetes introduces the concept of “pods” which are essentially just related containers running together on a host and sharing resources. As far as OpenShift is concerned, pods will likely only ever have a single container and thus be synonymous, but the terminology is there nonetheless. Do not confuse “pods” with “apps” (which are also composed of containers, but potentially spread across multiple hosts).
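
For a feel of what a pod is, here’s roughly what a pod definition looks like. The API is still churning from release to release, so treat this as the shape of the thing rather than a schema reference (names made up):

apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: web
    image: example/myapp
    ports:
    - containerPort: 8080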

Things to learn:

  • Kubernetes masters present a REST API, so need to know that a bit.
  • How are multiple kubernetes masters synchronized? Just via entries in etcd, or more directly?
  • How do kubernetes masters communicate with minions (kubelets)?
  • How do services/replication masters determine whether a container/pod is working or not?

etcd

Distributed key-value store. I’m not sure why we needed another one, but it seems that it’s going to be the store for lots of critical stuff. Which critical stuff? Good question, probably not *all* of it… What else might we use for a data store?

Aside from the general capabilities of etcd, I need to learn how to cluster and shard them, and how the Raft consensus algorithm works (or when it doesn’t work).
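
Poking at etcd is at least easy, since it speaks plain HTTP. Against a local instance with the v2 keys API on the default port, something like:

$ curl -L http://127.0.0.1:4001/v2/keys/mykey -XPUT -d value="hello"
$ curl -L http://127.0.0.1:4001/v2/keys/mykey
{"action":"get","node":{"key":"/mykey","value":"hello", ...}}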

OpenShift v3

Of course, this is going to add a further layer on top of Kubernetes, a layer to define apps and user access to them. A lot of it is still in pretty early stages, e.g. there’s not really any concept of users or access controls yet. That’ll change.

  • REST API (parallel but separate from Kubernetes)
  • Building container layers from source code
  • Deployment strategies
  • How does OpenShift influence the placement algorithm with parallels for the scaled/HA apps, zones, and regions of v2?
  • What does the routing layer look like? (We aren’t simply going to expose Kubernetes services) Good gracious, the networking looks to be complicated for this.
  • How will we define and mount storage for use by containers / apps?

Angular.js web console

Having a web application server is so last year (or maybe decade?). The data is all available from REST APIs… now your web app can just be static pages with a ton of JavaScript doing all the work on the client side. This replaces the OpenShift v2 web console app. At least it’s one less service to keep running, and you won’t need to hit “reload” all the time to watch things changing.

Is anything staying the same?

Technology-wise, nothing is staying the same. Get ready for that (I’ve marveled that the rest of the team could pivot so quickly). But we’ve spent a few years now building a PaaS, and of course there are certain patterns that are going to pop up no matter what technology we use. Despite all the technology changes, those same issues are probably what we’ll be beating our heads against, and where hopefully our previous experience will help OpenShift prevail.

Infrastructure, nodes, and routing

OpenShift will probably constitute the infrastructure only – the apps will actually run on hosts that run Linux, Docker, and kubelets. But the general pattern will remain – an orchestration interface, a cluster of compute nodes, and a routing layer to reach them.

Composing apps

Apps will still be put together from several components – potentially several containers (I don’t think we’ll call them gears), each potentially composed of some kind of framework plus some of your code. Defining and wiring these together will be the core of what OpenShift continues to do.

Access control

We’re still going to have users. There will still be teams. There may be more layers (e.g. probably admins, “utility” users). It will still be necessary to define things that those users and teams can access. And it will still be necessary to interface with the various ways in which enterprises already define users, groups, authentication, and authorization (Kerberos, LDAP, …).

Proxies (AKA layers of indirection)

In OpenShift v2, there are a number of ways in which your request to an app can actually reach the thing that answers the request, often going through multiple proxies. In perhaps the most complicated case, with an HA app setup, you look up the app by name (DNS itself consists of several layers of indirection) and reach the external routing layer, which forwards to a node host routing layer, which forwards to a load-balancing gear layer, which forwards to another node’s port proxy, which finally forwards to the application server running in a gear. V3 will differ in the details, but not the pattern.

These proxies don’t exist just to peel back layers of the onion; each point provides an opportunity to hide complicated behavior behind a simple facade. For example:

  • DNS records provide all sorts of routing opportunities, including directing the user to a data center that’s available and geographically close to them.
  • A routing layer can monitor multiple endpoints of the application such that outages are detected and requests are directed to functioning endpoints. It can also hide the fact that gears are being moved, rolling deployments are in progress, and so on.
  • The node host routing layer can hide the fact that a gear was actually idle when a request arrived, bringing it up only when needed and conserving resources otherwise.
  • The load-balancing gear layer balances traffic and implements sticky sessions.

As you can see, proxies are actually where a lot of the “magic” of a PaaS happens, and you can expect this pattern to continue in v3.

Implementing an OpenShift Enterprise routing layer for HA applications

My previous post described how to make an application HA and what exactly that means behind the scenes. This post is to augment the explanation in the HA PEP of how an administrator should expect to implement the routing layer for HA apps.

The routing layer implementation is currently left entirely up to the administrator. At some point OpenShift will likely ship a supported routing layer component, but the first priority was to provide an SPI (Service Provider Interface) so that administrators could reuse existing routing and load balancer infrastructure with OpenShift. Since most enterprises already have such infrastructure, we expected they would prefer to leverage that investment (both in equipment and experience) rather than be forced to use something OpenShift-specific.

Still, this leaves the administrator with the task of implementing the interface to the routing layer. Worldline published an nginx implementation, and we have some reference implementations in the works, but I thought I’d outline some of the details that might not be obvious in such an implementation.

The routing SPI

The first step in the journey is to understand the routing SPI events. The routing SPI itself is an interface on the OpenShift broker app that must be implemented via plugin. The example routing plugin that is packaged for Origin and Enterprise simply serializes the SPI events to YAML and puts them on an ActiveMQ message queue/topic. This is just one way to distribute the events, but it’s a pretty good way, at least in the abstract. For routing layer development and testing, you can just publish messages on a topic on the ActiveMQ instance OpenShift already uses (for Enterprise, openshift.sh does this for you) and use the trivial “echo” listener to see exactly what comes through. For production, publish events to a queue (or several if multiple instances need updating) on an HA ActiveMQ deployment that stores messages to disk when shutting down (you really don’t want to lose routing events) – note that the ActiveMQ deployment described in OpenShift docs and deployed by the installer does not do this, being intended for much more ephemeral messages.

I’m not going to go into detail about the routing events. You’ll become plenty familiar if you implement this. You can see some good example events in this description, but always check what is actually coming out of the SPI as there may have been updates (generally additions) since. The general outline of the events can be seen in the Sample Routing Plug-in Notifications table from the Deployment Guide or in the example implementation of the SPI. Remember you can always write your own plugin to give you information in the desired format.

Consuming SPI events for app creation

The routing SPI publishes events for all apps, not just HA ones, and you might want to do something with other apps (e.g. implement blue/green deployments), but the main focus of a routing layer is to implement HA apps. So let’s look at how you do that. I’m assuming YAML entries from the sample activemq plugin below — if you use a different plugin, similar concepts should apply just with different details.

First when an app is created you’re going to get an app creation event:

$ rhc app create phpha php-5.4 -s

:action: :create_application
:app_name: phpha
:namespace: demo
:scalable: true
:ha: false

This is pretty much just a placeholder for the application name. Note that it is not marked as HA. There is some work coming to make apps HA at creation, but currently you just get a scaled app and have to make it HA after it’s created. This plugin doesn’t publish the app UUID, which is what I would probably do if I were writing a plugin now. Instead, you’ll identify the application in any future events by the combination of app_name and namespace.

Once an actual gear is deployed, you’ll get two (or more) :add_public_endpoint actions, one for haproxy’s load_balancer type and one for the cartridge web_framework type (and possibly other endpoints depending on cartridge).

:action: :add_public_endpoint
:app_name: phpha
:namespace: demo
:gear_id: 542b72abafec2de3aa000009
:public_port_name: haproxy-1.4
:public_address: 172.16.4.200
:public_port: 50847
:protocols:
- http
- ws
:types:
- load_balancer
:mappings:
- frontend: ''
  backend: ''
- frontend: /health
  backend: /configuration/health

You might expect that when you make the app HA, there is some kind of event specific to being made HA. There isn’t at this time. You just get another load_balancer endpoint creation event for the same app, and you can infer that it’s now HA. For simplicity of implementation, it’s probably just best to treat all scaled apps as if they were already HA and define routing configuration for them.

Decision point 1: The routing layer can either direct requests only to the load_balancer endpoints and let them forward traffic all to the other gears, or it can actually just send traffic directly to all web_framework endpoints. The recommendation is to send traffic to the load_balancer endpoints, for a few reasons:

  1. This allows haproxy to monitor traffic in order to auto-scale.
  2. It will mean less frequent changes to your routing configuration (important when changes mean restarts).
  3. It will mean fewer entries in your routing configuration, which could grow quite large and become a performance concern.

However, direct routing is viable, and allows an implementation of HA without actually going to the trouble of making apps HA. You would just have to set up a DNS entry for the app that points at the routing layer and use that. You’d also have to handle scaling events manually or from the routing layer somehow (or even customize the HAproxy autoscale algorithm to use stats from the routing layer).

Decision point 2: The expectation communicated in the PEP (and how this was intended to be implemented) is that requests will be directed to the external proxy port on the node (in the example above, that would be http://172.16.4.200:50847/). There is one problem with doing this – idling. Idler stats are gathered only on requests that go through the node frontend proxy, so if we direct all traffic to the port proxy, the haproxy gear(s) will eventually idle and the app will be unavailable even though it’s handling lots of traffic. (Fun fact: secondary gears are exempt from idling – doesn’t help, unless the routing layer proxies directly to them.) So, how do we prevent idling? Here are a few options:

  1. Don’t enable the idler on nodes where you expect to have HA apps. This assumes you can set aside nodes for (presumably production) HA apps that you never want to idle. Definitely the simplest option.
  2. Implement health checks that actually go to the node frontend such that HA apps will never idle. You’ll need the gear name, which is slightly tricky – since the endpoint above is on the first gear, it will be accessible by a request for http://phpha-demo.cloud_domain/health to the node at 172.16.4.200. When the next gear comes in, you’ll have to recognize that it’s not the head gear and send the health check to e.g. http://542b72abafec2de3aa000009-demo.cloud_domain/health (see the example after this list).
  3. Flout the PEP and send actual traffic to the node frontend. This would be the best of all worlds since the idler would work as intended without any special tricks, but there are some caveats I’ll discuss later.
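
For option 2, the health check is just an HTTP request to the node with the right Host header. Using the endpoint event above and its /health mapping, something like:

$ curl -H "Host: phpha-demo.cloud_domain" http://172.16.4.200/health
$ curl -H "Host: 542b72abafec2de3aa000009-demo.cloud_domain" http://<that gear's node>/health

The second form is for a secondary gear, sent to whichever node hosts it.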

Terminating SSL (TLS)

When hosting multiple applications behind a proxy, it is basically necessary to terminate SSL at the proxy. (Despite SSL having been essentially replaced by TLS at this point, we’re probably going to call it SSL for the lifetime of the internet.) This has to do with the way routing works under HTTPS. During initialization of the TLS connection, the client has to indicate the name it wants (in our case the application’s DNS name) in the SNI extension to the TLS “hello”. The proxy can’t behave as a dumb layer 4 proxy (just forwarding packets unexamined to another TLS endpoint) because it has to examine the stream at the protocol level to determine where to send it. Since the SNI information is (from my reading of the RFC) volunteered by the client at the start of the connection, it does seem like it would be possible for a proxy to examine the protocol and then act like a layer 4 proxy based on that examination – indeed, I think F5 LBs have this capability – but it does not seem to be a standard proxy/LB capability, and certainly not for existing open source implementations (nginx, haproxy, httpd – someone correct me if I’m missing something here). So to be inclusive, we are left with proxies that operate at layer 7, meaning they perform the TLS negotiation from the client’s perspective.

Edit 2014-10-08: layer 4 routing based on SNI is probably more available than I thought. I should have realized HAproxy 1.5 can do it, given OpenShift’s SNI proxy is using that capability. It’s hard to find details on though. If most candidate routing layer technologies have this ability, then it could simplify a number of the issues around TLS because terminating TLS could be deferred to the node.
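
For reference, here’s roughly what SNI-based layer 4 routing looks like in HAProxy 1.5 – a sketch only, with made-up names, inspecting the SNI field and forwarding without terminating TLS:

frontend tls-in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend nodes if { req_ssl_sni -m end .cloud.example.com }

backend nodes
    mode tcp
    server node1 172.16.4.200:443 check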

If that was all Greek to you, the important point to extract is that a reverse proxy has to have all the information to handle TLS connections, meaning the appropriate key and certificate for any requested application name. This is the same information used at the node frontend proxy; indeed, the routing layer will likely need to reuse the same *.cloud_domain wildcard certificate and key that is shared on all OpenShift nodes, and it needs to be made aware of aliases and their custom certificates so that it can properly terminate requests for them. (If OpenShift supported specifying x509 authentication via client certificates [which BTW could be implemented without large structural changes], the necessary parameters would also need to be published to the routing layer in addition to the node frontend proxy.)

We assume that a wildcard certificate covers the standard HA DNS name created for HA apps (e.g. in this case ha-phpha-demo.cloud_domain, depending of course on configuration; notice that no event announces this name — it is implied when an app is HA). That leaves aliases which have their own custom certs needing to be understood at the routing layer:

$ rhc alias add foo.example.com -a phpha
:action: :add_alias
:app_name: phpha
:namespace: demo
:alias: foo.example.com

$ rhc alias update-cert foo.example.com -a phpha --certificate certfile --private-key keyfile
:action: :add_ssl
:app_name: phpha
:namespace: demo
:alias: foo.example.com
:ssl: [...]
:private_key: [...]
:pass_phrase:

Aliases will of course need their own routing configuration entries regardless of HTTP/S, and something will have to create their DNS entries as CNAMEs to the ha- application DNS record.

A security-minded administrator would likely desire to encrypt connections from the routing layer back to the gears. Two methods of doing this present themselves:

  1. Send an HTTPS request back to the gear’s port proxy. This won’t work with any of the existing cartridges OpenShift provides (including the haproxy-1.4 LB cartridge), because none of them expose an HTTPS-aware endpoint. It may be possible to change this, but it would be a great deal of work and is not likely to happen in the lifetime of the current architecture.
  2. Send an HTTPS request back to the node frontend proxy, which does handle HTTPS. This actually works fine, if the app is being accessed via an alias – more about this caveat later.

Sending the right HTTP headers

It is critically important in any reverse-proxy situation to preserve the client’s HTTP request headers indicating the URL at which it is accessing an application. This allows the application to build self-referencing URLs accurately. This can be a little complicated in a reverse-proxy situation, because the same HTTP headers may be used to route requests to the right application. Let’s think a little bit about how this needs to work. Here’s an example HTTP request:

POST /app/login.php HTTP/1.1
Host: phpha-demo.openshift.example.com
[...]

If this request comes into the node frontend proxy, it looks at the Host header, and assuming that it’s a known application, forwards the request to the correct gear on that node. It’s also possible (although OpenShift doesn’t do this, but a routing layer might) to use the path (/app/login.php here) to route to different apps, e.g. requests for /app1/ might go to a different place than /app2/.

Now, when an application responds, it will often create response headers (e.g. a redirect with a Location: header) as well as content based on the request headers that are intended to link to itself relative to what the client requested. The client could be accessing the application by a number of paths – for instance, our HA app above should be reachable either as phpha-demo.openshift.example.com or as ha-phpha-demo.openshift.example.com (default HA config). We would not want a client that requests the ha- address to receive a link to the non-ha- address, which may not even resolve for it, and in any case would not be HA. The application, in order to be flexible, should not make a priori assumptions about how it will be addressed, so every application framework of any note provides methods for creating redirects and content links based on the request headers. Thus, as stated above, it’s critical for these headers to come in with an accurate representation of what the client requested, meaning:

  1. The same path (URI) the client requested
  2. The same host the client requested
  3. The same protocol the client requested

(The last is implemented via the “X-Forwarded-Proto: https” header for secure connections. Interestingly, a recent RFC specifies a new header for communicating items 2 and 3, but not 1. This will be a useful alternative as it becomes adopted by proxies and web frameworks.)

Most reverse proxy software should be well aware of this requirement and provide options such that when the request is proxied, the headers are preserved (for example, the ProxyPreserveHost directive in httpd). This works perfectly with the HA routing layer scheme proposed in the PEP, where the proxied request goes directly to an application gear. The haproxy cartridge does not need to route based on Host: header (although it does route requests based on a cookie it sets for sticky sessions), so the request can come in for any name at all and it’s simply forwarded as-is for the application to use.
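
As a concrete sketch of that happy case, an httpd-based routing layer entry proxying straight to the gear endpoint from the earlier event might look like the following (mod_ssl, mod_proxy, and mod_headers assumed; certificate paths made up):

<VirtualHost *:443>
  ServerName ha-phpha-demo.openshift.example.com
  ServerAlias phpha-demo.openshift.example.com
  SSLEngine on
  SSLCertificateFile /etc/pki/tls/certs/cloud-wildcard.crt
  SSLCertificateKeyFile /etc/pki/tls/private/cloud-wildcard.key

  ProxyPreserveHost On
  RequestHeader set X-Forwarded-Proto "https"
  ProxyPass / http://172.16.4.200:50847/
  ProxyPassReverse / http://172.16.4.200:50847/
</VirtualHost>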

The complication arises in situations where, for example, you would like the routing layer to forward requests to the node frontend proxy (in order to use HTTPS, or to prevent idling). The node frontend does care about the Host header because it’s used for routing, so the requested host name has to be one that the OpenShift node knows in relation to the desired gear. It might be tempting to think that you can just rewrite the request to use the gear’s “normal” name (e.g. phpha-demo.cloud_domain) but this would be a mistake because the application would respond with headers and links based on this name. Reverse proxies often offer options for rewriting the headers and even contents of responses in an attempt to fix this, but they cannot do so accurately for all situations (example: links embedded in JavaScript properties) so this should not be attempted. (Side note: the same prohibition applies to rewriting the URI path while proxying. Rewriting example.com/app/… to app.internal.example.com/… is only safe for sites that provide static content and all-relative links.)

What was that caveat?

I mentioned a caveat both on defeating the idler and proxying HTTPS connections to the node frontend, and it’s related to the section above. You can absolutely forward an HA request to the node frontend if the request is for a configured alias of the application, because the node frontend knows how to route aliases (so you don’t have to rewrite the Host: header which, as just discussed, is a terrible idea). The caveat is that, strangely, OpenShift does not create an alias for the ha- DNS entry automatically assigned to an HA app, so manual definition of an alias is currently required per-app for implementation of this scheme. I have created a feature request to instate the ha- DNS entry as an alias, and being hopefully easy to implement, this may soon remove the caveat behind this approach to routing layer implementation.
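
Concretely, until that feature request lands, the per-app workaround is just to add the ha- name as an alias yourself (using our example app’s default HA name):

$ rhc alias add ha-phpha-demo.cloud_domain -a phpha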

Things go away too

I probably shouldn’t even have to mention this, but: apps, endpoints, aliases, and certificates can all go away, too. Make sure that you process these events and don’t leave any debris lying around in your routing layer confs. Gears can also be moved from one host to another, which is an easy use case to forget about.

And finally, speaking of going away, the example routing plugin initially provided :add_gear and :remove_gear events, and for backwards compatibility it still does (duplicating the endpoint events). These events are deprecated and should disappear soon.

HA applications on OpenShift

Beginning with version 2.0, OpenShift Enterprise supports making applications “highly available”. However, there is very little documentation on the details of implementing HA apps or exactly how they work. We surely have customers using these, but they’re mostly doing so with a lot of direct help from our consulting services. Nothing in the documentation brings it all together, so I thought I’d share what I know at this point. I think we can expect these implementation details to remain fairly stable for the lifetime of OSE 2.

This is kind of a brain dump. We are working to get most of this into the manuals if it isn’t already (at least what’s fit for it and not just me rambling). But I don’t know anywhere else to find all this in one place right now.

Why and how?

The motivation behind supplying this feature and the basic architectural points and terminology are well covered at the start of the HA PEP. Definitely read this for background if you haven’t already; I won’t repeat most of the points.

As a super quick overview: HA is an enhancement to scaled apps. Standard scaled apps allow easy horizontal scaling by deploying duplicate instances of the application code and framework cartridge in multiple gears of your app, accessed via a single load balancer (LB) proxy gear (LB is performed by the haproxy cartridge in OSE 2). For a scaled app to become HA, it should have two or more LB gears on different nodes that can each proxy to the gears of the app. That way, if a node containing one LB gear goes down, you can still reach the app via another. This only makes the web framework HA — shared storage and DB replication aren’t OpenShift features (yet), so practically speaking, your app should be stateless or use external HA storage/DB if you actually want it to be HA.

So how do we direct HTTP requests to the multiple LB gears? Via the routing layer, which is described in the PEP, but which OpenShift does not (yet) supply a component/configuration for. If you are interested in HA apps, then chances are good you already have some kind of solution deployed for highly-available routing to multiple instances of an application (e.g. F5 boxes). All you need to do is hook OpenShift and your existing solution together via the routing Service Provider Interface (SPI). Once this is configured, HA apps are given an alias in DNS that resolves to the routing layer, which proxies requests to the LB gears on OpenShift nodes, which proxy requests to the application gears.

Making an app HA (the missing User Guide entry)

The OSE admin guide indicates how administrators can configure OSE to allow HA apps and enable specific users to make apps HA. I’ll assume you have a deployment and user with this capability.

To make an HA app, you first create a scaled app, then make it HA. The “make-ha” REST API call is implemented as an event (similar to “start” or “stop”) on an application. Direct REST API access is currently the only way to do this – rhc and other tools do not implement this call yet, so currently the only documentation is in the REST API Guide. For example:

$ rhc create-app testha ruby-1.9 -s

Your application ‘testha’ is now available.

$ curl -k -X POST https://broker.example.com/broker/rest/domains/test/applications/testha/events --user demo:password --data-urlencode event=make-ha

long JSON response including: "messages":[{"exit_code":0, "field":null, "index":null, "severity":"info", "text":"Application testha is now ha"}], "status":"ok"

Upon success, the app will have scaled to two gears, and your second gear will also be a LB gear. You can confirm by sshing in to the second gear and looking around.

$ rhc app show --gears -a testha

ID State Cartridges Size SSH URL
53c6c6b2e659c57659000002 started haproxy-1.4 ruby-1.9 small 53c6c6b2e659c57659000002@testha-test.example.com
53c6c74fe659c57659000021 started haproxy-1.4 ruby-1.9 small 53c6c74fe659c57659000021@53c6c74fe659c57659000021-test.example.com

$ ssh 53c6c74fe659c57659000021@53c6c74fe659c57659000021-test.example.com

> ls

app-deployments app-root gear-registry git haproxy ruby

> ps -e

12607 ? 00:00:00 haproxy
13829 ? 00:00:00 httpd


A third gear will not look the same; it will only have the framework cartridge. I should mention that at this moment there’s a bug in rhc such that it displays all framework gears as having the haproxy LB cartridge, when actually only the first two do.

What does make-ha actually do?

Behind the scenes are a few critical changes from the make-ha event.

First, a new DNS entry has been created to resolve requests to the router. How exactly this happens depends on configuration. (Refer to the Admin Guide for details on how the HA DNS entry is configured.) In the simplest path (with MANAGE_HA_DNS=true), OpenShift itself creates the DNS entry directly; the default is just to prepend “ha-” to the app name and point that entry at ROUTER_HOSTNAME. Thus our app above would now have a DNS entry ha-testha-test.example.com. With MANAGE_HA_DNS=false, OpenShift counts on the routing layer to receive the routing SPI event and create this DNS entry itself.

In either case, this DNS entry is only useful if it points at the router which serves as a proxy. A request to one of the nodes for “ha-testha-test.example.com” would not be proxied correctly – it’s supposed to be relayed by the routing layer as a request for either “testha-test.example.com” or a secondary LB gear. It’s also possible for the router to just proxy directly to the individual gears (endpoints for which are provided in the routing SPI); however, this may not be desirable for a variety of reasons.

A second change that occurs is that the parameters for the haproxy cartridge are modified. By default in a scaled application, the haproxy cartridge manifest has a minimum and maximum scale of 1 (see the manifest), so there will always be exactly one. But when you make an app HA, in the MongoDB record for the haproxy cartridge instance in your app, the minimum is changed to 2, and the maximum is changed to -1 (signifying no maximum). Also the HA multiplier is applied, which I’ll discuss later. As a result of raising the minimum to 2, the app scales up one gear to meet the minimum.

There are some interesting oddities here.

First, I should note that you can’t scale the haproxy cartridge directly via the REST API at all. You’ll just get an error. You only have the ability to scale the framework cartridge, and the haproxy cartridge may be deployed with it.

Also, once your app is HA, you can no longer scale below two gears:

$ rhc scale-cartridge -a testha ruby --min 1 --max 1
Setting scale range for ruby-1.9 … Cannot set the max gear limit to '1' if the application is HA (highly available)

By the way, there is no event for making your app not HA. Just destroy and re-create it.

Also, if you already scaled the framework cartridge to multiple gears, then making it HA will neither scale it up another gear, nor deploy the haproxy cartridge on an existing gear (which is what I would have expected). So it will not actually be HA at that point. Instead, an haproxy cartridge will be deployed with the next gear created. If you then scale down, that gear will be destroyed, and your app again effectively ceases to be HA (Edit: this is considered a bug). So, make sure you make your app HA before scaling it, so you are assured of having at least two LB gears. A little more about this at the end.

How does auto-scaling work with HA?

If you’re familiar with OpenShift’s auto-scaling, you may be wondering how two or more LB gears coordinate traffic statistics in order to decide when to scale.

First, I’ll say that if you’re going to the trouble to front your OpenShift app with an expensive HA solution, you may want to disable auto-scaling and let your router track the load and decide when to scale. Practically speaking, that can be tricky to implement (there actually isn’t an administrative mechanism to scale a user-owned app, so workarounds are needed, such as having all HA apps owned by an “administrative” user), but it’s something to think about.

That said, if you’re using auto-scaling, perhaps with a customized algorithm, you’ll want to know that the standard haproxy scaling algorithm runs in a daemon on the first gear (“head” gear) of the app, in both regular scaled apps and HA apps. In either case, it bases scaling decisions on a moving average of the number of HTTP sessions in flight. The only difference with an HA app is that the daemon makes a request each sampling period to all of the other LB gears to gather the same statistic and add it into the average.

Also, auto-scaling does not occur if the head gear is lost. So, auto-scaling is not HA – another reason to consider manual scaling. Which brings me to…

What happens when a gear in an HA app is lost?

If a node crashes or for some other reason one or more gears of your HA app is out of commission (rack PDU cut out? Someone tripped over the network cable?), how does your app react? Well, this is no different from a regular scaled app, but it bears repeating: nothing happens. Remaining LB gears notice the missing gears are failing health checks and pull them out of rotation – and that’s it. There is no attempt to replace the missing gear(s) even if they are LB gears, or to augment the gear count to make up for them. No alerts are sent. As far as I’m aware, nothing even notifies the broker to indicate that a gear’s state has changed from “started” to “unresponsive” so it’s hard for the app owner to even tell.

The whole point of HA apps is to remove the single point of failure in standard scaled apps by providing more than one endpoint for requests, instead of the app being unreachable when the head LB gear is lost. However, there are still some functions performed only by the head gear:

  1. Auto-scaling
  2. Deployment of application code changes

These functions are still not HA in an HA app. They still rely on the head gear, and there’s no mechanism for another gear to take over the head gear designation. What’s more, if a gear is out of commission when an application update is deployed, there isn’t any mechanism currently for automatically updating the gear when it returns (the next deploy will bring it in sync of course). These are all known problems that are in our backlog of issues to address, but aren’t terribly obvious to those not intimately familiar with the architecture.

The general assumption in OpenShift when a node disappears is that it will probably come back later – either after some crisis has passed, or via resurrection from backups. So, there is no attempt at “healing” the app. If, however, a node goes away with no hope for recovery, there is a tool oo-admin-repair which can do a limited amount of cleanup to make the MongoDB records accurately reflect the fact that the gears on that node no longer exist. I’m not sure the extent to which scaled apps are repaired; I would expect that the list of gears to route to is accurately updated, but I would not expect the head gear to be replaced (or re-designated) if lost, nor any gears that aren’t scaled (databases or other services that don’t scale yet). If the head gear is intact and auto-scaling in use, then scaling should continue normally. If auto-scaling is disabled, I would not expect any scaling to occur (to replace missing gears) without the owner triggering an event. I haven’t rigorously researched these expectations, however. But let’s just say OpenShift nodes aren’t ready to be treated like cattle yet.

How do I know my gears are spread out enough?

Under OSE 2.0, gears gained anti-affinity, meaning that gears of a scaled app will avoid being placed together on the same node if there are other nodes with available capacity. OSE 2.1 provided more explicit mechanisms for controlling gear placement. The most relevant one here is putting nodes in availability zones — for example, you could use a different zone for each rack in your datacenter. If possible, OpenShift will place the gears of an HA app across multiple zones, ensuring availability if one entire zone of nodes is lost at once.

To have better assurance than “if possible” on your HA app spanning zones, take a look at the two settings in the broker mcollective plugin that force your app to do so.
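
From memory – so verify the names against the Admin Guide for your version – those settings live in the broker’s mcollective plugin configuration and look something like:

# /etc/openshift/plugins.d/openshift-origin-msg-broker-mcollective.conf
ZONES_REQUIRE_FOR_APP_CREATE=true  # refuse placement when no zoned nodes are available
ZONES_MIN_PER_GEAR_GROUP=2         # require each gear group to span at least this many zones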

There’s one major caveat here. Anti-affinity only applies when a gear is placed. The OpenShift administrators could move gears between nodes after the fact, and if they’re not being careful about it, they could move the LB gears of an HA app all to the same node, in which case it would effectively cease to be HA since one node outage would leave the app unavailable. There’s really nothing to prevent or even detect this circumstance at this time.

Verifying that your gears actually are spread across zones as expected is currently a laborious manual operation (list all the gears, see which node their addresses resolve to, check which zones the nodes are in), but the REST API and the tools that access it are being expanded to expose the zone and region information to make this easier.

Let me add one more bit of direction here regarding zones, regions, and districts. Although you need to create districts, and although you use the district tool to assign regions and zones to your nodes, there really is no relationship between districts and zones/regions. They exist for (almost) orthogonal purposes. The purpose of districts is to make moving gears between nodes easy. The purpose of zones and regions is to control where application gears land with affinity and anti-affinity.

Districts can contain nodes of multiple zones, and zones can span multiple districts. It actually makes a lot of sense to have multiple zones in a district and zones spanning districts, because that way if in the future you need to shut down an entire zone, it will be easy to move all of the gears in the zone elsewhere, because there will be other zones in the same district. OTOH, while you *can* have multiple regions in a district, it doesn’t make a lot of sense to do so, because you won’t typically want to move apps en masse between regions.

The scaling multiplier: more than 2 LB gears

Of course, with only two LB gears, it only takes two gear failures to make your app unreachable. For further redundancy as well as extra load balancing capacity, you might want more than the two standard LB gears in an HA app. This can be accomplished with the use of a (currently esoteric) setting called the HA multiplier. This value indicates how many of your framework gears should also be LB gears. A multiplier of 3 would indicate that every 3rd gear should be a LB gear (so, after 6 gears were created with the first two being LBs, the 9th, 12th, and so on would all be LBs). A multiplier of 10 would mean gears 30, 40, 50,… would be LB gears.

The default multiplier is 0, which indicates that after the first two LB gears are created to satisfy the minimum requirement for the HA app (remember, the haproxy minimum scale is changed to 2 for an HA app), no more will ever be created. The multiplier can be configured in two ways:

  1. Setting DEFAULT_HA_MULTIPLIER in broker.conf – currently undocumented, but it will be read and used if present. This sets the default, of course, so it is only relevant at the time an app is created. Changing it later doesn’t affect existing apps.
  2. Using oo-admin-ctl-app to change the multiplier for a specific app, e.g:

# oo-admin-ctl-app -l demo -a testha -c set-multiplier --cartridge haproxy-1.4 --multiplier 3

Note that both are administrative functions, not available to the user via the REST API.

The gear-to-LB ratio: rotating out the LB gears

At app creation, LB gears also directly (locally) serve application requests via the co-located framework cartridge. Once a scaled app is busy enough, gears serving as load balancers may consume significant resources just for the load balancing, leaving less capacity to serve requests. Thus, for performance reasons, after a certain limit of gears created in an app, LB gears remove themselves from the load balancing rotation (“rotate out”). Under this condition, the framework cartridge is left running on LB gears, but receives no requests.

The limit at which this occurs is governed by an environment variable, OPENSHIFT_HAPROXY_GEAR_RATIO, which is read by the haproxy cartridge. When the ratio of total gears to LB gears (rounded) reaches this limit, the LB gears rotate themselves out.

The default value is 3. So in a plain scaled app, once the third gear is created, the first gear is removed from rotation. In an HA app, once the 5th gear is created (5 gears / 2 LB gears rounds to 3) the first two gears are removed from rotation (resulting in an actual reduction from 4 gears servicing requests to 3). In general, it would be unwise to set this value higher than the HA multiplier, as the LB gears would be rotated in and out unevenly as the app scaled.

The most obvious way to change this value is to set it node-wide in /etc/openshift/env/OPENSHIFT_HAPROXY_GEAR_RATIO (an administrative action). However, being an environment variable, it’s also possible for the user to override it by setting an environment variable for an individual app (there’s no administrative equivalent).
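
To illustrate both mechanisms (the value 4 is just an example):

# node-wide (administrative) – the file’s contents are the variable’s value:
# echo 4 > /etc/openshift/env/OPENSHIFT_HAPROXY_GEAR_RATIO

# per-app override (user):
$ rhc env set OPENSHIFT_HAPROXY_GEAR_RATIO=4 -a testha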

Integration

The primary method of integrating a routing layer into OpenShift is via the routing SPI, a service that announces events relevant to a routing layer, such as gears being created. The Routing SPI is an interface in the broker application that may be implemented via a plugin of your choosing. The documentation describes a plugin implementation that relays the events to an ActiveMQ messaging broker, and then an example listener for consuming those messages from the ActiveMQ broker. It is important to note that this is only one implementation, and can be substituted with anything that makes sense in your environment. Publishing to a messaging bus is a good idea, but note that as typically deployed for OpenShift, ActiveMQ is not configured to store messages across restarts. So, definitely treat this only as informational, not a recommended implementation.

Another method for integrating a routing layer is via the REST API. Because the REST API has no administrative concept and access is restricted per user, the REST API is somewhat limited in an administrative capacity; for example, there is no way to list all applications belonging to all users. However querying the REST API could be suitable when implementing an administrative interface to OpenShift that intermediates all requests to the broker, at least for apps that require the routing layer (i.e. HA apps). For example, there could be a single “administrative” user that owns all production applications that need to be HA, and some kind of system for regular users to request apps via this user. In such cases, it may make sense to retrieve application details synchronously while provisioning via the REST API, rather than asynchronously via the Routing SPI.

The most relevant REST API request here is for application endpoints. This includes the connection information for each exposed cartridge on each gear of a scaled app. For example:

$ curl -k -u demo:password https://broker.example.com/broker/rest/domains/test/applications/testha/gear_groups?include=endpoints | python -mjson.tool

... (example LB endpoint:)
 {
   "cartridge_name": "haproxy-1.4", 
   "external_address": "192.168.122.51", 
   "external_port": "56432", 
   "internal_address": "127.9.36.2", 
   "internal_port": "8080", 
...
   "protocols": [
     "http", 
     "ws"
   ], 
   "types": [
     "load_balancer"
   ]
 }
...


Odds and ends

It’s perhaps worth talking about the “gear groups” shown in the API response a little bit, as they’re not terribly intuitive. Gear groups are sets of gears in the app that replicate the same cartridge(s). For a scaled app, the framework cartridge would be in one gear group along with the LB cartridge that scales it. The haproxy LB cartridge is known as a “sparse” cartridge because it is not deployed in all of the gears in the group, just some. If a database cartridge is added to the app, that is placed in a separate gear group (OpenShift doesn’t have scaling/replicated DB cartridges yet, but when it does, these gears groups will become larger than one and will “scale” separately from the framework cartridge). Without parameters, the gear_groups REST API response doesn’t indicate which cartridge is on which gear, just the cartridges that are in a gear group and the gears in that group; this is why rhc currently indicates the haproxy is located on all framework gears. This will be fixed by specifying the inclusion of endpoints in the request.

Remember how making an app HA makes it scale up to two LB gears, but not if it’s already two gears? I suspect the “minimum” scale from the cartridge is applied to the gear group, without regard for the actual number of cartridge instances; so, while making an app HA will set the haproxy cartridge “minimum” to 2, the effect is that the gear group scale minimum is set to 2 (i.e. scale is a property of the gear group). If the gear group has already been scaled to 2 before being made HA, then it doesn’t need to scale up to meet the minimum (and in fact, won’t allow scaling down, so you can’t ever get that second gear to have the haproxy LB).

One thing discussed in the PEP that has no implementation (aside from a routing layer component): there’s no mechanism for a single LB gear to proxy only to a subset of the other gears. If your app is scaled to 1000 gears, then each LB gear proxies to all 1000 gears (minus LB gears which we assume are rotated out). A few other pieces of the PEP are not complete as yet: removing or restarting specific gears, removing HA, and probably other bits planned but not delivered in the current implementation.


The 7 easy steps TIME VAMPIRES hope you won’t use. #6 changed my life!

[Image: vampire kitty. Picture by Faris Algosaibi, under CC BY 2.0 license (https://creativecommons.org/licenses/by/2.0/). Cropped from original.]

You sit down after dinner to research a few home improvement ideas on the web. A funny picture with a link catches your eye and… when bedtime rolls around you’ve read fourteen motivational stories, watched a series of skateboarding turkey videos, grumbled on Facebook about obviously corrupt politicians in another state, shared some really pretty pictures of Antarctica you found, and not done any research! How did this happen? How does this keep happening?

How well you adapt for the rest of the 21st century will depend mainly on how well you defend against the distractions and outright manipulation in the increasing stream of available information. With the rise of social media, the voices screaming “look at me!” have shifted tactics – not only are they trying to get your attention, they’re trying to hijack your friends list. It didn’t take long to discover how to push our buttons; the psychological techniques are becoming well-known and ubiquitous. Every social media whore dreams of “going viral” and will post any outrageous thing if it gets a few million hits to their crappy ad farm.

“Go viral” is a good term for it; these are attention viruses spread via social media, using your resources as a host body to spread themselves. In addition to wasting your time, they tend to be inflammatory, misleading, and low in information. They make the internet a worse place. And I, for one, have gotten sick of seeing the same kinds of headlines trying to suck me in, the same kind of misdirection and manipulation stirring the internet’s collective psyche.

The good news is that you can fight back. The enemy may have your number, but you can learn to recognize the manipulation and avoid it. Here are some current ways – but rest assured that their bag of tricks will continue to adapt (you might want to run a search every so often for the latest).

1. Avoid posts with numbers in the title

Perhaps it’s the sense of accomplishment we feel as we tick through the list. Perhaps it’s the curiosity to see “are there *really* (only) that many?” Perhaps it’s just to see if we’re already smart enough to know the whole list. Maybe numbers are inherently more credible. Whatever the reason, social media experts have figured out that we will pretty much click on (and share) any damn thing presented as a numbered list. Even if you know it will probably tell you nothing new. (Numbered lists aren’t the only offenders; consider, for example, the trend of “99% will fail this simple test!” articles.)

Actual news is not presented as “25 things you didn’t know about XYZ”. Useful advice is not presented as “4 easy steps”. In fact, it’s incredibly rare for it to have any number at all in the title. So if it does, that’s a red flag: you can feel simultaneously smug and smarter about skipping this link because it is undoubtedly an attention virus.

Especially with an addendum like “Number X blew my mind!”

2. Avoid titles with emotionally charged words

“Shocking!” “Amazing!” “You won’t believe…” “Blown away” “Can’t stop laughing/crying” “Epic” ALL CAPS!

These and other attention-seeking techniques are used exclusively by attention-desperate social media whores. Not by actually informative articles. Not by people who have a purpose for writing other than maximizing clicks and shares. It is manipulation, pure and simple. The more prevalent this sort of thing becomes, the more it drowns out balanced, informative writing on the internet. And the more you read it, the more often your blood pressure and cortisol levels will rise needlessly. Don’t click that link! Don’t do it!

3. Avoid posts that you react to with “no way!” or “oh yeah?”

If you can feel your eyebrows rising just from reading the headline (police brutality, stupid politician tricks, “you’ll never guess”), you can bet it’s deliberately misleading in order to shock you and draw you in. Resist. And for the love of Bog, don’t get drawn into threads 100 comments long by people who didn’t even read the article. You will accomplish nothing but making yourself and perhaps a few others angry. Don’t bother unless you’re a troll and that’s how you get your lulz.

4. If you find yourself reading an attention virus, at least avoid sharing it

So you might enjoy following Facebook links for some meaningless entertainment from time to time. I get it. But… do you have to infect your friends too? Do you have to reward these time vampires?

No. You don’t. In fact, with the Information Deluge unfolding this century, it’s your responsibility not to.

If (perhaps by accident) you find yourself visiting one of these, at least keep it to yourself. You cover your mouth when you cough and sneeze, right? Have the same courtesy for your readers’ brains. Or are you one of those people still forwarding stupid chain letters?

5. Use ad-blocking browser plugins

The push for sensationalizing the internet is all about displaying ads. More clicks mean more ad views and more revenue. If you kill the ad revenue, you stop rewarding the behavior.

Also, you don’t need to waste your attention on ads. I have basically never seen an ad on Facebook. I don’t really see any ads on the web other than the occasional text ad or YouTube intro. How? I’ve been using ad-blocking software since it was an esoteric art requiring running a local proxy and manually tweaking browser settings and blacklist entries.

It’s a lot easier now. For years it’s been as easy as searching for browser plugins and clicking a few buttons to install them. I know about AdBlock+ for Firefox and AdBlock+ for Chrome. If you’re using something else there’s probably a plugin for that too (even for mobile).

Personally, I think advertising funding is a blight on the internet (“you won’t believe” I once worked for an advertising startup) and would like to see it destroyed in favor of other models. If you disagree and feel morally obliged to allow websites to manipulate your mind in return for the benefit they bring you (or even find targeted ads actually – choke! – informative), you can usually configure an ad-blocker to allow ads selectively, and still starve the time vampires you stumble upon.

6. Use site-blocking browser plugins

Self-restraint is a great thing to cultivate, but most of us would admit to needing a little help. And the wonderful thing about the information age is that there are tools to help you… automatically.

You can use site-blocker plugins to block whole sites that you know are full of trash, or just to restrict the amount of time that you waste at them. For example, BlockSite for Chrome and BlockSite for Firefox can block sites at specified times and days of the week. Also consider StayFocusd for Chrome, which tracks and limits the time you waste as well as providing a “nuclear option” to block all “bad” sites or allow only “good” sites for a few hours to help you concentrate. LeechBlock for Firefox appears similar. These double as procrastination-blockers, useful beyond simple avoidance of attention viruses.

Consider blocking all of the most addictive sites on the internet or pretty much anything linked from Buzzfeed (they recommend blocking themselves!). Or just look through your browser history to see where the time goes.

7. Filter your news sources

The easiest way to save money is to have it automatically deducted from your paycheck. You don’t miss the money you never see. Similarly, the easiest way to reserve your attention for worthy topics is to block ones you know tend to be trash. You don’t have to decide to ignore the tripe you never see.

Spam, trolls, and general “noise” have all been with us since the dawn of the internet. News-readers on usenet used killfiles to automatically ignore posts. Once email became a cesspool of spam and phishing, filtering became a standard tool there too (some services automating it with great sophistication). Social networking may take a little longer because frankly, Facebook’s profits are built on sucking you in and using your friends list and interests to advertise to you. It’s unlikely they’ll be providing useful automated filters of any variety soon.

Sick of the clutter on Facebook? Try F.B. Purity. It’s a browser plugin that essentially rewrites the Facebook interface, allowing you to filter out what you don’t want (hopefully this post has given you some good ideas). It’s pretty easy to install; just be aware that Facebook’s interface is changing all the time, and when it does you may experience bizarre glitches due to a mismatch between what Facebook provides and what F.B. Purity expects, at which point you’ll need to (1) recognize what’s going on (2) possibly wait for an update from FBP or just disable it for a while, and (3) update FBP. So this isn’t for everyone, but it’s what’s available right now. Perhaps other social networks that aren’t as invested in cramming junk in your face will lead the way in enabling filtering, forcing Facebook to do the same. Or perhaps Facebook will become irrelevant. I don’t know what will happen, but if users don’t start valuing the ability, it won’t appear of its own accord. I suggest taking matters into your own hands.

Other sources often do provide methods of filtering, or there may be a browser plugin to enable it. Search for these and use them.

Irony

Am I aware of the irony/hypocrisy inherent in this post at multiple levels? Yes; yes, I am.

But now you know. If you make this post the last time you’re ever roped in by these tactics, I can die happy.

Now share this with all your friends, and leave some comments below!

Response to “PaaS for Realists” re OpenShift

I work on OpenShift but would not claim to be the voice of OpenShift or Red Hat. I came across PaaS for Realists the other day and thought it could use a quick response. No one is vetting it but me :)

There are some good points in this article. Like any other technology, a PaaS does not magically solve anything. Actually, it carries forward all of the flaws of the technology it includes, and then adds a layer of complexity on top. The PaaS does a lot of work for you, and that can be really nice, but it can get in the way too, and the operational model is not always a good fit. It’s important to know what you’re getting and what you’re giving up by employing such a solution.

I wish I could just annotate the article, but quote and response will have to do…

Magical autoscaling

As I said in my previous post, this really doesn’t exist. Your application has to be DESIGNED to scale this way.

Agreed; auto-scaling is a difficult problem to tackle, and if your application isn’t designed for stateless, horizontal scaling up and down, it’s just not going to work well. This isn’t really specific to PaaS.

I would note, by the way, that while OpenShift does enable auto-scaling web frameworks according to a default algorithm, you can either override the algorithm or just disable it and scale manually, whatever works best for your app. One size does not fit all. Sticky sessions are built in if you need that, though sessions are only clustered with the JBoss EAP cartridge (so for all others, in-memory or on-disk sessions will be lost if you scale down or lose a gear).

Magical Autorecover

Just like autoscaling, this is also not what you think it is. Unless your application maintains exactly ZERO state, then you will never see this benefit.

OpenShift doesn’t really claim magical autorecovery or self-healing (yet). Agreed, if you’re storing state on a gear, it can be lost to an outage. Scaling just means you have copies of your webapp. If you want to maintain state in a production setting, you’ll need to store it off-PaaS. You would need to set up something to account for outages in a more traditional deployment too; OpenShift just doesn’t do anything special to help (yet).

I’m sure someone will call me on this and I’m willing to listen but I do know for a fact that the autofailover model of things like your MySQL instance depend on migratable or shared storage (at least from my reading of the docs).

Databases can generally be made HA via replication of some variety. We’ve done some R&D on making database cartridges scalable in order to provide HA storage to an app; it will probably happen at some point – definitely a weakness right now. For now, if you want an HA database, set it up outside the PaaS. You would have had to do that without a PaaS anyway, and your DBAs are already pretty good at it, right? What OpenShift *does* get you is the ability to develop your app on the PaaS against a throwaway DB; then, when you want to go to production on the PaaS, you push exactly the same code to your production app and just set some environment variables to point to the production DB.
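With the rhc client that amounts to a couple of commands. The variable names below are made up for illustration, since your app reads whatever you define:

    # point the same pushed code at the production DB
    # (PROD_DB_HOST etc. are hypothetical names -- your app defines its own)
    rhc env set PROD_DB_HOST=db.example.com PROD_DB_USER=app PROD_DB_PASS=secret -a myapp
    rhc app restart -a myapp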

Same thing for storage; if you want HA storage, set it up outside the PaaS. This is even trickier to solve in-PaaS than DBs, but we’re hoping to address it based on the version of NFS that just came out with RHEL 7.

Also one of the more hilarious bits I’ve found is the situation with DNS. I can’t count the number of shops where DNS changes where things like wildcard DNS were verboten. Good luck with the PaaS dyndns model!

OpenShift doesn’t need wildcard DNS, but it does use DDNS, and that can definitely be a sticking point to demo in organizations where even delegating a subdomain is a bureaucratic battle. But at least it’s a battle you only have to fight once, at solution deployment, instead of for every single app you deploy. Do you have a better suggestion for how to dynamically make apps available? Most places are even less willing to provide a pool of IPs to burn per-app than they are to allow DDNS.

Operational Immaturity

Any tool that uses MongoDB as its persistent datastore is a tool that is not worth even getting started with. You can call me out on this. You can tell me I have an irrational dislike of MongoDB.

You have an irrational dislike of MongoDB :)

Well alright, not totally irrational. The default write concern for earlier versions of the mongo clients left something to be desired if you cared about DB integrity, we’ve encountered memory leaks when using SSL connections, and we haven’t made the leap to MongoDB 2.6 yet. I’m sure you have more horror stories to share.

But the fact is, we’ve been running MongoDB as the core of OpenShift for years now – both in the public service and for our private solution – and it has seriously been very solid. Our ops guys said (and I quote) “We had a couple of outages early on that were traced back to mongo bugs but generally we don’t even think about it. Mongo just keeps ticking and that’s fantastic.”

Was the use of MongoDB your only criticism here? I think we do provide pretty thorough instructions on how to make the OpenShift infrastructure solid.

Additionally I’ve found next to zero documentation on how a seasoned professional (say a MySQL expert) is expected to tune the provisioned MySQL services. The best I can gather is that you are largely stuck with what the PaaS software ships in its service catalog. In the case of OpenShift you’re generally stuck with whatever ships with RHEL.

Tuning is a definite weakness. It is hard to both provide software setup you don’t have to think much about, and also allow you to administer it. This is a known concern that I think we will work toward with the docker-based next generation.

You’re not stuck with whatever ships with RHEL on OpenShift. Publicly and privately we’ve added support for several SCLs, which can provide completely new platforms or just different versions than what ships with RHEL. You can also add a custom cartridge to run pretty much any technology that speaks HTTP (and, depending on your needs, many that don’t).

Another sign of operational immaturity I noticed in OpenShift is that for pushing a new catalog item you actually have to RESTART a service before it’s available.

Do you mean, to add a cartridge, you need to restart something? If so, that’s not really true, although it was in the past, and I’m not sure we’ve clarified the directions well enough since.

Disaster Recovery

After going over all the documentation for both tools and even throwing out some questions on twitter, disaster recovery in both tools basically boils down to another round of “good luck; have fun”.

[…]

Again based on the research I’ve done (which isn’t 1000% exhaustive to be fair), I found zero documentation about how the administrator of the PaaS would back up all the data locked away in that PaaS from a unified central place.

The OpenShift infrastructure just depends on redundancy for HA/DR. Hopefully that’s pretty straightforward given the directions.

To make your applications recoverable, use highly-available storage for the nodes and back it up. There’s not a great deal of detail in that section of docs, but does there need to be?

Affinity

Affinity issues make the DR scenario even MORE scary. I have no way of saying “don’t run the MySQL database on the same node as my application”.

Well, that’s true, but with OpenShift you *do* have the ability to define availability zones and the gear placement algorithm will ensure that your scaled app is spread across them. Once we get scaled DB cartridges I expect the same will apply for them (see above re “in-PaaS DBs aren’t for production yet”). And if that’s not good enough for you, we have a hook to customize the gear placement algorithm until it is good enough for you.

Unless your engineering organization is willing to step up to the shared responsibility inherent in a PaaS, then you definitely aren’t ready. Until then, your time and money is better spent optimizing and standardizing your development workflow and operational tooling to build your own pseudo-PaaS.

Agreed, your developers are not going to just be able to ignore that they’re working and deploying on a PaaS. It’s not a magical solution. It’s a specific way to provide services with pros and cons all its own and that context needs to be understood by all stakeholders.

It *may* be possible to create your own PaaS specific to your needs and be happier with it than you would be purchasing someone else’s solution. But I will say that we have run into a lot of forward-thinking companies that did exactly this within the last few years, and now are desperate to get away from maintaining that solution. Keeping up with ever-churning platforms and security updates always takes more work than anyone expects. So if you think PaaS is right for you, also ask yourself: do you want to be in the building-a-PaaS business, or the using-a-PaaS business?

OpenShift logging and metrics

Server logs aren’t usually a very exciting topic. But if you’re a sysadmin of an OpenShift Enterprise deployment with hundreds of app servers coming and going unpredictably, managing logs can get… interesting. Tools for managing logs are essential for keeping audit trails, collecting metrics, and debugging.

What’s new

Prior to OpenShift Enterprise 2.1, gear logs were simply written to log files. Simple and effective. But this is not ideal, for a number of reasons:

  1. Log files take up your gear storage capacity. It is not hard at all to fill up your gear with logs and DoS yourself.
  2. Log files go away when your gear does. Particularly for scaled applications, this is an unacceptable loss of auditability.
  3. Log file locations and rotation policies are at the whim of the particular cartridge, thus inconsistent.
  4. It’s a pain for administrators to gather app server logs for analysis, especially when they’re spread across several gears on several nodes.

OSE 2.1 introduced a method to redirect component and gear logs to syslogd, which is a standard Linux service for managing logs. In the simplest configuration, you could have syslog just combine all the logs it receives into a single log file (and define rotation policy on that). But you can do much more. You can filter and send log entries to different destinations based on where they came from; you can send them to an external logging server, perhaps to be analyzed by tools like Splunk. Just by directing logs to syslog we get all this capability for free (we’re all about reusing existing tools in OpenShift).
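For instance, a couple of illustrative rsyslog rules (the file names and log server here are made up) show the kind of splitting and forwarding that becomes possible:

    # /etc/rsyslog.d/openshift-example.conf -- illustrative only
    # give frontend access logs (tagged via logger, described below) their own file
    :syslogtag, startswith, "openshift-node-frontend" /var/log/openshift_frontend.log
    # and relay everything to a central log server over TCP as well
    *.* @@logserver.example.com:514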

Where did that come from?

Well, nothing is free. Once you’ve centralized all your logging to syslogd, then you have the problem of separating entries back out again according to source so your automation and log analysis tools can distinguish the logs of different gears from each other and from other components. This must be taken into account when directing logs to syslogd; the log entries must include enough identifying information to determine where they came from down to the level of granularity you care about.

We now give instructions for directing logs to syslog for OpenShift components too; take a look at the relevant sections of the Administration Guide for all of this. Redirecting logs from OpenShift components is fairly straightforward, with separate places to configure syslog for the broker rails application, the management console rails application, and the node platform. We don’t describe how to do this with MongoDB, ActiveMQ, or httpd, but those are standard components and should also be straightforward to configure as needed. One notable omission at this point is syslogging the httpd servers that host the broker and console rails apps; but the main items of interest in those logs are error messages from the actual loading of the rails apps, which (fingers crossed) shouldn’t happen.

Notice that when configuring the node platform logging, there is an option to add “context”, which is to say the request ID and app/gear UUIDs where relevant. Adding the request ID allows connecting what happened on the node back to the broker API request that spawned the action on the node; previously this request ID was often shown in API error responses, but was only logged in the broker log. Logging it alongside the resulting node actions in the syslog now makes it a lot easier to get the whole picture of what happened with a problem request, even if the gear was destroyed after the request failed.
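For the node, that means a few settings in node.conf, something along these lines (I’m quoting key names from memory, so treat them as assumptions and verify against the Administration Guide):

    # /etc/openshift/node.conf -- illustrative; key names from memory, verify before use
    PLATFORM_LOG_CLASS=SyslogLogger
    PLATFORM_LOG_CONTEXT_ENABLED=1
    PLATFORM_LOG_CONTEXT_ATTRS=request_id,app_uuid,container_uuid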

Distinguishing gear logs

There are gear logs from two sources to be handled. First, we would like to collect the httpd access logs for the gears, which are generated by the node host httpd proxy (the “frontend”). Second, we would like to collect logs from the actual application servers running in each gear, whether they be httpd, Tomcat, MongoDB, or something else entirely.

Frontend access logs

These logs were already centralized into /var/log/httpd/openshift_log and included the app hostname as well as which backend address the request was proxied to. A single httpd option “OpenShiftFrontendSyslogEnabled” adds logging via “logger”, which is the standard way to write to the syslog. Every entry is tagged with “openshift-node-frontend” to distinguish frontend access logs from any other httpd logs you might write.
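Since “logger” is what applies the tag, you can see the mechanism in action yourself:

    # emit a test entry the same way the frontend does, then look for it
    # (on a default RHEL syslog setup it lands in /var/log/messages)
    logger -t openshift-node-frontend "test entry"
    tail -n1 /var/log/messages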

OSE 2.1 adds the ability to look up and log the app and gear UUIDs. A single application may have multiple aliases, so it is hard to automatically collate all log entries for a single application. An application could also be destroyed and re-created with the same address, though it is technically a different app from OpenShift’s viewpoint. And the same application may have multiple gears, which may come and go or be moved between hosts; the backend address for a gear could even be reused by a different gear after the original has been destroyed.

In order to uniquely identify an application and its gears in the httpd logs for all time, OSE 2.1 introduces the “OpenShiftAnnotateFrontendAccessLog” option which adds the application and gear UUIDs as entries in the log messages. The application UUID is unique to an application for all time (another app created with exactly the same name will get a different UUID) and shared by all of its gears. The gear UUID is unique to each gear; note that the UUID (Universally Unique ID) is different from the gear UID (User ID) which is just a Linux user number and may be shared with many other gears. Scale an application down and back up, and even if the re-created gear has the same UID as a previous gear, it will have a different UUID. But note that if you move a gear between hosts, it retains its UUID.

If you want to automatically collect all of the frontend logs for an application from syslog, set the “OpenShiftAnnotateFrontendAccessLog” option and collect logs by application UUID. Your httpd log entries then look like this:

Jun 10 14:43:59 vm openshift-node-frontend[6746]: 192.168.122.51 php-demo.openshift.example.com - - [10/Jun/2014:14:43:59 -0400] "HEAD / HTTP/1.1" 200 - "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" (3480us) - 127.1.244.1:8080/ 53961099e659c55b08000102 53961099e659c55b08000102

The “openshift-node-frontend” tag is added to these syslog entries by logger (followed by the process ID which isn’t very useful here). The app and gear UUIDs are at the end there, after the backend address proxied to. The UUIDs will typically be equal in the frontend logs since the head gear in a scaled app gets the same UUID as the app; they would be different for secondary proxy gears in an HA app or if you directly requested any secondary gear by its DNS entry for some reason.
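Given that, collecting one application’s frontend traffic out of the stream is a one-line rsyslog filter (using the app UUID from the example above; the destination file name is made up):

    # gather every frontend entry mentioning this app UUID into one file
    :msg, contains, "53961099e659c55b08000102" /var/log/app_php_demo_frontend.log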

Gear application logs

In order to centralize application logs, it was necessary to standardize cartridge logging such that all logs go through a standard mechanism that can then be centrally configured. You might think this would just be syslog, but it was also a requirement that users should be able to keep their logs in their gear if so desired, and getting syslog to navigate all of the permissions necessary to lay down those log files with the right ownership proved difficult. So instead, all cartridges now must log via the new utility logshifter (our first released component written in “go” as far as I know). logshifter will just write logs to the gear app-root/logs directory by default, but it can also be configured (via /etc/openshift/logshifter.conf) to write to syslog. It can also be configured such that the end user can choose to override this and have logs written to gear files again (which may save them from having to navigate whatever logging service ends up handling syslogs when all they want to do is debug their app).
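The gear-side switch lives in logshifter’s config, something along these lines (I’m quoting key names from memory, so treat them as assumptions and check the shipped /etc/openshift/logshifter.conf):

    # /etc/openshift/logshifter.conf -- key names from memory, verify before use
    # send gear logs to syslog instead of app-root/logs files
    outputtype=syslog
    # allow gear users to override and keep per-gear log files if they want
    outputtypefromenviron=true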

Here, distinguishing which gear created the log requires somewhat more drastic measures. We want to indicate which gear created each log entry, but we can’t trust each gear to self-report accurately (a gear could spoof log traffic as coming from another gear, or from something else entirely). So the context information is added by syslog itself via a custom rsyslog plugin, mmopenshift. Properly configuring this plugin requires an update to rsyslog version 7, which (to avoid conflicting with the version shipped in RHEL) is actually shipped in a separate RPM, rsyslog7. So to usefully consolidate gear logs into syslog really requires replacing your entire rsyslog with the newer one. This might seem extreme, but it’s actually not too bad.

Once this is done, any logs from an application can be directed to a central location and distinguished from other applications. This time the distinguishing characteristics are placed at the front of the log entry, e.g. for the app server entry corresponding to the frontend entry above:

2014-06-10T14:43:59.891285-04:00 vm php[2988]: app=php ns=demo appUuid=53961099e659c55b08000102 gearUuid=53961099e659c55b08000102 192.168.122.51 - - [10/Jun/2014:14:43:59 -0400] "HEAD / HTTP/1.1" 200 - "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2"

The example configuration in the manual directs these to a different log file, /var/log/openshift_gears. This log traffic could be directed to /var/log/messages like the default for everything else, or sent to a different destination entirely.
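The rough shape of that configuration looks like this (paraphrased from memory of the manual’s example; copy the real thing from the Administration Guide rather than this sketch):

    # rsyslog 7 RainerScript sketch -- property names from memory, verify before use
    module(load="mmopenshift")
    action(type="mmopenshift")
    # entries that mmopenshift annotated with a gear's app UUID go to their own file
    if $!OpenShift!OPENSHIFT_APP_UUID != '' then
        action(type="omfile" file="/var/log/openshift_gears")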

Gear metrics

Aside from just improving log administration capabilities, one of the motivations for these changes is to enable collection of arbitrary metrics from gears (see the metrics PEP for background). As of OSE 2.1, metrics are basically just implemented as log messages that begin with “type=metric”. These can be generated in a number of ways:

  • The application itself can actively generate log messages at any time; if your application framework provides a scheduler, just have it periodically output to stdout beginning with “type=metric” and logshifter will bundle these messages into the rest of your gear logs for analysis.
    • Edit 2014-06-25: Note that these have a different format and tag than the watchman-generated metrics described next, which appear under the “openshift-platform” tag and aren’t processed by the mmopenshift rsyslog plugin. So you may need to do some work to have your log analyzer consider these metrics.
  • Metrics can be generated passively by the openshift-watchman service in a periodic node-wide run. This can happen in several ways:
    • By default it generates standard metrics out of cgroups for every gear. These include RAM, CPU, and storage.
    • Each cartridge can indicate in its manifest that it supports metrics, in which case the bin/metrics script is executed and its output is logged as metrics. No standard cartridges shipped with OSE support metrics at this time, but custom cartridges could.
    • Each application can create a metrics action hook script in its git repo, which is executed with each watchman run and its output logged as metrics. This enables the application owner to add custom metrics per app (see the sketch after this list).
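As a concrete (hypothetical) example of that last item, the whole hook can be a tiny script committed to the app’s git repo:

    #!/bin/bash
    # .openshift/action_hooks/metrics -- hypothetical per-app metrics hook;
    # anything printed here is logged as a metric on each watchman run
    echo "type=metric custom.queue_depth=42"

Remember that action hooks need to be marked executable in git before they’re pushed.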

Note that the cartridge and action hook metrics scripts have a limited time to run, so that they can’t indefinitely block the metrics run for the rest of the gears on the node; all of this is configurable with watchman settings in node.conf. Note also that watchman-generated logs are tagged with “openshift-platform”, e.g.:

Jun 10 16:25:39 vm openshift-platform[29398]: type=metric appName=php6 gear=53961099e659c55b08000102 app=53961099e659c55b08000102 ns=demo quota.blocks.used=988 quota.blocks.limit=1048576 quota.files.used=229 quota.files.limit=80000

The example rsyslog7 and openshift-watchman configuration will route watchman-generated entries differently from application-server entries since the app UUID parameter is specified differently (“app=” vs “appUuid=”). This is all very configurable.
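For example, an illustrative rsyslog 7 rule that peels the watchman metrics off into their own file (the file name is made up) might look like:

    # route watchman-generated metrics separately from gear application logs
    if $syslogtag contains 'openshift-platform' and $msg contains 'type=metric' then {
        action(type="omfile" file="/var/log/openshift_metrics")
        stop
    }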

I am currently working on installer options to enable these centralized logging options as sanely as possible.

OpenShift, Apache, and severe hair removal

I just solved a problem that stumped me for a week. It was what we in the business call “a doozy”. I’ll share here, mostly to vent, but also in case the process I went through might help someone else.

The problem: a non-starter

I’ve been working on packaging all of OpenShift Enterprise into a VM image with a desktop for convenient hacking on a personal/throwaway environment. It’s about 3 GB (compressed) and takes an hour or so to build. It has to be built on our official build servers using signed RPMs via an unattended kickstart. There have been a few unexpected challenges, and this was basically the cherry on top.

The problem was that the first time the VM booted… openshift-broker and openshift-console failed to start. Everything else worked, including the host-level httpd. Those two (which are httpd-based) didn’t start, and they didn’t leave any logs to indicate why. They didn’t even get to the point of making log files.

And the best part? It only happened the first time. If you started the services manually, they worked. If you simply rebooted after starting the first time… they worked. So basically, the customer’s first impression would be that it was hosed… even though it magically starts working after a reboot, the damage is done. I would look like an idiot trying to release with that little caveat in place.

Can you recreate it? Conveniently?

WTF causes that? And how the heck do I troubleshoot? For a while, the best I could think of was starting the VM up in runlevel 1 (catch GRUB at boot and add the “1” parameter), patching in diagnostics, and then proceeding to init. After one run, if I don’t have what I need… recreate the VM and try again. So painful I literally just avoided it and busied myself with other things.

The first breakthrough was when I tried to test the kinds of things that happen only at the first boot after install. There are a number, potentially – services coming up for the first time and so forth. Another big one is network initialization on a new network and device. I couldn’t see how that would affect these services (they are httpd instances binding only to localhost), but I did experiment with changing the VM’s network device after booting (remove the udev rule, shut down, remove the device, add another), and found that indeed, it reliably caused the failure on the next boot.

So it had to be something to do with network initialization.

What’s the actual error?

Being able to cause it at will on reboot meant much easier iteration of diagnostics. First I tried just adding a call to ifconfig in the openshift-broker init script. I couldn’t see anything in the console output, so I assumed it was just being suppressed somehow.

Next I tried to zero in on the actual failure. When invoked via init script, the “daemon” function apparently swallows console output from the httpd command, but it provides an opportunity to add options to the command invocation, so I looked up httpd command line options and found two that looked helpful: “-e debug -E /var/log/httpd_console”:

-e level
Sets the LogLevel to level during server startup. This is useful for temporarily increasing the verbosity of the error messages to find problems during startup.

-E file
Send error messages during server startup to file.

This let me bump up the logging level at startup and capture the server startup messages. (Actually probably only the latter matters. Another one to keep in mind is -X which starts it as a single process/thread only – helpful for using strace to follow it. Not helpful here though.)

This let me see the actual failure:

 [crit] (EAI 9)Address family for hostname not supported: alloc_listener: failed to set up sockaddr for 127.0.0.1

Apparently the httpd startup process tries to bind to network interfaces before it even opens logs, and this is what you get when binding to localhost fails.

What’s the fix?

An error is great, but searching the mighty Google for it was not very encouraging. There were a number of reports of the problem, but precious little about what actually caused it. The closest I could find was this httpd bug report:

Bug 52709 - Apache can’t bind to 127.0.0.1 if eth0 has only IPv6

[…]

This bug also affects httpd function ap_get_local_host() as described in http://bugs.debian.org/629899
Httpd will then fail to get the fully qualified domain name.

This occurs when apache starts before dhclient finished its job.

Here was something linking the failure to bind to 127.0.0.1 to incomplete network initialization by dhclient. Suddenly the fact that my test “ifconfig” at service start had no output did not seem like a fluke. When I started the service manually, ifconfig certainly had output.

So, here’s the part I don’t really claim to understand. Apparently there’s a period after NetworkManager does its thing and other services are starting where, in some sense at least, the network isn’t really available. At least not in a way that ifconfig can detect, and not in a way that allows httpd to explicitly bind to localhost.

As a workaround, I added a shim service that would wait for network initialization to actually complete before trying to start the broker and console. I could add a wait into those service scripts directly, but I didn’t want to hack up the official files that way. So I created this very simple service (openshift-await-eth0) that literally just runs “ifconfig eth0” and waits up to 60 seconds for it to include a line beginning “inet addr:”. Notably, if I have it start right after NetworkManager, it finishes immediately, so it seems the network is up at that point, but goes away just in time for openshift-broker and openshift-console to trip over it. So I have the service run right before openshift-broker, and now my first boot successfully starts everything.
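For the curious, the entire shim amounts to something like this (a minimal sketch; the chkconfig priorities are only illustrative, chosen so it starts just before openshift-broker):

    #!/bin/bash
    # /etc/init.d/openshift-await-eth0 -- minimal sketch of the shim service
    # chkconfig: 345 84 15   (ordering numbers illustrative)
    case "$1" in
    start)
        # wait up to 60 seconds for eth0 to report an IPv4 address
        for i in $(seq 1 60); do
            ifconfig eth0 2>/dev/null | grep -q 'inet addr:' && exit 0
            sleep 1
        done
        echo "timed out waiting for eth0 to get an IPv4 address" >&2
        exit 1
        ;;
    *)
        exit 0
        ;;
    esac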

Since this is probably the only place we’ll ever deploy OpenShift that has dhclient trying to process a new IP at just the time the broker and console are being started, I don’t know that it will ever be relevant to anyone else. But who knows. Maybe someone can further enlighten me on what’s going on here and a better way to avoid it.
