Wednesday

Returning from a long silence, going to try once again to make a habit of journaling. Expect it to be mundane.

Also returning from a long vacation — two weeks (that’s long for me) plus two days of F2F with my team. So, a fair amount of time going through email, trying to respond to quick things and turning the rest into personal Trello cards. For a long time I tried to turn things into todos in the Gmail app, which had the advantage of nice references back to the emails so I could return to them and follow up when done with something. However, it didn’t do a very good job of capturing the state of each task, and I was clearly not really using it. So, trying something else. Not sure personal Trello will stick either, but I gotta keep trying things until something does.

Right now I’m stuck trying to get openshift-ansible to run to test a little change I’m making. The openshift_facts module is failing inexplicably:

<origin-master> (0, 'Traceback (most recent call last):\r\n File "/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py", line 2470, in <module>\r\n main()\r\n File "/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py", line 2457, in main\r\n protected_facts_to_overwrite)\r\n File "/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py", line 1830, in __init__\r\n protected_facts_to_overwrite)\r\n File "/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py", line 1879, in generate_facts\r\n facts = set_selectors(facts)\r\n File "/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py", line 496, in set_selectors\r\n facts[\'logging\'][\'selector\'] = None\r\nTypeError: \'unicode\' object does not support item assignment\r\n', 'Shared connection to 192.168.122.156 closed.\r\n')
fatal: [origin-master]: FAILED! => {
 "changed": false, 
 "failed": true, 
 "module_stderr": "Shared connection to 192.168.122.156 closed.\r\n", 
 "module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py\", line 2470, in <module>\r\n main()\r\n File \"/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py\", line 2457, in main\r\n protected_facts_to_overwrite)\r\n File \"/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py\", line 1830, in __init__\r\n protected_facts_to_overwrite)\r\n File \"/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py\", line 1879, in generate_facts\r\n facts = set_selectors(facts)\r\n File \"/tmp/ansible_QyeeOK/ansible_module_openshift_facts.py\", line 496, in set_selectors\r\n facts['logging']['selector'] = None\r\nTypeError: 'unicode' object does not support item assignment\r\n", 
 "msg": "MODULE FAILURE", 
 "rc": 0
}

And since that error happens early in the init of the first master, it cascades to the node, which fails trying to look up the master’s version, giving a lovely masking error at the end of the output:

fatal: [origin-node-1]: FAILED! => {
 "failed": true, 
 "msg": "The task includes an option with an undefined variable. The error was: {{ hostvars[groups.oo_first_master.0].openshift_version }}: 'dict object' has no attribute 'openshift_version'\n\nThe error appears to have been in '/home/lmeyer/go/src/github.com/openshift/openshift-ansible/playbooks/common/openshift-cluster/initialize_openshift_version.yml': line 16, column 5, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n pre_tasks:\n - set_fact:\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: {{ hostvars[groups.oo_first_master.0].openshift_version }}: 'dict object' has no attribute 'openshift_version'"

Yeah, so… Ansible has a great way of welcoming you back.

 


Concerns for preflight check design

Lately I’m working on preflight checks for OpenShift v3 Ansible installs/upgrades. Right now there is nothing that checks you have everything you might reasonably need set up for an install/upgrade and bails out before doing anything if you don’t. What happens instead is that you get partway through the install/upgrade and then find out… oh, you have the wrong repos enabled or whatever, UGLY ERROR -> fix it and start over again… bleah. Nobody enjoys SEV1 support calls in the middle of the night. For installs, and particularly for upgrades, we’d really like the sysadmin to be able to run a preflight check before their outage window and find out about any common problems then.

So my latest conundrum is figuring out what the user expects during a preflight check. This is not as straightforward as you might think. The installer does a pretty good job of figuring out what you meant without you having to specify everything down to the last detail (because humans are not reliably good at doing that). Thing is, it may install and configure a number of things on your systems… just in order to figure out how to run.

This isn’t a big deal in the installer, because when you run an install or upgrade, you expect to install and configure things. Preflight checks are different because you’d like to affect system state as little as possible. The whole idea is to do checks before you make changes. So if we just reuse the logic the installer uses, users may be unpleasantly surprised to find their systems being changed.

So, for example: pretty much the first thing we want is facts about the configuration and the systems, which the openshift_facts role provides. This role runs various custom Ansible modules on the target systems, which requires several dependencies to be present there. If they aren’t, they’re installed.

An Origin RPM install requires an enabled Origin repo. Unless you configure one beforehand, this is usually set up by the openshift_repos role, which is a dependency of the openshift_version role. So if you want to run the preflight checks before an install, you won’t have any Origin repo to check RPMs against unless the checks configure that repo the same way the installer does.
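
(For reference, that dependency chain is just a normal Ansible role dependency, declared in a role’s meta/main.yml — something like the following, which is an illustration rather than the actual contents of the openshift_version role.)

# roles/openshift_version/meta/main.yml (illustrative, not the real file)
dependencies:
- role: openshift_repos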

The openshift_version role itself relies on some clever things to determine the version to install. If you’re doing an RPM install, it uses the repoquery tool to determine the precise version of RPMs that are available, so it can match it with the precise version of images to run; thus yum-utils is installed to provide repoquery. If you’re doing an enterprise containerized install, it looks up the precise version of images available by running a docker image on the remote host — and on an RPM-based host, installs and configures firewalld and docker to run that.

So in thinking about this, I’ve tried to determine if there’s any way to tease out just what we need for preflight checks and put that in a shared role, without having to go through as thorough a setup as we would for an install or upgrade. Or if we can make simplifying assumptions to do only what we need. Without going through too detailed an analysis, I think the answer is basically… no. We do not want to create and maintain parallel logic in the preflight checks for the very complex ways in which the installer determines what to do.

Reflecting a bit further, letting preflight config setup alter the systems is not really a problem, practically speaking.  If the user is installing a new cluster or adding hosts to an existing one, the target hosts are not in production yet, so altering them should be acceptable. If the user is upgrading, all of the necessary config and dependencies should already be in place, so hosts won’t be substantially altered. So, just depend on the same logic from the installer (and perhaps improve the user-friendliness of the output when things go wrong even before preflight checks). And very clearly document expectations.
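
To make that concrete, here’s roughly the shape I have in mind for a preflight entry point that leans on the installer’s own roles for setup. This is only a sketch: openshift_facts and openshift_repos are the real roles discussed above, but the check task is a made-up placeholder.

---
# Sketch of a preflight playbook: reuse the installer's roles for facts and repo
# setup (accepting that they may alter the hosts), then run read-only checks.
- name: OpenShift preflight checks
  hosts: OSEv3
  roles:
  - openshift_facts   # the same fact gathering the installer uses (may install deps)
  - openshift_repos   # the same repo setup the installer uses
  tasks:
  - name: check that an Origin RPM is available (placeholder example)
    command: repoquery --queryformat '%{version}' origin
    register: origin_rpm
    changed_when: false
    failed_when: origin_rpm.stdout == ''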

Running an OpenShift install into containers

For testing purposes, we would like the ability to set up and tear down a whole lot of OpenShift clusters (single- or multi-node). And why do this with VMs when we have all of this container technology? A container looks a lot like a VM, right? And we have the very nifty (but little-documented) docker connection plugin for Ansible to treat a container like a host. So we ought to be able to run the installer against containers.

Of course, things are not quite that simple. And even though I’m not sure how useful this will be, I set out to just see what happens. Perhaps we could at least have a base image from an actual Ansible install of OpenShift that runs an all-in-one cluster in a container, rather than going through oc cluster up or the like. Then we would have full configuration files and separate systemd units to work with in our testing.

So first, defining the “hosts”. It took me a few iterations to get here, given that the examples out there go in a different direction, but I can just define containers in my inventory as if they were hosts and specify the docker connection method for them as a host variable. Here’s my inventory for an Origin install:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
deployment_type=origin
openshift_release=1.4
openshift_uninstall_images=False

[masters]
master_container ansible_connection=docker

[nodes]
master_container ansible_connection=docker
node_container ansible_connection=docker

[etcd]
master_container ansible_connection=docker

To ensure the containers exist and are running before Ansible tries to connect to them, I created a play to iterate over the inventory names and create them:

---
- name: start up containers
  hosts: localhost
  tasks:
  - name: start containers
    with_inventory_hostnames:
      - all
    docker_container:
      image: centos/systemd
      name: "{{ item }}"
      state: started
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:z

This uses the Ansible docker_container module to ensure there is a docker container for each hostname that is running the centos/systemd image (a base CentOS image that runs systemd init). Since I don’t really want to run a separate docker inside of each container once the cluster is up (remember, I want to start a lot of these, and they’ll pretty much all use the same images, so I’d really like to reuse the docker image cache), I’m mounting in the host’s docker socket so everyone will use one big happy docker daemon.

Then I just have to run the regular plays for an install (this assumes we’re in the openshift-ansible source directory):

- include: playbooks/byo/openshift-cluster/config.yml

Now of course it could not be that simple. After a few minutes of installing, I ran into an error:

TASK [openshift_clock : Start and enable ntpd/chronyd] *************************
fatal: [master_container]: FAILED! => {
 "changed": true, 
 "cmd": "timedatectl set-ntp true", 
 "delta": "0:00:00.200535", 
 "end": "2017-03-14 23:43:39.038562", 
 "failed": true, 
 "rc": 1, 
 "start": "2017-03-14 23:43:38.838027", 
 "warnings": []
}

STDERR:

Failed to create bus connection: No such file or directory

I looked around and found others who had similarly experienced this issue, and it seemed related to running dbus, but dbus is installed in the image and I couldn’t get it running. Eventually a colleague told me that you have to run the container privileged for dbus to work. Why this should be, I don’t know, but it’s easily enough done.
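
For the record, the fix is just one more parameter on the container-creation task from above (it also shows up in the full playbook at the end of this post):

  - name: start containers
    with_inventory_hostnames:
      - all
    docker_container:
      image: centos/systemd
      name: "{{ item }}"
      state: started
      privileged: True    # dbus (and therefore timedatectl) won't work without this
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:z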

On to the next problem. I ran into an error from within Ansible, which was trying to treat 1.4 as a string when it had actually been parsed as a float.

TASK [openshift_version : set_fact] **********************************************************************************
fatal: [master_container]: FAILED! => {
 "failed": true
}

MSG:

The conditional check 'openshift_release is defined and 
openshift_release[0] == 'v'' failed. The error was: 
error while evaluating conditional (openshift_release is 
defined and openshift_release[0] == 'v'): float object has no element 0

Having seen this sort of thing before, I could see this was due to how I specified openshift_release in my inventory. It looks like a number, so the YAML parser treats it as one. So I can just change it to "1.4" or v1.4 and it will be parsed as a string. I think this was only a problem when I was running Ansible from source; I didn’t see it with the released package.

Next problem: a playbook error because I’m using the docker connection plugin, so no ssh user is specified and the playbook fails trying to retrieve one. Well, even though it’s unnecessary, just specify one in the inventory.

[OSEv3:vars]
ansible_user=root

Next problem. The installer complains that you need to have NetworkManager before running the install.

TASK [openshift_node_dnsmasq : fail] *******************************************
fatal: [master_container]: FAILED! => {
 "changed": false, 
 "failed": true
}

MSG:

Currently, NetworkManager must be installed and enabled prior to installation.

And I quickly found out that things will hang if you don’t restart dbus (possibly related to this old Fedora bug) after installing NetworkManager. Alright, just add that to my plays:

- name: set up NetworkManager
  hosts: all
  tasks:
  - name: ensure NetworkManager is installed
    package:
      name: NetworkManager
      state: present
  - name: ensure NetworkManager is enabled
    systemd:
      name: NetworkManager
      enabled: True
  - name: dbus needs a restart after this or NetworkManager and firewall-cmd choke
    systemd:
      name: dbus
      state: restarted

When I was first experimenting with this it went through just fine. On later tries, starting with fresh containers, this hung at starting NetworkManager, and I haven’t figured out why yet.

Finally it looked like everything was actually installing successfully, but then of course starting the actual node failed.

fatal: [node_container]: FAILED! => {
 "attempts": 1, 
 "changed": false, 
 "failed": true
}

MSG:

Unable to start service origin-node: Job for origin-node.service 
failed because the control process exited with error code. 
See "systemctl status origin-node.service" and "journalctl -xe" for details.

# docker exec -i --tty node_container bash
[root@9f7e04f06921 /]# journalctl --no-pager -eu origin-node 
[...]
systemd[1]: Starting Origin Node...
origin-node[8835]: F0315 19:13:21.972837 8835 start_node.go:131] 
cannot fetch "default" cluster network: Get 
https://cf42f96fd2f8:8443/oapi/v1/clusternetworks/default: 
dial tcp: lookup cf42f96fd2f8: no such host
systemd[1]: origin-node.service: main process exited, code=exited, status=255/n/a
systemd[1]: Failed to start Origin Node.


Previously I got a completely different error, related to OVS, that I’m not seeing now. These errors could mean anything as far as I know, but it may be related to the fact that I didn’t expose any ports or specify any external IP addresses for my “hosts” to talk to each other, nor arrange any DNS for them to resolve each other. In any case, something to remedy another day. So far the playbook and inventory look like this:

---
- name: start up containers
  hosts: localhost
  tasks:
    - name: start containers
      with_inventory_hostnames:
        - all
      docker_container:
        image: centos/systemd
        name: "{{ item }}"
        state: started
        privileged: True
        volumes:
          - /var/run/docker.sock:/var/run/docker.sock:z

- name: set up NetworkManager
  hosts: all
  tasks:
    - name: ensure NetworkManager is installed
      package:
        name: NetworkManager
        state: present
    - name: ensure NetworkManager is enabled
      systemd:
        name: NetworkManager
        enabled: yes
        state: started
    - name: dbus needs a restart after this or NetworkManager and firewall-cmd choke
      systemd:
        name: dbus
        state: restarted

- include: openshift-cluster/config.yml

And the inventory:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
deployment_type=origin
openshift_release="1.4"
openshift_uninstall_images=False
ansible_user=root

[masters]
master_container ansible_connection=docker

[nodes]
master_container ansible_connection=docker
node_container ansible_connection=docker

[etcd]
master_container ansible_connection=docker
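
If I pick this up again, the docker_container module does have parameters that might address the name-resolution guess above. This is purely speculative on my part and untested, and cluster_hosts here is a hypothetical variable mapping container names to IPs:

  - name: start containers
    with_inventory_hostnames:
      - all
    docker_container:
      image: centos/systemd
      name: "{{ item }}"
      hostname: "{{ item }}"            # a stable, resolvable name instead of the random container ID
      etc_hosts: "{{ cluster_hosts }}"  # hypothetical dict of name -> IP so the "hosts" can find each other
      state: started
      privileged: True
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:z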

Customizing OpenShift JBoss confs without customizing the cartridge

I added a feature recently to enable OpenShift administrators to specify (at the broker) a custom location to get the default app git template from. This allows you to customize the initial experience of developers when they create an app; so you can, for example, put your organization’s name and logo on it. This should be out in Origin nightly builds now and in the Enterprise 2.0.3 point release coming soon.

For JBoss applications, there is an added use for this feature. JBoss configuration files are located in the application git repository, so if you want to change the default confs for these cartridges as an administrator, say to add a custom valve, you can. Users are free to ignore this, of course, either by specifying a different source for their code or by blowing your changes away after creating the app. Still, it can be useful to set the defaults the way you like, and with this feature you don’t have to customize the cartridge to do it. You just need to maintain a custom git repository.

There’s a slight complication, though, as I discovered when trying to demonstrate this. The JBoss cartridges construct configuration files with three processing steps in between the source and the outcome. These are:

  1. The “install” step of cartridge instantiation modifies the Maven pom.xml that ships with the default template, replacing strategically-placed {APP_NAME} entries with the application name. If you construct your template using the source, Maven will not like it if you leave these as-is.
  2. The “setup” step of cartridge instantiation combines shared configuration files with version-specific configuration files from the cartridge source.
  3. Most of the conf files in the application git repo are directly symlinked from the actual gear configuration. However, there are a few that aren’t, which happen to be the ones you tend to want to change. These are actually templates that are processed during every build of the application (i.e. every git push).

These aren’t hard to work around, but they’re a little surprising if you don’t know about them. Let me demonstrate how I would do this with an example. Let’s say we wanted to change the log format on a JBoss EWS 2.0 cartridge.

  1. First, create an EWS 2.0 app with the installed default:
    • rhc app create template jbossews-2.0
  2. Now edit the resulting “template” directory that git creates as needed:
    • Change .openshift/config/server.xml log valve as follows:
      <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
             prefix="localhost_access_log." suffix=".txt"
             pattern="CHANGED %h %l %u %t &quot;%r&quot; %s %b" />
    • Note, this is one of the files that is interpreted with every git push. The Connector element has expressions in it which are evaluated at build time on the gear.
    • Edit the pom.xml file. This is optional, but you may want to use a different groupId, artifactId, etc. than just the “template” app name. It’s possible to use env vars here, e.g.
      <groupId>${env.OPENSHIFT_APP_DNS}</groupId>

      … however, Maven will give out WARNINGs with every build and threatens to break in the future if you do this, so I don’t recommend it.

    • Commit the changes.
       git commit -am "Template customizations"
  3. Now put this git repo somewhere all the nodes can see it. You can put it on github if you like, or your internal gitolite instance, or just on a plain web server. For simplicity, I just put it directly on the node filesystem, but remember that all nodes have to have the template available in the same place (although it could be useful to vary the template contents according to gear profile):
    # mkdir -p /etc/openshift/templates
    # git clone template /etc/openshift/templates/jbossews2.git
  4. Now modify the broker to specify this as the default. In /etc/openshift/broker.conf:
    DEFAULT_APP_TEMPLATES=jbossews-2.0|file:///etc/openshift/templates/jbossews2.git

    … and restart the broker:

    $ service openshift-broker restart

    Of course, with multiple brokers, you need to do this for all of them.

At this point, whenever you create a new jbossews-2.0 application, it will default to using your template and the changed access log format.

vim as IDE

I happened to learn vi as my first major text editor back in the 90s. There are many great editors, and I have no interest in proving that vim is best. It’s just what I use, and what many others use, and it’s available everywhere.

A friend recently observed “it looks like your vim does things that mine doesn’t.” Vim is almost infinitely extensible. It takes some time to incorporate everything, to be sure, and vim lacks somewhat in discoverability. But when you are working on code all day every day, it pays to invest some time in improving and learning your tools. And no matter how much you know about vim, you can always find some feature to surprise and delight you.

vim tips and tricks sites abound, so I don’t really have much to add to these:

  1. http://www.vim.org/docs.php
  2. http://vim.wikia.com/wiki/Vim_Tips_Wiki + http://vim.wikia.com/wiki/Best_Vim_Tips
  3. http://vimcasts.org/
  4. http://pragprog.com/book/dnvim/practical-vim
  5. http://learnvimscriptthehardway.stevelosh.com/ (learn how to seriously customize vim)

I spend most of my time working with git between various related repositories, mostly coding in ruby and bash. If you are doing the same thing, you might be interested in some of the plugins I’ve added to make life a little easier and have vim help as much as possible with the workflow. You really can get to the point where vim pretty much does everything you need. I’m still getting these into my fingers, but thought I’d pass them on:

  1. NERDTree – this is a handy directory plugin. vim already has a directory display; if you start up vim with a directory name, you get a directory listing. It’s not a tree, though, and it goes away once you pick a file to edit. Invoke NERDTree (I mapped “:NT” to toggle it on and off) and it keeps a directory tree structure in a vertical split on the left; choose a file and it opens in a buffer on the right. If you dismiss NERDTree and bring it back later, it comes back with the same state – same directories opened.
  2. Fugitive – Sweet git integration plugin from Tim Pope. I will never work another merge conflict without it. It does so much stuff there are five vimcasts introducing it. May also introduce you to standard vim features you never heard of, like the quickfix list.
  3. Rails.vim – another Tim Pope invention for working with Rails. The idea is to make all those TextMate users jealous (you may want some addons like SnipMate though – and see this classic post for pointers to really decking out your vim Rails IDE).

That’s just three, and that’ll keep you busy for a long time. There are plenty more (see that last link and various recommendations on StackOverflow).

vim for OpenShift and oo-ruby

One more addition – if you happen to be in my very particular line of work, you get to work with a lot of ruby files that don’t *look* like ruby files to vim, because they’re scripts that invoke oo-ruby as their executable.

What’s oo-ruby? It’s a shim to wrap Ruby such that you get a Ruby 1.9 environment whether you are on Fedora (where 1.9 is native currently) or on RHEL (where it is provided by an SCL).

But the problem is, if the file doesn’t end in .rb, vim doesn’t know what filetype to give it, so syntax highlighting and all the other goodies that come with a known filetype don’t work. You have to help vim recognize the filetype as follows: create or edit ~/.vim/scripts.vim and add the following vimscript:

if did_filetype() " filetype already set..
    finish " ..don't do these checks
endif
if getline(1) =~ '^#!.*\<oo-ruby\>'
    setfiletype ruby
endif

This checks the first line of the file for “oo-ruby” somewhere after the shebang and, if present and filetype is not otherwise determined, sets filetype to ruby. Problem solved!

Highly available apps on OpenShift

One question we’re working through in OpenShift is how to make sure applications are highly available in the case of node host failure. The current implementation isn’t satisfactory because a single gear relies on its node host to function. Host goes down, gear goes down, app goes down.

We have scaled applications, which expand the application out to multiple gears, but they have a single point of failure in the proxy layer (all requests go through one proxy gear). If there is a database cartridge in the app, that is also a single point of failure (we don’t offer database scaling yet). Finally, there’s no way to ensure that the gears don’t all end up on the same node host (except by administratively moving them). They are placed more or less randomly.

This is a hot topic of design debate internally, so look for a long-term solution to show up at some point. (Look for something to crystalize here.) What I want to talk about is: what can we do now?

If you have your own installation of OpenShift Origin or OpenShift Enterprise, here is one approach that may work for you.

  1. Define a gear profile (or multiple) for the purpose of ensuring node host separation. It need not have different resource parameters, just a different name. Put the node(s) with this profile somewhere separate from the other nodes – a different rack, a different room, a different data center, a different Amazon EC2 region; whatever satisfies your confidence criteria for the size of failure you expect your app to survive.
  2. When you create your app, do so twice: one for each gear profile. Here I’m supposing you’ve defined a gear profile “hagear” in addition to the default gear profile.
    $ rhc app create criticalApp python
    $ rhc app create criticalAppHA python -g hagear

    You can make them scaled apps if you want, but that’s a capacity concern, not HA.

  3. Now, develop and deploy your application. When you created “criticalApp” rhc cloned its git repository into the criticalApp directory. Code up your application there, commit it, and deploy with your normal git workflow. This puts your application live on the default gear size.
  4. Copy your git repository over to your HA gear application. This is a git operation and you can choose from a few methods, but I would just add the git remote to your first repository and push it straight from there:
    $ rhc app show criticalAppHA

    Output will include a line like:

    Git URL = ssh://3415c...@criticalAppHA-demo.example.com/~/git/criticalAppHA.git/
    

    … which you can just add as a remote and push to:

    $ cd criticalApp
    $ git remote add ha ssh://...
    $ git push ha master

    Now you have deployed the same application to a separate node with profile “hagear” and a different name.

  5. Load balance the two applications. We don’t have anything to enable this in OpenShift itself, but surely if you’re interested in HA you already have an industrial strength load balancer and you can add an application URL into it and balance between the two backend apps (in this example they would be http://criticalAppHA-demo.example.com/ and http://criticalApp-demo.example.com/). If not, Red Hat has some suitable products to do the job.

This should work just fine for some cases. Let me also discuss what it doesn’t address:

  • Shared storage/state. If you have a database or other storage as part of your application, there’s nothing here to keep them in sync between the multiple apps. We don’t have any way that I know of to have active/active or hot standby for database gears. If you have this requirement, you would have to host the DB separately from OpenShift and make it HA yourself.
  • Partial failures where the load balancer can’t detect that one of the applications isn’t really working, e.g. if one application is returning 404 for everything – you would have to define your own monitoring criteria and infrastructure for determining that each app is “really” available (though the LB likely has relevant capabilities).
  • Keeping the applications synchronized – if you push out a new version to one and forget the other, they could be out of sync. You could actually define a git hook for your origin gear git repo that automatically forwards changes to the ha gear(s), but I will leave that as an exercise for the reader.

It might be worth mentioning that you don’t strictly need a separate gear profile in order to separate the nodes your gears land on. You could manually move them (oo-admin-move) or just recreate them until they land on sufficiently separate nodes (this would even work with the OpenShift Online service). But that would be somewhat unreliable as administrators could easily move your gears to the same node later and you wouldn’t notice the lack of redundancy until there was a failure. So, separating by profile is the workaround I would recommend until we have a proper solution.

Stuff that should just work…

Had one of those Maven/Eclipse experiences that was so infuriating, I need to record it here to make sure I can fix it easily next time.

Using STS 2.9.1 I created a “Dynamic Web Module” project. For the runtime I targeted tc Server / Tomcat 6. Then I proceeded to do some rudimentary JSP stuff, only to find that the JSTL taglibs were not resolving. It seems the expectation is that these would be provided by the container. Under JBoss they probably would be, but not under Tomcat. (I guess it makes sense… if you have multiple apps in the container, just let each one bundle the version desired – fewer container dependencies).

Fine; so I added the Maven nature and the jstl.jar dependency. At some point I did Maven > Update project configuration. Suddenly the class that I had defined to back a form is not found. Also I’m getting this really annoying project error:

Dynamic Web Module 3.0 requires Java 1.6 or newer. [...] Maven WTP Configuration Problem
One or more constraints have not been satisfied.

WTF? Of course STS/Eclipse are configured to use Java 1.6… but my project apparently isn’t. So I go change that, but it doesn’t fix that error, and any time I update project config with Maven, it’s back to using JRE 1.5 and my Java source files are no longer on the build path as source files.

Turns out (took longer to find than to tell about it) the Maven compiler plugin doesn’t use Eclipse settings and just imposes its own defaults, i.e. Java 5, unless otherwise configured by a POM. And since a “Dynamic Web Project” uses Servlet 3.0 it requires Java 6. Boom.

Easy to fix, though annoying that I have to and there isn’t some Eclipse setting for this. Just add under the top-level POM:

<build>
    <plugins>
       <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-compiler-plugin</artifactId>
          <version>2.1</version>
          <configuration>
             <source>1.6</source>
             <target>1.6</target>
          </configuration>
       </plugin>
    </plugins>
</build>

(Cripes, WordPress, can I get a “code” formatting/paste option already?? “Preformatted” most certainly isn’t.)

Then have Maven update project config again and 1.6 is in play. Oh, and instead of using the “src” directory for my source, I went ahead and changed to src/main/java as Maven expects, so that future “config update” will pick that up.