Saturday, November 2, 2013

Finding well known resources in the JVM

A lot of people don't seem to know this, but it is actually extremely easy to find resources in the JVM without classpath scanning.

Quite a few people use a scanning library or implement one themselves (which, given the number and variety of ways you can specify a classpath, is always going to miss something). But if you have a well known resource, it's pretty easy to gather up all instances of it across all of the jars that make up your application - and this is true whether you are using Groovy, Java, Scala, Jython or JRuby (or any of the others, as long as you can get access to a class loader).

Since Java 1.2, the ClassLoader class has had this method:

    public Enumeration<URL> getResources(String name) throws IOException;

So all you need to do is call, for example:

    getClass().getClassLoader().getResources("META-INF/myfiles.properties")

and all copies of that resource will be returned. Note that, unlike Class.getResource, resource names passed to a ClassLoader must not start with a leading slash. It may not be the most efficient implementation, but it is the best supported one, and it ships with the JVM - no extra dependencies required.
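As a minimal sketch (the class name and resource name here are just illustrative), gathering every copy of a well known resource across the classpath might look like this:

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class ResourceFinder {

    // Collects every copy of a well known resource from all jars and
    // directories on the classpath. ClassLoader resource names are always
    // relative to the classpath root - no leading slash.
    public static List<URL> findAll(String name) throws IOException {
        ClassLoader loader = Thread.currentThread().getContextClassLoader();
        if (loader == null) {
            loader = ResourceFinder.class.getClassLoader();
        }
        return Collections.list(loader.getResources(name));
    }

    public static void main(String[] args) throws IOException {
        // Most jars carry a manifest, so this typically prints several URLs
        for (URL url : findAll("META-INF/MANIFEST.MF")) {
            System.out.println(url);
        }
    }
}
```

Each returned URL points at the copy inside a particular jar (or directory), so you can open and read each one independently.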

Sunday, August 25, 2013

Groovydoc and Maven

One of the things we discovered this last week, when trying to release to Apache Maven Central, was that javadoc (which is required for every artifact that has code in it) was not being generated for our Groovy artifacts.

Now in the past we used GMaven, which generated stubs (albeit poorly), and those stubs caused javadoc to be generated. Since we moved to the Eclipse Maven plugin, which compiles Java and Groovy together without stubs, this no longer worked. So we needed to start using actual Groovydoc - and it turned out, on a brief search, that there was no Maven plugin for Groovydoc.

There was, however, an Ant task. But an Ant script, although nice to be able to drop back to, is usually somewhat error prone and can lead to difficult configuration problems. That seemed to be the case for the Ant task as well, with quite a few people unable to get it working. One good thing I discovered, though, was that the real work is done outside of the Ant Groovydoc task - the task just collects the information and passes it on.

The one thing I learned, however, and the reason I am writing this up, is that source paths must be relative. If you pass a path that contains the full path to the file, Groovydoc will treat the class as being in the default package - and you will get a whole lot of classes under DefaultPackage in the generated documentation. If you encounter this problem when using Ant or Gradle, this will be the reason - make sure your paths are relative to the directory you run the build script from.
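As a sketch of the Ant case (the paths and classpath reference are illustrative), the srcdir passed to the Groovydoc task must be relative to where the build runs:

```xml
<!-- Illustrative only: sourcepath must be relative to the build directory. -->
<!-- An absolute path here lands every class in DefaultPackage. -->
<taskdef name="groovydoc"
         classname="org.codehaus.groovy.ant.Groovydoc"
         classpathref="groovy.classpath"/>

<groovydoc destdir="target/groovydoc"
           sourcepath="src/main/groovy"
           packagenames="**.*"/>
```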

I have sorted this problem out in the Maven build, and it will pick up all source directories that get added and include them - this means generated sources and anything you add with the build helper Maven plugin.

The documentation and source is over on Github, and the artifact is in Maven Central.

Saturday, August 10, 2013

Latest plugins on Apache Maven Central

Blue Train Software Plugins


I've made a couple of plugins lately for the projects we build at Group Applications at the University of Auckland. I decided they would be better as open source, so I wrote them on my own time; they address two aspects of the lifecycle of the Maven projects we build these days.

The Release POM Plugin

The first one, the Release POM plugin, is specifically designed to generate a single pom with all transitive dependencies resolved. Each dependency also gets an exclusion clause for each of its own dependencies.

Why? Because sometimes, particularly when you are patching a production artifact, you need to make sure all dependencies stay exactly the same except for the one you are changing. That, to me, is what a patch is - and it is very hard to do, as Maven doesn't actually store the resolved versions of the artifacts you use.

This is exacerbated because we use version ranges, which greatly aid development, bug fixing, feature enhancement and general working within the team - but they also mean you need to make sure versions are locked down once you do an actual release intended for production.
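As an illustration of the idea (the artifact names here are hypothetical, not the plugin's actual output), a dependency in a generated release pom ends up pinned to an exact version, with its own transitives excluded because every resolved artifact is already listed explicitly at the top level:

```xml
<dependency>
  <groupId>com.example</groupId>
  <artifactId>some-library</artifactId>
  <version>1.4.2</version>
  <exclusions>
    <exclusion>
      <groupId>com.example</groupId>
      <artifactId>some-transitive-dep</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

This way nothing is resolved transitively at build time - the release pom is the complete, frozen record of what shipped.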

The release pom plugin and its documentation are on Github.

The Karma Runner Plugin


This one is for our JavaScript. We use AngularJS, and while the plugin works particularly well with AngularJS, it can be used with any JavaScript library. It requires Node.js, as the Karma runner is built on that platform.

Much of what the Karma Runner plugin does could be achieved with a considerable amount of manual setup in a Maven pom. It assumes you are using a war or (preferably) Servlet 3 JAR setup; it scans your dependencies, unpacks the JavaScript, rewrites the Karma config file, brings in local developer overrides (e.g. browser setups) and runs your tests. It means that, with minimum fuss and maximum compatibility, we can run our Jasmine tests for AngularJS. In our case the plugin is defined in the parent pom of every Servlet 3 jar project, so nothing needs to be configured for an individual project.

If you use Karma in a Maven project lifecycle, it really is a useful plugin.

The Karma Runner plugin is on Github.

Monday, April 30, 2012

Critical failures of Apache Maven Central

Today's failure of the metadata for logback really hit us hard and resulted in significant downtime. What was the issue? It's one that appears to happen relatively frequently: someone releases a new version of their library and the release process blows away the Maven metadata, making it appear to anything checking the metadata that only one version is available - the current one.

As at the time of writing, the Maven metadata for logback-classic looks like this:


    <metadata>
      <groupId>ch.qos.logback</groupId>
      <artifactId>logback-classic</artifactId>
      <versioning>
        <latest>1.0.2</latest>
        <release>1.0.2</release>
        <versions>
          <version>1.0.2</version>
        </versions>
        <lastUpdated>20120426133004</lastUpdated>
      </versioning>
    </metadata>

We are using the version range [0.9.17] for Grails 1.3.7 projects and [1.0.1] for Grails 2.0.3 projects. This is good Maven hygiene: we want the build to fail if these specific versions are unavailable for a good reason (such as no repository being available) - not for a bad reason, like someone breaking the repository. We don't want Maven to choose a different version; we want that specific version.
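In pom terms, the square brackets make the version a hard requirement rather than a suggestion - a bare `1.0.1` is only Maven's preferred version, which it is free to override during conflict resolution, whereas `[1.0.1]` means exactly 1.0.1 or fail:

```xml
<dependency>
  <groupId>ch.qos.logback</groupId>
  <artifactId>logback-classic</artifactId>
  <!-- [1.0.1] is a hard requirement: fail the build rather than
       silently substitute another version -->
  <version>[1.0.1]</version>
</dependency>
```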

When you use version ranges, you need the metadata - it tells Maven which versions are available. If you don't specify a range, Maven shouldn't need the metadata at all. However, with Maven 3.0.4 today, that turned out not to be the case, for reasons we never got to the bottom of.

What's worse is that when we tried to work around the problem by relaxing the version ranges, it didn't work - things just got worse and weirder. Eventually, on a brainwave from Michael McCallum, installing the artifact in our third-party repository and moving that repository ahead of Central in Nexus allowed us to get back to work.

Now if this were the first time this had happened, it wouldn't ring such alarm bells - but it isn't. It's at least the third (SOLR being the first we hit). I'm now in the unenviable position of having to discuss with my team whether direct Apache Maven Central access should be banned: if an artifact isn't taken from Central and put into our third-party repo, it can't be used. We simply cannot afford the downtime.

What I don't understand is why there is no automatic repair process for Central. It is a critical resource, which we enormously appreciate, but the value of accessing it directly is much diminished after today's extreme waste of productive, valuable time.

Update: Another one - http://repo1.maven.org/maven2/woodstox/stax2/2.1/



Friday, April 27, 2012

Code Lounge, Post Mortem + Hangout Lounge

We held our first Code Lounge on Wednesday - Anzac Day - in Auckland. We went to the morning service at Glendowie College and then launched into a rapid clean-up to be ready for the three people arriving. I've posted elsewhere about the results (ZeroMQ vs RabbitMQ vs Pusher); the day went pretty well in my opinion. We stopped every hour to make sure we were aware of what the other team was doing, and we set ourselves goals, which we overstepped periodically. When we got to the end of the day (3pm) and had only an hour left, we decided whether to continue learning: David and Irina chose to keep working on getting clustering going for Rabbit and figuring out its failover strategy, while Mark and I pushed on with Pusher (which turned out to be an exercise in rewriting the library instead).

All in all, a good day. But I was lying on the bed after an exhausting week, listening to my wife practice cello (a surprisingly meditative experience) and thinking about my backlog of books to read. They consist of programming language books (Dart for Hipsters, for example), tool books (a few around Vim are bubbling around; PragProg have just released a new early access), electronics books (MSP430 and various others), general informational/ideological books (The Information Diet, biohacking) and fiction. And I'm not really reading them - particularly the programming language ones - and I really, really want to.

So I'm going to get inspired by Chris Strom (Dart for Hipsters, The SPDY Book, Recipes with Backbone), try starting chains (chains.cc), and see if I can just knuckle down and self-educate.

I spoke with Mark about it tonight, and wondered aloud whether having a Hangout open while I work away at a particular topic might be worthwhile. I think I will try it: an open Hangout around a particular topic, making sure I circle the programming-related people in my G+ stream. If they want to come in, they can; otherwise I will at least continue to learn. I'll set a goal of one hour a day and see how I get on.

(PS: Manning have a great special on at the moment which includes a lot of interesting books - I'm trying desperately not to go and buy them and have them sitting there as well!)

Monday, March 26, 2012

Code Lounge

Last year I started a conference. I was looking for a barcamp-style conference that I could really enjoy going to - one full of topics I wanted to cover. It eventually got named The Exceptional Conference (given Illegal Argument as the theme), and it got a good turnout. However, the topics really didn't interest me all that much. They interested a lot of other people, though, and the feedback was good. We got some pizza sponsorship from Fronde; the rest was paid for by my company, Blue Train Software Ltd.

We had a few people come up from Wellington, which was excellent - I enjoy John and Nigel's company, and it was great to meet the other guys, who seem to form a really informed and friendly group. John and Nigel decided to run one in Wellington, so it was simply polite that I went - Mark and I travelled down (I took my son Xavier with me). Unfortunately it covered pretty much the same topics, so I didn't enjoy it a whole lot. I felt pretty guilty about that - John, Nigel and the VUW crew had gone to a lot of trouble to organize it - I just couldn't get into it. Later I realized that my original goal - discussion of the things I am actually interested in - is hard to find at conferences, even ones of such a small size. I get it from the Illegal Argument podcast, but only in general chat. I also found there wasn't enough core-geek depth.

So when Nigel asked when the next conference was going to take place, I said I'd bow out of this one - but more prodding from Nigel and Mark got me thinking, and I really appreciate that they did prod.

What I realized, on reflection, was that I wanted what John, Nigel, Mark and I did at the start of all this: a geek weekend, or even just a single day, where we could take a topic or idea and really explore it. A small number of people (four in that case - perhaps up to eight?) would probably work. The idea has to be fairly tight, and people shouldn't know too much about it, otherwise it's not a learning experience - or perhaps they should, I don't know yet. A close environment, good for pairing; good food; good drinks (coffee for those that like it, water for those that don't :-); good wifi; a projector if necessary; a whiteboard - all that good stuff.

And I realized I had most of these in my lounge - and so was born the concept of Code Lounge. The idea is to take a topic, advertise that your lounge will be a meeting place on a specific date for a specific set of times, specify what you can provide, and just go for it. At the moment I'm not sure how it will turn out, but I am hoping we'll learn ways of making the experience interesting.

I registered a domain name for it - CodeLounge.IO (a play on Google I/O, a conference I no longer wish to attend; the videos are good enough). There is no website yet - perhaps that is the first job (Twitter Bootstrap, anyone?) - but we have a Meetup group and a logo.

I call them a micro-code-camp.


Bluegrails Maven Plugin

There have been some considerable changes to the Bluegrails Maven plugin - we are starting support for Grails at 2.0.1. It is quite difficult to ensure compatibility, as I have found a number of quirks in how the Grails Gant scripts behave.

We are now using it internally, and it seems to be going well. We found one dependency that isn't in Central (Spring UAA), but it can be excluded, as it is only used by the plugin (via the scripts' inclusion). I hope to have video tutorials up once we are happy that it is solid and working well.

It can be downloaded from Github, under the http://www.github.com/bluegrails/grails-maven repository.