Sunday, December 2, 2018

Working with DialogFlow and Actions on Google

by Richard Vowles

In a phrase: do not use JavaScript for this.

The development cycle is appalling, the documentation is abominable, and putting functions in the cloud is incredibly slow and tedious.

Given Node 8, use TypeScript.

Mono-repos, Microservices and Continuous Delivery

by Richard Vowles

Since I have been involved with ClearPoint NZ Ltd and its Accelerate Continuous Delivery effort (renamed from Connect, as people were considering it a "product") - something that has evolved over the two or so years I have been working with them - I have come to realize that the only thing that matters is that at any time you can deliver a bug fix to the client; one's precious releases and versions don't really matter.

Under Continuous Delivery, for an application, we never release anything. The code always rolls forward, and that is fundamental to the customer engagement model. We always roll forward, and new code is put behind a feature toggle. This feature toggle goes through various stages: initial commit into the repository (and promotion to production), code and tests being written, eventual turn-on in production, and removal from the code base.

While we are very heavily focused on the Microservices pattern, we are pragmatic - modularity is actually more important. Many changes cross more than one repository (especially when you separate API from implementation, that's at least two; then you have the consumer app, that's three), and sometimes when you are refactoring - especially in the early stages where you are getting your patterns right - changes affect many different repositories. Common functionality gets pulled out so you don't repeat yourself over and over again, and it allows you to consistently wrap Open Tracing and Metrics around your calls, queues, etc., and consistently expose them.

Furthermore, especially when you are starting, what would have been an old-style monolithic app realistically represents the application space of the problem you are solving. You might have products, stock levels, promotions, order buckets, etc. - in the past those would all have just been part of the one app, but mature Microservices teams realize that the old-school modules of a Monolith are just being pushed out with experimental edges as you chop up your problem. You end up with an application space - if another group starts using your core services, then you can start versioning those, but ideally you really want stuff to propagate.

Further, you want your master branch to always be green, so your build, cross-Microservice integration and e2e tests should always operate in a separate namespace and only merge once they have passed. Ideally these changes are kept small and incremental (so they are easy to review and don't result in long-running branches) and are behind the feature toggles.

Given that we don't actually release any Java artifacts, we can use Git to determine what changed between the master branch and what you are trying to submit, and then tell the Maven Reactor build what to build. These then generate Docker images in our case, and we have a new manifest ready for a Canary deploy into our e2e test cycle - consisting of master artifacts plus the new Docker images.
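The mechanics of that change detection can be sketched in a few lines of shell, assuming each top-level directory of the mono-repo is one Maven module (the module names below are invented):

```shell
# Map changed file paths to the top-level directories that contain
# them - with one Maven module per top-level directory, that is the
# list of modules the reactor needs to rebuild.
changed_modules() {
  cut -d/ -f1 | sort -u
}

# Worked example with a fake change set; in the pipeline the input
# would come from: git diff --name-only origin/master...HEAD
printf 'orders/src/main/java/App.java\norders/pom.xml\nshared/pom.xml\n' \
  | changed_modules
```

The resulting list can then be handed to the reactor, e.g. `mvn -pl orders,shared -amd clean verify`, so only the changed modules and anything depending on them get built.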

Moving to a monolithic repository for an "application space" has made all of this considerably easier. All of our artifacts are in there - Terraform, Helm charts for k8s, Jenkins Pipeline code, how the container images used for the process are built, as well as all test and source code - all in the one repository. We then have a submit-queue, and the change in confidence in the pipeline, tests and code that gets delivered is something else. UX testing is the only fly in the ointment, because it can be hard and flaky, but that isn't something one can do much about.

In all, having moved our codebase towards Microservices and a Monolithic repository has been a pretty amazing experience.


Monday, October 23, 2017

Maven Release and Sub-Directories

by Richard Vowles

Revisiting Repository Management for Java Artifacts

On a single repository per Java artifact

On the Illegal Argument Podcast that I formed with +Mark Derricutt several years ago (and have since left, having "run out of things to say"), I argued vehemently that for Java projects to have the correct contracts and to be releasable as proper binary artifacts, they had to have their own repository in Git.

Git is different from Subversion - which we were all shifting from - in one pretty fundamental way: tags are not tree-relative. You don't tag from where you are; you tag the whole repository. This meant that if you were releasing and had other artifacts in your repository, they would all come along with that tag. If you had to do a patch fix for production and you were on a release cycle, you had to swap back to that tag, branch again and do the patch fix. Multiple artifacts in one repository would all swap back to earlier versions, requiring you to clone out a repository specifically for that fix - which essentially gave you one repository per artifact, so you might as well start as you mean to go on and not muddy the waters.

Further, traditional build systems want to build a particular repository, and since best practice is, and remains, that you should only build what changed and rely on the declared dependencies, your build system would create a cascading build only when you actually need one.

The downside to this of course is an explosion in repositories.

So a few things have turned up that have changed my mind on this, and I'd like to detail one of them here.

Mono-repos, and Maven Release

If you are doing CD, you operate on snapshots - there is no need to do anything else, unless you have third parties relying on your artifacts. And then you are back into this single-repo vs mono-repo problem. The problem was that when you released, the whole of your Git repository was checked out into your target folder from your tag and compiled again. Until version 2.5 of the Maven Release Plugin, you couldn't actually release subfolders.

I realized as part of the Connect Project that I was releasing from sub-folders successfully. So I went to talk to Mark about it, who told me I should be able to - which led to this blog post.

Now you can, and this changes the game somewhat. In my case, I have a bunch of repositories that I'm totally fine chucking into a single mono-repo and just releasing forward. I can branch and do a patch release if I want, but they are all largely unrelated: they don't depend on each other and should never be released using a Multi-Module Build.
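As a sketch of what that looks like with version 2.5+ of the maven-release-plugin (the module name and version numbers here are invented):

```shell
# Release only the artifact living in ./billing-api; the rest of the
# mono-repo rolls forward untouched. The tag still spans the whole
# repository, but prepare/perform now operate from the subdirectory.
cd billing-api
mvn -B release:prepare \
    -DreleaseVersion=1.2.0 \
    -DdevelopmentVersion=1.3.0-SNAPSHOT
mvn release:perform
```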

One of the real pains of managing finely grained repositories is having to manage so many. This ability to release from a single mono-repo has tipped me into that camp. I can still do patches - I'd never have merged them back anyway - but this is going to make my life considerably easier.

Why I hate Multi-Module Builds

While we are here, let's have a rant about Multi-Module Builds.

A Multi-Module build is one where you have a pom that only has module references in it - and these reference artifacts that are in subdirectories. They are not in themselves evil, they work well for CD as long as you tell them what to build. They do not work well for open source projects.

Typically these are used in open source projects to release all artifacts together: they all have the same version number, and in released projects they tend to have a slow cadence. A bug fix takes forever to get released because everything gets released, even when it doesn't need to be. 99% of artifacts in these kinds of projects experience no change; they are rebuilt simply because the build process is silly.

This kind of project really annoys me. Projects like Spring (although the level of stupid in Spring's build system beggars belief) and CXF (as much as I appreciate the work they do to spare me the vagaries of WebServices) release this way, and it means they batch their bug fixes. You can wait weeks for a fixed bug to be released, because they batch them.

What should they do? They should individually release their artifacts as soon as they have been verified to be correct, and have a single artifact that represents the project as a whole - a simple pom that just lists the project modules at their specific versions as dependencies. This allows bug fixes to be released day after day, with no change to all of the other artifacts, and then they can batch them in the pom-only release if they want. And people who need a fix can just override the released pom with the new artifact. Simple.
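A sketch of such a pom-only release (group and artifact names here are invented) - each module releases on its own cadence, and this pom just pins the set of versions known to work together:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.example</groupId>
  <artifactId>project-all</artifactId>
  <version>3.1.0</version>
  <packaging>pom</packaging>

  <!-- Each artifact below was released independently the moment it
       was verified; this pom only records a coherent set of them. -->
  <dependencies>
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>project-core</artifactId>
      <version>3.1.2</version>
    </dependency>
    <dependency>
      <groupId>org.example</groupId>
      <artifactId>project-transport</artifactId>
      <version>3.0.7</version>
    </dependency>
  </dependencies>
</project>
```

A user needing a fix in project-core just depends on this pom and overrides project-core's version until the next pom-only release catches up.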

I distinguish Multi-Module builds from Reactor Builds. Reactor Builds can take in a whole bunch of artifacts and are never intended for release - just to make a developer's life easier to pick up the Application in their IDE or build a complete installation of an application. They are used in Applications, Multi-Module builds are abused in Libraries.

I'll be shifting the Connect project Java repositories to a single repository soon.


Monday, April 25, 2016

Docker Registry and running mini-code-camps

by Richard Vowles

Most of my interest in the last six months has really been consumed by electronics - it happens each year around Christmas when I start having a bit of time to myself again. As I have now started back into the technology deep dive, having found something I'm actually interested in learning and pushing into the +Code Lounge sessions, I am hitting the same Docker problem I had before - bandwidth. I simply do not have enough bandwidth on ADSL to support even two or three people downloading Docker images.

The last time we tried it, it just wasted huge amounts of time and people were failing to pull images. This time however, there appears to be a Docker Registry project, which I am hoping will solve my problem.
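Assuming the stock registry:2 image, a pull-through cache on the LAN looks something like this (a sketch - the IP address is whatever the host machine has):

```shell
# Run a local registry configured as a pull-through cache of Docker
# Hub, so each image crosses the ADSL link once and is then served
# to everyone at the code camp over the LAN.
docker run -d -p 5000:5000 --restart=always --name mirror \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# Each attendee then points their Docker daemon at the cache, e.g.
# in /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://192.168.1.10:5000"] }
```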

My next Code Lounge is on Kubernetes, and we will be using Docker, CoreOS, flanneld and all that stuff - hopefully to create a distributed Docker cluster on people's machines. As such, I am upgrading my old Mac mini to ensure I can actually run it. The last time I tried to install Docker Machine on my Mac, it didn't have the architecture to run and kept crashing, until I learnt you had to install it via Homebrew.

I'll report back how successful I have been.

Sunday, November 30, 2014

Fast Delivery

by Richard Vowles

So I wanted to give myself an exercise in creativity last week - scratch an itch and see just how fast I could deliver a project. It took me 2.5 days.

My local high school (secondary school) runs a website that the pupils and their parents can log into. It allows you to see the usual grades and so forth, but also their timetable.

That can be pretty important, as they run a 6-day cycle - so after a long weekend or a holiday, or even during a particularly busy week, it can be difficult for some people (particularly my son) to remember what "day" it is at school.

In my case, the project was to allow students (including my son) to synchronize that school provided timetable (which is customized per pupil) with their Google Calendar - which is synchronized to their phone. Many of the kids turn off their data plan, as they are only allowed one device on the school network.

So in my case I wanted to let them log in with their Google Account, give me permission to their Calendar, and tell me their school username and password (no OAuth there; I encrypt and store them), and then I go off and create a calendar and push their timetable into it. And I do that every week, automatically.

Now this had a few challenges:


  • I had never worked with OAuth before (or OAuth2 in this case). I had read the book, felt I understood it, and then forgot it. +Mark Derricutt and I had done a short stint with the Google-provided libraries and found some problems, but I couldn't even find that project.
  • I hadn't worked with the Calendar API and that turns out to have some significant quirks
  • I wanted to make a mobile first application, it had to look good and work - so I wanted to use Material Design components
  • I wanted to run it against an https server
  • I needed to see what parts of the start-up framework were "quirky" and needed their rough edges cleaned off

Some lessons I learnt from the experience - stuff that will make it into the next +Code Lounge I run with +Irina Benediktovich.

  • Busy logic - tracking whether there are one or more in-flight XHR requests - really needs to be pulled out into a separate module and just included in every project. It is such a useful structural pattern that it simply has to be a module
  • The OAuth2 stuff behaved in an unexpected fashion: every single time you go through the "login" process, you get a new token. And if you are starting and stopping your service, the session you get passed is different to the session that request.getSession().getId() provides you - so you have to make special concessions to try and track the user properly
  • I still haven't found a way to track a user in an opaque fashion - if they change their primary email address, their account is hosed. I was pretty sure there was a way to do this, but I haven't worried about it too much yet.
  • JAWR really hates CSS with embedded SVG images. That took me a while to figure out.
  • Make sure your server is running in the correct time zone, or ensure you provide a time zone to all constructed DateTimes in your Google APIs.
  • It turns out that Chrome on some phones can only search - it ignores any URLs that you type in. It is bizarre and very frustrating!
I may add more to this post as I remember things!

Saturday, November 22, 2014

Polymer without Dependency Injection is dead to me

by Richard Vowles



I have been watching quite a bit of the content from the recent Chrome Dev Summit - it's the only way to watch it (after the fact), because there is so much fluff in the talks. I understand why this has to be done, but would appreciate it if they got off the fluff and into the useful how-tos. The Service Worker stuff, I am afraid to say, was already cleanly and clearly covered in Jake Archibald's DevBytes with the Train Spotters example, and for a feature that still isn't readily available, it seemed fairly heavily hyped. I have yet to check whether Safari will support it on iOS, but if it does, I think we are going to start to be able to claw back some territory from native phone apps.

The Material Design / Web Components / Polymer focus however was more interesting - this has been going on for some time and I have been avidly following it. I like the format of web components much more than Angular Directives, I like the encapsulation of essentially everything you need. 

What I don't like however is the lack of dependency injection - even an interface for adding your own.

Whenever I see a "new XXXX" in someone's code - particularly for a service of some kind - I outwardly cringe. With Angular and DI, we have been able to significantly improve the way that we build our web applications, focusing on what the customer needs and wants to see in terms of interactivity, workflow, look and feel, and just overall user experience. We attribute this not just to the superior nature of developing in Angular, but particularly to the DI capability. We can mock out our services, and even when we eventually replace them, we are still able to use them for our unit tests with Karma.

There is no such capability with +Polymer - it is listed as part of the "future roadmap" but really, it is critical. The ability to inject your actual services makes for such a well-structured application.

The only workaround at the moment is to reverse it, so that every object pulls its services from the global context - no Hollywood Principle for us!

I would really like to use and recommend Polymer, but I simply cannot at the moment and won't be able to until I see at least some activity in this area.

Tuesday, May 27, 2014

Groovy fun with @CompileStatic

by Richard Vowles

Groovy is one of the few languages that allow both static and dynamic compilation and, among the "big" languages on the JVM (Java, Groovy, Scala, JRuby and Clojure), it is the only one that does, as far as I am aware.

Unless I am doing Grails (we stopped at 2.1.13 because of the crazy bugginess of that platform, its dependence on Hibernate, and the random magic insanity that happens), I use @CompileStatic everywhere I can. Just using Groovy in a basic Spring + AngularJS web framework has been empowering - I've only been swapping back to non-type-safe code really for tests. I still look at Mockito with pain.

One of the things that has bugged me, however, is closures. If I wanted a callback, I lost the type safety. Consider this method:

  void listProfiles(User user, boolean favourite, String filter, Closure callback)

So now I am not telling the compiler what types are being passed. I thought about this problem tonight and remembered that a closure can be coerced to an interface with a single method - so I thought I'd try it:

    interface ListProfileCallback {
       void profile(Profile profile, boolean favourite)
    }

    void listProfiles(User user, boolean favourite, String filter, ListProfileCallback callback)

And sure enough - the type system kicks in and tells me I'm missing a parameter!

