Thursday, 30 October 2014

Climate Faith For Beginners

There's a recent Guardian article about combating apathy about climate change.

Reports show the biggest threat to progress on climate change is cynicism. That’s why 10:10’s #itshappening project showcases positive action happening around the world now

I get this a lot on Facebook - I have a large number of friends there, but it's rare that I see any of them post anything raising awareness of climate change, and just as rare for them to like, share or comment on my frequent climate change posts. If cute kittens were the face of global warming activism we'd be in renewable utopia right now.

One of my biggest concerns about climate change is the serious possibility that in practice or in theory it's already too late. We know we're pretty close to exceeding the maximum amount of CO2 we can actually emit, but we also know that thanks to political efforts by the fossil fuel industry, we're still on track for the worst-case scenario for emissions.

The question is: assuming that's the case, what happens then? I think, if I were an atheist (or possibly if I were a pagan), then I'd either give up and just try to have as much fun as I could, while I could; or I might end up thinking that violence was the only raging response left to this slow, inexorable crisis.

But I'm not: I'm a Christian who believes that we have a responsibility to take care of the planet. There's a school of theology that argues that Jesus won't return until the world is wrecked (and that God plans to complete the destruction). In my post God Loves Green you can read why I think that theology is misplaced, so I won't cover it further here.

I think that a Christian perspective offers hope in a changing climate that I wouldn't have if I had other beliefs. And it centres on the message about Jesus. The message is this: everything we've fouled up, and everything people have damaged in us, was personally dumped on Jesus on the cross. The whole lot, as if he was to blame. He died, beyond any hope of coming back to life, and yet that's exactly what happened: Jesus was raised from the dead. It means that he overcame the lot and buying into him means you buy into that life, forever, from this moment.

This informs my perspective on climate change in three major ways.

Firstly: I'm already a winner; if I die tomorrow, I've already won. It means I can afford to lose in this world; I am not number one. A practical example of this is the question of how I use my earnings. Almost a decade ago I was in a position to buy a house of my own; instead I decided to invest in wind turbines and rent in shared accommodation. It put me at a disadvantage in a number of ways (e.g. my status amongst my peers who were buying houses), but I can afford to lose.

Secondly, because of the relationship between me (or us) and Jesus, there really is a responsibility to act faithfully with respect to God's creation. It does matter whether we de-carbonize or not; we can't just pray it away (though prayer, being the catalyst for God in action, is integral to everything we do and to the changes we see in others).

Thirdly, if we act faithfully, God will do the rest - he can do the impossible.

Put together, for me, my faith is the biggest hope I have for combatting climate change: it gives me a basis for being active even if the odds are against me. It means that I have a big, green light from God to act faithfully and it means I have hope even if I lose, because I've already won.

Tuesday, 16 September 2014

TUKES (The United Kingdom Except Scotland) Isn't Good Company.

I'm not very patriotic: to me that means I think the UK and Scotland will be worse off if The UK Excludes Scotland (TUKES). My lack of patriotism means I'm pro-union. Here are two personal reasons why:

  • Thanks to Scotland being in the UK it's easier for me to invest in Scotland. Case in point: I've invested thousands of pounds in the Boyndie Windfarm Cooperative, which means I effectively supply carbon-free energy to a number of houses in Scotland. With TUKES it would have been much harder for me to invest in Scottish wind farms and, by the same token, harder for Scots to invest in English wind farms.
  • My largest sales for FIGnition have been thanks to Strathclyde University, a Scottish university; through their investment in me, RS Components decided to take the risk and place an order with me too. It would have been much harder for them to do that if Scotland wasn't part of the UK.
So far who's winning? On balance, Scotland. The amount of money I've made out of Scotland is less than the amount Scotland's made out of me in this respect. And what if Scotland leaves the UK? Then I'll make more money from Scotland than it's likely to invest in me simply because trade won't be quite so open and my current investments have a long-term payback.

To my mind, being part of the UK is in one sense an unpatriotic and therefore good thing to do, because it means considering the interests of the community outside the country. To me, the whole Nationalist cause, whether it's the Front National in France, UKIP, or the SNP in Scotland, is an expression of the same trend when faced with the economic challenges post-2008: an inevitable trend to blame the issues on outsiders (the EU, immigrants or London) combined with hubristic jingoism. It doesn't follow. It's far more likely that the loss of Scotland would lead to greater economic hardship for both TUKES and Scotland.

For example, here's one scenario. If Scotland leaves the UK then it is likely that TUKES would shift in a more right-wing direction: England, Wales and N.I. would become more nationalistic, and UKIP (or TUKESIP as perhaps it should be called) would gain power, not least because there would be a loss of confidence in David Cameron, but also because the same trends that would have pushed Scotland into independence have been pushing the whole of the UK towards Nationalism.

This would raise the prospect of TUKES leaving the EU sooner, which would mean that Scotland (though newly independent) would also, at least temporarily, be leaving the EU. So Scotland's independence would make it more likely that the whole of the UK leaves the EU, and the loss of TUKES from the EU would help destabilise the EU itself (which isn't exactly looking solid right now).

The outgoing President of the European Council, Herman Van Rompuy, said a few years ago that "a rising tide of nationalism is the EU's biggest enemy... In every member state there are people who believe their country can survive alone in the globalised world. It's a lie... the time of the homogenous nation state is over."

The mistake is to think that independence means escaping from domination, but in reality the UK is interdependent and so the process is one of de-interdependence. That's what's really being voted on. It's the same issue as independence from the EU - though the ties are less strong and France/Germany have been the dominant partners, it will be bad for all concerned if we leave.

The mistake with Nationalism is to undervalue the level of interdependence. That's the problem with London, because it has an implicitly built-in Regionalism which under-values the interdependence of the rest of the UK. So a vote for Nationalism validates London's attitude, because it says "Hey, you got it right, we'll take it a step further." The rest of the UK feels the Regionalism and doesn't like it, but the solution isn't to go it alone and form a break-away Northumbria (though with 15 million people* it would have nearly three times the clout of Scotland).

The solution is to make London more accountable, to redistribute power across the Kingdom rather than validate the Regionalism. It means bringing in laws that cause concentrations of power within the UK to look further afield and that can only be done by increasing the level of interdependence.

And another reason why I don't want to lose Scotland? Because I really like Scottish people, and so in my mind, The United Kingdom Except Scotland just Isn't Good Company.

(This is the first blog of two on why I think Scottish Independence isn't a good thing, the next blog is coming soon!).

*Comprising the North West, the North East and Yorkshire & Humber = 14.9 million.

Saturday, 5 July 2014

BootJacker: The Amazing AVR Bootloader Hack!

There's an old adage that says if you don't know it's impossible you could end up achieving it. BootJacker is that kind of hack: a way for ordinary firmware on an AVR to reprogram its bootloader. It's something Atmel's manual for AVR microcontrollers with Bootloaders says is impossible (Note the italics):

27.3.1 Application Section.
The Application section is the section of the Flash that is used for storing the application code. The protection level for the Application section can be selected by the application Boot Lock bits (Boot Lock bits 0), see Table 27-2 on page 284. The Application section can never store any Boot Loader code since the SPM instruction is disabled when executed from the Application section.

Here's the background: I'm the designer of FIGnition, the definitive DIY 8-bit computer. It's not cobbled together from hackware from around the web; instead, three years of sweat and bare-metal development have gone into this tiny 8-bitter. I've been working on firmware version 1.0.0 for a few months: the culmination of my claim that I'd put audio data transfer on the machine (along with a fast, tiny floating-point library about 60% of the size of the AVR libc one).

Firmware 1.0.0 uses the space previously occupied by its 2Kb USB bootloader and so needs its own migration firmware image to copy the V1.0.0 firmware to external flash. The last stage is to reprogram the bootloader with a tiny 128-byte bootloader which reads the new image from external flash. Just as I got to that last stage I came across section 27.3.1, which let me know in no uncertain terms that I was wasting my time.

I sat around dumbstruck for a while ("How could I have not read that?") before wondering whether[1], crazy as it sounds, imagining a solution to the impossible might actually lead me there. And it turns out it does.

The solution is actually conceptually fairly simple. A bootloader, by its very nature, is designed to download new firmware to the device. Therefore it will contain at least one spm instruction. Because the spm configuration register must be written no more than 4 cycles before the spm instruction, there are very few sequences that occur in practice: just sts, spm or out, spm sequences. So all you need to do is find such a sequence in the bootloader section, set up the right registers and call it.

However, it turned out there was a major problem with that too. The V-USB self-programming bootloader's spm instructions aren't a neat little routine, but are inlined into the main code; so calling it would just cause the AVR to crash as it tried to execute the rest of the V-USB bootloader.

Nasty, but again there's a solution. By using a timer clocked at the CPU frequency (which is easy on an AVR), you can create a routine in assembler which sets up the registers for the Bootloader's out, spm sequence; calls it; and, just at the moment when the first cycle of the spm itself has executed, the timer interrupt goes off and the AVR jumps to your interrupt routine (in Application space). The interrupt routine pops the bootloader address and then returns to the previous code - which is the routine that set up the out, spm sequence. This works because when you apply spm instructions to the bootloader section, the CPU is halted until the operation is complete.

Here's the key part of BootJacker: the code uses the Bootloader's spm to first write a page of flash which itself contains a usable out, spm sequence, and then uses that routine to write the rest (because, of course, you might end up overwriting the bootloader with your own new bootloader!).

BootJacker involves cycle counting: I used a test routine to figure out the actual number of instructions executed after you set the timer for x cycles in the future (it's x-2). In addition I found one other oddity: erases and writes always have a 1-cycle latency after the spm in a bootloader. I fixed this with a nop instruction in my mini bootloader.

This algorithm, I think, is pretty amazing. It means that most bootloaders can in fact be overwritten using application firmware containing a version of BootJacker!

[1] As a Christian, I also have to 'fess up that I prayed about it too. Not some kind of desperation thing, but some pretty calm prayer, trusting it'll get sorted out :-)

Saturday, 31 May 2014

Gini Sim

The Gini Index is a measure of wealth distribution.

GiniSim is a simple JavaScript program which demonstrates, in simple terms, flaws in free market economics, by showing that trading freely will lead to gross inequality. Copy the program into a .html file, save it and then open it in a browser; you can stop it by pressing the Stop button. Alternatively, you can download and run a simple Java version, GiniSim.jar, from here.

Each bar is the wealth of a person, and the simulation starts with everyone having $10 (or £10, or 10€).

Each step simulates a free trade transaction: two monetary notes are picked at random and the person the first one belongs to pays the person the second one belongs to. Intuitively, you’d think this would average out: sometimes some people would win and sometimes others would.

In reality what happens is that whenever a person accumulates wealth, it becomes more likely that someone poorer will give money to them. This is because the chance of being paid in a transaction is proportional to a person's wealth - so if someone loses money in a transaction, they become less likely to gain in a future one.

How does this correspond to an idealised free market? It corresponds because exchanges take place on the basis of being indifferent towards cash. When people gain wealth, their wealth acts as a bigger footprint and people (who want to buy things) notice the cash more. If you’re poorer, that doesn’t happen; you’re not noticed, because your visible presence is the cash you hold - you literally disappear.

The important thing to note is that GiniSim demonstrates inequality without the agents involved behaving maliciously. All it does is play fairly towards cash (rather than people). It's a demonstration of a power law.

How does GiniSim1 not correspond to classical economics? GiniSim1 corresponds to free trade under mercantilism, which is a zero-sum economic theory.

Here’s GiniSim - copy everything in yellow:

<!DOCTYPE html>
<button onclick="clearInterval(timer)">Stop</button>
<canvas id="myCanvas" width="800" height="600" style="border:1px solid #c3c3c3;">
Your browser does not support the HTML5 canvas tag.
</canvas>
<script>
  var c=document.getElementById("myCanvas");
  var ctx=c.getContext("2d");
  var timer;
  var cash=[0],people=[0];
  var kStartCash=10;
  var kNumPeople=100;
  function initGini() {
    var ix,iy;
    for(ix=0;ix<kNumPeople;ix++) {
      people[ix]=kStartCash; // everyone starts with $10.
      for(iy=0;iy<kStartCash;iy++) {
        cash[ix*kStartCash+iy]=ix; // each $1 is owned by a person.
      }
    }
  }
  function incomeSwap(from,too) { // swap the identities of two people.
    var ix;
    for(ix=0;ix<kNumPeople*kStartCash;ix++) {
      if(cash[ix]==from)
        cash[ix]=too;
      else if(cash[ix]==too)
        cash[ix]=from;
    }
  }
  function exchange() {
    var aNote=Math.floor(Math.random()*kStartCash*kNumPeople);
    var aOwner=cash[aNote];
    var aNewNote=Math.floor(Math.random()*kStartCash*kNumPeople);
    var aNewOwner=cash[aNewNote];
    if(people[aOwner]>0 ) {
      // can't take cash from people who have nothing.
      people[aOwner]--;      // the first note's owner pays...
      people[aNewOwner]++;   // ...the second note's owner,
      cash[aNote]=aNewOwner; // and the note changes hands.
      // Keep people[] sorted by wealth (cosmetic, for a tidy display).
      while(aOwner>0 && people[aOwner]<people[aOwner-1]) {
        var t=people[aOwner]; people[aOwner]=people[aOwner-1]; people[aOwner-1]=t;
        incomeSwap(aOwner,aOwner-1); aOwner--;
      }
      while(aNewOwner<kNumPeople-1 && people[aNewOwner]>people[aNewOwner+1]) {
        var u=people[aNewOwner]; people[aNewOwner]=people[aNewOwner+1]; people[aNewOwner+1]=u;
        incomeSwap(aNewOwner,aNewOwner+1); aNewOwner++;
      }
    }
  }
  function drawImage() {
    var ix;
    ctx.clearRect(0,0,c.width,c.height);
    for(ix=0;ix<kNumPeople;ix++) {
      ctx.fillRect(ix*8,c.height-people[ix]*4,7,people[ix]*4);
    }
  }
  initGini();
  timer=setInterval(function(){exchange();drawImage();},10);
</script>

[Edit: Added link to GiniSim.jar on 20150321. Edit: I knew from the simulations that wealth always gravitates to the rich, but my reasoning for the mechanism was incorrect, because I was trying to derive it from the probabilities of a single transaction. In reality, the mechanism is due to how the probabilities change between one transaction and the next. 20150626.]

Tuesday, 1 April 2014

The Royal Fracking Society

This blog takes a slightly more provocative title to raise questions about the role of the UK's Royal Society with respect to hydraulic fracturing approvals.

It is estimated that there are 11,000 km³ of extractable methane gas reserves in the United Kingdom. In view of increasing questions about energy security worldwide, and the reported economic boom for shale gas/oil in the United States, the British government is keen to exploit these reserves as much as possible.

This gas will be extracted using a technique called Hydraulic Fracturing - or Fracking - which is proving controversial, not least because of concerns about water supplies being contaminated by fracking chemicals, but also because of health-related issues, questionable economics and geological issues such as induced earthquakes. As a result there have been widespread protests around the world, notably in the USA and more recently in Europe and here in the United Kingdom.

Large-scale Fracking in the UK is in its early stages and activists have been able to draw attention to the cause by protesting at the first few sites, most notably in Balcombe in West Sussex and Barton Moss in Manchester. However, the government has already approved over 600 Fracking permits and it will not be possible to protest at anything more than a small fraction of them.

Protests really work by massing public conscience against (or in favour of) a cause, rather than by physically forcing organisations to comply. The key thing is being able to raise a person's conscience enough to act. It is therefore politically important for the government to argue the case for Fracking: positively, by citing potential economic and employment benefits, energy security, cost and safety regulations; and negatively, by portraying protestors as disruptive or irrelevant. At the moment there is little consensus on fracking amongst the British public.

Today I came across this report on Fracking approvals at Barton Moss in Manchester and one paragraph particularly caught my attention:

This 3-D seismic will also fulfil UKOOG Shale Gas guidelines, the recommendations of the Royal Society and Royal Academy of Engineers and is a requirement of the Department of Energy and Climate Change consent process prior to any shale gas hydraulic fracturing and flow testing operations being undertaken.

It looks to me as though Fracking consent can only be granted following the recommendations of the Royal Society (amongst other organisations). Now, my thinking is that the Royal Society's brief is that it is only allowed to object to a fracking operation on scientific grounds, and the grounds it would be asked to review would be the operation's viability and potential safety.

However, because it is a scientific body, its responsibility is broader than a narrow remit given by the government; it can (and in fact has a duty to) object on any scientific ground if the scientific consensus warrants it. And, given that the consensus amongst (albeit climate) scientists is around 97% or more that the continued use of fossil fuels will be globally catastrophic, it seems to me that there are good grounds to make the case to the Royal Society itself and get them to act on the basis of their scientific conscience. Here it's important to state that the extra-remit grounds for objection would be carbon emissions, not health, economic or geological concerns. The objective basis is that we cannot burn more than a tiny fraction of fossil fuel reserves without causing dangerous, essentially irreversible climate change.

Without their approval the DECC can't currently give consent, and there's already a high degree of scientific consensus. I imagine that if they were persuaded to publicly object, the government would change the law so as not to require the Royal Society's recommendations, but that would have potentially severe negative consequences for British public opinion and would raise actual safety issues. Either way, a public objection would be a major step forward for eliminating Fracking in the UK.

[Note: additional links and labels to be added in a later update]

Tuesday, 18 March 2014

Caller Convention Calamities

Hello AVR people, let's talk about interrupts and the mess calling conventions have made of them!


Back in the early 1990s I had my first long-term job working at a place called Micro-Control Systems developing early solid-state media drivers. These were long ISA cards for PCs stuffed with either battery-backed Static RAM, EPROM or Intel Flash chips that gave you a gargantuan 3Mb per card, up to 12Mb of storage with 4 cards.

These cards were bootable (they emulated hard disks) and the firmware was written entirely in 16-bit 8086 assembler with a pure caller-save convention. The thinking behind caller-save conventions is that a subroutine doesn't save registers on entry; instead, the caller of a subroutine saves any registers it's using that are also used by the callee before making the call, restoring them as necessary later. Let's assume, for example, we have ProcTop, which calls ProcMid a few times, which calls ProcLeaf a few times, which doesn't call anything. Caller-save conventions aim to improve performance because leaf procedures, like ProcLeaf here, don't need to save registers.

However, I found that caller-saving led to a large number of hard-to-trace bugs. This happens because every time you change ProcLeaf you have the potential to use new registers, and this can affect the registers ProcMid needs to save, or potentially the registers ProcTop needs to save. But also, if you change ProcMid and use new registers, you might find you need to save them whenever you call ProcLeaf (if ProcLeaf uses them), as well as having to check ProcTop for conflicts.

This means you need to check an entire call tree whenever you change a subroutine, and if you need to save additional registers in ProcMid or ProcTop you might end up restructuring that code (which means more testing).

Nasty, nasty, nasty and all because a caller-save convention is used. In the assembler code I wrote (and still write), I use a pure callee-register saving convention. Ironically, caller-saving doesn't even save much performance because pushing and popping registers at the beginning and the end usually occupies only a small fraction of the time spent within a routine.

AVR Interrupts

GCC 'C' calling conventions use a mixture of caller-saving and callee-saving conventions. Most registers from r18 upwards are caller-saved; most of the rest are callee-saved. This, I think, is seen as a compromise between performance and code-density. I personally wouldn't use caller saving at all, even in a compiler, but for interrupts it's an absolute disaster on the AVR.

That's because every time you need to use a subroutine within an interrupt, the interrupt routine itself must then save absolutely every caller-saved register, just in case something in the interrupt's call tree uses them; because, of course, when dealing with interrupts the compiler can't make assumptions about what registers are safe to use. As a result, interrupt latency on an AVR shifts from being excellent (potentially as little as around 12 clock cycles, under 1µs at 20MHz) to three times as long (36 clock cycles, around 1.8µs at 20MHz).

This kind of nonsense isn't just reserved for AVR CPUs: the rather neat Cortex M0/M3 architectures save, as standard, 8x32-bit registers on entry to every interrupt for the same reason - to make it easy for compilers to target the Cortex M0 for real-time applications.

What I really want when I write interrupt routines is to have some control over performance degradation. I want additional registers to be saved only when they need to be, and only as many as are actually needed. In short, I want callee-saving, and avr-gcc (amongst its zillions of options) doesn't provide that.

For the up-and-coming FIGnition Firmware 1.0.0 I decided to create a tool which would do just that. You use it by first getting GCC to generate assembler code using a compile command such as:

avr-gcc -Wall -Os -DF_CPU=20000000 -mmcu=atmega168 -fno-inline -Iinc/ -x assembler-with-cpp -S InterruptSubroutineCode.c -o InterruptSubroutineCode.s

The interrupt code should be structured so that the top-level interrupt subroutine (called IntSubProc below) is listed last and the entire call-tree required by IntSubProc is contained within that file. Then you apply the command line tool:

$./IntWrap IntSubProc InterruptSubroutineCode.s src/InterruptSubroutineCodeModified.s

Where IntSubProc is the name of the interrupt subroutine that's called by your primary interrupt routine. The interrupt routine itself has an assembler call somewhere in it, e.g.:

asm volatile("call IntSubProc");

That way, GCC won't undermine your efforts by saving and restoring all the caller-saved registers.

IntWrap analyses the assembler code in InterruptSubroutineCode.s and works out which caller-saved registers actually need to be saved according to its call-tree. The analysis stops after the ret command for IntSubProc.

The current version of IntWrap is written using only the standard C library and is currently, I would say, Alpha quality. It works for FIGnition, the DIY 8-bit computer from nichemachines :-)

Download from Here.

How Does It Work?

IntWrap trawls through the assembler code looking for subroutines and determining which registers have been modified by the subroutine. The registers that need to be saved by IntSubProc are all the caller-saved registers that have been modified by IntSubProc's call tree, but haven't been saved. To make it work properly, IntWrap must eliminate registers that were saved mid-way down the call-tree. Consider: IntSubProc saves/restores r20 and calls ProcB which saves/restores r18, but modifies r19 and ProcB calls ProcC which modifies r18, r20 and r21. IntWrap should save/restore r19 because ProcB modifies it and should save/restore r21 because ProcC modifies it. But it doesn't need to save r18, because even though ProcC modified it, ProcB save/restored it.

The algorithm works by using bit masks for the registers. For every procedure, it marks which registers have been save/restored and which have been modified; the subroutine's modified registers are then modifiedRegs & ~saveRestoredRegs. Call instructions can be treated the same way as normal assembler instructions.

IntWrap avoids having to construct a proper call-tree graph by re-analyzing the code if it finds that it can't fully evaluate a call to a subroutine. In this way the modified register bitmasks bubble up through the call-tree with repeated analysis until it's all been solved.

Thursday, 30 January 2014

Rain, Rain Won't Go Away

A couple of weeks ago I thought the UK was starting to turn a corner in recognizing the possibility that our weather is being affected by climate change. Reporting connecting extreme weather to climate change had declined from 25% in 2009 to about 11% in 2012, despite the extensive floods we had that year.

2013 had century-level floods in Eastern Europe, India, China, Russia, Canada and Oregon, but we were largely spared. In October, however, we had the worst storm since 1987; followed by the worst storm surge in 60 years; followed by persistent flooding in Scotland and Southern England over December, along with a second storm surge that destroyed Aberystwyth's sea front and caused extensive damage elsewhere.

Since then parts of the country have had continual flooding, to the extent that by early January David Cameron was admitting this could be due to climate change; which was backed up by the Met Office, which called for attribution studies to prove it.

But then at the end of January it was suddenly all put down to not dredging rivers. If that's true, then failing to dredge the River Severn has led to Jet Stream blocking patterns and our wettest January on record.

So, I decided to take a look at Met Office rainfall anomaly images for both 2012 and the end of 2013. I'm picking selected months. Let's see them:
April 2012 vs 1961-1990 April 2012 vs 1981-2010
June 2012 vs 1961-1990 June 2012 vs 1981-2010
July 2012 vs 1961-1990 July 2012 vs 1981-2010
August 2012 vs 1961-1990 August 2012 vs 1981-2010
October 2012 vs 1961-1990 October 2012 vs 1981-2010
November 2012 vs 1961-1990 November 2012 vs 1981-2010
December 2012 vs 1961-1990 December 2012 vs 1981-2010
The above images are for 2012 and tell us some interesting things. Firstly, the three months April, June and July were exceptionally wet. You can see how blue the country is. Secondly, the comparison with 1961-1990 is almost always bluer than 1981-2010. This gives us an indication that the UK was wetter over these months in 1981-2010 compared with 1961-1990. That's because the corresponding months in 2012 are less wet when compared against the more recent range. Now let's look at the flooding in 2013:
October 2013 vs 1961-1990 October 2013 vs 1981-2010
November 2013 vs 1961-1990 November 2013 vs 1981-2010
December 2013 vs 1961-1990 December 2013 vs 1981-2010
Again, we see the same sorts of patterns. We can see how extremely wet October 2013 was (compared with October 2012). We can also see how the rainfall pattern was so much more damaging in December 2013 compared with 2012, even though December 2012 looks generally bluer. Finally, note that November has been getting wetter, since November 2013 is relatively drier compared against the 1981-2010 range than against 1961-1990, i.e. 1981-2010 was a wetter period.


These images could tell us a couple of important aspects about climate change in the UK:
  • It's generally getting wetter for certain months in the year since the range 1981-2010 is wetter than 1961-1990.
  • We've been seeing some pretty bad weather: all those blue regions tell us it really has been getting worse.
  • Flooding can't just be due to a lack of dredging in the river Severn, because we're looking at pictures of rainfall, not flooding, and these images easily explain why it's been so bad.


At the time of publication it wasn't possible to report the images for January 2014 as they hadn't been published by the Met Office. It is possible now. You can see the same trends are in effect: the anomaly for January 2014 is astonishing in both cases, but less so compared with the average rainfall over 1981 to 2010 (which implies that that period was a bit wetter than 1961 to 1990). In early March it should be possible to add the graphs for February rainfall (which won't be as extreme).

January 2014 vs 1961-1990 January 2014 vs 1981-2010

 Edit 2

And a day later the February 2014 data became available. Again, the same trends are evident. Firstly, rainfall for the month is extreme - in fact more extreme than January, and more extreme than I anticipated just yesterday, as it covers Northern Ireland and Great Britain with the exception of the east coast and the North West of England. Secondly, rainfall is less extreme relative to the 1981 to 2010 period, which means that that period was wetter. Truly astounding.

February 2014 vs 1961-1990 February 2014 vs 1981-2010