Tuesday, 18 March 2014

Caller Convention Calamities

Hello AVR people, let's talk about interrupts and the mess calling conventions have made of them!

Background

Back in the early 1990s I had my first long-term job, working at a place called Micro-Control Systems developing early solid-state media drivers. These were long ISA cards for PCs stuffed with either battery-backed Static RAM, EPROM or Intel Flash chips, giving you a gargantuan 3Mb per card, or up to 12Mb of storage with 4 cards.

These cards were bootable (they emulated hard disks) and the firmware was written entirely in 16-bit 8086 assembler with a pure caller-save convention. The thinking behind caller-save conventions is that a subroutine doesn't save registers on entry; instead, the caller saves any registers it's using that are also used by the callee before making the call, then restores them afterwards as necessary. Let's assume, for example, we have ProcTop, which calls ProcMid a few times, which calls ProcLeaf a few times, which doesn't call anything. Caller-save conventions aim to improve performance because leaf procedures, like ProcLeaf here, don't need to save registers at all.

However, I found that caller-saving led to a large number of hard-to-trace bugs. That's because every time you change ProcLeaf you may start using new registers, which can affect the registers ProcMid needs to save, or potentially the registers ProcTop needs to save. Equally, if you change ProcMid and use new registers, you might find you need to save them around every call to ProcLeaf (if ProcLeaf uses them), as well as having to check ProcTop for conflicts.

This means you need to check an entire call tree whenever you change a subroutine, and if you need to save additional registers in ProcMid or ProcTop you might end up restructuring that code too (which means more testing).

Nasty, nasty, nasty, and all because a caller-save convention is used. In the assembler code I wrote (and still write), I use a pure callee-save convention. Ironically, caller-saving doesn't even buy much performance, because pushing and popping registers at the beginning and end of a routine usually occupies only a small fraction of the time spent within it.

AVR Interrupts

avr-gcc's 'C' calling convention uses a mixture of caller-saving and callee-saving. Most registers from r18 upwards are caller-saved; most of the rest are callee-saved. This, I think, is seen as a compromise between performance and code density. I personally wouldn't use caller-saving at all, even in a compiler, but for interrupts it's an absolute disaster on the AVR.

That's because every time you call a subroutine from within an interrupt, the interrupt routine itself must save absolutely every caller-saved register, just in case anything in the interrupt's call tree uses them; when dealing with interrupts, the compiler can't make assumptions about which registers are safe to use. As a result, interrupt latency on an AVR shifts from being excellent (potentially as little as around 12 clock cycles, under 1µs at 20MHz) to three times as long (36 clock cycles, around 1.8µs at 20MHz).

This kind of nonsense isn't just reserved for AVR CPUs: the rather neat Cortex M0/M3 etc. architectures automatically stack 8x32-bit registers on entry to every interrupt for the same reason, to make it easy for compilers to target the Cortex M0 for real-time applications.

What I really want when I write interrupt routines is to have some control over performance degradation. I want additional registers to be saved only when they need to be, and only as many as are actually needed. In short, I want callee-saving, and avr-gcc (amongst its zillions of options) doesn't provide that.

For the forthcoming FIGnition Firmware 1.0.0 I decided to create a tool which does just that. You use it by first getting GCC to generate assembler code using a compile command such as:

avr-gcc -Wall -Os -DF_CPU=20000000 -mmcu=atmega168 -fno-inline -Iinc/ -x assembler-with-cpp -S InterruptSubroutineCode.c -o InterruptSubroutineCode.s

The interrupt code should be structured so that the top-level interrupt subroutine (called IntSubProc below) is listed last and the entire call-tree required by IntSubProc is contained within that file. Then you apply the command-line tool:

$./IntWrap IntSubProc InterruptSubroutineCode.s src/InterruptSubroutineCodeModified.s

Where IntSubProc is the name of the interrupt subroutine that's called by your primary interrupt routine. The interrupt routine itself has an assembler call somewhere in it, e.g.:

asm volatile("call IntSubProc");

That way, GCC won't undermine your efforts by saving and restoring all the caller-saved registers.
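To make that concrete, here's a minimal sketch of the kind of wrapper ISR I mean. This isn't the FIGnition firmware source, and TIMER1_COMPA_vect is just an assumed example vector; the point is simply that the call to IntSubProc is hidden inside inline assembler:

#include <avr/io.h>
#include <avr/interrupt.h>

/* Minimal sketch, not the actual FIGnition code: a thin ISR stub that
   hands straight off to the IntWrap-processed routine. */
ISR(TIMER1_COMPA_vect)
{
    /* Calling via inline asm rather than as a C function means avr-gcc
       doesn't assume IntSubProc clobbers every caller-saved register,
       so it won't push them all in the ISR prologue. The IntWrap-processed
       IntSubProc saves exactly the registers its call tree modifies. */
    asm volatile("call IntSubProc");
}

With a plain C call instead of the asm statement, avr-gcc would have to assume IntSubProc clobbers all the call-used registers, and the prologue balloons accordingly.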

IntWrap analyses the assembler code in InterruptSubroutineCode.s and works out which caller-saved registers actually need to be saved, according to the call-tree in that file. The analysis stops after the ret instruction for IntSubProc.

The current version of IntWrap is written using only the standard C library and is, I would say, Alpha quality. It works for FIGnition, the DIY 8-bit computer from nichemachines :-)



Download from Here.

How Does It Work?

IntWrap trawls through the assembler code looking for subroutines and determining which registers each one modifies. The registers that need to be saved by IntSubProc are all the caller-saved registers that are modified by IntSubProc's call tree but haven't already been saved. To make this work properly, IntWrap must eliminate registers that were saved midway down the call-tree. Consider: IntSubProc saves/restores r20 and calls ProcB; ProcB saves/restores r18 but modifies r19, and calls ProcC, which modifies r18, r20 and r21. IntWrap should save/restore r19 (because ProcB modifies it) and r21 (because ProcC modifies it). But it doesn't need to save r18, because even though ProcC modifies it, ProcB saves/restores it.

The algorithm works by using bit masks for the registers. For every procedure it marks which registers have been saved/restored and which have been modified; the subroutine's effective modified registers are then modifiedRegs & ~saveRestoredRegs. Call instructions can be treated the same way as normal assembler instructions.

IntWrap avoids having to construct a proper call-tree graph by re-analyzing the code if it finds that it can't fully evaluate a call to a subroutine. In this way the modified register bitmasks bubble up through the call-tree with repeated analysis until it's all been solved.
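For the curious, here's a heavily simplified sketch of that bitmask propagation in plain C. It is not the actual IntWrap source, just the core idea: each routine carries a mask of registers it saves/restores and a mask it modifies, a call folds in the callee's resolved mask, and the analysis repeats until nothing changes:

#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t saved;      /* registers this routine pushes and pops     */
    uint32_t modified;   /* registers written by its own instructions  */
    int callees[8];      /* indices of the routines it calls           */
    int nCallees;
} Proc;

/* Registers a routine exposes to its caller as clobbered. */
static uint32_t effectiveMask(const Proc *p, const uint32_t *resolved)
{
    uint32_t m = p->modified;
    for (int i = 0; i < p->nCallees; i++)
        m |= resolved[p->callees[i]];    /* callee clobbers bubble up   */
    return m & ~p->saved;                /* minus what it save/restores */
}

/* Iterate until the masks stop changing: the "repeated analysis". */
void solve(const Proc *procs, int n, uint32_t *resolved)
{
    memset(resolved, 0, (size_t)n * sizeof *resolved);
    for (int changed = 1; changed; ) {
        changed = 0;
        for (int i = 0; i < n; i++) {
            uint32_t m = effectiveMask(&procs[i], resolved);
            if (m != resolved[i]) { resolved[i] = m; changed = 1; }
        }
    }
}

Fed the r18..r21 example above, ProcC resolves to {r18,r20,r21}; ProcB resolves to ({r19}|{r18,r20,r21}) & ~{r18} = {r19,r20,r21}; and IntSubProc, which saves r20 itself, resolves to {r19,r21}: exactly the registers IntWrap has to wrap with push/pop.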


Thursday, 30 January 2014

Rain, Rain Won't Go Away

A couple of weeks ago I thought the UK was starting to turn a corner in recognizing the possibility that our weather is being affected by climate change. The connection made between climate change and extreme weather in media reporting had declined from about 25% in 2009 to about 11% in 2012, despite the extensive floods we had that year.

2013 had century-level floods in Eastern Europe, India, China, Russia, Canada and Oregon, but we were largely spared. In October, however, we had the worst storm since 1987; followed by the worst storm surge in 60 years; followed by persistent flooding in Scotland and Southern England over December, along with a second storm surge that destroyed Aberystwyth's sea front and caused extensive damage elsewhere.

Since then parts of the country have had continual flooding, to the extent that by early January David Cameron was admitting this could be due to climate change; that was backed up by the Met Office, which called for attribution studies to prove it.

But then at the end of January it was suddenly all put down to not dredging rivers. If that's true, then failing to dredge the River Severn has led to Jet Stream blocking patterns and our wettest January on record.

So, I decided to take a look at Met Office rainfall anomaly images for both 2012 and the end of 2013, picking selected months. Let's see them:
April 2012 vs 1961-1990 April 2012 vs 1981-2010
June 2012 vs 1961-1990 June 2012 vs 1981-2010
July 2012 vs 1961-1990 July 2012 vs 1981-2010
August 2012 vs 1961-1990 August 2012 vs 1981-2010
October 2012 vs 1961-1990 October 2012 vs 1981-2010
November 2012 vs 1961-1990 November 2012 vs 1981-2010
December 2012 vs 1961-1990 December 2012 vs 1981-2010
The above images are for 2012 and tell us some interesting things. Firstly, the three months April, June and July were exceptionally wet. You can see how blue the country is. Secondly, the comparison with 1961-1990 is almost always bluer than 1981-2010. This gives us an indication that the UK was wetter over these months in 1981-2010 compared with 1961-1990. That's because the corresponding months in 2012 are less wet when compared against the more recent range. Now let's look at the flooding in 2013:
October 2013 vs 1961-1990 October 2013 vs 1981-2010
November 2013 vs 1961-1990 November 2013 vs 1981-2010
December 2013 vs 1961-1990 December 2013 vs 1981-2010
Again, we see the same sorts of patterns. We can see how extremely wet October 2013 was (compared with October 2012). We can also see how much more damaging the rainfall pattern was in December 2013 compared with 2012, even though December 2012 looks generally bluer. Finally, note that November has been getting wetter, since November 2013 is relatively drier compared against the 1981-2010 range than against 1961-1990, i.e. 1981-2010 was a wetter period.

Conclusion.

These images could tell us a couple of important aspects about climate change in the UK:
  • It's generally getting wetter for certain months in the year since the range 1981-2010 is wetter than 1961-1990.
  • We've been seeing some pretty bad weather: all those blue regions tell us it really has been getting worse.
  • Flooding can't just be due to a lack of dredging in the river Severn, because we're looking at pictures of rainfall, not flooding, and these images easily explain why it's been so bad.

Edit

At the time of publication it wasn't possible to show the images for January 2014 as they hadn't been published by the Met Office. It is possible now. You can see the same trends are in effect: the anomaly for January 2014 is astonishing in both cases, but less so compared with the average rainfall over 1981 to 2010 (which implies that that period was a bit wetter than 1961 to 1990). In early March it should be possible to add the graphs for February rainfall (which won't be as extreme).

January 2014 vs 1961-1990 January 2014 vs 1981-2010

 Edit 2

And a day later the February 2014 data became available. Again, the same trends are evident. Firstly, rainfall for the month is extreme - in fact more extreme than January and more extreme than I anticipated just yesterday, as it covers Northern Ireland and Great Britain with the exception of the east coast and the North West of England. Secondly, rainfall is less extreme relative to the 1981 to 2010 period, which means that that period is wetter. Truly astounding.


February 2014 vs 1961-1990 February 2014 vs 1981-2010

Monday, 30 December 2013

Slowtake QuickTake!


A Digital Preservation Story.

About 7 years ago a Manchester friend, Sam Rees, gave me an Apple QuickTake 150, one of the earliest color digital cameras, dating from around 1995. He didn't have the right drivers, so I've never known whether it works or whether it's just junk. A few months ago I tracked down the drivers on the Macintosh Garden website, so yay, in theory I could test it!

But obtaining the drivers is only a small part of the problem. The QuickTake only works with Macs from before 1998, and even if you have one, you have to find compatible media to transfer the downloaded drivers in the right data format. All this is challenging. The download itself comes as a .sit (StuffIt) file, which modern Macs don't support as standard. When you decompress it you find that the actual software and drivers inside are disk image files, but not in a disk image format understood by the older Mac I have (a Mac running Mac OS 9 could work, but my LCII only runs up to Mac OS 7.5.3).

In the end I used a 2002 iMac to decompress the .sit, because at least that was possible. The plan was to connect a USB Zip 250 drive to the iMac, copy the images to a Zip 100 disk, then use a SCSI Zip 100 drive on the LCII to load in the drivers.

However, I couldn't convert the floppy disk images to Disk Copy 4.2 format for my LCII, so I took a chance that simply saving the files in each floppy disk image as a set of folders might work.

Even getting an old circa 1993 Macintosh to work is a challenge. I'm fortunate in that I have an old, but still working SCSI Hard Disk. But, I still needed a special Mac to VGA monitor adapter to see its output (which I connected to a small LCD TV) and still had to spend some time hunting down the right kind of SCSI cable (D-type to D-type rather than D-type to Centronics) to hook up the Zip 100 drive.

After all this & the 30 minutes it took to install all the QuickTake software (yes, just putting all the files in folders worked!) I was finally able to test it (no manuals, had to guess) and with a bit of fiddling was able to load wonderful fixed-focus VGA images from the camera in mere seconds (each image approx 60Kb). Opening and decompressing them took about 90s each on my slow LCII though!

Here's a picture of my family and our cats taken with the QuickTake 150 on December 28, 2013. I used the 10s timer mode to take the photo, with the camera balanced on a book on an armchair - so apologies for the framing :-)
 


As you can see, the clarity of the image is actually pretty good. The native image required roughly 64Kb; given that an uncompressed 24-bit VGA image is 640x480x3 bytes, roughly 900Kb, the QuickTake camera must have compressed images by about 14x.

When viewed on the LCII, the images appeared rather speckled due to the PhotoFlash software implementing a fairly crude dithering algorithm (simulated here using GIMP).

Thus ends a 7-year quest to test an Apple QuickTake 150 digital camera. Thanks Sam!

Tuesday, 10 December 2013

Z80 Dhrystones

In the early 80s, my schoolmate David Allery's dad's workplace had a pdp-11/34, a minicomputer designed in the 1970s. All the reports at the time implied that a pdp-11 anything had absolutely awesome performance compared with the humble 8-bit computers of our day.

Yet decades later, when you look at the actual performance figures of a pdp-11/34, it seems pretty unremarkable on paper. You can download the pdp-11 handbook from 1979, which covers it.

First, a brief introduction to computer processors. The CPU is the part that executes the commands that make up programs; I'll assume you understand something of early-80s BASIC. A CPU executes code by reading in a series of numbers from memory, each of which it looks up and translates into switching operations that perform relatively simple instructions. These instructions are at the level of regn=PEEK(addr), POKE(addr,regn), GOTO/GOSUB addr, RETURN; regn = regn+/-/*/divide/and/or/xor/shift regm; compare regn,regm/number. And not much else.

The pdp-11 was a family of uniform 16-bit computers with 8x16-bit registers, 16-bit instructions and a 16-bit (64Kb) address space (though the 11/34 had built-in bank switching to extend it to 18 bits). The "/number" suffix refers to the particular model.

On the pdp-11/34, an add rn,rm took 2µs; add rn,literalNumber took 3.33µs and an add rn,PEEK(addr) took 5.8µs. Branches took 2.2µs and Subroutines+Return took 3.3µs+3.3µs.
That's not too different to the Z80 in a ZX Spectrum, which can perform a (16-bit) add in 3µs; load literal then add in 6µs, load address then add in 7.7µs; Branch in 3.4µs and subroutine/return in 4.3µs+2.8µs.

So, let's check this.

A 'classic' and simple benchmarking test is the Dhrystone test, a simple synthetic benchmark written in 'C'. A VAX 11/780 was defined as having 1 dhrystone MIP (1757 dhrystones per second) and other computers are rated relative to that.

If you do a search, you'll find the pdp-11/34 managed 0.25 dhrystone MIPs. To compare it with a ZX Spectrum I used a modern Z80 'C' compiler, SDCC; compiled a modern version of dhrystone; and then ran it on a Z80 emulator. I had to modify the function declarations a little to get it to compile as an ANSI 'C' program, but once it did I was able to ascertain that it could run 1000 dhrystones in 13,959,168 TStates.

The result was that if the emulator was running at 3.5MHz, it would execute 0.142 dhrystone MIPs, or about 57% of the speed of a pdp-11/34. Of course a more modern pdp-11 compiler might generate a better result for the pdp-11, but at least these results largely correlate with my sense that the /34 isn't that much faster :-) !
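As a sanity check on that figure, here's the arithmetic spelt out. The only inputs are the 3.5MHz clock, the 13,959,168 TStates measured above and the usual 1757 dhrystones-per-second reference figure for the VAX 11/780:

#include <stdio.h>

/* Back-of-envelope check of the Z80 Dhrystone figure quoted above. */
int main(void)
{
    const double clockHz   = 3.5e6;       /* ZX Spectrum-ish Z80 clock      */
    const double tstates   = 13959168.0;  /* measured for 1000 dhrystones   */
    const double loops     = 1000.0;
    const double vaxPerSec = 1757.0;      /* VAX 11/780 = 1 dhrystone MIPS  */

    double seconds   = tstates / clockHz;    /* ~3.99s for the 1000 loops   */
    double perSecond = loops / seconds;      /* ~250 dhrystones per second  */
    double dmips     = perSecond / vaxPerSec;

    printf("%.1f dhrystones/s = %.2f DMIPS (%.0f%% of a 0.25 DMIPS pdp-11/34)\n",
           perSecond, dmips, 100.0 * dmips / 0.25);
    return 0;
}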

Compiling SDCC Dhrystone

SDCC supports a default 64Kb RAM Z80 target, basically a Z80 attached to some RAM. I could compile Dhrystone 2.0 with this command line:

/Developer/sdcc/bin/sdcc -mz80 --opt-code-speed -DNOSTRUCTASSIGN -DREG=register dhry.c -o dhry.hex

The object file is in an Intel Hex format, so I had to convert it to a binary format first (using an AVR tool):

avr-objcopy -I ihex dhry.hex -O binary

SDCC also provides a z80 ucSim simulator, but unfortunately it's not cycle-accurate (every instruction executes in one 'cycle'). So, I wrote a simulated environment for libz80, which turned out to be quite easy. I used the following command line to run the test:

./rawz80 dhry.bin 0x47d


The command line simply provides the binary file and a breakpoint address. The total number of TStates is listed at the end.
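If you want to reproduce the harness, it really is only a few dozen lines. Here's a minimal sketch of the idea rather than the actual rawz80 source; the Z80Context fields and the Z80RESET/Z80Execute calls follow my reading of libz80's z80.h, so treat the exact names as assumptions and check the header in your copy:

#include <stdio.h>
#include <stdlib.h>
#include "z80.h"                       /* libz80 */

static byte mem[0x10000];              /* a flat 64Kb RAM */

static byte memRd(int param, ushort addr)            { (void)param; return mem[addr]; }
static void memWr(int param, ushort addr, byte data) { (void)param; mem[addr] = data; }
static byte ioRd (int param, ushort addr)            { (void)param; (void)addr; return 0xff; }
static void ioWr (int param, ushort addr, byte data) { (void)param; (void)addr; (void)data; }

int main(int argc, char *argv[])
{
    if (argc < 3) { fprintf(stderr, "usage: rawz80 image.bin breakpoint\n"); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror(argv[1]); return 1; }
    if (fread(mem, 1, sizeof mem, f) == 0) { fprintf(stderr, "empty image\n"); return 1; }
    fclose(f);                                    /* binary is loaded at address 0 */

    unsigned breakpoint = (unsigned)strtoul(argv[2], NULL, 0);   /* e.g. 0x47d */

    Z80Context cpu = {0};
    cpu.memRead = memRd;  cpu.memWrite = memWr;   /* assumed callback field names */
    cpu.ioRead  = ioRd;   cpu.ioWrite  = ioWr;
    Z80RESET(&cpu);                               /* start executing at address 0 */

    while (cpu.PC != breakpoint)                  /* run until the breakpoint     */
        Z80Execute(&cpu);                         /* one instruction per call     */

    printf("TStates: %u\n", cpu.tstates);         /* cycle count kept by libz80   */
    return 0;
}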

The entire source code is available from the Libby8 Google site (where you can also find out about the FIGnition DIY 8-bit computer).

So Why Did People Feel The Pdp-11 Was So Fast Then?


By rights the pdp-11 shouldn't have been fast at all.
  1. The pdp-11 was typical for the minicomputer generation: the CPU (and everything else) was built from chains of simple, standard TTL logic chips, which weren't very fast.
  2. It was designed for magnetic core memory and that was slow, with a complete read-write cycle taking around 1µs.
  3. It was part of DEC's trend towards more sophisticated processors, which took a lot of logic and slowed it down further.
But (3) is also what made it a winner. Its sophistication meant that it was a joy to program and to develop high-quality programming tools for. That's probably a good reason why both the language 'C' and the Unix OS started out on a pdp-11.

By contrast, although early 8-bit microprocessors were built from custom logic and faster semiconductor memory, the sophistication of the CPUs was limited by the fabrication technology of the day. So, a Z80 had only 7000 or so transistors and an architecture geared for assembler programming rather than compiled languages.

And there's one other reason. The pdp-11 supported a fairly fast floating-point processor and could execute, for example, a floating point multiply in typically 5.5µs, something a Z80 certainly can't compete with.

Monday, 24 June 2013

Calgary Flooding Podcast Transcript

Hi folks,

I had just finished a facebook status on India's recent 60-year flooding event (just a fortnight or so after Central Europe's multi-century flooding event) when I discovered that 70,000 people had been evacuated from Calgary because of yet another flooding event.

Most media reports (but not this one) are bending over backwards to play down the connection, but there's an awful lot of 'freak' weather going on these days. Bob Sanford's interview on Anna Maria Tremonti's podcast about Calgary's flooding just a few days ago was a superb description of the current state of climate science and extreme weather events.

It's so informative, I thought I should provide a transcript.

Anna Maria Tremonti: "Well Bob Sanford lives in Canmore too, but he's in Winnipeg this morning. He's been trying to make sense of these and other severe floods and he's come to a disturbing conclusion. Bob Sanford is the chair of the Canadian Partnership Initiative of the United Nations Water For Life Decade and the author of 'Cold Matters - The State and Fate of Canada's Fresh Waters.' Good morning!"

Bob Sanford: " Good morning Anna."

Anna: "Well, what do you make of what we're seeing across Southern Alberta this morning?"

Bob Sanford: "Well, to scientists working in the domain of climate effects on water this is, really the worst of all possible outcomes. We built on flood plains because we thought we had relatively stable climate, the climate that we've experienced over the past century. We thought it would stay the same. We also thought that we had a good grasp of how variable we could expect climatic conditions to be based on what we've experienced in the past century.

And now we've discovered that neither assumption was correct. We do not have adequate means to protect development on flood plains; climatic conditions are more variable than we thought and that variability is increasing as climate changes and we've also discovered that our hydrologic conditions are changing."

Anna: "So what do floods like this tell us about what's happening with our water cycles?"

Bob Sanford: "Well if we put all of the data together they tell us that warming temperatures are altering the form that water takes and where it goes in the hydrosphere. Evidence that increasing temperatures are accelerating the manner and rate at which water is moving through the hydrological cycle is now widely enough available to allow us to connect the dots with respect to what's happening in Canada. So let's start very briefly in the Canadian Arctic. In the North and throughout much of the Canadian Boreal, water that's been trapped as ice in the form of glaciers and as permanent snow pack and permafrost is, is in decline. And the same sort of thing is visibly evident in Canada's Western mountains. There's now evidence that we've lost as many as 300 glaciers in the Canadian Rockies alone between 1920 and 2005. And the same thing that's causing our glaciers to disappear is (in combination with landscape change) changing precipitation patterns on the great plains.

And the same warming is causing water left on the land after the last glaciation in the great lakes region to evaporate. So, you might well ask 'where's all this water going?' And one of the places it's going is into the atmosphere where it becomes available to fuel more frequent and intense extreme weather events such as the one that you had in Toronto in 2005 that caused $700m [Canadian] worth of flood damage to infrastructure, roads and homes. And you may remember that, in that year, that Calgary just dodged the same kind of bullet - well - not this time. And what we're seeing here is that rising temperatures and the increasing concentration of atmospheric vapour are making what were once predictable, natural events, much worse and what we've discovered is that the atmosphere holds about 7% more water vapour for each degree celsius temperature increase.

And what this tells us is that the old math and the old methods of flood prediction and protection won't work any more. And until we find a new way of substantiating appropriate action in the absence of this hydrologic stability, flood risks are going to be increasingly difficult to predict or to price, not just in Calgary or Canmore, but everywhere."

Anna: " So you're saying, then that there's more condensation in the air. Warm air can hang on to water longer and then - burst when it hits somewhere that can no longer hang on to it?"

Bob Sanford: "Well, warmer atmosphere is more turbulent and it carries more water vapour. And we're seeing that happening widely. We're also seeing in North America disruption in the Jet Stream which is allowing climatic events to cluster and remain in places for longer periods of time, resulting in more intensive floods and droughts. And we're seeing this as a result of the general warming in the atmosphere."

Anna: "And you've said that this is because of Climate Change. How do we know that this isn't just a fluke, an outlier?"

Bob Sanford: "Well, we know that Classius Clapeyron relation is one of the standard logarithms, or algorithms that we use in Climate Science. And we know that as the temperature increases we know what we can expect in terms of water vapour increases in the atmosphere and we're beginning to see some very interesting phenomenon associated with this. Things like atmospheric rivers. Great courses of water vapour aloft that can carry between 7 and 15 times the daily flow of the Mississippi and when these touch ground or are confronted by cooler temperatures that water precipitates out and what we see is huge storms of long duration and the potential for much greater flooding events."

Anna: "So, what you're saying is this part of a blot or pattern across North America."

Bob Sanford: "Well, unfortunately, this may be the new normal. I regret to say that everything we know about how climate affects the hydrologic cycle supports or suggests that events like this are likely to be more common. And the insurance industry has already warned us of a trend towards more intense and longer duration storms that cause more damage especially in areas of population concentration. And this is certainly what we're seeing in the Calgary Area."

Anna: "What are you hearing from people you know in Canmore?"

Bob Sanford: "Well, there's a great deal of concern about how long this event is going to last and, well we heard from residents there this morning on your show, there is a deep concern about how much damage there has been done to very expensive infrastructure, roads and bridges. So we're going to have to wait until the storm is over to determine exactly the extent of those damages."

Anna: "What should we be doing to address the situation you're describing?"

Bob Sanford: "Well, I think that it's important to recognise that the loss of hydrologic stability is a societal game-changer. It's already causing a great deal of human misery widely. So we're going to have to replace vulnerable infrastructure across the country with new systems designed to handle greater extremes and this is going to be very costly. We're also going to have to invest more in the science so that we can improve our flood predictions."

Anna: "As you look at what's unfolding across Southern Alberta - not surprising to you? Surprising? The residents there certainly are saying it was completely unexpected."

Bob Sanford: "Well, I don't know if it was entirely unexpected. We know that there's great variability in our climate naturally. But we also know that some of these influences are affecting the frequency of these storm events. And researchers at University of Saskatchewan's Kananaskis research centre  have predicted already that events of this sort will be more common.

No-one likes to be right on such matters, but it appears that these are going to be events that we're going to see more frequently in the future."

Anna: "Um-hmmm, that's a rather grim forecast. No pun intended."

Bob Sanford: "It is grim, but I think that if we accept what we see happening right in front of our very eyes is real then we can begin to adapt and begin to rethink about how we situate our homes and our infrastructure and flood plains. We can begin to think about how we're going to adapt to more extreme weather events it's not certainly outside of the domain of human possibility to do so and we should be acting toward that direction."

Anna: "Well Bob, good to talk to you. Thanks for your time this morning."

Bob Sanford: "Thank you."

Thursday, 28 February 2013

Raspberry Pi Plum Pulling

I'm disappointed that Eben Upton, a Technical Director at Broadcom, which makes the patent-protected, closed-source BCM2835 chipset inside the Raspberry PI, has chosen to use his product to promote free-trade ideology in developing countries.
"..A less positive experience has been the impact of state-monopoly postal services and punitive tariffs - often as high as 100% - on availability in markets such as Brazil.
There, a $35 (£23) Pi will currently cost you the best part of $100 (£66).
I believe that these measures, aimed at fostering local manufacturing, risk holding back the emergence of a modern knowledge economy in these countries."

I've just been reading 23 Things They Don't Tell You About Capitalism, by Ha-Joon Chang (a Korean economist based in Cambridge, like Eben... perhaps they should meet up in a pub sometime?). One of his key points is how rich countries got rich by creating tariffs, which they then expect poorer countries to forgo under the pretense that it'll improve poor economies.

The problem is that it's one rule for us, and another rule for them. Broadcom, the BCM2835 and, to an extent, the Raspberry PI are products of this; the arrangement creates low barriers to entry for wealthier entities like Broadcom and high barriers to entry for less wealthy entities, like Brazil or ... me.

For example, the Raspberry PI (which I believe is a great idea) is roughly the same price as a FIGnition, in effect, about £25 compared with £20 for my product. Yet Raspberry PI is about 10,000 times more powerful (and complex) than FIGnition. That's because firstly, the Raspberry PI (Model B) is built in China (which reduces costs). 

Secondly, it's massively mass-produced (which reduces costs).

Thirdly, they have the sophisticated tools to both design the chipset and the PCBs - a hobbyist can't build one at all even if given the components, a perfectly steady hand and a microscope (an infinite barrier for a hobbyist).

Fourthly, they have access to all the design documents - Broadcom don't even publish them for the general public (another infinite barrier for hobbyists).

Fifthly, they have volume access to suppliers, at a much reduced (secret) cost, which the general public (like me) don't have access to.

So you can see, there are barriers I face that others don't. But this is OK for me - I wouldn't produce a Raspberry PI instead of a FIGnition anyway, because I want people to build computers that can be understood - and that means building rudimentary computers that can be reproduced by the general public. That's because I believe that in the same way we teach kids to read using phonics rather than War And Peace and we don't believe that books have superseded the alphabet; if we want kids to master technology, they really need to start at the basics.

Similar barriers, though, apply to developing countries. There are inbuilt barriers to them, starting with, for example, the fact that the brightest people in developing countries end up in places like Cambridge. Developing countries therefore often want to protect their own markets; for example, Ghana likes to grow and eat its own chickens, but instead Europe dumps our excess chickens on them, so they can't develop their own market.

It's unfair. But it's doubly unfair for a protectionist company like Broadcom, which makes use of cheap Chinese labour, to force open markets in developing countries for its own benefit. Surely, surely, if Broadcom really did want Brazil to have lots of cheap Raspberry PIs, they'd simply give the information to Brazil to procure their own production runs - ship the raw chips to factories in Brazil and let them make their own PIs. There would still be the chip trade barrier, but not for the added value of board production and assembly.

Problem solved - and the PI would be used for its intended purpose, not for sticking in thumbs and pulling out plums. Please Eben, you're a nice guy (as I recall from the BBC@30 event); don't use the PI as a free-market political tool :-)

Monday, 11 February 2013

The Big Wedgie

This is just a short blog about an interesting development in the Arctic. I keep a frequent lookout on websites such as NSIDC's Arctic Sea Ice News and importantly Neven's Sea Ice Blog.

One of the big predictions for forthcoming years is the collapse of the Arctic ice cap, which may happen in just a few years. This graph makes it quite clear what will happen:
It's a scary graph and implies that we'll have a September minimum of 0Km3 as early as 2015; an August-to-October minimum of 0Km3 as early as 2016; and a July minimum of 0Km3 as early as 2017. That is, a rapid collapse of the Arctic sea ice. However, there are wide error bars, so future predictions should be treated cautiously.

In the Arctic much of the ice disappears every year (first-year ice), but some remains (multi-year ice). The resilience of Arctic sea ice depends upon multi-year ice, because it's thicker. Most of the multi-year ice was lost in 2007 and it has progressively depleted since then.


As you can see, most of the remaining multi-year ice (about 20% of ice >=4 years old) clings to the north coast of Greenland and the islands north of Canada, and the thinking is that any ice that clings on beyond 2016 or so will be there. This might not happen. Here's why. Regular Arctic ice is being churned out through the Fram Strait, the sea between Greenland and Svalbard, thanks to Arctic ocean currents that head up north round from the Atlantic (the same currents that give the UK warm weather). You can see it here:

It's thought by some Arctic observers that the multi-year ice is held in place at the top of Greenland by what's called the Wedge. This may get swept out through the Fram Strait in just a couple of days. It'd be the big Wedgie for the Arctic and would have serious consequences for the remaining multi-year ice, and for whether the Arctic sea ice will in fact trend to around 1MKm3 or to nothing at all. Here's a model of the process:

The reason it can get swept out now is that the ice is so thin elsewhere in the Arctic: on average it's just over 1 metre thick. With thinner ice, sea currents have more opportunity to influence the Arctic sea ice, and in recent weeks observers have noticed a number of cracks appearing in it, earlier than would be expected (large cracks do occur in the ice, just not normally this early, here):
(It's a false-color image so you can see the contrast more easily, Greenland is bottom right, cracks are shown white against orange).
Of course, it might not actually happen - I'll post a comment in a few days if it does!