Sunday 27 December 2015

Capitalising the Cumbrian Floods

Naomi Klein's book The Shock Doctrine (2007) makes the insightful, but surprising, claim that free-market capitalists use economic and climate disasters to push forward neoliberal ideological policies. This is called #disastercapitalism.

The idea is this: people normally resist free-market solutions being imposed on all sorts of things they culturally value. For example, the NHS in the United Kingdom is so deeply valued that an amateur choir can achieve a Christmas number 1 in the pop charts by releasing a song that defends it.

What Disaster Capitalism does, though, is use the dislocation caused by a traumatic event to railroad through purported solutions, injecting free-market answers while people are still reeling in confusion from the event. In the case of climate-change-related events, you would think it would be the other way around: that people would rapidly form the association between global warming and the current event, and make strong calls for the government to turn away from fossil fuels more rapidly.

Surprisingly, although some people do this, by and large no such response occurs. Instead, people are preoccupied with the immediate need for relief and consequently suppress or ignore statements or opinions that relate the event to climate change. For example, after the Hebden Bridge flooding of 2012, the local community website put up a page talking about "what work has been done since last Summer, and what’s also planned in the medium and long terms". There's no mention of climate anything. You can skip to the forum too. In the 2012 forum topic "Floods Practical Solutions", of the 19 posts there's one reference to climate ("Has anyone mentioned Treesponsibility? The group was set up to leessen (sic) the effects of climate change in the Valley by planting trees"). Another flood-related topic is no longer on the server. Of the other topics: "Floods Sandbags" has no references; "Floods, The Politics" (20 refs) has a handful: "Climate change certainly seems to be contributing to the problem, but it will be hundreds of years, if at all, before we will see the results of any effort we make to counter climate change." (i.e. let's not think about it right now); "Every Councillor and every senior Council Officer in Calderdale needs to read the Pitt Review of the 2007 floods and act on the recommendations. They should also understand where Climate Change is leading us." (written by the Green Party candidate and brushed aside in the next post with "I'm not sure that right now is the time to be pushing 'I told you so' stuff"); and "Only yesterday a report was mentioned on the BBC from the Committee on Climate Change that warns of more of these downpours in the future.". Another Green Party comment, "Our own Government and its agencies have to stop talking about flood threats in terms of one in 30 or 100 years. With climate change it could happen next month again", was countered in the next post and in a couple of later posts: "As I said, there is no evidence to link the recent wet weather with man made climate change and for that reason the Green Party and other opportunists should not use terrible events such as the recent floods to progress their agendas."

Given that people are cautious about making any link, particularly during the events themselves, it becomes relatively easy for the government to exploit the situation. And indeed this is just what it has done. In response to questions about the cuts it had made to the Environment Agency prior to the Cumbrian floods in early December 2015, the government offered Council Tax and business tax cuts as relief.

Doesn't this strike anyone as being rather odd? As if anyone's need for flood relief were somehow related to how much they pay in council tax? Isn't it also likely that homes built closer to flood plains are in lower council tax bands, and therefore there's less to claim back even though they're more likely to be flooded? And how are a business's profits related to the amount of damage it would suffer? Isn't it likely that a business making more money would find it easier to cope with the cost anyway?

The only common factors in all of this are that:
  1. The 'flooding relief' potentially has greater benefits for Cumbrians who are better off than for those who aren't.
  2. The relief needs to be applied from across the country to those in need, not localised to the area in question. How is a community's ability to fund its own relief related to the extent of the disaster?
  3. Business taxes (and possibly Council taxes) pay for things like welfare, so in a sense the flooding disaster is being used to make cutbacks in welfare: the rich across the country aren't paying for the flooding relief, while the poor in the UK will have less money as a consequence.
And guess what: Liz Truss is planning to apply the same approach to the flooding in the North of England again. They have no idea whether it works, but it's already policy.
“Of course, you’ve asked about funding and we’re looking at schemes similar to what we put in place in Cumbria to make sure families and businesses are supported,” said Truss.
 Disaster Capitalism to a tee.

Saturday 7 November 2015

Parallel CRC16 Collection 3

There are so many processors to write CRC16 algorithms for, and I feel like I'm on a roll!

In this section we look at some other popular processors: the MSP430 (another RISC-but-in-many-ways-not MCU); the venerable 8051; and the current king of MCUs, the ARM (Cortex M0); plus a bonus AVR version at the end.

The MSP430 is like a stripped-down pdp-11 with 16 x 16-bit registers and single-cycle instructions. But it's not great for computing 16-bit CRCs. This routine weighs in at 58 bytes and 28 cycles per byte, making it the same length as, and in relative terms about the same speed as, the H8 version. Only the 6502 is longer! The MSP430 has some features going for it, such as its ability to swap bytes and shift whole words in a single cycle, but the reason it's not as fast as a PIC or AVR is primarily because the MSP430 has no nybble operations, and because sometimes its byte operations (which modify the whole word of the destination register, by zero-extending the result) get in the way. The MSP430's straightforward <<5 is faster than the rotate-right-3-times technique used on some 8-bit CPUs, because it doesn't need to save an intermediate and apply mask operations. But the neatest trick here is the >>4. Because byte moves zero-extend, copying the CRC low byte into r7 (mov.b r5,r7) clears the upper byte, so '0' bits get shifted into bits 4..7 of r7; zero-extending r7 again keeps the upper byte clear, and so we get the byte shift in 1 cycle less than applying a mask.
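For reference, this is the byte-parallel CRC16-CCITT update that all of these routines implement; a minimal sketch in C (assuming the usual CCITT/XModem polynomial 0x1021):

#include <stdint.h>
#include <stddef.h>

/* Byte-parallel CRC16-CCITT update: one pass per byte, no table. */
uint16_t crc16_bytewise(const uint8_t *data, size_t len, uint16_t crc)
{
    while (len--) {
        crc = (uint16_t)((crc >> 8) | (crc << 8)); /* swap the CRC bytes */
        crc ^= *data++;                            /* xor in the next byte */
        crc ^= (crc & 0xff) >> 4;                  /* the >>4 */
        crc ^= (uint16_t)(crc << 12);              /* the <<12 */
        crc ^= (uint16_t)((crc & 0xff) << 5);      /* the <<5 */
    }
    return crc;
}

Each assembly version below is essentially this loop body mapped onto the CPU's registers.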

Crc16Msp430:
    push r7
Crc16Msp430Loop:
    swpb r5       ;crc=(crc>>8)|(crc<<8)
    mov.b @r4+,r7 ;r7=new byte.
    xor r7,r5     ;r5=crc^=newByte.
    mov.b r5,r7   ;r7=crc and 255;
    rra r7
    rra r7
    rra r7
    rra r7
    mov.b r7,r7 ; zero-extend r7 (clears bits 8 to 15), completing the >>4
    xor r7,r5
    mov.b r5,r7  ;Zero-extend again.
    swpb r7      ;<<8
    add r7,r7
    add r7,r7
    add r7,r7
    add r7,r7 ;<<12
    xor r7,r5
    mov.b r5,r7 ;crc and 255 again.
    add r7,r7
    add r7,r7
    add r7,r7
    add r7,r7
    add r7,r7 ;<<5
    xor r7,r5
    dec r6
    jne Crc16Msp430Loop
    pop r7
    ret ;


The 8051 version takes 31 bytes and 27 instruction cycles per byte - it's the shortest implementation so far (the PIC version uses at least 12-bit opcodes). On an original 12MHz 8051, this would have meant 27µs per byte, about 76% faster than a pdp-11.

Crc16_8051:
    push r3
Crc16_8051Loop:
    movx a,@dptr  ;a=next byte.
    inc dptr
    xrl a,r1      ;xor into the 'lo' byte (the old hi, pretend-swapped).
    mov r3,a      ;r3=new 'lo' byte.
    swap a
    anl a,#15     ;>>4
    xrl a,r3      ;crc^=(crc&0xff)>>4
    mov r3,a
    swap a
    anl a,#240 ;<<12
    xrl a,r0
    mov r1,a      ;new 'hi' byte so far.
    mov a,r3
    swap a
    rl a ;<<5 (as a rotate)
    mov r0,a ;save the rotated copy.
    anl a,#31
    xrl a,r1
    mov r1,a
    mov a,r0 ;restore the rotated copy.
    anl a,#224    ;0xE0: the low-byte part of the <<5.
    xrl a,r3
    mov r0,a
    djnz r2,Crc16_8051Loop
    pop r3
    ret


Next, the ARM Thumb version. Using just the original Thumb instruction set, it can perform a CRC16 in just 18 cycles (46b), making it about 5.1x more efficient than a 6502 and nearly 7.4x more efficient than a 68000. This is mostly because the ARM Thumb can perform multi-bit shifts in a single cycle. The processor is only let down when swapping the CRC16 high and low bytes, which is done just like in 'C': crc16 = ((crc16 >> 8) | (crc16 << 8)) & 0xffff. Later versions of Thumb provided zero-extend and byte-swap instructions.
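Since the Thumb code keeps the 16-bit CRC in a 32-bit register, the swap has to be masked back down to 16 bits; a sketch of just that step in C, assuming the CRC is held in a uint32_t with only the low 16 bits significant:

#include <stdint.h>

/* Sketch: the CRC lives in a 32-bit register (as in the Thumb code),
   so the byte swap is followed by a mask back to 16 bits. */
static inline uint32_t crc16_swap(uint32_t crc)
{
    return ((crc >> 8) | (crc << 8)) & 0xffffu;
}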

c65535: .word 65535
Crc16_Thumb:
    push {r3,r4}
    ldr r4,[PC,c65535]
Crc16_ThumbLoop:
    lsl r3,r0,#8
    lsr r0,r0,#8
    orr r0,r3
    and r0,r4
    ldrb r3,[r1]
    add r1,#1
    eor r0,r3
    mov r3,#240
    and r3,r0
    lsr r3,r3,#4
    eor r0,r3
    lsl r3,r0,#12
    eor r0,r3
    lsl r3,r0,#24 ;isolate the low byte of the crc...
    lsr r3,r3,#19 ;...giving (crc&0xff)<<5
    eor r0,r3
    and r0,r4
    subs r2,#1
    bne Crc16_ThumbLoop
    pop {r3,r4}
    bx lr


Finally, an AVR version. The webpage I based all of these on uses CRC code that operates on a byte at a time, so you wouldn't think there's any point in publishing one here, but my DIY FIGnition computer uses a more efficient one in its audio loading firmware:

;r27:r26^data (copied to Z), r25:r24=Crc16, r28=Len.
Crc16Lo=r24
Crc16Hi=r25
BuffLo=r26
BuffHi=r27
temp=r26 ;reused once the pointer is in Z.
Len=r28

Crc16_AtMega:
    push r16
    push r30
    push r31 ;used to hold buff.
    movw z,BuffLo
    ldi r16,16 ;constant for the mul-based >>4 and <<4.
    ldi r27,32 ;constant for the mul-based <<5 (r27 is free once buff is in Z).
Crc16_AtMegaLoop:
    mov temp,Crc16Lo
    mov Crc16Lo,Crc16Hi
    mov Crc16Hi, temp ; Swap hi and lo.
    ld temp,Z+
    eor Crc16Lo, temp ; crc^=ser_data;
    mul r16,Crc16Lo ; Faster than executing lsr 4 times.
    eor Crc16Lo, r1 ; crc^=(crc&0xff)>>4
    mul r16,Crc16Lo ;(crc&0xff)<<4
    eor Crc16Hi,r0 ;crc^=(crc<<8)<<4
                    ;(r0 wonderfully contains the other 4 bits)
    mul r27,Crc16Lo ; (crc&0xff)<<5
    eor Crc16Lo,r0
    eor Crc16Hi,r1
    subi Len,1
    brne Crc16_AtMegaLoop
    pop r31
    pop r30
    pop r16
    ret
In the above version (designed for a Mega AVR), we can save a cycle from the mov, swap, andi sequence by making use of the mul instruction, and we can save 5 cycles in the <<5 code. This gives us 19 cycles per byte, just 1 cycle more than the ARM Thumb (and the same length). Yet it's 7 cycles shorter than the other AVR version (53% faster!).
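The mul trick is worth spelling out: multiplying an 8-bit value by a power of two yields both halves of the shifted result at once in the 16-bit product, which is exactly what the AVR's mul leaves in r1:r0. A sketch in C (crc_lo here is a hypothetical stand-in for the CRC low byte):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint8_t  crc_lo  = 0xA7;                    /* example low byte */
    uint16_t product = (uint16_t)crc_lo * 16u;  /* what mul r16,Crc16Lo computes with r16=16 */
    uint8_t  low     = (uint8_t)product;        /* AVR r0: (crc_lo << 4) & 0xff */
    uint8_t  high    = (uint8_t)(product >> 8); /* AVR r1: crc_lo >> 4 */
    printf("%02x %02x\n", low, high);           /* prints 70 0a */
    return 0;
}

Multiplying by 32 instead gives the (crc&0xff)<<5 pair in the same way, which is where the other saving comes from.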

Tuesday 3 November 2015

Parallel CRC16 Collection 2


Hello folks,

I recently posted a collection of byte-parallel CRC16 implementations and thought I'd add a few more.

Firstly, a 6800 version. Since I don't have the original BSI volume, I don't know exactly how it was coded then, but I vaguely remember it being about 26 instructions long. The 6800 version is actually shorter and faster than my 6809 version, despite the 6809 being a far better CPU with better CPI and more registers. That's because on the 6800 I end up using page 0 for the temporaries, for speed, and that's still faster than the 6809's stack indexing:


Crc16_6800: ;a:b=CRC, X^data.
;uses page 0 temp0, temp1, Len.
Crc16_6800Loop:
eora 0,x
staa temp1 ;hi now swapped
lsra
lsra
lsra
lsra
eora temp1
staa temp1
stab temp0
lsla
lsla
lsla
lsla
eora temp0
staa temp0
ldab temp1 ;B=the partially-updated low byte, ready for the rotates.
rorb
rorb
rorb
tba
anda #31
eora temp0
rorb
andb #0xe0
eorb temp1
inx
dec len
bne Crc16_6800Loop
rts

Next, the Hitachi H8 version. In this version, R0=CRC, R1^data, R2=Len and R3 is a temp. The H8 is interesting because it was part of a 1990s generation of microcontrollers designed for high-level languages like 'C'. Hitachi claimed it was RISC, though it isn't, due to the excessive number of instruction formats and addressing modes; it's more like a 16-bit microcontroller version of the 68000 or pdp-11. The performance looks decent at 57 ø-clock cycles, but it isn't really that great, because a ø-clock cycle is half the XTAL clock frequency, making it 114 XTAL oscillations per CRC byte.

Crc16_H8:
push r3
Crc16_H8Loop: ;start by pretending to swap.
mov.b @r1+,r3h ;get the next byte.
xor.b r3h,r0h ;
mov.b r0h,r3h ;need a copy.
shlr r3h
shlr r3h
shlr r3h
shlr r3h ; >>4
xor.b r3h,r0h ;
mov.b r0h,r3h ;copy again.
shll r3h
shll r3h
shll r3h
shll r3h ; <<4
xor.b r3h,r0l ;Into what was the low byte.
mov.b r0h,r3h ;copy the working 'lo' byte.
rotr r3h
rotr r3h
rotr r3h ;rotr is a plain 8-bit rotate, so 3 are enough.
mov.b r3h,r3l
and.b #31,r3h
xor.b r0l,r3h ;r3h = new CRC hi.
and.b #0xe0,r3l
xor.b r0h,r3l ;r3l = new CRC lo.
mov.w r3,r0
subs #1,r2
bne Crc16_H8Loop
pop r3
rts

So, let's compare that with the PIC I originally coded it for, a PIC16C55. This isn't the actual code (I don't have rights to that), so I've just rewritten the equivalent algorithm; in any case, the version I used would probably have calculated the CRC on the fly, as each byte was received. Surprisingly, the humble PIC CPU (which predates commercial RISC CPUs by a decade) manages to implement the algorithm in 25 instructions and 25 instruction cycles per byte, which works out as 100 clocks per byte.

Crc16_Pic16C55: ;pretend we've swapped.
movf ind,w ;got the next byte.
incf fsr
xorwf gCrc16+1,f ;xor with lo byte.
swapf gCrc16+1,w ;copy and swap the 'lo' byte.
andlw 15 ;this is the >>4!
xorwf gCrc16+1,w ;'lo' byte again.
movwf gTemp2
swapf gTemp2,w ;copy and swap the new 'lo' byte
andlw 0xf0
xorwf gCrc16,f ;this time into 'hi' byte.
rrf gTemp2,w ;get 'lo' mostly calc'd Crc.
movwf gTemp1 ;save in temp1.
rrf gTemp1,f ;Have to rotate into f, because
rrf gTemp1,f ;can't rotate w by itself.
movf gTemp1,w
andlw 31
xorwf gCrc16,w
movwf gCrc16+1 ;OK to overwrite, since new 'lo' byte in gTemp2
rrf gTemp1,w ;and again, because of the carry.
andlw 0xe0
xorwf gTemp2,w
movwf gCrc16
decfsz gCrcLen
goto Crc16_Pic16C55
retlw 0 ;done.

We've mentioned the 68000 CPU a couple of times, so let's compare it. Here we can see how the 68000 is well suited to high-level languages: the straightforward implementation closely matches the algorithm and is only 19 words (38 bytes) long, almost as short as the Z80! The timing, on the other hand: yikes, 134 cycles per byte, only slightly more efficient than a Z80. This is mostly due to the fact that the 68000 had a 4-clock bus cycle, but also because the 68000 isn't very good at byte manipulation. On the other hand, the 68000 has a decent decrement-and-branch instruction.

Crc16_68K:
move.l d2,-(sp)
bra.s Crc16_68KWhile
Crc16_68KLoop:
rol.w #8,d0 ;this time we have to rotate to begin with.
move.b (a0)+,d2 ;the 68000 has no eor-from-memory form,
eor.b d2,d0 ;so fetch the byte into d2 first.
move.b d0,d2 ;crc&0xff just need byte ops this time
lsr.b #4,d2 ;..>>4
eor.b d2,d0 ;crc^= above calculation.
move.b d0,d2 ;crc&0xff (we'll shift out upper 8 bits).
lsl.w #8,d2 ;..<<8
lsl.w #4,d2 ;..<<4
eor.w d2,d0 ;crc^= above calculation.
moveq #0,d2 ;(Faster than a move.b and and.w #0xff)
move.b d0,d2 ;crc&0xff
lsl.w #5,d2 ;..<<5
eor.w d2,d0 ;done.
Crc16_68KWhile:
dbra d1,Crc16_68KLoop
move.l (sp)+,d2
rts

Finally, let's also compare it with the 68000's arch-rival, the 8086. Here we can see that the 8086 isn't so well suited to a high-level language, mostly because of the optimisations needed for the byte operations and the judging of single-bit shifts versus shifting with cl. The length is the second worst at 54 bytes, and the standard instruction timings for the 8086 would give a total of 80 cycles per byte (as efficient as a 6809), but this fails to take into account how the prefetch queue empties when executing linear code whose instructions take fewer than 4 cycles. This bumps it up to 112 cycles per byte, which means a standard 8MHz 68000 (134 cycles / 8MHz ≈ 16.8µs per byte) would be about 33% faster than a 5MHz 8086 (112 cycles / 5MHz = 22.4µs per byte). Both of them, however, are much faster than a pdp-11/34 (2.8x and 2.13x respectively).

Crc16_8086:
push dx
Crc16_8086Loop:
mov dh,al ;2(4)
mov dl,ah ;swap to begin with. 2(2)
lodsb ;al=[si]+ 12(6)
xor dl,al ;3(5)
mov al,dl ;2(3)
shr al,1 ;2(1)
shr al,1 ;4(0)
shr al,1 ;4(0)
shr al,1 ;4(0), still faster than mov cl,4: shr al,cl
xor dl,al ;4(0)
mov al,dl ;4(0) upper byte.
shl al,1 ;4(0)
shl al,1 ;4(0)
shl al,1 ;4(0)
shl al,1 ;4(0)
xor dh,al ;4(0) the <<12
xor ah,ah ;4(0)
mov al,dl ;4(0)
shl ax,1 ;4(0)
shl ax,1 ;4(0)
shl ax,1 ;4(0)
shl ax,1 ;4(0)
shl ax,1 ;4(0) Perform a straight-forward <<5
xor ax,dx ;4(0)
loop Crc16_8086Loop ;17c. 2 bytes fetched = 4c, so 13c can refill BIU
pop dx
ret



Saturday 31 October 2015

Parallel CRC16 Collection

I first came across CRCs in 1996, when I was writing the audio software for the Heathrow Express. A couple of engineers from the company I worked for opened a dusty BSI volume, pointed me to a short algorithm for an ancient CPU (even by 1990s standards), the 6800, and told me to use that algorithm.

I had to translate it for the PIC and Hitachi H8 MCUs I was using for the project, but that wasn't a major hassle. The puzzle was why they seized upon that moment to insist I use a specific, and short algorithm. Did they think I was a mathematical dunce who couldn't implement something involving a few XORs? Were they just trying to impress me with their claim (I'm willing to accept it) that they actually wrote that algorithm and got it published in BSI standards? Were they just sticklers for standardized solutions?

Well, I was quite happy, because it was a bit of an education. It wasn't until about 5 years later that I started to realize something was amiss with CRC16 algorithms. And what was amiss was that no-one else seemed to be using byte-parallel algorithms. Surely a BSI (and presumably US / ISO) standard would dominate? But here I was in other commercial environments where the references and implementations were all bit-serial or table-driven algorithms of this kind:


uint16_t crc16(char *addr, int len, uint16_t crc)
{
  int bit;
  while(len-- > 0) { // For each byte.
    crc = crc ^ (*addr++ << 8); // xor into high byte.
    for(bit = 0; bit < 8; bit++) {
      if ((int16_t)crc < 0) // top bit of crc set.
        crc = (crc << 1) ^ 0x1021; // shift and xor with the poly.
      else
        crc <<= 1;
    }
  }
  return crc;
}

I could see they were all highly inefficient or, worse still, wrong. Some of the table-driven versions actually contained wrong entries, as I found out last year when I wasted a day or so trying to figure out why a colleague's CRC algorithm didn't generate the same results as mine. And that was because he'd just copied and pasted the first implementation he'd seen on the internet, even after I'd pointed him to this byte-parallel version and asked him to translate it to C#.

The reason I figure table-driven algorithms became popular is that they're easy on the brain. It's easy to grasp how a bit-serial algorithm relates to the CRC's polynomial, and then easy to jump to a bit-serial algorithm that generates a table, or just to copy a table version directly. However, byte-parallel algorithms are, thankfully, making a comeback. Why? I guess because constrained MCUs are still used in a lot of applications, and because cache misses on table-driven CRCs are pretty costly on higher-end processors.
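To make the contrast concrete, here's roughly what the table-driven form looks like; a sketch using the same 0x1021 polynomial as the bit-serial routine above, with the table generated from that bit-serial algorithm:

#include <stdint.h>

static uint16_t crc16_table[256];

/* Build the table by running the bit-serial algorithm on every possible byte. */
static void crc16_make_table(void)
{
    for (int byte = 0; byte < 256; byte++) {
        uint16_t crc = (uint16_t)(byte << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
        crc16_table[byte] = crc;
    }
}

/* Table-driven update: one lookup and a couple of xors per byte. */
static uint16_t crc16_table_update(uint16_t crc, uint8_t data)
{
    return (uint16_t)((crc << 8) ^ crc16_table[(uint8_t)(crc >> 8) ^ data]);
}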

This leads me to an alternative to a set of CRC algorithms published on MDFS.net, a wonderful site that has conversions of BBC Basic for every processor you'd ever want :-) (with the exception of the AVR). Here are equivalent byte-parallel versions for the same set of ancient processors, and they run at least twice as fast:

First, Crc16_6502, which clocks in at 64 bytes and 92 cycles per byte, over twice as fast as the bit-serial algorithm. In common with most of the 8-bit implementations, it's more efficient to swap CrcHi and CrcLo nearer the end, and instead perform the calculations on the 'wrong' halves of the CRC until then. The 6502 version also saves cycles by using y to represent both an index into the buffer and the length of the buffer (it's incremented until it reaches 0). This means we have to adjust the buffer pointer and negate the length.


Crc16_6502: ;buff^buffer, y=len.
tya
beq Crc16_6502End ;zero length: nothing to do.
clc
adc buff ;buff+=len, so that (buff),y starts at the first byte.
sta buff
bcs Crc16_6502Negate ;a carry into the high byte balances the -256 below,
dec buff+1           ;otherwise the high byte needs adjusting down.
Crc16_6502Negate:
tya
eor #255
tay
iny ;y=256-len: counts up to 0 as the index.

Crc16_6502Loop:
lda (buff),y
eor CrcHi
sta CrcHi ;really CrcLo
lsr a
lsr a
lsr a
lsr a ;(Crc&0xff)>>4.
eor CrcHi
sta Temp1
sta Temp0 ;Copy low byte for <<5 later.
asl a
asl a
asl a
asl a ;(Crc<<12)
eor CrcLo
sta CrcHi ;this is the new CrcHi
lda #0 ;A collects the top bits of the <<5.
asl Temp0
rol a
asl Temp0
rol a
asl Temp0
rol a
asl Temp0
rol a
asl Temp0
rol a ;<<5: Temp0=(Crc&0xff)<<5, A=(Crc&0xff)>>3
eor CrcHi
sta CrcHi
lda Temp0
eor Temp1
sta CrcLo
iny
bne Crc16_6502Loop
Crc16_6502End:
rts

Then there's a Z80 version. This is more straight-forward, since there are enough registers to handle the entire algorithm. It clocks in at 33b and 139 T-states per byte, making it the shortest version and only 51% slower than a 6502 at the same clock speed. Here we use c as a temp while we perform the crc hi and lo swap over the course of the first two shifts, so that they end up nicely in hl ready for when we do the <<5 near the end.

Crc16_Z80: ;(Z80 style, b=length, de^data, hl=CRC).
ld a,(de)
inc de
xor h
ld c,a
rra ;the Z80 doesn't have a fast
rra ;right shift so we
rra ;rotate and mask.
rra
and a,15 ;>>4,
xor c
ld c,a ;new low byte, gets shifted.
add a,a
add a,a
add a,a
add a,a ;<<12.
xor l
ld h,a ;new crc hi
ld a,c ;get the new low byte back for the <<5.
rrca
rrca
rrca ;<<5 (as a rotate)
ld l,a ;save the rotated copy in l.
and 31
xor h
ld h,a
ld a,l ;recover the rotated copy.
and 0E0h
xor c
ld l,a ;done.
djnz Crc16_Z80

Next up, the 6809. Despite having several 16-bit registers, the 6809's accumulator architecture means we need to allocate 2 temporary bytes on the stack and we can't make use of 16-bit operations on D. I estimate the length at 45bytes and the speed as 80 cycles per byte, a little faster than a 6502.

Crc16_6809: ;D=CRC, X^data, Y=Len.
leas -2,s ;Allocate 2 temp bytes.
Crc16_6809Loop:
eora ,x+
staa ,s
lsra
lsra
lsra
lsra
eora ,s
std ,s
lsla
lsla
lsla
lsla
eora 1,s
staa 1,s
ldab ,s
rorb
rorb
rorb
tfr b,a
anda #31
eora 1,s
rorb
andb #0xe0
eorb ,s
leay -1,y
bne Crc16_6809Loop
leas 2,s ;Deallocate 2 temp bytes.
rts


Finally, the pdp11 (with Extended Arithmetic). The pdp11's adequate number of 16-bit registers and programmer-friendly instruction set makes it easy to implement the algorithm. Nevertheless, if run on a typical 1970s pdp-11/34 it would require 40 bytes of code and 47.56µs per byte, roughly equivalent to a 2MHz 6502 or a 3MHz Z80. Yet more evidence to demonstrate that the pdp-11 in the 1970s wasn't theoretically much faster than a humble late 70s Microprocessor. Check out my Z80 Dhrystones article to discover why this might be.

Crc16_pdp11:
;With Extended Arithmetic. r0=crc, r1^data, r2=len, r3=tmp
swab r0
clr r4
movb r4,(r1)+
xor r0,r4 ;no byte version.
mov r3,#-4
movb r4,r0
ash r4,r3 ;>>4.
xor r0,r4 ;crc^=(crc&0xff)>>4
mov r4,r0 ;need copy
mov r3,#12 ;adds 1.6us vs much more for swap /and etc.
ash r4,r3 ;crc<<12
xor r0,r4
mov r4,r0
mov r3,#5
ash r4,r3 ;<<5
xor r0,r4 ;done.
sob r2,Crc16_pdp11





Friday 10 July 2015

George Osborne, Emma Way fanboi

In 1920, the government decided to introduce a tax on cars for their road use, called the road fund. It didn't work, because it was inefficient at allocating government resources: the road fund returned a surplus each year, which couldn't be spent on other things. Worse still, it incited poor behaviour by drivers, as they began to think they owned the roads, as pointed out by Winston Churchill in 1926.

So, the reversion of VED into a road tax is regressive and penalises future cleaner cars [which currently represent just 0.5% of newly registered cars in the UK ] as well as potentially affecting cyclists.

The idea is to get all road users to contribute to the maintenance of our crumbling road system on the principle that you pay for what you use[1].

However, if the purpose of the tax was to pay for road use, then the levy ought to be determined by the weight of the vehicle x its size x the distance travelled per year, since these factors govern the vehicle's usage of the road in terms of its occupancy and maintenance.

I did some searching and found that an average UK car is 1500Kg, 4.7m long and travels 12700Km per year.

Let's look at what happens if we apply that thinking to different cars:

For example, my Smart ForTwo weighs 740Kg, is 2.7m long and I drive it about 5000 miles (8000Km) per year. I should therefore pay 740Kg/1500Kg x 2.7m/4.7m x 8000Km/12700Km = 0.179, and 0.179 x £140 = £25.06, i.e. much less than I currently pay. Great, I'd be for that!

A VW Golf owner who drives an average amount should pay: 1300Kg/1500Kg x 4.3m/4.7m = 0.793 x £140 = £111.01, i.e. significantly less than the new VED.

A Land Rover Discovery driving 16,000Km/year: 2150Kg/1500Kg x 4.7m/4.7m x 16000Km/12700Km = 1.81, and 1.81 x £140 = £254.01. An entirely fair contribution.

But because the Chancellor lumps 95% of cars together as needing to contribute the same amount to roads, what he means is that smaller, lighter, cleaner cars are subsidising the VED. That's UNFAIR, even by his own criteria.

But the reversion of VED to a road tax raises the question of whether cyclists should pay. Studies have shown that having to pay a cycling tax would discourage people from cycling. Worse still, it validates the attitude of people like Emma Way, who bragged about knocking a cyclist off the road:
"Definitely knocked a cyclist off his bike earlier - I have right of way, he doesn't even pay road tax #bloodycyclist. " [2]
Osborne's redefinition of VED means her justification for injuring someone on a bike is now rewarded: we're currently not paying road tax. Given that cycling is facing an increasing backlash, and despite a decrease in incidents, expect cycling injuries to go up.

Expect calls, too, for zero-emission cars to be taxed (they're not yet) on the grounds that they use the roads, and therefore for cyclists to be taxed on the same basis.

But of course cyclists won't be taxed on the basis of their use of the road. Consider an above average cyclist who travels 15Km/day in a working week (x48 weeks): 14Kg/1500 x 1.72m/4.7 x 1m/2m (width) x 720Km/12640 => 0.00009727982763 x £140 = £0.01 . Yep, I'd pay that. I bet the government won't introduce a bike tax of 1p/year.

And that's the other practical problem. When you look at this tax and its implications as it currently stands, even by the Chancellor's own criteria it's regressive: penalising people more if they want to travel in healthier and cleaner ways, and subsidising those who damage the environment the most.

Typical.

[1] http://www.mirror.co.uk/news/uk-news/george-osbornes-budget-2015-speech-6025464


[2] http://www.independent.co.uk/news/uk/crime/i-have-right-of-way-he-doesnt-even-pay-road-tax--driver-in-bloodycylists-storm-puts-her-job-at-risk-8625947.html

Monday 11 May 2015

Thrifty Business

In the run-up to the UK General Election 2015, I ended up writing so many similar posts on facebook on why Austerity doesn't work that I thought I'd turn it into a blog entry.

Nobel Prize-winning economist Paul Krugman has written an excellent piece in the Guardian about the Austerity delusion. He covers the historical developments over the past 5 years and provides evidence for a negative correlation between austerity and economic growth.

All I want to do here is show why the common-sense argument from the analogy of thrift isn't correct.

The thrift argument is that, if I'm in debt, I have to cut back on my own spending until everything's under control and the debt is paid off.



[*] And that's the right thing to do; however, it's not the whole story. In reality, what happens when you cut back to pay off your debt is that the people you trade with lose income. In a sense, they're soaking up the imbalance in your finances, but it works because they're each only losing a little bit when you cut back by a lot. The key thing is that your income stays the same while your outgoings shrink (a lot), and their income shrinks (a bit) while their outgoings stay the same: in passing on your debt, they take a bit of the hit too.
And therefore it won't work if everyone is simultaneously in debt. When everyone starts cutting back, everyone's income shrinks, and so they become less able to repay their debts. That's what happens when governments enact austerity: the private purse shrinks and internal trade stalls (and this in fact is what happened: banks stopped lending and started calling in debts, which is why businesses failed), and if the government acts along with it by reducing its spending in line with its lower tax revenue, it just adds to the problem.

The right thing to do then, if private lenders won't lend, is for the government to provide the cash to keep the economy going - and that means public borrowing during a financial crisis. That's half of Keynesianism, in essence.

Keynesianism relies on another insight: that cash isn't a substitute for goods, as we normally think; instead, cash is a medium of exchange. That is, cash operates like a pipe for transporting goods; the size of the pipe (the amount of cash we have) determines the rate at which we can exchange and trade; it's not a representation of what we have.

So, if we go back to [*], we can see that's what's happening in the example. Your original debt is distributed across the entire system. It's the capacity of your input pipes (income + loans), which is the same as your output pipes' capacity (spending), which determines the sizes of others' input pipes (income). When you cut back, the whole network's capacity shrinks. Which is to say your debt was distributed everywhere all along. It's a really powerful concept to get one's head around; I've barely scratched the surface here.