Thursday, 24 November 2011

FIGnition HiRes Graphics API: RFC.

This is a short post describing a proposed Bitmapped Graphics API for FIGnition.

Bitmap Format

A FIGnition bitmap (including the display itself) is stored as a grid of 8x8 pixel tiles, each occupying 8 bytes, laid out tile row by tile row.

A bitmap is specified by a pointer to its data together with its dimensions in tiles, packed into a single value: height*256+width.

In the case of the display, the bitmap is 20x20 tiles, representing 160x160 pixels. A tile format makes it efficient to copy bitmaps to the FIGnition's serial memory, because each SPI transfer needs a command byte and a two-byte address before the data: a single 8x8 tile is 8 contiguous bytes and so needs only 11 SPI accesses (3+8), compared with 32 (8x(3+1)) if the target bitmap were a simple raster. The savings are better still for larger bitmaps: a bitmap 16 pixels wide by 8 high needs only 19 SPI accesses (3+16) compared with 40 (8x(3+2)) for a simple raster.
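
For illustration, here's a rough sketch of the addressing this implies (the word name tile>addr is my own, and the row-by-row, 8-bytes-per-tile layout is an assumption rather than a documented format):

( sketch: serial RAM byte address of the tile at tile column tx, tile row ty )
: tile>addr ( bitmap dim tx ty -- addr )
rot 255 and ( width in tiles: the low byte of dim = height*256+width )
* + ( tile index = ty*width + tx )
8 * + ; ( 8 bytes per 8x8 tile )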

Video Prefetch

In the FIGnition's bitmap mode, program access to SRAM must be interleaved with video fetching from SRAM. The SRAM chip provides no means of correctly resuming an interrupted SPI access (you can't read back which operation was in progress or what the last address was), so instead we rely on the program giving up the SRAM often enough - during Forth ROM accesses, program jumps or memory accesses - for the video data to be prefetched around them. This means you can't unroll loops too much in hires mode or video glitches will occur.

Video prefetch fetches a whole 160-byte row of the frame buffer at a time and copies it to internal RAM (the same RAM used for video in text mode). This operation requires 160µs out of every 512µs and so is within the bandwidth of the memory system, leaving around 352µs; of that, 88µs is free for program execution (around 16µs for each of the 5.5 scans), enough for typically 11 instructions in total (at around 120KIPS, the raw average performance of FIGnition). It's likely therefore that program execution will be suspended during video scanning (currently it isn't) and this will reduce performance by about 0.5%.

The prefetching needs to fetch a whole 160 bytes, because the video scan must output it in raster order: each scan line displays every 8th byte of the buffer, and the next scan line repeats the same process starting one byte further on.

Therefore we have to use two 160-byte buffers for video scanning, leaving 728 - 320 = 408 bytes free for a blitter cache called the Tile Buffer.

The Tile Buffer

In general, using serial SRAM to blit data represents a major bottleneck in the system. Therefore it's advantageous to use internal RAM as much as possible, in our case as a cache of tiles for our bitmap images (such as sprites) as well as using it as a cache for parts of the frame buffer when we write bitmaps to the frame buffer.

408 bytes provides room for 51 tiles (8 bytes each). One tile (tile 0) will be used as a single-tile cache for plot. The remaining tiles (1..50) are intended to be used by your program, and the tiles working backwards from the end of the buffer (50..11) are to be used for caching parts of the frame buffer - at most 40 of them can ever be needed for that. For example, if you use a number of 16x16 sprites and no other larger graphic objects, in practice only 2 rows of 3 frame buffer tiles will be used: 6 tiles.

Even though there's only 51 tiles available, this doesn't mean FIGnition graphics are limited to 51 tiles; it just means you can only cache 51 tiles at any one point: changing the cached tiles won't change anything on the screen, because the frame buffer is stored independently. So you can cache a number of tiles; do some blitting; cache some other tiles; do some more blitting and it'll all work out correctly.

Blitter API

Here's the proposed Blitter API (the parameters may change a bit, it's the concepts that are important here).

The first aspect is support for tiles. There's a single function, to copy from SRAM to tiles:


bitmap dim dx dy tile# tile

This copies the bitmap whose dim is height*256+width, with a shift of (dx,dy) pixels, to tile tile#. Note, this means you can pre-shift tiles by up to 8 pixels in both directions. If the bitmap is shifted in x by more than 0 it'll take up width+1 tile columns, and similarly if it's shifted in y by more than 0 it'll take up height+1 tile rows.
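
For instance, a hedged sketch ('ship' is an assumed 16x16 bitmap already in serial RAM, so its dim is 2*256+2 = 514; the tile slot numbers are illustrative, and I'm assuming the cached result occupies consecutive slots):

ship 514 0 0 1 tile ( cache the 2x2-tile ship, unshifted, into slots from tile 1 )
ship 514 3 0 10 tile ( the same bitmap pre-shifted 3 pixels right: now 3x2 tiles from slot 10 )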

The blitter itself works a bit like emit; having a pen location where tiles are to be blitted next.


x y at ( in graphics mode sets the x y pen coordinate for the blitter)

dx dy clip ( clips from the pen coordinate to dx dy in the current frame buffer, blits outside the clip area aren't displayed).

tile# dim blt ( blits a single tile to the frame buffer in xor mode).

tile# dim tile2# dim2 x2 y2 2blt
( a double-blit routine: it blits both tile# and tile2# to the frame buffer in a single operation, using xor mode, in order to eliminate flicker).

tile# dim xrep yrep blts ( a background tile blit routine: it copies a single tile to the frame buffer repeatedly, xrep across by yrep down, starting at the current position; it doesn't use xor).
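
To make the flow concrete, here's a hedged usage sketch built on the proposed words, continuing from the ship cached above; the coordinates, the background tile assumed in slot 7 and my reading of dim and xrep/yrep are illustrative rather than part of the spec:

0 0 at 160 160 clip ( allow blits anywhere on the 160x160 frame buffer )
0 0 at 7 257 20 2 blts ( fill the top two tile rows with a background tile assumed cached in slot 7 )
40 24 at ( move the blitter pen to pixel 40,24 )
1 514 blt ( xor-blit the 2x2-tile ship cached from slot 1, assuming blt takes the cached group's dim )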

This graphics system has a number of useful characteristics.

  • It splits the blitting into SRAM->Internal RAM and Internal RAM -> SRAM operations; thus reducing the contention for SRAM at any one point.
  • It increases the re-use of Internal RAM by being able to cache tiles. Thus, repetitive tiles can be easily reproduced using blts and repeated use of sprites is possible (e.g. with galaxians or space invaders where the same graphics are used multiple times).
  • It allows blitting to be treated like printing; the blit position is advanced tile by tile.
  • It allows characters to be displayed easily: we use the address of a character from the character set in firmware as the tile to be displayed (simple tiling is just a cmove).
  • It's pretty minimal. You can get away with knowing only 3 routines and then later expand your knowledge to all 5!

I'm considering adding a built-in scroll routine too. Here, we'd fill tiles with the background data to be revealed by the scroll and then use dx dy scroll, where dx and dy are values in the range -8 to 8. The problem here is that theoretically we'd need up to 80 tiles to do this, but we only have 51.

Performance Estimations

Some rough estimates follow, though real performance could turn out up to 2x slower or so; be warned!

blts could operate at up to roughly 2µs/byte, or 500Kbytes/s, so we'd get a full-screen blts in 6.4ms. tile/blt would be slower. For 2x2-tile (16x16) graphics, tile would operate at roughly 2µs/byte plus perhaps 40µs of overhead, and blt at around 4µs/byte (because each byte must be read and written once), so that's 32*2+40 + 72*4+10 = 402µs for a 16x16 sprite. 2blt would be a bit slower, but it probably won't be worth it if it's more than 50% slower than using blt (otherwise we might as well just use blt twice), so say 603µs for a 16x16 sprite. Let's say we need 12.5Hz updates for OK performance and we can't use more than 70% of the CPU: 603/0.7 = 861µs of frame time per 16x16 sprite, which at 12.5 updates/second is about 10.8ms of every second per sprite - a maximum of about 92 16x16 sprites on the go at once.

Of course, the performance for blt can be better: if some sprites are kept permanently cached then we save about 100µs per sprite, raising the maximum to about 111 sprites.

I think these guesses are decent for the kind of architecture we have here, but the real question is: is this kind of performance good enough for 8-bit games? If not, we'll have to approach it differently.

Comments are very welcome!

Tuesday, 1 November 2011

FIGnition HiRes Graphics: The Rejects

This is an introduction to the High Resolution Graphics Support on FIGnition, somewhat documented as I develop it. Because I'm working on this at the same time, there's probably going to be a lack of actual graphical images explaining what I'm doing - for the moment I just want to give the gist of the development up until now.

FIGnition development started just over a year ago and as normal for all my developments I document my work in a journal as I go along. I first started my FIGnition journal with an article about Tiled graphics.

Tiled Graphics

Tiled graphics have an illustrious history, mostly because they were used extensively in games consoles when RAM was limited. The 6502-based Nintendo Entertainment System (NES) used a sophisticated tile-based system. Arcade games such as Pac-Man also used tiled graphics, as is revealed nicely when you exceed the 256th level. Many AVR-based games consoles use a form of tile graphics, because tile graphics are a form of character-based graphics and so you can save RAM by only requiring a character RAM table, where each byte points to a graphical tile in Flash.

At the time I was going to use an AtMega328, which provides 2Kb of RAM, so I was willing to allocate 1.5Kb to video. In Tile mode, the video display of 240x192 pixels is divided into 16x8 pixel regions called tileRefs. There are 15x24 of these.

This way I only need half the number of bytes for the text display. The rest of the video memory is divided into tiles, each of which occupies 16 bytes; there's room for 73.5 of them. The last half-tile contains meta-data for the game mode: the offset for TextField 1 (2 bytes), the offset for TextField 2 (2 bytes), plus 4 bytes spare.

Each tileRef is a single byte. Byte values in the range 0..127 are text tileRefs: they point to a pair of characters at offsetForTextField(tileRef.y/8)+tileRef*2. These characters are displayed like normal ROM characters, and can be true video or inverse. Thus we can cover the entire screen with characters, but we only need as much character data as necessary. Typically a game will need a status line at the top, requiring 2 tiles.

Byte values in the range 128..255 are graphics tileRefs: they simply point to a 16x8 bitmapped tile at offset tileRef*16.

When I moved to using an AtMega168 I reimagined it at a new resolution: 192x192 pixels = 12x24 tileRefs, using 288 bytes and leaving space for about 55 tiles.

The difficulty with Tile mode is that it's simply very awkward. FIGnition is supposed to be programmed by relative novices, so the graphics interface has to be pretty simple. Instead, the graphics engine required on top of a tiled mode forces the programmer to think about how many of the 55 tiles might be free, and the underlying firmware has to keep track of allocating and deallocating tiles. So, in the end, despite the fact that it looks appealing (and I spent quite a number of hours working on the concepts), it was dumped, and I wouldn't even implement a version of it if FIGnition used an AtMega328 (which is not planned).

Medium Resolution Graphics

Another option, of course, is to use medium resolution bitmapped graphics. Here we'd simply use the existing video memory to provide a better graphical resolution, because the text mode only carries a maximum of 4 bits of graphic information per byte.

We can work out the consequences easily: there are 728 bytes of video+UDG memory, so we could support a sensible maximum of 5824 pixels (728 x 8), which at a 4:3 aspect ratio gives 88 x 66 pixels. Although this would be better than the current 50 x 48, it would just give us some pretty boring graphics and wouldn't be worthwhile, since no-one would want to use it.



High Resolution Graphics

In the end I decided to go straight to High-Resolution 160x160 pixels graphics - that is if you can call 160x160 high resolution, which Commodore did for their Vic-20 ;-)

The apparent problem is that there isn't enough internal RAM for this resolution: you'd need 3200 bytes. Now you could get enough RAM if you used main RAM, but there are two main issues you'd need to deal with:

  1. The main FIGnition RAM is Serial RAM and it's really slow, running at 1µs/byte maximum vs an effective 100ns/byte for internal RAM (because you need to process it using 2 cycle instructions at best).
  2. The serial RAM is used to run normal Forth programs, so video fetching would have to interrupt Forth's accesses, and the SPI-based serial RAM simply isn't designed to be interrupted: there's no way of reading the chip's internal address register. So if the main code was part-way through changing the current read address (which involves sending a command) when the video interrupt routine ran, there would be no way for the video routine to know what the main code was trying to do and no way to restore the RAM properly to its previous state.
It turns out both of these problems can be solved. First, video is clocked out at 1 byte every 32 cycles and the SRAM requires only 18 cycles to read a byte, so we can (in theory) read from SRAM and keep up with the video output.

Secondly - and this is really handy, take note geeks! - it turns out that we can catch serial RAM access whenever the main code asserts or deasserts the SRAM's chip select line, by activating the PCINT1 interrupt on pin change. So, whenever the main code tries to move to another SRAM address while we should be fetching video, PCINT1 fires and we can interrupt the main code's use of the SRAM without incurring any performance penalty at other times; and we don't need to remember what the old state of the SRAM was, because when we return from the interrupt the main code will set the SRAM address correctly itself.

So, this is how it'll be done! Tune in to part 2 soon!

Thursday, 13 October 2011

The Duplo®code Fallacy

I've just been watching this BBC news article, which features luminaries from the UK industry talking about the woeful programming skills in the new generation of kids.

Let's make a case here: not only does the education system have computing wrong because we focus on ICT, but the programming industry has the wrong emphasis on computer science, because they believe in teaching kids using powerful environments on powerful systems; when what we need is dirt simple systems and environments.

The case is really quite simple. Firstly, let's consider the programmers in the article: Ian Livingstone (2:14), David Braben (4:00), Alex Evans (6:36). They all learned to program on simple computers. Look at the clip from Making the most of the micro (3:25): it shows a BBC micro, a computer ready to program as soon as you switch it on and a listing from a printer containing a hundred lines of code.

Secondly, we learned to program without the aid of a Computer Studies class: instead our parents bought computers for us and there was basically no access in schools. By the time we got to the class we already knew more about programming than what an entire 'O' level would tell us. The classes just helped us do more of what we enjoyed.

Thirdly, let's consider the opportunities kids have to program these days. You'd think they would be 'better' than in the 80s, because for all the thousands of hours kids and adults spend on a desktop or laptop computer, they're only 2 clicks away from getting into code, only 2 clicks away from JavaScript. Or you could drop into a terminal window and hack out a simple Java/C/shell script program within seconds.

So why don't they? If more powerful computers and sophisticated environments are what you need, and this kind of thing is available now, why are there far fewer kids learning to program? Think about it: programming is 2 CLICKs away today, but our generation had to wait until we'd gone home and finished our homework before setting up our puny, slow, memory-starved computers with grotty low-res screens, crude languages and unreliable tapes before we could even start doing anything.

How can it be that computers are >10,000 times more powerful and yet an order of magnitude less appealing to program?

The clue has to be in the question: it's the power itself that acts as an obstacle. Look at the 3 programming clips. In the BBC micro clip there's no obstacle, because the display is directly programmed and you have a nice listing so you can see everything - a few hundred lines of code. It looks fearsome, but it's nothing compared with the David Braben example (4:09). Here the screen shows a relatively sophisticated environment: there are at least 5 different screen panels, multiple tabs to access different options, a massive screen, and you can 'edit' roughly 15 lines of Duplo® code. This is on his 'simple' Raspberry Pi computer, which contains 300 million lines of code.

It's that kind of gob-smacking contrast that should make us wake up: all that power and sophistication driving 15 lines of Duplo®code. The computing culture today uncritically equates power and complexity with being better, which means we try to solve problems like the lack of programming expertise by throwing powerful tools at it. It's the powerful tools that are the problem when it comes to learning this stuff, not the solution.

This is why:
  1. People are put off programming, because the tools we use are geared for other tasks. For example, I'm writing a blog instead of coding. It's easy to blog and there are lots of webby distractions, so the effort-to-benefit ratio of coding looks much worse.
  2. Kids get put off programming because the environment is complex. I don't want to boot up an IDE and learn its arcane windows, menus, language, libraries, syntax and frameworks when I start learning. Instead I just want to get a kick out of doing something like making the computer display my name 1000 times: : hi 1000 0 do ." Julz is FAB!" loop ;
  3. Kids get put off programming because the many layers of software add too much guff to the effort. If I view source on a web page to see JavaScript I find it's wrapped in HTML (because JavaScript is built on browser technology), and outputting to a screen or getting data from a keyboard or mouse or whatever is just so much more effort than on the BBC micro where David Braben learned his skills. The guff affects the third coding example on the video too: at 8:37 they're editing SQLite database stuff - and that's exciting? Well, of course, it's more exciting than learning Excel!
  4. Kids will get put off programming because creating Duplo®code environments on top of sophisticated systems is patronising and deceptive. Any 7 year old will realise that the real computer is nothing like what they're learning and that the linguistic padding (e.g. the use of the 'green' colour in the language) is magical: that is, what's really going on is hidden behind a wad of complexity you don't have access to.
  5. Finally, kids will get put off programming if there's a hierarchy of access. Even if I ever think Duplo®code is a real language, my programs will never really run on a real PC/Mac/Linux/Nintendo/XBox/iPhone and be distributed on an equal footing with everyone else's code. It's a world away from when we learned to program when our code ran on computers people really owned and we could have it published in magazines or distribute it ourselves on tapes. So, the kids know... in their hearts... Duplo®code is a waste of time.
It's great that there are initiatives to encourage kids to program today. But the current plan is like teaching kids to read using War And Peace, but only letting them read the simple words while we do the rest; and teaching them to write by letting them arrange paragraphs and chapters. Nobody in their right mind would teach kids to read or write in this way, but this is exactly what is being proposed here.

The real lesson to learn is that we need simple, but real, systems. Simple systems have no distractions; they're easy to access; they have a clean and simple syntax; they're not patronising and deceptive; and they're egalitarian. Simple systems were the way we learned - that's why it worked.

Friday, 30 September 2011

Enfranchise the workforce

In a world that's in the middle (or possibly still near the beginning) of a recession/depression, the pressure is on employers to cut costs by reducing workers' pay and making redundancies, and so the people who carry the consequences are those with the least power. To defend themselves, employees are increasingly looking at taking strike action.

Striking has a long history within the labour movement. Here's a proposal for a simple alternative form of workers' action. Instead of striking for pay increases during a recession, why not campaign for greater stakeholder influence? That is, it's understandable if there's no pay increase in a recession or depression, but it's not the workers' fault (nor, I would argue, is it necessarily the management's fault either). But if the management can't maintain wages, is there any reason why they can't provide shares in the company?

Campaigning for shares (i.e. shares for individual members) would have numerous consequences.
  1. It'd mean that the workforce would potentially receive compensation for their loss of earnings, albeit deferred to when dividends would become available.
  2. It'd mean that the workforce would have more of a vested interest in the company's success.
  3. It'd mean that all levels of the workforce would have a stake in the company power base: the company would be taking a step towards being a cooperative.
  4. It'd be much easier for outsiders to sympathize with the aims of the workers: it's hard to justify penalizing your employees and simultaneously keeping them disenfranchised when the entire company's existence depends on them.
Enfranchise the workforce - what's there not to like :-) ?

Ref: Maverick.


Saturday, 3 September 2011

Accessing Ram on an Intel 4004

Hi folks,

I've recently been trying to work out how RAM is accessed on an Intel 4004, 4-bit Microprocessor. The most valuable information I've found is from:

http://e4004.szyc.org/iset.html

That website provides a JavaScript emulator for the 4004.



The 4004 is a seriously primitive device. One of its odd behaviours is that it doesn't really address RAM as such. What it does is provide a means of sending an address to a RAM chip; it expects each RAM chip to contain its own address register. This is done with an SRC ixr instruction, which outputs the contents of one of its 8 register pairs (2 x 4 bits) on its multiplexed address bus, selecting RAM if the upper 2 bits are [This bit I don't know] and ROM if the upper 2 bits are [Similarly Unknown].

RAM is organised as follows:

You can have up to 8 RAM banks, selected using the DCL instruction, which simply outputs nnn (the low 3 bits of the accumulator) on CM lines 1..3. (There's the default RAM bank 0, selected when nnn=0, which activates CM0; for nnn>0 the bits are output literally on CM1..CM3, so you can attach a 3:8 decoder and decode the other banks - though that doesn't quite make sense.)

Next up: the SRC instruction outputs a register pair as described above. The top 2 bits select which of the (up to 4) RAM chips in the current bank responds; the bottom 6 bits select one of 64 addresses on that chip (4 registers of 16 4-bit 'characters'). You can then transfer between RAM and the accumulator using RDM (to read from the addressed RAM) and WRM (to write the Acc to it). There's also ADM and SBM to add (with carry) and subtract (with borrow) the addressed RAM nybble to/from the Acc (storing the result in the Acc).

In addition, each RAM chip supports 4 'status characters' per register - bonus nybbles, so each register really contains 20 nybbles. You access these using RD0 to RD3 and WR0 to WR3, so you can't index them via registers. It's all very strange.

But it gets stranger still. I/O is also accessed via the RAM and ROM chips, each of which provides 4 bits of I/O (the RAM's port is output only), accessed via RDR (read ROM port), WRR (write ROM port) and WMP (write to the RAM output port).

Anyway, this explains why you can access 5120 bits of RAM in a standard system: (4 RAM banks selected via DCL x 4 chips per bank selected via SRC x 4 registers x 20 nybbles per register) x 4 bits per nybble = 5120 bits.


Tuesday, 2 August 2011

You can't have it both ways

My cousin Benjamin (president of Birmingham Uni's Atheist society) reposted a YouTube video which argues that Christians don't have the objective morality they often claim to have; instead they have a subjective morality like everyone else.

OK, so here's a bit of a response. As usual with these debates I'd be very surprised if it got anywhere, but here goes!

The real problem with the argument is that it fails to distinguish between a command and a moral principle, so for example when the antagonist says: "Today's the sabbath, they're supposed to be put to death if they worked (Exodus 31:15)" the Bible verse is talking about a command, not a moral principle.

The underlying command, that of having a day of rest, is still valid (Ex 20:10). Similarly, for Lev 20:10 the underlying principle of not committing adultery is still valid, and for Deut 21:18-21 and Prov 20:20 honouring your parents is still a moral principle. So it turns out that all the moral points the verses refer to are still valid, and so the argument here against objective morality doesn't stand.

In which case the Christian's argument is valid insofar as the New Testament maintains that the O.T. morality is correct, though the way our failure to keep it is dealt with isn't the same: that's what the New Covenant in Jesus is about.

What's not quite right is the Christian's statement: "The old laws were made in the context of a very different culture and time period." What's happening is that DarkMatter2525 is injecting the antagonist's definition of subjective morality as an explanation for the difference between the Old and New Covenant.

In reality, the difference with the New Covenant has nothing to do with a change in the culture and time period. It's to do with the fact that (a) the Old Testament demonstrates that Israel couldn't keep the morality embodied by the law by their own efforts (and neither can we), Rom 3:2-21 (b) Jesus did keep it and because we can be 'in him', God's agreement with Israel is shared with us: Rom 8:3-4.

The key thing here in Rom 8, goes to the heart of what morality is about, since the natural question to ask is why the command in Ex 31:15 is not the same as the principle in Ex 20:10 when both appear as commands. That I'll leave for another time.

-cheers from julz

For reference, here's the transcript:

(Calm expression) "Morality is subjective, the perception of Morality depends greatly on the on context of the culture and the time period. As such it can be uniquely defined and subject to change."

(Angry) "NO, morality is objective, morality comes from God and God doesn't change. A sin is a sin no matter when or where. What's wrong today was wrong yesterday and what's wrong here is wrong everywhere."

(Calm)"Did any of your friends or family do any work today?"

"Yeah."

"Today's the sabbath, they're supposed to be put to death if they worked (Exodus 31:15). And according to the Bible you should condone the killing of adulterers and witches, disobedient children.. (more Bible verses: Lev 20:10, Deut 21:18-21, Prov 20:20, Lev 20:9, Ex 21:15)"

(Shock)"Oh my God! What's That?" (points right, antagonist follows)

"What - What?" ( Christian steals the "Subjective" heading from his antagonist and replaces his old "Objective" heading with it)

"Uh, I guess it was nothing. Anyway, that was the Old Testament, there's a new covenant with Jesus Christ. The old laws were made in the context of a very different culture and time period."

"Hey, you just stole my word."

"I don't appreciate these accusations, why the Hell would I..."

"Hey, Look out!"

"Yeah, right, like I'm gonna faaeeuugghh." (gets eaten by Alien).

Tuesday, 19 July 2011

FIGnition oxo game! (noughts and crosses/ tic-tac-toe)

After my doom and gloom blog about the arctic, I thought I'd write a bit of a journal on how to write a simple noughts and crosses game. So, this'll be technical, and fairly involved as it'll include code, but at the same time will give a bit of insight into FIGnition, Forth and simple strategy games.

FIGnition is a real 80s-style computer. It's got enough memory (8Kb) to write simple programs and enough built-in storage to develop them in. The keyboard's somewhat awkward, but I'm getting used to it (I designed it).

The Computer History Museum is running an event called Hacker's delight, and as I'm exhibiting, they've asked me to produce a version of noughts and crosses to be demoed there. Demoing the development is as important as playing the game.

I started by looking at a few versions of tic-tac-toe. It's possible to write a recursive min-max algorithm, but I took my original cue from an online version of Not Only 30 Programs for the Sinclair ZX81. There are some insights into playing the game here.

Firstly, how to number the board. The board is numbered not in the order:


1|2|3
-+-+-
4|5|6
-+-+-
7|8|9


But

1|2|3
-+-+-
8|0|4
-+-+-
7|6|5


And it's done this way to make it easier to analyse the board. Opposite squares have a difference of 4, and adjacent corners (or adjacent edge middles) have a difference of 2. In fact it's best to represent the board like this, because it reflects the real structure of an oxo board: if you rotate it by 90º the numbering scheme still holds, with 2 added to each square (wrapping round from 8 to 1 and leaving the centre at 0).
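
In Forth that symmetry is just a modular add; a hedged sketch (the word rot90 is mine, not part of the game below):

( rotating a board position by 90 degrees adds 2, wrapping 1..8; 0 is the centre and stays put )
: rot90 ( sq -- sq' )
dup if 2 + 1 - 7 and 1+ then ;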

So let's first convert the ZX81 game. The ZX81 version simplifies the game by making the computer go first, placing an 'X' in the centre. It then follows this strategy:

  • In the first move (A=1), the computer plays the opponent's move+1 (so if the user played to an edge middle, the computer plays to the next corner, and if the user played to a corner, the computer plays to the next edge middle).
  • In all other moves, if the user fails to block, then the computer plays opposite of the previous move and wins (i.e. because the user failed to block).
  • Otherwise, for the computer's third move if the user had played to the middle of a row in its first move then we step 1 back from our previous move and win. (That's our other winning case). That's because the computer's second move always creates a dual two-in-a-row where the previous location is its other winning choice.
  • Otherwise, for the fourth move, we step backwards by 2 and it's a draw.
It's a really simple strategy which, if you look closely, doesn't involve responding directly to the user's moves, except to record whether they first played on an odd or even square and whether they last blocked the computer's move.

In Forth we can simplify it by representing the moves as a table, one set for when the person played to an even-numbered square and the other set for when the person played to an odd-numbered square:



cdata compMoves 1 c, 2 c, 7 c, 0 c, 1 c, 2 c, 3 c, 6 c,

And then the algorithm simplifies to:
  • If the user didn't block or we haven't played yet, then pick the next move from the table, adding it to our current position; if the move was '7' we win, else if it was a '6' it's a draw.
  • Otherwise we move to the opposite side of the board to our last move and win.
In Forth this is:


: compPlay ( movnum comp h -- m c f )
2dup opp + brdRange = ( m c h h+4=c? )
>r over = r> or if
over compMoves + c@ dup >r + brdRange r>
7 =
else
opp + 1
then
;


An entire oxo game in approximately 9 lines of code! The entire game is listed on the FIGnition website and also here. It takes up approximately 5 screens and 554 bytes.

( Simple oxo )
: 2drop drop drop ;

: .brdLine
cr ." -+-+-" cr ;

: .brd
." 1|2|3" .brdLine
." 8|X|4" .brdLine
." 7|6|5" ;

4 const opp
64 15 + const (o)
64 24 + const (x)

: cdata <build does> ;

0 var board

cdata posConv
0 c, 0 c, 1 c, 2 c,
5 c, 8 c, 7 c, 6 c,
3 c,

: pos2xy posConv + c@
3 /mod 1 << swap 1 << swap ;

: place ( pos ch -- f )
over 1 swap << board @ swap over or 2dup =
if ( pc old nu )
2drop 2drop 0
else
swap drop board !
swap pos2xy at emit 1
then ;

: range? ( val lo hi -- val | 0 )
rot swap over <
>r swap over > r>
or if drop 0 then ;

: humPlay
0 begin drop
begin
key 49 57 range?
dup until
48 - dup (o) place
until
;

: brdRange 1 - 7 and 1+ ;

cdata compMoves
1 c, 2 c, 7 c, 0 c,
1 c, 2 c, 3 c, 6 c,

: compPlay ( mv c h ..)
2dup opp + brdRange =
>r over = r> or if
over compMoves +
c@ dup >r + brdRange
r> 7 =
else
opp + 1
then
over (x) place drop
; ( .. -- mv c f )

: init 0 board ! cls .brd
;

: win? 5 0 at
?dup if
." I WIN!" key drop 1
else
over compMoves + c@ 6 =
?dup if
." DRAW!" key drop 1
else 0 then then ;

: oxo
init humPlay dup 1 and
4 * swap dup
begin
compPlay win?
0= while
swap 1+ swap humPlay
repeat
2drop ;


In a future post I'll look into a more sophisticated (2Kb!) version of noughts and crosses: where the person can start first, where I use real UDGs for a full-screen display and where the computer can LOSE ;-) !

Monday, 18 July 2011

Arctic Smashed


I've been doing some simple analysis of the arctic sea ice extent data in order to generate my own prediction of the summer low. I'm now almost entirely convinced that the 2007 record will be smashed in 2011.

I've based it on the Jaxa arctic data from 2002 to 2011, which I imported into a gnumeric spreadsheet. You can see from the initial image that the SIE for 2011 is below that of the record-breaking year 2007, and has been every day since mid-May. In itself this should be cause for alarm, but let's go back to 2007 to see why it's even worse than you might think.

2007 was a record-breaking year, but it didn't look that way until it lurched into free-fall in the last 10 days of June, due to a 'perfect-storm' of (AGW-induced) weather conditions, which set the scene for a record loss in late September. So the fact that 2011 has been lower since May isn't conclusive.

So, what I did was take the Jaxa data and import it into a spreadsheet. My first estimates were based simply on each previous year's ice loss after July 16, bolted on to the July 16, 2011 figure. There I discovered that the SIE minimum for this year would be about 4.16 million km² - a new record, but not by much, a mere 80,000 km² or so. What's scary, though, is that simply bolting the curves on in this way gives a new record using 7 of the 9 previous years.

July 16, 2011 SIE: 7,347,656 km²

Year:                 2010     2009     2008     2007     2006     2005     2004     2003     2002
Minimum:           4813594  5249844  4707813  4254531  5781719  5315156  5784688  6032031  5646875
July 16:           8025000  8342969  8423438  7592500  8079844  8401094  9029375  8858281  8832969
Simple prediction: 4093125  4199219  3549219  3911874  5024844  4174218  4058906  4426406  4077187

Average prediction: 4,168,333 km²


But it's not a realistic estimate - it's likely to underestimate the ice loss, because SIE curves tend to have some continuity between the current state (and conditions) and the future state. So I constructed a slightly better estimate: for each year I calculated the ice loss from July 1 to July 16 and its ratio to the eventual ice loss down to that year's minimum SIE.

July 1 to July 16, 2011 loss: 1,715,157 km²

Year:                      2010      2009      2008      2007      2006      2005      2004      2003      2002
July 1 - July 16 loss:   781563   1379844   1221562   1696406   1158750   1214375   1031250   1108750   1210937
Ratio to min SIE:          5.11      3.24      4.04      2.97      2.98      3.54      4.15      3.55      3.63
Extrapolated 2011 min: 3.002E+05 3.503E+06 2.131E+06 3.973E+06 3.946E+06 2.989E+06 1.951E+06 2.976E+06 2.835E+06

Average: 2.734E+06 km²


With this, the best-case projection (i.e. maximum minimum SIE) will be 3.95 million km² and the worst case will be 300,000 km² (average 2.7 million km²). That's stunning - and stunning doesn't even cover it: a 50% ice loss from 2007 in the best case, a 93% ice loss in the worst case, 33% ice loss in the average case.

The question is then how reliable these estimates are. Well, I'd be the first to say, "not very". My projections can be skewed easily by a steep anomaly in early July 2011 which would project a much lower summer minimum than would be likely.

The irony is that 2011 has had a smoothly falling curve between May and mid-July, whereas it was 2007 which had the sudden acceleration over July. So 2007 makes my 2011 estimate look higher than it's likely to be. So I'll stick my neck out at this point and say I figure the arctic record will be smashed this year: it's likely to be around 20% lower than 2007, probably in the region of 3.5 million km², almost certainly lower than 4.0 million km² and perhaps even as low as 3.0 million km².

Friday, 1 July 2011

Flashy FIGnition!



I'm the designer and developer of the FIGnition DIY 8-bit computer from nichemachines and I'd like to share a major milestone in its development with you.

FIGnition contains 512KB or 1MB of raw Amic Flash storage. I've been working for quite a while on making the chips work like a proper disk, the way USB memory sticks do, and at last it appears to work - a momentous achievement!

Making Flash work like a proper disk is hard, because Flash memory is a development of an old technology called EPROM, which could only be written once and then required sitting under a UV light for 20 minutes to re-initialise it all (losing all the previous data). The only difference with Flash is that you can erase it electronically. Using Flash memory is like trying to write a book with an old mechanical typewriter, without Tippex, but with a portable recycling unit. Every time you make a mistake on a page or need to change it you need to pick up a new clean page. Moreover, you're not allowed to change the order of the pages, so you'd end up with all the page numbers in the wrong order. Its one redeeming feature is that you can take 16 or 32 consecutively edited pages and stick them through the recycling unit to give you new clean pages.

If everyone had to type like that I figure no-one would have ever bothered.

But that's how Flash is. On the (easier-to-use) Amic chips you can write a block of up to 256 bytes at a time (a page), but you can't rewrite it; and you can recycle 16 pages at a time.

This FIGnition firmware is therefore so amazing it needs a blog of its own. As far as I know it's the smallest virtual flash disk software around, at only 1.5Kb of compiled 'C' code (on an AVR microcontroller). The firmware abstracts the Flash memory so that you can read and write 512-byte blocks to your heart's content; it remaps them to physical flash pages on the fly and recycles all the modified pages. The algorithm is so short you could port it to the internal Flash memory of a microcontroller and use that as a proper disk, and it could be converted to work with as few as 4 original 28F010 devices (128Kb drive) (or 1 AMD 29F010 device (80Kb drive)).

To give you an idea of how good this is, it's worth comparing with existing uses of Flash chips. Most standard introductions to the embedded use of Flash memory tell you to create the memory image on the host and simply write it to the target. AmForth, for example, uses internal Flash memory to store programs, but if you make a mistake you have to erase the lot again: you can't Forget definitions. Butterfly Basic (for the MSP430) is similar: you can write and edit over individual lines of code, but once the Flash is all used up (even if there are reclaimable areas of flash) you have to erase the lot. uClinux (a fairly complex embedded OS) can either use the rather large MTD flash driver or be restricted to the write-once blkmem driver. The original Psion SSDs on both the Organiser II and Psion 3 systems (both far more complex than FIGnition) used sequential, variable-length records and you had to copy the entire SSD to another one when it ran out.

FIGnition's VDsk flash system is also a bit of a 19-year personal dream. In the early 1990s I was working at an embedded company called Micro Control Systems, where we developed solid-state storage systems using the then brand-new Intel 28F010 Flash memory chips and, a bit later, the early PCMCIA flash cards. They were MS-DOS based systems and you could prepare MS-DOS formatted disks in a sort of write-once procedure. I spent quite a bit of time working on a truly general purpose flash disk system, which could have appeared at the same time as the first SanDisk Flash disks. But it was never in the company's commercial interest, so it could never be justified.

Later still, I worked at Teleca.com, where we base-ported Symbian OS (Nokia and Sony Ericsson's smartphone platform in the early 00s) to new phone hardware. The Flash drivers were good, but formidably complex! In between I toyed with variants of simple Flash filing, either targeted at an old Apple IIc or at fictitious embedded systems.

But now it's done - a simple, purging, embedded Flash Disk system which supports wear-levelling and is reasonably robust in the face of sudden power failures. It's been tested for over 60,000 block re-writes (30K on 2 different device types) with no errors. Enjoy.

Wednesday, 8 June 2011

It's Good To Ask Questions

On Facebook my cousin Benjamin shared a reference to an article in an Irish newspaper about its author's deconversion to Atheism and asked "I'd like to see a theist's take on this article."

So, I thought I'd take him up on it. Hope the response isn't too dry! First it'd be a good idea if you read her article - it's well written and gets to the point.

“Atheism Is the True Embrace of Reality”

Paula's article consists of two major points.

Firstly: she explores the variety of Christian beliefs and finds them completely inconsistent. Therefore you can't know anything about the truth of the existence of the Christian God.

Secondly: Atheism isn't a belief, but a rejection of beliefs not based on evidence.

The first point is essentially an argument from subjectivity. I.e. a subjective experience can't tell us anything about the existence of God. And that's kind of correct: a subjective experience isn't a basis for even determining God's existence, never mind his/her/its properties or character. And that's primarily because from an external viewpoint (which is what the observer has), the subject is simply another item within the reality we live in: a heap of well-organised DNA, generating sound waves.

The problem therefore with the argument is that it's the wrong way round. God, if he/she/it existed, determines us and our reality, in a roughly analogous way to the way mathematical axioms determine valid mathematical theorems, or (going another step back) a mathematician determines the set of axioms, which then determine mathematical theorems. It's not possible to determine God's existence from subjectivity, in the same way that it's not possible to determine the historical existence of Euclid from the existence of parallel lines, not even if the parallel lines could say "I wouldn't be where I am today without Euclid!" ;-)

So, the problem remains, if subjectivity isn't a valid basis for believing in God's existence then what would the basis be?

So, onto the second point: "Atheism isn't a belief, but a rejection of beliefs not based on evidence". So, the questions I would raise are firstly: What is precisely meant by 'evidence' here? Secondly: Is evidence a sufficient basis for beliefs?

I suspect that what she means is scientifically attested evidence supported by a consistent rationale (although she doesn't really mention anything at all about the need for reason in connection with evidence in her article). What she doesn't mean is subjective evidence, i.e. "I went on a milkshake diet last week and lost 2Kg", even if the person could prove that they did go on a milkshake diet last week and lose 2Kg.

The issue in this part of the argument is that all her qualifying terms, such as 'valid' (as in "a deity for which no valid evidence…") or "reality" (as in "one you have faced up to the reality that there is no evidence"), end up being circular. 'Valid' means (I presume) "scientifically attested" (i.e. attested because repeated experiments have provided consistent evidence). "Reality" means the reality one can infer from valid evidence and reason, and so on.

With her second point the problem again is that "evidence" can't be a sufficient basis for beliefs. Let's consider one of her statements: "there is no evidence to suggest there is another life after this one, it becomes all the more important to live this finite life to the full, learning and growing, and caring for others, because this is their only life, too.."

But this doesn't follow. If there's no life following this one, then why not just stomp on everyone else to get your own way? We're both going to die after all. Or putting it slightly differently: there's no life after this one, so why not just try and make as much money as possible? There's no life after this one so why not just party? There's no life after this one so why bother getting up in the morning? All these things, I think, follow just as well don't they?

Conversely, what if the evidence proves unpalatable ideas - should we change our beliefs? What if objective evidence in the end justifies genocide - that is that unless we bump off the weaker members of our society (e.g. religious people) then humanity is doomed. What comes first in that case?

So, as I see it, the nub of the problem is the question of what counts as a sufficient basis for anyone's beliefs of any kind. Subjective claims aren't a sufficient basis for knowing truth, but 'evidence' is a badly defined term and a complete minefield as a basis for constructing beliefs about how to live.

But it's good to ask questions isn't it?

Thursday, 21 April 2011

Carjacked Honeymoon

I guess most of you already know what happened to us in brief. I thought it would be a good opportunity to write down the current situation as it's too long for a facebook status.

Anyway, this is what happened. We arrived in Naples on April 18 in the evening, about 20:35. We picked up the hire car, a basic Fiat Panda without SatNav. It took us 90 minutes, and I had to get an overdraft extension just to be able to pay for the excess, which hadn't been specified on the form. And then we started to try and navigate our way out of Naples in the dark.

Except it just wasn't easy at all - the main highway to the A3, which would take us to Sorrento, was blocked, so we ended up going round half of Naples and in and out of the airport for thirty minutes before we found a street which signposted a turning for the autostrada.

While we were trying to figure out whether the turning would take us in the right direction a motorbike pulled up on our left; the back passenger got off and came up to the car. I can't remember too clearly what happened next as it was a bit of a blur. The guy was gesticulating and shouting and when I expressed my confusion I saw he had pulled out a pistol and cocked it into position (by pulling the top section back, it was an automatic).

I got out of the car and wandered over to the other side of the motorbike; I saw the other guy get in as Helen got out, and the first guy started pointing at my wedding ring (a fairtrade gold ring, one of the first that could be bought that way) and, when I refused, put his hand in his pocket to pull out a gun...

but before he could do anything a car appeared round the corner and they both sped off, leaving us without our car, clothes, money, cards, Kindle, passports etc. They took everything except the clothes we were standing in, our rings and our mobile phones, only one of which had any power.

So then we spent the rest of the night stuck with the police. The incident took place just outside a house, and the occupants let us in after they saw Helen calling for help. With the help of Yahoo Babelfish we were able to explain something of what happened; they called the police (5 officers arrived); and after 90 minutes in which precious few details were taken we went to the main police station to give a statement, but just ended up waiting so long that we were then escorted to a hotel for the night (pre-paid for by Helen's friend Mary), where eventually the hotel managers let us in (they wanted to see our documents... which of course we didn't have).

Mostly we've been spending the rest of the honeymoon trying to get enough things into place, but it's all so terribly involved. It took a whole morning to get a fairly simple statement down. We now have some changes of clothes, a temporary passport and a little money, and we've spent 2 nights in our hotel. We have yet to sort out what happened to the car, and there are other things it's probably best not to go into detail about on this blog.

However, we are completely unhurt, we must emphasize that. Many people have been very helpful, we'd like to thank the British Consulate in Naples for their extended help along with the kind support of the staff at our hotel as well as other sympathetic individuals. Thanks for all the support from all our friends, I hope this blog helps you know where we're up to.

Much love from julz and Helen