Monday 30 December 2019

Monopoly Sim



Our family plays Monopoly every Boxing Day and the winner gets to keep our Monopoly trophy for a year, with their name inscribed on a new lolly stick. I've never won, so I should do something about that!

I wanted to simulate the game of Monopoly to find out which locations are most likely to be landed on. The distribution won't be even, because Go To Jail takes you back to Jail (or to Just Visiting once you pay or throw a double to get out) and some of the cards take you to other locations on the board (Go, Pall Mall, Marylebone Station, Trafalgar Square or Mayfair).

Naturally, it's not good enough just to simulate the game; the real question is whether it's possible to do that on a 1K ZX81. We can find out with some analysis of what really counts for the simulation.

Firstly, and obviously, to find out which locations are most popular we don't need to draw the board; simply listing the popularity of each location as a series of numbers will do. On a ZX81 we need to count the memory used by the screen for this. Assuming each location could have a popularity of up to 255, we need 4 characters per location: 160 bytes in total.


Secondly, we don't need to store the names of the properties, nor their values or rents: if we simply know the popularities, we can calculate the income we'd get from them and from the cards afterwards.

Thirdly, we don't need to maintain any money, because we only care about popularity; this means we don't need to simulate the bank or money for passing GO and we don't need to simulate chance and community chest cards that involve money. So, the only cards that matter are those that take you to a new location.

Fourthly, we can note that there are only 2 Community Chest cards that don't involve money, Go to GO and Go to Jail, and these also appear among the Chance cards, so we can re-use two of the Chance cards to simulate Community Chest.

Fifthly, we can assume that a player will get out of jail immediately by paying or by having a card; it makes little difference to the distributions (though we could simulate runs where the player prefers to get out of jail by throwing a double; this means they'd only ever move an even number of squares out of jail).

This means the simulation is pretty simple:

  • We need an array of 40 locations to hold popularities.
  • We need to simulate two dice for moving pieces: the sum of two random numbers 1..6.
  • We need to redirect the player if they land on one of the 3 Chance or 3 Community Chest locations, or on Go To Jail.
  • The redirect locations are: 0 (GO), 10 (Jail), 11 (Pall Mall), 15 (Marylebone Station), 24 (Trafalgar Square), 39 (Mayfair) or back 3 spaces, but Community Chest is limited to the first two.
  • We could calculate going to jail when the last 3 rolls are matching pairs; this would mean an extra jail visit once every 216 throws on average. Getting out of jail by throwing a matching pair wouldn't affect the results, because that throw only puts the player on the Just Visiting square and thus doesn't affect where they land after leaving jail.
  • Periodically we should display the current popularities.
With all these rules, apart from the matching-pairs jail rule, and using the normal ZX81 Basic space-saving tricks, we can squeeze the code into 1Kb.
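The rules above can be sketched in modern Python (the ZX81 version is necessarily much terser). The deck composition here is an assumption: seven movement cards in a 16-card Chance deck, with Community Chest re-using the first two, as described.

```python
import random

# Board indices: 0 = GO ... 39 = Mayfair.
CHANCE = {7, 22, 36}
CHEST = {2, 17, 33}
GO_TO_JAIL, JAIL = 30, 10
CHANCE_CARDS = [0, 10, 11, 15, 24, 39, -3] + [None] * 9  # None = no move
CHEST_CARDS = [0, 10] + [None] * 14

def simulate(throws=4000, seed=1):
    random.seed(seed)
    pop = [0] * 40          # popularity of each location
    pos, doubles = 0, 0
    for _ in range(throws):
        d1, d2 = random.randint(1, 6), random.randint(1, 6)
        doubles = doubles + 1 if d1 == d2 else 0
        pos = (pos + d1 + d2) % 40
        if doubles == 3:            # three matching pairs in a row
            pos, doubles = JAIL, 0
        elif pos == GO_TO_JAIL:
            pos = JAIL
        elif pos in CHANCE or pos in CHEST:
            card = random.choice(CHANCE_CARDS if pos in CHANCE else CHEST_CARDS)
            if card == -3:
                pos = (pos - 3) % 40
            elif card is not None:
                pos = card
        pop[pos] += 1
    return pop

pop = simulate()
print(pop[JAIL], pop[GO_TO_JAIL])  # Jail is busy; Go To Jail never scores
```

Even this toy version shows the skew: Jail and the card destinations stand out, and square 30 always ends up empty because every landing there is redirected.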


Line numbers take up 21*5 = 105 bytes. The main code takes up another 305 bytes. M$ takes about 45 bytes; variables take up about 6*4 = 24 bytes. The screen takes 160 bytes, making a total of 639 bytes, which is within the limit of a 1K ZX81.

Results

It's just a list of numbers representing the popularity of every location from GO to Mayfair:

The ZX81 is really slow when run in SLOW mode, taking 0.6 seconds per go, or about 84 seconds per round including the printing. About 4,000 throws were taken, which took nearly an hour. We could speed this up by running in FAST mode, removing line 25 and replacing line 130 with GOTO 30. Then it would take 0.15s per throw, so 4,000 throws would take 10 minutes.

What the results mean:

Site            Pop   Site            Pop
Old Kent         32   Whitechapel      41
King's Cross     29   Angel            40
Euston           43   Pentonville      36
Pall Mall        46   Electric         41
Whitehall        35   Northumberland   40
Marylebone       40   Bow St           47
Marlborough      51   Vine St          44
Strand           47   Fleet St         52
Trafalgar        58   Fenchurch        43
Leicester        44   Coventry         40
Water            47   Piccadilly       49
Regent           46   Oxford           39
Bond             32   Liverpool        45
Park Lane        29   Mayfair          51

In this run there were 4173 throws, so there would have been about 19 extra trips to jail due to three successive pairs (4173/216 ≈ 19, boosting the squares from Electric to Strand by about 7%).

We can see that the four most popular places are Trafalgar, Fleet St, Marlborough and Mayfair, while at the other end of the scale Park Lane, King's Cross, Old Kent Road and Bond Street are significantly less likely. It's understandable that Marlborough would be likely, though I would have thought Bow Street would be equally likely (both 5/36 probability). Trafalgar was unexpected - except that it's an actual card destination. We know that the Chance squares were hit 47+70+54 = 171 times without the player being redirected (11/16 of the time), so in total they were hit about 248 times, and therefore about 15 of those involved jumps to Trafalgar (lowering its natural hit rate to about 43). A similar number of hits redirected to Mayfair (51-15 = 36, still leaving it more popular than Park Lane).

The sample size is about 4173 throws across 40 locations, about 104 hits per location, so the variance is fairly significant. I therefore haven't tried to deduce too much detail from the results.

Conclusions

The main conclusion is this: it's perfectly possible to solve an interesting question by simulation on an old computer with minimal memory. Even on a 1Kb ZX81 there was spare space. It's worth considering what the minimal computing resources are to solve the same problem. It's certainly under 1Kb, so a basic Jupiter Ace or a 1Kb MSP430 should easily manage it, along with a PIC16F54 or maybe even an ancient 1802-based ELF computer with the standard 256 bytes of RAM.

Saturday 3 August 2019

Danger! Danger! Devaluation!

Right now, Brexiters keep claiming that devaluation is good for the economy. They're almost certainly wrong.

Intuitively, a fall in the pound is a bad thing: anything we buy from overseas goes up in price and since a high proportion of what we buy comes from overseas, our cost of living will go up.

But then Brexiters jump in and say "Devaluation is great, because it means more exports and that will make the economy grow." As a general rule, because Brexiters often distort the truth, you should be skeptical. So, are they wrong here too?

It's a good question. Recently I was at my Dad's house and we were watching a TV show called "Factory Wars" on the Yesterday channel. I was surprised to find a bloke from The Institute of Economic Affairs on it. Why was an economist from a lobbying organisation with secretive funding (hint: Big Tobacco and climate-denying US groups like the Heartland Institute) on a history show?

Basically he was using it to push free market ideology. He claimed that the UK escaped the recession in the 1930s because it implemented austerity, unlike the US, which implemented a Keynesian New Deal. This was completely different to how I'd understood these economies in the 1930s: austerity empirically doesn't work (and it's easy to explain why), and it was the Keynesian New Deal that empowered the US to manufacture the armaments that helped the Allies overcome the Nazis.

It's no coincidence that the IEA offered a completely different explanation from what is normally understood: they're based at 55 Tufton Street, where Vote Leave was also based. So, again, we have another ideologue whose views we should suspect.

So, I checked out what really happened in the UK's economy in the 1930s. Economics Help is really useful here. Basically, the crash of 1929 caused a lot of hardship in the UK, but in the early 1930s we dropped the gold standard, which led to a devaluation of the pound, and this led to a mild economic recovery in the south, though the North still had it tough.

So, it looks like the Brexiters might be a bit right, and the IEA were misleading as usual. But are they?

Devaluation Model

Actually we can work this out ourselves, because it's easy to model. In our model (for a business, but it can apply at a larger scale) we consider just three variables:

  1. Profits
  2. Internal costs (labour, maintenance of equipment, etc., excluding rises in the cost of living due to buying overseas goods). We assume this is constant.
  3. Imports (including the increases in the cost of living for employees from overseas goods).

Case A

In the first scenario, we consider a company with reasonably high import costs (50%), a profit margin of 30%, and the rest internal costs (20%). If the pound devalues by 25%, then imports go up to 50*1.25 = 62.5%. So now our profits are 30-12.5 = 17.5%. This means the company needs to sell 30/17.5 = 71% more goods to make the same profit, but the domestic market will buy less, and the overseas market will only find its products 25% cheaper (due to devaluation).

So, the question is: are overseas customers likely to buy 71% more if it's 25% cheaper? This seems unlikely to me.

Case B

Let's consider the second case: profits are 70%, internal costs 20% and imports 10%. Imports increase by 25% => 12.5%, so we now need to sell 70/(70-2.5) = 3.7% more. OK, so in this case our product is 25% cheaper and we only need to sell 3.7% more to make the same profit; this seems plausible.
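The arithmetic for both cases can be checked with a few lines of Python (the percentages are the model's assumptions, not real company figures):

```python
def extra_sales_needed(profit, internal, imports, devaluation=0.25):
    """Fractional increase in sales needed to restore the original
    profit after import prices rise by `devaluation`."""
    assert profit + internal + imports == 100   # percentages of turnover
    rise = imports * devaluation                # extra import cost
    return profit / (profit - rise) - 1

# Case A: 30% profit, 20% internal, 50% imports -> ~71% more sales needed
print(round(extra_sales_needed(30, 20, 50) * 100))
# Case B: 70% profit, 20% internal, 10% imports -> ~3.7% more sales needed
print(round(extra_sales_needed(70, 20, 10) * 100, 1))
```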

So, we have to ask ourselves: what kinds of businesses are like Case A? Mass-market manufacturing (cars, aircraft, smoke alarms) relies heavily on imports and makes relatively small profits. Apple, for example, has margins between 20% and 30%. Dyson is also at about 21%. Dell has an operating margin of just 1.1%. Companies producing consumer items such as smoke alarms have fairly low margins, which explains why they would move to cheaper EU countries.

What companies are like case B? Services and financial companies are like that. However, financial companies are likely to move out of the UK due to Brexit, so this means service companies would benefit.

However, to some people, UK manufacturing companies can look good: if you hold a lot of offshore wealth, then UK manufacturing companies struggling under devaluation look like a good buy. In other words, it suits disaster capitalists with offshore accounts: people like Jacob Rees-Mogg and a large proportion of Conservative party members and MPs.

Epilogue

Brexiters are wrong: it's just a front for their disaster capitalist mission. We can summarise our model and convey why in a simple table:



So, why did the UK not crash due to a double-whammy of austerity and devaluation in the 1930s? That's pretty simple: being the biggest power bloc at the time meant that although most of its food (91%?) was imported, along with a significant proportion of goods, a great deal of this trade was essentially an internal market, with the added advantage that many resources, particularly coal, were internal too.

Saturday 6 July 2019

Let the EVs Take the Strain

A recent BBC article says EVs won't solve congestion problems. It's yet another negative headline about EVs, following yesterday's negative EV headline where they said EV sales had fallen for the first time in however many months (when in fact BEV sales had gone up 67%). They even went to the trouble of showing a picture of a rare EV, an 8-year-old early prototype Smart ED TwoFour, rather than, say, EVs hundreds of times more popular, to get across the idea that EVs are toys. Next week, watch out for the Tesla-bashing article ;-) and no mention of how sales of real EVs in the EU, the US and globally are rocketing.

Similarly, this article uses a bit of truth to hide a bigger lie. In fact EVs will go quite a long way to solving congestion.

Car Use is Falling

Car use is already going down in some parts of the UK: in London because cars aren't needed much, and elsewhere because insurance for young drivers is prohibitive.

But the nature of EVs will itself radically change our vehicle usage, primarily because they have so few moving parts and their batteries last much longer than originally expected (and will get several times better); to get sufficient wear and tear out of them we're going to have to drive them much more often.

ICE Drives Congestion

The problems we see with urban vehicles are really problems with ICEs themselves. For example, you can't have a filling station at everyone's house: it's far too dangerous and far too expensive! ICEs force us to place filling stations as widely apart as can be tolerated, and the effort of filling up (compared with plugging in an EV) in turn forces infrequent filling, large tanks and very long ranges.

But long ranges themselves have the side effect of increasing our journey lengths which impacts everything: distance travelled to shop, to our workplaces, to schools and hospitals and all this increases traffic.

EV Transformations Will Blow Our Minds

EVs will change this radically. We'll have to share cars to get the wear and tear out of them, and because charging will become ubiquitous (think of every forecourt where your car might hang around), we'll need cars with much shorter ranges on average than even the first generation of EVs: think 10KWh or even 5KWh batteries for the majority of cars. In turn, two-person EVs will dominate for the vast majority of journeys, and because we can charge easily, we can expect journeys to shorten too.

Remember in this model, people don't own their cars as much.

Why will people choose tiny, 'under-capacity' cars? It's simple: they'll be much cheaper to build, sell and drive! My Zoe (22KWh) gets about 4 to 4.5 miles per KWh at maybe 12p/KWh. Given a gallon of petrol (4.5L) costs 4.5*£1.25 = £5.63, I get 5.63/0.12*4.5 ≈ up to 210mpg in equivalent running costs.

But a Renault Twizy (a 2-person EV with a 6.7KWh battery) will get 6 to 8 miles per KWh, equivalent to around 280 to 375mpg running costs.

Given that a typical day's travelling in the UK is only about 10 to 20 miles, call it 3KWh, that's only half a Twizy's battery. And considering the sheer number of charging points there will be, the average journey needed between charges will only be 5 to 10 miles, just 1.5KWh.
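As a sanity check, here's the mpg-equivalence arithmetic in Python (the prices are the rough ones quoted above, not forecasts):

```python
# Rough running-cost comparison: 12p/KWh electricity, £1.25/L petrol,
# and 4.5L per gallon, as quoted in the text.
PETROL_PER_GALLON = 4.5 * 1.25   # about £5.63
ELEC_PER_KWH = 0.12              # £

def mpg_equivalent(miles_per_kwh):
    # miles driven on one gallon's worth of electricity money
    return PETROL_PER_GALLON / ELEC_PER_KWH * miles_per_kwh

print(round(mpg_equivalent(4.5)))   # Zoe at 4.5 miles/KWh: ~211 "mpg"
print(round(mpg_equivalent(8)))     # Twizy upper bound: ~375 "mpg"
```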

On that basis, a future EV with a 5KWh battery will seem ample, even though right now all the talk is of 50KWh to 100KWh batteries.

So, EVs will go a long way to reduce congestion in themselves owing to the different driving model.

Friday 12 April 2019

Plottastic ZX80!

A guide to plotting pixels in Basic on a ZX80!


Introduction

Both the ZX80 and ZX81 have character-only displays of 8x8 pixel characters, and contain 16 graphic characters that can be used to plot fat 4x4-pixel 'pixels' on a two-by-two grid within each character:

These are called Battenberg graphics after the cake of the same name 😀

On a ZX81 it's easy to use these characters to plot (crude) graphics on the screen; the computer provides PLOT x,y and UNPLOT x,y commands for this purpose. But with half the ROM, it's much harder on a ZX80 - so I wondered, just how much harder is it to plot pixels? It turns out to be pretty formidable!

Challenges


  • The ZX80 doesn't have any PLOT or UNPLOT commands.
  • The screen on a ZX80 is like that on a ZX81: when you run a program, the screen first collapses to a minimum of 25 newline characters and expands as you display text. However, on a ZX80, unlike the ZX81, you can't print anywhere on the screen as there's no PRINT AT command; this means we'll have to POKE characters onto the screen.
  • The memory map on a ZX81 has the display immediately after the program, but on a ZX80 the display comes after the variables and the editing workspace, which means that it will move around just by creating a new variable, or possibly by performing input (which is a potential problem).
  • Ideally, to convert from pixel coordinates to Battenberg graphics you'd want to map the odd and even x and y coordinates to successive bit positions to index the character.


  • But, unlike the ZX81, the character set on a ZX80 doesn't have the characters in the right order to display Battenberg characters. Instead they contain gaps; the last 8 codes are generated from the first 8 but in reverse order; and some of the first 8 are taken from what ought to be the inverse characters!

The Solution

The solution really comes in a few parts. Firstly, the easiest way to map an ideal pixel value to its Battenberg character is to create a table in RAM by putting the characters into a REM statement (the ZX80 has no READ and DATA commands, so it's hard to get a table of data into an array, but the first REM statement in a program is at a fixed location). However, even this presents a problem, because only 8 of the 16 graphics characters can be typed. The easiest way to handle that is to write a program which converts a different representation of the characters into the correct ones.

So, on the ZX80, first type this:
After you run it, you'll get this:
Even this was tricky; I had to use character codes above 32, since symbols such as +, -, /, ", etc. get translated into keyword codes by the Basic interpreter. The above program illustrates an interesting feature of ZX80 Basic that differs from ZX81 and ZX Spectrum Basic: AND and OR are actually bitwise operations rather than logical operations. Thus P AND 15 keeps only the bottom 4 bits.

Once we have generated the symbols we can delete lines 10 to 30.

The next step is to actually write the code to generate pixels and plot them. Once we know how, it's actually a bit simpler. Firstly, we fill out the screen with spaces (or in my case, with '.'):
This gives us 20 full lines of characters. Because it would be difficult to plot a pixel by reading the screen, figuring out the already-plotted pixels and then incorporating the new pixel, I cache a single character pattern in the variable P and its location in the variable L. All my variables are single letters, to save space if you try to run it on a 1Kb ZX80. The idea is that if the new print location L changes, we reset P back to 0; otherwise we incorporate the new pixel.

Next we start the loop and calculations for plotting, in this case a parabola. We loop X=0 to 63 and calculate Y on each loop (it's a parabola that starts at the bottom of the screen):

Finally we perform the pixel plotting and loop round.

This involves saving the previous location in K so we can compare it later, then calculating the new location based on X and Y (since each character position holds 2 pixels in each direction, we need to divide X and Y by 2, and since there are 32 visible characters plus an invisible NEWLINE character on every screen line, we must multiply Y/2 by 33). Note: a quirk of ZX80 Basic means that multiplication happens before division, so Y/2*33 would calculate Y/66!

The pixel bit calculation in line 50 makes use of the ZX80's bitwise operators: we want to generate 1 or 2 depending on bit 0 of X (1+(X AND 1) does this), then multiply that by the effect of the Y coordinate, which is 1 on the top row of the cell and 4 on the bottom: 1+(Y AND 1)*3 does that. Hence this will generate the values 1, 2, 4 or 8 depending on bit 0 of X and Y.

We must POKE the location L plus an offset of 1 (because the first character of the display file is a NEWLINE), and we must also add the address of the display file (it turns out that these calculations don't make the display file move around). We poke it with the character code looked up from the REM statement, using the pixel value as an index. Finally we loop round X.
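For anyone without a ZX80 to hand, the same coordinate arithmetic can be sketched in Python. A dictionary stands in for the display file; on the real machine the 4-bit pattern indexes the Battenberg table held in the REM statement.

```python
screen_bits = {}   # offset within display -> 2x2 pixel pattern

def plot(x, y):
    # 33 bytes per screen row: 32 characters plus the NEWLINE
    loc = (x // 2) + (y // 2) * 33
    # 1, 2, 4 or 8 depending on bit 0 of X and Y, as in line 50
    bit = (1 + (x & 1)) * (1 + (y & 1) * 3)
    screen_bits[loc] = screen_bits.get(loc, 0) | bit
    # real machine: POKE display_file + 1 + loc, table[screen_bits[loc]]

# set all four quarter-pixels of the top-left character cell:
for px, py in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    plot(px, py)
print(screen_bits[0])   # 15: the fully-set Battenberg character
```

Unlike the Basic version's one-character cache in P and L, the dictionary here remembers every cell, which also sidesteps the problem of plotting over the same part of the screen twice.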

This code generates this graph:
It looks reasonable even though it's generated with integer arithmetic. It's the first pixel plotted graph I've ever seen on a ZX80!

More Examples

My original reason for doing this was to see if I could generate sine waves using a simple method I once found for a Jupiter Ace. Here's the program and the result:
It looks fairly convincing, especially as it's all done with integer arithmetic. Because the code generates both the sine and cosine values, it's easy to turn this into a circle-drawing program, which produces the following:

It looks a bit odd; that's because the algorithm doesn't quite generate sine and cosine curves. What it's really doing is computing a rotation matrix, but only one of the terms is updated for sine and cosine each time. Hence the circle looks like an oval with a slight eccentricity.
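What the text describes is essentially Minsky's circle algorithm. Here's a sketch in Python using integer shifts; the step count and shift value are my choices, not the original listing's:

```python
def near_circle(steps=64, radius=64, shift=4):
    # One term at a time: x is updated from y, then y from the *new* x.
    # That asymmetry is what keeps the orbit stable in integer maths,
    # and also what makes it a slightly eccentric oval, not a circle.
    x, y = radius, 0
    points = []
    for _ in range(steps):
        x -= y >> shift
        y += x >> shift
        points.append((x, y))
    return points

pts = near_circle()
print(max(p[1] for p in pts))   # the orbit climbs to roughly the radius
```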

My graph drawing algorithm has one very serious limitation. Because it doesn't read the screen in order to compute a new pixel, if the graph goes over the same part of the screen twice, the second pass will muck up what was there before. Doing the correct calculations is possible using some table lookups and bitwise operations, though it would slow down the graph generation. I didn't bother, because I only wanted to generate simple graphs.

Conclusion

The ZX80 and ZX81 have very similar hardware, but a number of design decisions in the ZX80's firmware make plotting pixels much harder than you might expect. With a lot of effort it is possible to generate some simple graphs 😀