Nested IF Statements & Braces

Edge Testing

Data Type Conversion in Java

Color by the Numbers

Packed Numbers

Imprecise Calculation

Fixed-Point Math

Faster Games

Learn Programming in Java -- Site Map for this whole tutorial

The Itty Bitty GameEngine -- Overview

Your Own Java Game -- Step by step tutorial to build a simple "Pong" game

Class GameWgt -- The visual components of a GameEngine game

Overriding GameEvent -- The programmatic components of a GameEngine game

So why worry about it at all? Just this: finding a missing or extra
parenthesis or brace in a large program is one of the hardest things to
do (without help from the compiler, which is not forthcoming in **BlueJ**).
Most compilers won't even give you an accurate error message: somewhere,
far, far away from the actual error, the compiler figures out that something
is utterly *wrong* -- and says the first thing it can think of. Some
development environments (IDEs) at least let you click on a parenthesis
or brace and show you what they think is its matching pair. **BlueJ**
sometimes doesn't do that, but it draws colored boxes around the blocks
enclosed by braces, so if a colored box goes past where it should, that's
a clue.

Most good programmers indent the contents of a brace-enclosed block, which gives you a visual hint of where the braces should start and end. The compiler could check that, but it doesn't. The C tradition is to put the starting and ending braces on lines by themselves, but I prefer to use that valuable screen real estate for actual code.

Anyway, braces are not required around a single statement that is the
body of a loop or under the control of an IF or ELSE,
but many teachers tell you to put them in anyway. The one place where you
*should*
put the unnecessary braces in is for nested IF statements,
if there's an ELSE involved. Technically, the ELSE
always belongs to the nearest unclosed IF, so if you
have some code like this:

```java
if (firstly)
    if (secondly) DoSomething();
    else DoMore();
```

then the ELSE pairs with `if (secondly)`, whether that is what you meant or not.
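Here is a sketch of why those braces matter. The names (`firstly`, `secondly`, `DoSomething`, `DoMore`) follow the fragment above; the rest is mine, written so you can see which branch actually ran:

```java
public class DanglingElse {
    static String result;
    static void DoSomething() { result = "something"; }
    static void DoMore()      { result = "more"; }

    // With braces, the ELSE clearly belongs to the OUTER if:
    static String withBraces(boolean firstly, boolean secondly) {
        result = "nothing";
        if (firstly) {
            if (secondly) DoSomething();
        } else DoMore();
        return result;
    }

    // Without braces, the ELSE silently pairs with the INNER if:
    static String withoutBraces(boolean firstly, boolean secondly) {
        result = "nothing";
        if (firstly)
            if (secondly) DoSomething();
            else DoMore();
        return result;
    }

    public static void main(String[] args) {
        System.out.println(withBraces(false, true));    // "more": the outer else runs
        System.out.println(withoutBraces(false, true)); // "nothing": the else is never reached
    }
}
```

Same source text except for the braces, two different programs.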

However, you might need to consider how it decided that there is a collision,
in case you need to see if it will be detected again (on the next frame),
or if you want to move things far enough apart to prevent that. Basically
the top-left corner of one widget is compared to the bottom-right corner
of the other widget for overlap. If there is separation, either bottom
to top or right to left, or just touching but not overlapped, then there
is no collision. Then the opposite two corners are similarly compared.
The top-left corner of a widget is obtained by the `GetPosn()` call.
The bottom-right corner is the sum of the top-left plus the widget's containing
rectangle dimensions from `GetSize()`. Overlap is determined by
subtracting (negate then add) the bottom-right from the top-left and seeing
if either half is negative: if both halves are non-negative, there is no
overlap. You can do this kind of testing yourself, if the GameEngine tests
don't fit your requirements. Look at the `Collider()` method in
the `JavaGame` class to see how I did it.
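The corner comparison just described can be sketched in plain Java. The ints here stand in for what `GetPosn()` and `GetSize()` would supply in a real game, so this is only an illustration of the logic, not the actual `Collider()` code:

```java
public class OverlapTest {
    // True if the two rectangles overlap. Just touching does not count,
    // matching the rule above: separation bottom-to-top or right-to-left,
    // or merely touching, means no collision.
    static boolean overlaps(int top1, int left1, int height1, int width1,
                            int top2, int left2, int height2, int width2) {
        // bottom-right = top-left + containing rectangle dimensions
        int bottom1 = top1 + height1, right1 = left1 + width1;
        int bottom2 = top2 + height2, right2 = left2 + width2;
        // Compare top-left of one to bottom-right of the other...
        if (bottom1 <= top2 || right1 <= left2) return false;
        // ...then the opposite two corners are similarly compared:
        if (bottom2 <= top1 || right2 <= left1) return false;
        return true;
    }

    public static void main(String[] args) {
        System.out.println(overlaps(0,0,10,10, 5,5,10,10));  // true: overlapped
        System.out.println(overlaps(0,0,10,10, 10,0,10,10)); // false: just touching
    }
}
```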

Say you write a couple of lines in Java like this:

```java
boolean A = false, B = !A; // int A = 3, B = A+1;
if (A=B) System.print("Oops!");
```

what do you think will happen? Try it! Did you expect it to print "Oops!"?

The point is, you should write code as if your compiler forces strong typing on you, because you will make fewer mistakes.

The topic here is data conversion. In Java, a char is just another number. Indeed it is, in the underlying hardware, but there everything is just numbers. The whole point of a strongly typed high-level language is to distinguish between characters and numbers, so the compiler can help us catch mistakes in usage. So I have an explicit type-conversion method for changing strings of characters into numbers. I also have another one for changing numbers into strings, but Java is inconsistent about allowing that, so I got lazy. Consider this program fragment:

```java
String abc = "123", def;
int xyz = 123, uvw;
uvw = xyz + 4;
def = abc + 4;
```

Now you really should not think of concatenation as "adding" because it's fundamentally different. If the last two lines above are separated from the declarations, you might be tempted to think that whatever number is in variable `abc` gets 4 added to it, but concatenation makes `def` the string "1234" instead.

```java
System.print("The value of 2+3=" + 2+3);
```

Would you believe a printout of "The value of 2+3=23"? The "+" operators are evaluated left to right, so each number is converted to a string and concatenated in turn.

```java
def = 4 + abc;
```

except now the compiler complains. You can't even say

```java
def = 4;
```

Go figure. Always do your conversions explicitly in Java. The compiler won't always help you remember, but you might save yourself some grief. But, like I said, I got lazy in this program. Or rather, my compiler always converts any primitive type to String when asked to (because it is so useful, as the Java people well know).

Pure red is `0xFF0000`, pure green is `0x00FF00`, and
pure blue is `0x0000FF`. Add these up for different combinations.
Some color picker tools let you choose or see the RGB
numerical value for a color, so you could pick off those values when you
want those colors. GameEngine's color chooser only lets you choose color
by the number. Perhaps a future version might give you sliders or a color
wheel, but there are more pressing improvements before that even gets considered.
"Real programmers code in hex." I normally don't argue that way, but this
is an exception.
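Since each primary color occupies its own byte of the number, mixtures really are just sums. A quick sketch:

```java
public class Colors {
    static final int RED   = 0xFF0000;
    static final int GREEN = 0x00FF00;
    static final int BLUE  = 0x0000FF;
    // Add the primaries up for different combinations:
    static final int YELLOW = RED + GREEN;        // 0xFFFF00
    static final int WHITE  = RED + GREEN + BLUE; // 0xFFFFFF

    public static void main(String[] args) {
        // %06X prints the value in hex, the way "real programmers" read colors
        System.out.printf("yellow=%06X white=%06X%n", YELLOW, WHITE);
    }
}
```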

Oh, did I mention? GameEngine does not use the graphics processor (GPU) on your computer; everything is pure Java, so you can read it as if you wrote it yourself. But pure C or Java runs about ten times slower than the GPU, so we need to be careful (see "Faster Games" below).

Often the numbers we need to use in a small environment never get bigger than the size of the environment. GameEngine can have maybe a thousand widgets on screen before it gets too slow to play, and the screen on the largest computer you would ever play this game on is maybe 4K pixels wide, so we are talking about numbers that never get bigger than a dozen bits. I program everything in 32 bits (it's slightly faster than 64), even though nobody makes 32-bit computers any more; they are all 64. Java has descriptors for different sizes of numbers, but you can't mix them in large arrays; you need to use classes and objects, and those take time, a lot more time than just packing two or more smaller numbers into 32 bits. The hardware has instructions for packing and unpacking power-of-two-sized chunks into 32- and 64-bit numbers (and any compiler worth its salt knows how to use that hardware), so the packing is almost free.

But not quite. So when I pack a vertical + horizontal pair of coordinates into a single integer, I might want to compare the location of the ball to the corner of the screen without unpacking the numbers for a tiny increase in speed. Raster-based graphics were designed to be accessed in TV scan order, which is the same as (western) text order in books, doing the whole top line left to right, then advancing to the next line down, which is called "row major" because the whole row is stored in memory before advancing to the next row (see "Row Major Ordering"). Again, (western) number systems are "Big Endian" (the big end of the number is encountered first when scanned sequentially in normal reading), whereas math is normally computed "Little Endian" so the carries propagate correctly. People who grew up in Big Endian hardware do their row-major pixel array dimensions with the row (vertical coordinate) in the most-significant end of the integer, and the column in the lower half of the number. In Java this looks like:

```java
int pack(int row, int col) {return (row<<16)+col;}
```

The low-half part only works for positive coordinates. When you have values that could be negative -- for example, the ball rolled off the top of the screen -- then you need to consider how negative numbers are stored (or else just use the built-in `SignExtend` utility described below).

```java
int getrow(int coord) {return coord>>16;}
int getcol(int coord) {return coord&0xFFFF;}
```
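A quick round-trip check of `pack`, `getrow`, and `getcol` (positive coordinates only, as noted):

```java
public class PackDemo {
    static int pack(int row, int col) { return (row << 16) + col; }
    static int getrow(int coord) { return coord >> 16; }
    static int getcol(int coord) { return coord & 0xFFFF; }

    public static void main(String[] args) {
        int coord = pack(3, 400);          // row 3, column 400 in one int
        System.out.println(getrow(coord)); // 3
        System.out.println(getcol(coord)); // 400
    }
}
```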

The point is, the hardware choices were made 50 years ago when computer
chips had thousands, not billions, of transistors on a single chip. So
what do two's complement signed numbers look like? Let's count up from
0 to 15 in binary, then (reading the table backwards), down from -1 to
-16 (which is the same as zero):

counting up | binary | counting down
:---: | :---: | :---:
0 | 0000 | -16
1 | 0001 | -15
2 | 0010 | -14
3 | 0011 | -13
4 | 0100 | -12
5 | 0101 | -11
6 | 0110 | -10
7 | 0111 | -9
8 | 1000 | -8
9 | 1001 | -7
10 | 1010 | -6
11 | 1011 | -5
12 | 1100 | -4
13 | 1101 | -3
14 | 1110 | -2
15 | 1111 | -1

There are some interesting observations. If you ignore the signs on the right column, the numbers in that column plus the number in the left column of the same row always add up to 16. That's because this is a 4-bit table. If there were only one bit, then the numbers would add up to two, which is why it's called "two's complement".

Then you can see that on the middle line, +8 is the same binary number as -8. That is true of any size number: the middle value is a one on the left end and otherwise all zeroes. We define that left bit to be the sign bit, so (in this 4-bit system) the only positive numbers are from 0 to 7; the others are all negatives, and you cannot have a more negative number than -8. Well, you can, but it's no longer recognizable as negative. When you count up past +7, it flips over to negative. If you stacked two copies of this table, one above the other, you'd see that -1 comes just before zero, which is as it should be.

Anyway, if you want to pick out the low half of a packed number, GameEngine has a utility that makes a 32-bit signed number from it. Some Java compilers are smart enough to back-substitute this into the calling code, so it's almost as fast as the unsigned version:

```java
int SignExtend(int coord) {return (coord&0x7FFF)-(coord&0x8000);}
```

One more observation: to make the negative of
any number, you complement (flip 1->0 and 0->1) all the bits, then add
+1. So for example, the negative of +3 [0011] => [1100]+1 = [1101] = -3
(that's the 13 row of the table). It even works for zero: [0000] => [1111]+1
= [10000] = [0000] (after we discard the excess carry out).
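You can check the flip-then-add-one rule in Java itself, where `~` is the bitwise complement operator:

```java
public class Negation {
    // Two's complement negation: flip all the bits, then add 1
    static int negate(int x) { return ~x + 1; }

    public static void main(String[] args) {
        System.out.println(negate(3)); // -3
        System.out.println(negate(0)); // 0: the excess carry out is discarded
    }
}
```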

Now let's try packing two (signed) 4-bit numbers into a single 8-bit integer. We try a few values to see what happens:

[+3,+6] becomes [0011,0110] = [00110110] = 54

[+3,-6] becomes [0011,1010] = [00111010] = 58

[-3,+6] becomes [1101,0110] = [11010110] = 214 = -42

[-3,-6] becomes [1101,1010] = [11011010] = 218 = -38

There are several interesting facts to learn from this exercise. First, the sign of the upper half is the sign of the whole, so to do a sign check on that upper half you can just test the sign of the whole packed number. You can also test the signs of both halves at once by masking out only the two sign bits: pack together two copies of the most negative number in the half-range (in our case -8), giving [10001000] = 136, then logical-AND that mask with any packed number. If the result is zero, both halves are non-negative; if it is 136 (the mask value), both halves are negative.
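With 16-bit halves, the same trick uses two copies of 0x8000 packed together, 0x80008000. A minimal sketch (the method names are mine):

```java
public class SignMask {
    static final int SIGNS = 0x80008000; // the two half sign bits, packed together

    // true when BOTH 16-bit halves are non-negative
    static boolean bothNonNegative(int packed) { return (packed & SIGNS) == 0; }

    // true when BOTH 16-bit halves are negative
    static boolean bothNegative(int packed) { return (packed & SIGNS) == SIGNS; }

    public static void main(String[] args) {
        int pp = (3 << 16) | 6;               // [+3,+6]
        int nn = (-3 << 16) | (-6 & 0xFFFF);  // [-3,-6]
        System.out.println(bothNonNegative(pp)); // true
        System.out.println(bothNegative(nn));    // true
    }
}
```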

To test only the lower half for negative just pick out the lower sign
bit, which for 16-bit halves would be `(num32&0x8000)==0` for
positive or `!=0` for negative. It's less intuitive than using `SignExtend`,
but substantially faster (especially if your compiler optimizes badly).

The surprising fact here is that packing the negatives of two 4-bit numbers does not create the negative of the 8-bit number that the two positives formed; it is off by 16 (one unit in the least-significant bit of the left half). You can think of it as happening because a negative 8-bit number like -6 has ones in all four bits of its left half (-1), which must be accounted for, but that doesn't help much with understanding it. It just is what it is.

Anyway, if you add a packed number to the packed negatives of its parts,
like `[+3,+6]`+`[-3,-6]` = 54+218 = 272 (which wraps to +16 in eight bits), you don't
get zero. For the math to work correctly you must add the two halves separately,
then put them back together. For a compare (subtraction) to work correctly,
accurately all the way across, you must subtract the two halves separately,
then put them back together. You can safely do full-word adds and subtracts
when you know that the lower-half result has the same sign as the original.
For example, if you subtract the ball size (known to be two small positive
numbers) from the game board size (two large positive numbers), the result
is slightly reduced but there is no sign change, all in a single 32-bit operation.

You have Java code for packing and unpacking these two numbers on a
16+16 basis (see "`pack`" above), which is simple
enough to do in-line. GameEngine has a couple utility functions, AddPair
and PairNeg, for adding the 16-bit halves of two numbers together then
repacking the result so you get the correct sums, and for taking the negative
of the two 16-bit halves of a number then repacking the result, which you
can use to take the difference of two packed values. For equality testing,
of course you can just compare the packed values: if both halves are equal,
then the packed values will also be equal. Testing one for greater than
the other doesn't make sense, because one half might be greater one way,
while the other half might be greater the other way.
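Based on the description above, AddPair and PairNeg might look something like this. This is my sketch of the technique, not GameEngine's actual code:

```java
public class Pairs {
    // Add the two 16-bit halves separately, then repack, so a carry or
    // borrow out of the low half cannot leak into the high half.
    static int AddPair(int a, int b) {
        return (((a >> 16) + (b >> 16)) << 16) | ((a + b) & 0xFFFF);
    }

    // Negate each 16-bit half separately, then repack. Combined with
    // AddPair, this gives you the difference of two packed values.
    static int PairNeg(int a) {
        return ((-(a >> 16)) << 16) | ((-a) & 0xFFFF);
    }

    public static void main(String[] args) {
        int p = (3 << 16) + 6; // [+3,+6]
        // Unlike the full-word add in the text, the halves cancel exactly:
        System.out.println(AddPair(p, PairNeg(p))); // 0
    }
}
```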

But sometimes a tiny error in the left half just isn't important, see
"Imprecise Calculation" next.

Anyway the ball position is an approximation, rounded to the nearest
pixel, sampled at 10 frames per second. It's all approximate. So now we
want to reverse the direction of the ball on the screen. It's all an approximation,
so 0.25% of a pixel more or less per frame will not even be visible. So
I look at the logic for finding the negative of a number,
which is to flip all the bits to their opposite, zeroes for ones and ones
for zeroes, then add +1. If I leave out the +1, there's no carry out of
the lower half into the upper half (crosstalk is worse than imprecision),
so it's off by 0.24%, big whoop-de-doo. Flipping any number of bits all
at the same time is a single hardware instruction that every computer
except the original PDP-8 minicomputer does in one cycle, and Java has
a bitwise operator "`^`" to call up this hardware instruction, so I
can do that for the upper or lower half, whichever direction I want to
reverse. If you wanted to do it exactly, you'd write something like this:

```java
if (velo<0) velo = (velo&0xFFFF)-(velo&-0x10000); // upper half
if (...) velo = (-velo)&0xFFFF|(velo&-0x10000);   // lower half
```

which would be four or five operations at the hardware level instead of one (the difference is probably insignificant).

Another way to do the upper half exactly is

```java
if (velo<0) velo = (velo^-0x10000)+0x10000; // upper half only
```

which inverts the top 16 bits as before, then adds +1 to those top 16 bits, which is two machine operations, but this does not work for the lower half, because the carry out of the +1 messes up the upper half. The point is, by trying different ways to do things, you can improve on performance and size, sometimes at the risk of insignificant degradation in accuracy. But it's much simpler (and insignificantly slower) to get the half you want and do the math naturally, so that's the way Pong now works.

Then there are fractional numbers which almost never can be exact, as
we can see in the next topic.

The velocity vector of sprites is the easiest to understand. Each 16-bit
half of the velocity integer is a fixed-point value with 8 bits of fraction
and 8 bits of sign+integer part. Another pair of 8-bit values is maintained
for each sprite, which represents the current fractional position (the
`GetPosn()`
method returns only the integer position of any widget on the screen).
To this fraction is added the whole 16-bit velocity part for vertical or
horizontal, then the integer part of the sum is shifted (right) 8 places
and added to the regular (integer) position, and the remaining 8-bit fractional
part is saved for the next frame. This isn't rocket science, but it does
require some careful thought. Fortunately, GameEngine does it for you.
It is inline code in the sprite widget, not a subroutine you can call for
your own fractional math, so if you want to do the same kind of thing in
your own game, you need to write your own code to do it. It's easier to
get the fixed-point value converted to floating-point (`double`,
all done in the method call), do the math, then convert it back (again
inside the method where you don't see it).
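The per-frame update just described can be sketched like this, with an 8.8 fixed-point velocity. The names and the `step` helper are mine, not GameEngine's inline sprite code:

```java
public class FixedPoint {
    // One frame of motion along one axis: add the whole 16-bit 8.8 velocity
    // to the saved 8-bit fraction; the integer part of the sum (shifted right
    // 8 places) moves the position, and the remaining 8-bit fractional part
    // is saved for the next frame. Returns {position, fraction}.
    static int[] step(int position, int fraction, int velocity) {
        int sum = fraction + velocity;
        return new int[] { position + (sum >> 8), sum & 0xFF };
    }

    public static void main(String[] args) {
        int pos = 100, frac = 0;
        int velocity = 0x0180; // 1.5 pixels per frame: 1 + 128/256
        for (int frame = 0; frame < 2; frame++) {
            int[] s = step(pos, frac, velocity);
            pos = s[0];
            frac = s[1];
        }
        System.out.println(pos); // 103: two frames of 1.5 pixels each
    }
}
```

Note how the first frame moves only 1 pixel and the leftover half-pixel is carried into the second frame, which then moves 2.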

Rev. 2020 July 13