How do you split the check when you dine out with friends? Equal shares, to make it easy on the server, or broken down by individual, to ensure supreme fairness? In either case, you probably don’t give a second thought to how you *represent* the final number. Unless you’re an extraterrestrial visitor (unlikely) or just exceptionally pedantic (guilty!), decimal numerals are your go-to choice for communicating quantity. The concept of counting ones, tens, hundreds, and thousands is second nature from childhood—even for complex calculations like splitting the check or computing a good-for-karma-but-not-extravagant tip.

Why is this important to game math? Because decimal counting and computation are so fundamental to our basic understanding of math, it’s easy to forget that computers represent numbers differently. High-level programming languages usually insulate us from the details, but it pays to understand what’s happening under the covers, because knowing how the hardware works helps us to write better code.

Given that we have two hands with five fingers each, it’s probably no coincidence that we tend to count by 10, and groups of 10, and groups of groups of 10. This *base-10* approach means that only 10 unique numerals are needed to represent any integer quantity. In our positional number system, we simply add extra digits to count larger and larger groups, when necessary.

A number’s base, or *radix*, is the maximum count of items that may be represented by each digit. It can be indicated explicitly by a subscript, as in 3475_{10}, or left off to signify a default radix of 10. The radix is also a scale factor: as numbers get bigger and more digits are added, each new digit counts groups that are *radix* times larger than those counted by the previous digit. In base-10, this is intuitive: we have ones, tens, hundreds, thousands, etc. For other bases, though, the “tens place” or “hundreds place” of a number would be replaced with the “twos place” or “twenty-fives place,” for example.

You can see how a quantity may be expressed an infinite number of ways, since we can choose any radix we want. The value 3475_{10}, for example, looks very different when rewritten in other radices: 3475_{10} = 6623_{8} = 102400_{5} = 110110010011_{2}.

Conveniently, the mechanics of basic arithmetic are the same regardless of radix. This means you follow the same steps to add, subtract, multiply, or divide base-6 numbers, for example, as you would for base-10 numbers. Sometimes the choice of radix makes a particular operation simpler, such as multiplying a decimal number by 10 (just add a 0 to the end). In those situations, that operation will be simplified in a similar way for *every* radix.

Practically, this means that just as you can tell at a glance when a (base-10) number is divisible by 10, or 100, or 1000, you can tell when numbers in other bases are divisible by their radix or one of its powers.

There are other useful shortcuts—like dropping the last digit to divide by the radix and throw away the remainder—as well as esoteric ones. For example, replacing low digits with 0 effectively rounds down to the nearest multiple of the radix raised to some power (useful for snapping coordinates to a grid). Conversely, replacing high digits with 0 computes the modulus over the radix or one of its powers (useful for clamping values to a range without requiring a bounds check). You can even estimate logarithms simply by counting digits.

The point is, how we represent numbers has implications for how we manipulate them. We are conditioned to think in base-10 for physio-historical reasons, but the choice of radix is actually rather arbitrary. Counting by 10 may be *handy* for us (see what I did there), but it’s not particularly convenient for computers, which do their counting with bits instead of fingers. Each bit, being either OFF or ON, is equivalent to a digit of 0 or 1, so base-2 (binary) arithmetic is perfectly suited to representing integer values for a computer. Storing the number 13, for example, requires four bits: 1101_{2}.

Those four bits describe the minimum amount of information necessary to convey the value 13. (Note that eight-bit bytes are typically the smallest unit of storage directly accessible by a computer, though, so 13 will be stored in memory as 00001101_{2}. Your hardware may be optimized to deal with even longer bit sequences.)

Knowing that computers work in binary, we begin to see the benefit of certain “magic” numbers, or at least why they’re so common in computer code.

Of course, all powers of 2 are automatically interesting by virtue of the fact that they have only a single bit set. As mentioned earlier, this simplifies range checking when they are used as upper bounds. For example, suppose we want to tile an image to fill the screen (with no stretching or clipping). If the image happens to be the same size as the screen, we can simply visit its pixels in order, copying each one to the screen location having the same (x, y) coordinates. Usually, though, the tile image is much smaller than the screen—we want to repeat a patterned background, for example—so we have to transfer the image data more than once, adjusting the destination location as we go.

We could accomplish this with pseudocode like:

```
for( int x = 0, u = 0; x < screen.width; x++, u++ )
{
    for( int y = 0, v = 0; y < screen.height; y++, v++ )
    {
        // Make sure we don't go outside the image boundary.
        if( u >= image.width )
            u = 0;
        if( v >= image.height )
            v = 0;

        // Copy the pixel value from the image to the screen.
        int color = image.getPixel( u, v );
        screen.setPixel( x, y, color );
    }
}
```

This approach keeps track of the image source (u, v) and screen destination (x, y) coordinates, incrementing both in tandem and adjusting (u, v) as necessary to make sure it always refers to a valid source pixel.

If the image dimensions happen to be powers of 2, however, we can eliminate the bounds checking. We can take advantage of the fact that the coordinates for every image pixel will occupy the bits below the dimension value, and *only* those bits—the higher bits will always be 0.

That means that for any screen coordinate, clearing the upper bits to 0 will guarantee the resulting value is a valid image coordinate. The bitwise *AND* operator is tailor-made to help us here. It compares two operands and preserves the bits that are set in both, effectively allowing us to “mask off” bits we don’t care about.

Adjusting our pseudocode, two inexpensive Boolean operations replace two comparisons and potential branches:

```
// Assumes image dimensions are powers of 2.
int widthMask = image.width - 1;
int heightMask = image.height - 1;
for( int x = 0; x < screen.width; x++ )
{
    for( int y = 0; y < screen.height; y++ )
    {
        // No boundary checking necessary.
        int color = image.getPixel( x & widthMask, y & heightMask );
        screen.setPixel( x, y, color );
    }
}
```

The *AND* operator is great for clearing bits. The *OR* operator sets them. Other functions invert bits, exchange them, and move them left or right. These operators are *fast*, in terms of CPU cycles, and can be combined in lots of interesting ways to achieve some really useful results. Here are a few examples.

**Using OR to combine bits**

The bitwise *OR* operator (often denoted by a vertical bar, |) preserves set bits from both operands. If a bit is set in either number, it will appear in the result.

Though similar to simple addition (they are actually equivalent if the operands don’t have any set bits in common), the *OR* function allows individual bits to be set without side effects: setting a bit that is already set changes nothing, whereas adding to it would carry into neighboring bits.

```
// Ensure that x is odd. We could do this with simple addition,
// but would have to make sure x wasn’t already odd.
x |= 1;
```

Together with *AND*, the OR operator is one of the most commonly used bitwise functions. These two operators turn any integer into cheap and efficient storage for multiple boolean flags.

```
// Define some power-of-2 flags (numbers with a single set bit).
int VISIBLE_FLAG = 1;
int MOVING_FLAG = 2;
int DAMAGED_FLAG = 4;
int TURBO_FLAG = 8;
. . .
int status = VISIBLE_FLAG | TURBO_FLAG;
// Check the ship’s status and disable its engine if damaged.
if( status & DAMAGED_FLAG )
status &= ~MOVING_FLAG;
```

**Using XOR to combine only bits that aren’t shared**

The bitwise XOR operator (often denoted by a caret, ^) preserves set bits from *either* operand, but not both. If a bit is set in only one of the operands, it will appear in the result.

This seemingly obscure operation is actually extremely powerful. It finds lots of use when dealing with signed numbers, such as when comparing the sign of two values, or finding the maximum or minimum without branching:

```
// TRUE if x and y are both positive or both negative.
bool haveSameSign = 0 <= (x ^ y);
. . .
// Find the max/min values of x and y, no branches.
int max = x ^ ((x ^ y) & -(x < y));
int min = y ^ ((x ^ y) & -(x < y));
```

It can also be used to compute the absolute value of an integer, or even exchange two values without using an intermediary temporary variable:

```
// Swap x and y in-place (no temp variable).
x ^= y;
y ^= x;
x ^= y;
```

**Shifting bits left or right**

The bitwise shift operators (often denoted with double angle brackets, << and >>) move all bits to the left or right by the amount requested. Shifting signed numbers preserves the sign, although special unsigned versions of the operator are also sometimes available. Bits shifted past either end of the integer are lost.

As mentioned earlier, shifting a number left is equivalent to appending a 0—essentially multiplying by the radix. Shifting left by 5, for example, is like multiplying by 2^{5} = 32, so 79 << 5 = 2528 (verify by noting that, indeed, 79 × 32 = 2528). Similarly, shifting right is the same as dividing by the radix, so -28928 >> 6 gives -28928 ÷ 2^{6} = -452 (the sign is preserved).

Technically, using shift operators to scale values by 2 or its powers can be faster than integer multiplication by the same amount—though modern compilers routinely make this substitution for you, so most applications probably won’t see a significant speedup in practice.

Even though it’s easy to ignore how computers actually store and manipulate numbers, understanding the basics gives us an advantage when creating mathematical programs and games. The hardware representation of integers is fundamental to how computers function, and is the starting point for more exotic number forms like floating- and fixed-point.

We’ll explore those number types next as we consider speed, precision, and the trade-offs we’re willing to make for different kinds of games.

-peter (@peterdotgames)