Debugging Over the Years

Perfection

The objective of the games programmer is to get the most out of the computer by writing the fastest code, to achieve the best possible results. Naturally the code has to do what we want, but, and this may surprise you, we don't usually get it right first time. I used to say that program development is a process of repeatedly getting it wrong until finally we get it right. So we spend 99.999% of our time getting it wrong, and figuring out why. The computer demands 100% accuracy and ruthlessly punishes mistakes. The punishment can take a number of different forms. The computer can simply say "No." if we get the syntax of the language incorrect, or mis-spell a variable name: what we'd call a compilation error. If we do manage to get the source code to compile correctly and actually get to run the program, it might just put a graphic two pixels too far to the left, or it may self-destruct. We have to learn to spot the mistakes, and you never, ever run out of new ways to get it wrong.
 

Semi-Compiled

Computer languages are like milk: they can be fully compiled (or assembled), semi-compiled, or interpreted. Languages like BASIC can be syntax-checked as you type them in, then compiled and executed as the program runs. This is good for allowing programs to run on different platforms, but not especially fast.
 

Assembled

Assembler is very close to the native machine code of the computer, so it's completely platform-specific and as fast as you can get, but these days, as processors have become more devious, not terribly practical any more. All my 8 and 16-bit games were written in assembler. I did write a PC movie display routine in 8086, but it wasn't fun; I might as well explain why. The CPU reads the code, figures out what the instructions are, then does them. Modern CPUs pipeline the instructions and can be working on 2 consecutive instructions at once, maybe more. Thing is, some instructions take longer than others to decode and execute, and a short quick instruction following a long slow instruction may finish first. That's not very clever if the second instruction depends on the result of the first. We have to write the code in such a way that we avoid such consecutive dependencies. We could just put no-operation (NOP) instructions between all the others, if we could afford the time, which we couldn't back then.
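
To illustrate the principle in C rather than the original 8086 (a sketch of the idea, not the actual routine): in the first loop below every addition depends on the previous one, so a pipelined CPU has to wait; the second interleaves two independent chains that can overlap.

```c
#include <stddef.h>

/* Every addition depends on the one before it: a long dependency
   chain, so consecutive instructions cannot overlap. */
long sum_dependent(const long *data, size_t n)
{
    long total = 0;
    for (size_t i = 0; i < n; i++)
        total += data[i];
    return total;
}

/* Two independent chains: consecutive additions don't depend on
   each other, so the pipeline can work on both at once. */
long sum_interleaved(const long *data, size_t n)
{
    long a = 0, b = 0;
    size_t i;
    for (i = 0; i + 1 < n; i += 2) {
        a += data[i];       /* these two additions are */
        b += data[i + 1];   /* independent of each other */
    }
    if (i < n)
        a += data[i];       /* odd element, if any */
    return a + b;
}
```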
 

Full-fat

The other type of language we use is a fully compiled language, such as C or C++. COBOL was another such language. This means that we write our program, then run a compiler and linker to turn the program into machine code, or, more likely, have it point out a bunch of syntax errors. These days the editor can also do some syntax checking as we go along, which does help to reduce typing errors and wrong names. Way back in the day we had to run the compiler on our COBOL programs and then wait for our printout of the compilation errors to arrive on a trolley, hourly. More on the trolley later.
 
There are many other types of language too, including the web-based markup languages that are usually sent from the interweb to your computer, then executed on your computer to display web-pages. Many languages are write-only: once you've written the program, you can't tell what it does or how it does it.
 

COBOL Days

My programming history goes back to 1979 when I was working on a mainframe. This computer occupied an entire room, the disk drives were as big as washing machines, and data was stored on massive tapes as well as disks. The computer had 8MB of RAM. Relational databases were still being designed, and we had to share 6 terminals between 40 of us. There was no computer on my desk, just a pencil and a coding pad. We mostly wrote the entire program on coding sheets and handed it over to the typists to enter. Oh, and we still had punch cards, so every line of the program was then "printed" onto a card by knocking holes in it. These could be fed back in and stored on disk. We'd then book a session on a terminal, run the COBOL compiler on the program, and then wait for the print-out to arrive with a listing of the program and all of the compiler errors. Since our fast and friendly typists were not programmers, they weren't going to syntax-check our programs, so we would have a lot of errors first time. We had to be working on 3 or 4 programs at a time to make this in any way time-efficient.
 
Once we'd fixed all the compilation errors, given that this might take a few passes and therefore a couple of days, we could then take a stab at running the program. Again, the program was never going to work first time, and usually it would "crash", i.e. stop at a particular line of code because we were doing something ridiculous. A common one was trying to do maths on a packed decimal variable that we had forgotten to initialise.
 
When the program crashed, we then had to wait for the hourly print trolley to deliver a printout of the entire contents of our program's memory, in hexadecimal and ASCII, along with the address of where it had got to. Later they optimised that to reduce the printout size. It only took a few seconds to work out which line of the program had failed and why; then we'd lob half a mile of fan-fold paper in the bin, make the correction in a terminal session, and repeat.
 

Tracing

Whilst we didn't have a real-time debugger (more on those later), we did have a couple of handy features in the compiler. We could use the command "READY TRACE", and the program would print the names of all the routines it was passing through. If you had a lot of loops, that could produce a mountain of paper again, so we might not choose to switch that feature on at the beginning of the program. Your only other tool, really, was to print out any useful variables and other information as the program was running. Since we had a lot of time sitting at our desks, we would spend a lot of time checking the code over by eye. COBOL is a bit more readable than C, so it's not so bad. It really hones one's observation and logic skills. Bear in mind that we're not writing graphical programs here; we're number-crunching, updating files and producing reports.
 

Failure is not an Option

It's vital when writing a program not only to make sure you get the right results out of your routines, but also to make sure that you're not accidentally doing anything else. This means looking for "spare" lines of code that you forgot to take out. It's also vital to ensure that all of the error checking is tested. Sometimes that means simulating errors so that you can check that the program deals with error situations. Mostly that wouldn't happen in an 8-bit game, because we're totally in control: no 3rd party routines being called, and nothing that can produce an error. What can you do about an error anyway?
 

Real-time Debugging

In order to explain what a real-time debugger is, I'm now going to fly forwards in time to the days of 32 and 64-bit multi-tasking computers; you'll see why in a moment. As computers have become more complex, the need for better visibility and control has increased. It is no longer acceptable for a program to just stop and print out which line of code it had got to. We want to be able to examine the RAM, look at the variables, and possibly patch things up temporarily so we can continue the test. All this is possible due to CPU designs that allow a program to be stopped at any desired point. This can be done from another program, called a real-time debugger, that is monitoring your program. You can even monitor a program running on a different computer, more on that later.
 

Break Points

Since your program is segregated from the Operating System and the program that is monitoring it, as well as from all of the other programs running, it's not so likely that you'll lock the entire machine up if you make a mistake. Your program thinks it's the only program running, but it isn't. You can watch your program run in the editor, so you see your source code rather than the machine code that the computer is really executing, and you can place break points in the program: markers that will stop the program if it executes the line of code that you have marked. Typically the debugger changes your program, substituting a single instruction of its own for your code so that control passes to the debugger. It then has to remember what your original code was so it can execute the right code later. You can liberally spray these break point markers around to, for example, check that every route through a function has been tested. You just remove the break points once they're successfully tested. Sometimes you have to temporarily code additional tests so that breakpoints only occur when you want.
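
As a minimal sketch of that substitution trick, assuming a Linux/x86 host: the ptrace calls are the real API, but the wrapper names are mine and error handling is omitted for brevity.

```c
#include <sys/ptrace.h>
#include <sys/types.h>

/* Plant a software break point: remember the original instruction
   word, then overwrite its first byte with INT3 (0xCC), a
   single-byte instruction that traps into the debugger. */
long set_breakpoint(pid_t pid, long addr, long *saved)
{
    *saved = ptrace(PTRACE_PEEKTEXT, pid, (void *)addr, 0);
    long patched = (*saved & ~0xFFL) | 0xCC;
    return ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)patched);
}

/* Remove it again: put the remembered code back so the program
   executes the right instructions later. */
void clear_breakpoint(pid_t pid, long addr, long saved)
{
    ptrace(PTRACE_POKETEXT, pid, (void *)addr, (void *)saved);
}
```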
 
Real-time debuggers are integrated with the code editors and the compiler output so you can just type in variable names and it will show what type they are and what the value is. You can look at areas of memory to make sure your data hasn't been corrupted, and you can see all the routine and function names in the code. You wonder how you ever managed without.
 

Back to the Amiga

Wind back, then, to the Amiga days. The 68000 CPU was indeed equipped with instructions to allow debuggers to intercept the code and see what's going on. Since we were writing games and had taken over the screen, formatted it how we wanted and were controlling the colours, it would not realistically be possible for a 3rd party program to run on the Amiga and write to the Amiga screen at the same time. We were therefore compiling our programs on a PC, and then passing the program and the graphics through a connecting cable (can't remember if it was serial or parallel) to an expansion box plugged into the side of the Amiga. This would allow us to see inside the Amiga without the risk of being affected by anything that went wrong in the Amiga.
 
On the Amiga we were writing in 68000 assembler rather than a higher-level language. There weren't any alternatives at the time, and we weren't looking for any. The processor was running at 8MHz (now the clock speeds on PCs are 2,500MHz, and multi-core!). Writing in assembler gives you the fastest results, and we had 16 32-bit registers, not unlike the mainframe architecture of old, or indeed the 32-bit PC architecture. At this time we couldn't patch the code; you just had to reassemble everything, but at least you could edit the program and look at it side-by-side with the debugger. The overall program size had to be less than 512K, as that was the minimum memory size we supported. One of the complications of the differently-sized Amigas was that we had to write a memory manager to be able to allocate chip or fast memory for disk caches, and also push the program code out to fast RAM if it was available. It was therefore important to monitor whether RAM was being allocated and released correctly over multiple games, otherwise the program would run out and crash. You even had to be careful to ensure that the allocated memory didn't fragment over time, or you might again end up not having large enough areas of contiguous free memory.
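
A minimal sketch in C of that kind of book-keeping (the pool names and counters here are my assumptions for illustration, not the original memory manager):

```c
#include <stdio.h>
#include <stdlib.h>

typedef enum { MEM_CHIP, MEM_FAST } MemPool;

static int blocks_allocated[2];   /* debug counters, one per pool */

void *game_alloc(MemPool pool, size_t size)
{
    void *p = malloc(size);       /* stands in for pool-specific alloc */
    if (p) blocks_allocated[pool]++;
    return p;
}

void game_free(MemPool pool, void *p)
{
    if (p) { free(p); blocks_allocated[pool]--; }
}

/* Called between games: if either counter hasn't returned to zero,
   something leaked, and repeated plays would eventually run out. */
void check_leaks(void)
{
    printf("chip blocks: %d, fast blocks: %d\n",
           blocks_allocated[MEM_CHIP], blocks_allocated[MEM_FAST]);
}
```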
 

Border Colours

A technique to monitor memory is to store extra helpful debugging variables in your program that count how many blocks of memory have been allocated, so that you can easily raise the alarm if something has gone wrong. We also used the rather crude method of setting the border colour to a different colour in different routines, so that if the game stopped and the colour was, say, light blue, we knew it had crashed in a plot routine. It also allowed us to see in real time which routines were taking longer, as the border would be a mass of different coloured bands, and since the game was synchronised to the display raster so it didn't run away too fast, the colours would be consistent from frame to frame. It was easy to see how much time we had left before the next frame, and you could easily see when it over-ran. Of course, there's no border colour on a PC. "There's no call for it, sir, you're the 83rd person I've told today!"
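
A minimal sketch of the trick, written as C against the C64's VIC-II border register rather than the original assembler (the routine names are hypothetical; the colour numbers are the C64's):

```c
#include <stdint.h>

#define BORDER (*(volatile uint8_t *)0xD020)  /* VIC-II border colour */

enum { BLACK = 0, YELLOW = 7, LIGHT_BLUE = 14 };

void plot_routine(void)
{
    BORDER = LIGHT_BLUE;  /* light blue band = time spent plotting */
    /* ... plot the graphics ... */
}

void game_logic(void)
{
    BORDER = YELLOW;      /* yellow band = time spent thinking */
    /* ... move everything ... */
}

void main_loop(void)
{
    for (;;) {
        game_logic();
        plot_routine();
        BORDER = BLACK;   /* black = spare time before the next frame */
        /* ... wait for the raster ... */
    }
}
```

The height of each coloured band on the border shows, frame by frame, how long that routine took, and the black band shows how much time is left.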
 

Atari STs

Before that, on the Atari ST, I seem to recall having two STs on the desk, and transferring a floppy disk from the compiling machine to the running one. Not as good as a transfer cable, but since we weren't debugging against the source code it was OK. We had HiSoft's Devpac, and that still gave us some real-time debugging on the ST, as long as our program didn't take the machine down. Dominic Robinson had written a gaming OS and pretty much debugged it by the time I got there, so it was a smooth enough transition, though I still hate the Object-Oriented design methodology, which was all done at run-time because we were still working in assembler. Yes, it was clever, but there aren't so many places you can use inheritance, and if you do, you'll never figure out where the code is. It also had a square grid of object types against methods, which was clearly growing out of control and taking more and more RAM.

 

Commodore 64s

Taking another step back in time, then, to the Commodore 64: for my last game I used a development kit running on the PC, again with the C64 on a leash. The program was compiled on the PC and fired into the C64, along with the graphics. That was 1989 and I am struggling to remember what it was like; I only used it for Intensity. It did allow me to write the largest 8-bit program I had ever written: there was 29K of code out of the 64K of RAM, which is about 15,000 lines of source code. It would have had 15K of graphics, and I used the last quarter of the RAM as the video area, and had all 64K of RAM available, no ROMs switched in. The rest of the space was level data and working buffers. I believe this development kit did allow me to see into the C64 memory, which helps, but there was no code tracing capability.
 
Morpheus and Alleykat were coded on a PC, and I used a 6502 cross-assembler to create the machine code; we had a proprietary parallel cable to send the code over to the C64. The PC was one of the first generation of PCs, with twin 5.25" floppy drives, a whole Megabyte of RAM, and an amber monochrome screen. Since we couldn't see the C64 memory from the PC and had no way of stopping the program, we had to make extensive use of the in-game Pause feature. When you develop your main game loop, you need to be able to hold the game up so you can see what's going on. Everyone thinks the Pause Game feature is for loo breaks and taking that all-important phone call, but actually it was a vital development aid.
 

ABMon

The only debugging tool we had was my own "ABMon". This was a routine that got called up every frame (regardless of whether the game was in Pause mode). It simply displayed a 4-digit hex address, and the 2-digit value of the byte at that address, on the top line of the screen. I needed numbers for a score display anyway, so I just tagged the letters A to F on the end. I hooked up a couple of function keys to allow me to alter the first 4 digits up or down so I could dial in any address.
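
A minimal sketch of the idea in C (the original was 6502 assembler; here a printf stands in for the score-line display, and the names are hypothetical; the poke routine is the later enhancement described below):

```c
#include <stdint.h>
#include <stdio.h>

static uint16_t mon_addr;   /* the address being watched */

/* Called once per frame, Paused or not: show a 4-digit hex address
   and the 2-digit hex value of the byte currently stored there. */
void abmon_frame(const uint8_t *ram)
{
    printf("\r%04X %02X", mon_addr, ram[mon_addr]);
    fflush(stdout);
}

/* Function keys nudge the address up or down, so any location can
   be dialled in while the game is running. */
void abmon_key(int delta)
{
    mon_addr = (uint16_t)(mon_addr + delta);
}

/* Poke a new value into the watched byte, to fix things mid-test. */
void abmon_poke(uint8_t *ram, uint8_t value)
{
    ram[mon_addr] = value;
}
```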
 

Zero Page

For space and speed efficiency on a 6502 chip you want all your variables in the first 256 bytes of memory, what's called the Zero Page. I would write down all the variables I had assigned to this area and keep that paper close by. All the variable addresses thus started with 00 and were close together. Bear in mind that space is at a premium, so there's no room for nice variable names to be fired down into the C64. Using this feature I could watch any variable in the game while the game was running or paused. Most variables were only single-byte values. I think Steve did a version on the Spectrum that showed a number of bytes.
 
The first version of ABMon was actually on the Dragon 32, when I was converting Steve's Spectrum Seiddab Trilogy. It pained me that I couldn't see what was going on in there. I realised after writing the first version that you could also use it to alter the value of the byte at a particular location. That way, if you found an unexpected value, you could fix it and carry on testing. You had to be careful, because mostly you needed to be paused to change values, or the program would just set them back. And don't point ABMon at the machine code and play with the values!
 

Write It Down

At this time I also kept an A4 pad handy to note down bugs. I was fixing them in batches, more so in the earlier C64 games up to and including Uridium. This was because we hadn't got the PCs yet, and I had two C64s on the desk. One was for running the C64 Macro Assembler, using the 1541 "brick" floppy drive. Assemblies took 30 minutes, so I couldn't justify fixing one bug at a time and re-assembling. I'd wait until I had a sheet full of bugs, or I couldn't find any more mistakes. While it was assembling I'd use the second C64 for working on sprite or character graphics, or game maps. We used SpriteMagic and Ultrafont, which I'd bought from the local computer shop, back when we had such fascinating places.
 
Just to momentarily fly back to the days of the Dragon 32: I bought a multi-pass assembler program from the same local computer shop, and had that running on the Dragon 32 disk drive, whose disk operating system was, if memory serves, called DragonDOS.
The assembly time was pretty quick, and I coded in multiple blocks of memory to keep the code modularised, and also so I could run the assembler from different locations to assemble all the blocks that had to be loaded throughout the machine. The blocks had little jump tables at the top to get to the lower functions, a bit like DLLs today. It's not totally efficient, but it's not bad, and it worked pretty well to allow me to work on small files of source code. When you're working on the code on the target machine, it all gets destroyed when you load the game in, so a reset was in order. Luckily the OS was all finished and in ROMs, so you just flick the ON/OFF switch and you're back. Imagine doing a major re-boot between tests now. I know Azumi takes less than a minute, but the 8-bits were up in a second.
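
A minimal sketch of the jump-table idea in C (the original blocks were assembler, and these names are hypothetical): callers always enter through the fixed table at the top of the block, so the routines below it can move between builds without breaking anything, much like a DLL's export table.

```c
/* The block's public interface: a table of entry points. */
typedef struct {
    void (*init)(void);
    void (*update)(void);
    void (*draw)(void);
} BlockAPI;

/* The routines below the table can be rearranged freely. */
static void init(void)   { /* ... set the block up ... */ }
static void update(void) { /* ... per-frame work ...    */ }
static void draw(void)   { /* ... plot the results ...  */ }

/* The table sits at a known location at the top of the block;
   callers use block_api.update() etc. and never a raw address. */
const BlockAPI block_api = { init, update, draw };
```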
 

From Zero to Hero

So there we have it, if you followed all that. When we started we had no debugging capabilities whatsoever, and not even print-outs arriving on the hourly trolley. Just one print-out of the finished game, which we commented after printing, because there wasn't space in the editor to store comments. I wrote the ABMon program to let me see what was going on inside the machine, and we relied on the programs being small enough to memorise all the routines. Assembler is sufficiently unforgiving that if you make a coding mistake you can kiss your program good-bye, and you had to know your program inside-out to be able to navigate the code; there were no fancy editors. Debugging your code could be a case of just staring at it until you spotted the error.
 

Nametara Ikan Zeyo

There's no doubt that debuggers have come a long way. I'd still say that learning how to use the debugger and writing code that is fully debugged is one of the Dark Arts.
 
It is never to be... underestimated.
 

Now

I am now using Visual Studio 2013, and running windowed is manageable on a single-screen system. Running full screen, I should be able to run the debugger on a second monitor. If all else fails I'll have to hook 2 PCs together. Might slow things down as they're in different rooms!

A Bit More About Colour

My First Job

A few years after I started work, in the early '80s, we still had to book time on, and share, 6 terminals between about 40 staff. We had monochrome green-screen monitors on the shared terminals. Most of our time was spent at our desks waiting for the print-outs to be delivered. We would scribble notes and corrections on the latest code listing, ready to type them in next time we got a 15-minute terminal session. Seems astonishing now. We had just got a (gasp) colour monitor on one system, so you could type in red to tell people off! I believe it just had 8 colours available: white, cyan, yellow, magenta, blue, green, red and black - the old standards. Basically, each of the 3 colour guns (red, green and blue) could be ON or OFF, giving 8 combinations.

Gary

I met up with a school friend who had been to Cambridge University and was then working for the same company. He was working on a radar display system where they were writing the software and designing the hardware. They had done some experiments to decide how many colours to support on the display. He told me that they did some colour fades on the screen, and once they got to 22 bits of colour, you couldn't distinguish the changes. By that, we're looking at maybe 7 bits of red and green, and 8 of blue. Since 2^22 is about 4 million, that tallies, at least in order of magnitude, with the common thinking that the human eye can distinguish up to 10 million colours.

Colour Blind

Now, I have red/green colour-blindness. I have fewer red receptors, so if I see a brown or a green I can't easily tell which it is; I can't see the red content so well if there's some green too. I expect my red sensitivity is down by a few bits. It makes traffic lights fun at night, when you can't see the position of the lights within the framework. They used to handily write "STOP" across the red light. More recently they made the green colour a lot lighter so it's easy to tell. Since something like 1 in 12 males have some form of colour-blindness, it's important not to use confusing colours in graphics. My colour-blindness actually helps when I'm making colour choices, because I'm not going to pick a combination that I can't make sense of, and that other people would find confusing.

Steve N

I must just tell you the tale of when I was re-soldering my Dragon 32 analogue joystick. It had about 16 wires in it, not the more common 6. All the wires were different colours, and I picked out all the easy ones and happily connected them up. I was left with a little cluster of orange, red, brown and green wires. Being unsure, I called over my friend Steve (not Turner) and asked him to confirm my choices before I soldered them up. After that I connected the joystick, but it didn't work properly. I got someone else to check the wiring, and he said: "Idiot, you've soldered the brown wire to the green!" Possible, but I did get Steve to check it. Again: "Idiot, he's colour-blind too, more so than you are!" Turns out Steve sees almost in black and white. Wish he'd mentioned it earlier!

Dragon 32

Back to the plot. In the days of the Dragon 32, we had 4 choices of graphics modes. At that time the colours were hard-coded into the graphics chip, and the colour choices were pretty much that each of the red, green and blue colour guns was either ON or OFF. So in multi-colour mode we could have the two awesome palettes of: red, green, blue and yellow, OR magenta, cyan, black and white. The pixels were also wider, so it was best to pick from the two hi-res modes (256x192 - not so hi-res these days), of black and green OR black and white. I had already bought Donkey Kong for the Dragon 32, and they had kindly implemented both multi-colour and hi-res modes on the one tape. That quickly made me aware that multi-colour mode was pretty horrific. For my 3 Spectrum conversions of 3D Space Wars, Seiddab Attack and Lunattack I chose black and white mode. Each pixel had a choice of 2 colours from a palette of 2!

Demo Game

I did write a Lunar Lander demo in Dragon BASIC: the language supported software sprites, so I let it draw a background of red mountains against a blue sky, and you could press the button while it was drawing jaggedy rocks to smooth out a landing area. Then, when it finished, you tried to land your green and yellow lander. Wonder if that's still sitting on a cassette somewhere?

ZX Spectrum

The Spectrum doubled up by allowing bright and normal versions of the 8 colours, except black. "How much more black can it be?" "None, none more black." Additionally, each 8x8 pixel block could select background and foreground colours, but both had to be bright or both normal. This improves text output no end, but is somewhat limited for games. So now, whilst each pixel is ON or OFF, it selects one of the two 3-bit colours chosen for its block, so we have a choice of 2 colours per character from a palette of 15. Things are looking up.
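
A minimal sketch of how that packs into the Spectrum's one attribute byte per 8x8 cell (the helper name is hypothetical, but the FLASH/BRIGHT/paper/ink layout is the real one):

```c
#include <stdint.h>

/* Attribute byte: FLASH | BRIGHT | paper (3 bits) | ink (3 bits).
   Ink and paper share the single BRIGHT bit, which is why both
   colours in a cell must be bright or both normal. */
uint8_t make_attr(uint8_t ink, uint8_t paper, int bright, int flash)
{
    return (uint8_t)((flash ? 0x80 : 0) |
                     (bright ? 0x40 : 0) |
                     ((paper & 7) << 3) |
                     (ink & 7));
}

/* e.g. make_attr(7, 0, 1, 0): bright white ink on black paper. */
```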

Commodore 64

When I moved to the Commodore 64, I started with a conversion of a bitmapped game, so I kept the C64 in bitmap mode. Like the Spectrum, the screen supported colour attributes for each 8x8 block, but the background colour was fixed across the screen, raster split interrupts notwithstanding. Fortunately we had an all-black background for Lunattack, and since the hi-res mode was the same resolution, we used a single foreground colour for all of the screen, except the panel and screen border. The C64 colour choices showed a bit more imagination in their palette of 16. The 3 greys, especially, I would use later.

Gribbly

As soon as I could, I switched to character mode, and multi-colour mode, for Gribbly's Day Out. This involved widening the pixels of the graphics and sharing two additional colours from the palette of 16, so that in any one 8x8 square each fat pixel could be one of 4 different colours. You don't really want to be scrolling the colour attribute map, so again the attributes were all set the same. The colour attribute map was fixed in location, so you couldn't prepare anything in advance, and you would need to alter the colours quickly, in such a way that the display raster wouldn't overtake you, or you'd get colours flickering - best avoided for now.
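
A minimal sketch of where those 4 choices come from in multi-colour character mode: each byte of character data holds 4 fat pixels of 2 bits each, decoded roughly like this (a C illustration with hypothetical names; the shared registers are the VIC-II's):

```c
#include <stdint.h>

uint8_t fat_pixel_colour(uint8_t char_byte,
                         int pixel,            /* 0..3, left to right */
                         uint8_t background,   /* $D021, shared       */
                         uint8_t multi1,       /* $D022, shared       */
                         uint8_t multi2,       /* $D023, shared       */
                         uint8_t char_colour)  /* set per 8x8 cell    */
{
    uint8_t pair = (char_byte >> ((3 - pixel) * 2)) & 3;
    switch (pair) {
    case 0:  return background;
    case 1:  return multi1;
    case 2:  return multi2;
    default: return char_colour;  /* bit pair 11 */
    }
}
```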

Paradroid

By Paradroid I was fed up with the size of multi-coloured pixels; I wanted to get back to 320 x 200 resolution and finer detail. I quickly realised that I still needed to get more colours on the screen, and had no choice but to set the colour attribute map according to the graphics and update that as well. My screen rebuild had to set up the screen characters and the attributes, all without being passed by the raster display. The plan for that is to run as close behind the raster as possible. You get a frame and a half to do it that way, and you need a fair chunk of that, which is why, with all the other game mechanisms going on, we set it to run at 17 frames per second - every third frame of the 50Hz PAL display. It was only later, when I did the Competition Edition, that we realised there was still spare waiting time that we could remove, and get it running at 25 frames per second - every other frame. I was as surprised as the next person... still am!

Atari ST and Commodore Amiga

Skip forward to the Atari ST, and everyone's got to use bitmap mode, because that's all there is. Now we have a choice of 16 colours per pixel, and we can define our own palette with 3 bits each of red, green and blue, giving 512 colours. That was not quite as many as the Amiga, which could give you 32 colours per pixel from a palette with 4 bits each of red, green and blue, giving 4096 colours. The 5 bit-plane mode for 32 colours didn't get used in the early days, because it compromised the CPU and took an extra 25% of the time to plot into, time we didn't have. Later the Atari STE caught up some of the way by allowing 4096 colours, but... due to lack of expansion foresight, the bits weren't in the right order, so we had to do some bit manipulation before sending the colours to the graphics chip, to put the lowest bit at the front.
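
A minimal sketch of that fix-up in C, assuming the usual description of the STE register layout (the new least-significant bit of each gun went into the top bit of its nibble, keeping old ST palette values compatible):

```c
#include <stdint.h>

/* Rotate a 4-bit gun value b3.b2.b1.b0 into STE order b0.b3.b2.b1. */
static uint16_t ste_gun(uint8_t v4)
{
    return (uint16_t)(((v4 >> 1) | ((v4 & 1) << 3)) & 0xF);
}

/* Pack three 4-bit guns into one STE palette register word. */
uint16_t ste_colour(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)((ste_gun(r) << 8) | (ste_gun(g) << 4) | ste_gun(b));
}
```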

Arcades

Meantime, in the arcades, the hardware was still a couple of steps ahead. They had two playfields and sprites, with independent palettes of 16 colours, probably from a choice of 4096. Just being able to select different palettes at will gives you the meanies that can flash orange when damaged and white when hit. We didn't tend to see individual sprites fade in or out, so they didn't have semi-transparency yet. They did also have X and Y flip bits on the character sets and sprites, which cuts down on the number of graphics frames. Little did we know they were about to implement scaling and rotation.

AGA Chipset

Things were just getting interesting with the Amiga A1200 and the AGA chipset. Suddenly we had 256 colours in one playfield, or 2 playfields of 16 colours, all from a 24-bit palette of 16.7 million, and we could do some really nice stuff. Then the consoles arrived and we all got squashed.

PC

Step forward some years, and someone had the bright idea of using their PC to display photographs. In that "Wouldn't it be nice if..." moment they realised that 4 bits of red, green or blue won't quite do it. I have seen some photos using the Amiga's hold-and-modify mode, and whilst they're OK, it's not quite there. Before you know it, we have 8 bits each of red, green and blue, no palette restrictions at all, and the CPU can't possibly manage all this on its own. Actually, we had a foray into 16-bit colours first, with 5 bits of red, 6 of green and 5 of blue, but already we needed... hardware acceleration!
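
A minimal sketch of packing a 24-bit colour into that 16-bit 5-6-5 format (green gets the spare bit because the eye is most sensitive to it):

```c
#include <stdint.h>

/* Keep the top 5 bits of red and blue, the top 6 of green,
   and pack them as RRRRRGGG GGGBBBBB. */
uint16_t pack565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```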

GPUs

Step forward some more years, with progression in floating-point co-processing and GPUs, and the processing power has caught up with the pixels. My 970GTX card has 1,664 CUDA cores, which I am given to believe means it can process that many pixels at a time, and you can write Pixel Shaders: little programs that run per pixel to do lighting and other magical transformations. You could also use them to do your graphics in a palette of 256 and let the GPU convert to 24-bit with lighting. They can also do 256 levels of transparency or additive lighting. I've even played a whole scrolling top-down race game coded in a Shader. Who does that?
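
A minimal sketch, in C rather than a real shader language, of the per-pixel job just described: look up an 8-bit palette index, convert to 24-bit, and scale by a light level (the names are hypothetical).

```c
#include <stdint.h>

typedef struct { uint8_t r, g, b; } RGB;

/* One pixel's worth of work; a GPU runs this for hundreds of
   pixels in parallel. light is assumed to be in 0.0 .. 1.0. */
RGB shade_pixel(const RGB palette[256], uint8_t index, float light)
{
    RGB c = palette[index];
    RGB out = {
        (uint8_t)(c.r * light),
        (uint8_t)(c.g * light),
        (uint8_t)(c.b * light)
    };
    return out;
}
```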

HDTVs and Digital Cameras

Our HD colour TVs and Digital SLRs typically support these same 24-bit colours (about 16.7 million), which, as we recall from the beginning, is about all we thought we could distinguish with the human eye. Nevertheless, the mighty electronics companies are now producing HDR Ultra HD TVs which use 30-bit colour, over a billion shades. That's possibly because the 10 million colours we can distinguish are not necessarily placed linearly across the 8-bit colour-cube. Too late to just scale 8-bit data non-linearly into 10 bits with a lookup table in the TV, boys? Darker colours need a bit more definition, whereas the brighter colours just melt into each other. For now, our home computers will settle for 24 bits, and if it does go up, the humble programmer will be none the wiser, nor will the graphics artist. It doesn't strike me as practical to sit down and draw graphics with 10 million colours to choose from for every pixel. We have to render simpler designs with the computer and let it do the lighting. Imagine the same setup with coloured pencils: how big is your pencil case going to be with 10 million pencils in it? How long will it take you to find the one you want?

The End

This switch from drawing to rendering sounded the death-knell for our beloved retro arcade games. Having chosen to render, and got the power to do so in real time, all of our games began trying to look photo-realistic rather than artistic. Suddenly painters have to become sculptors and programmers have to be mathematicians. That would appear to be why, despite the easier languages available in which to program (thanks Sheldon!), and the better tools for programming and debugging, writing games is slower and harder than ever.