dev, computing, games

📅October 4th, 2024

For me this has come up when people describe or categorize different computers, trying to compare their capabilities. They use terms like "8-bit computer" or "16-bit computer" or "32-bit computer" to categorize something. You get the gist. It's fine. Nothing wrong with that.

It does become a problem when we actively lie to ourselves and pretend like these are specific technical terms.

I've run into way too many 'smart' people who labor under this delusion.

Bitness (of a computer) is a marketing term, not a technical term. It describes how you feel about a computer, what the vibe is, what sort of capabilities it reminds you of, what time period it makes you think of. There is no universal test, there is no hard, specific qualifier for bitness, and it is not a hard technical category. Not for the computer, and not even for the bitness of the CPU. It's useful for general conversation and marketing purposes, just not much beyond that.

If there were a technical definition for bitness of a computer, or say the CPU, what would it be?

One dude I talked to once said "well, it's technically pointer size". So what about the Super Nintendo? Its pointers are three bytes: low, high, bank. Except literally nobody calls it a "24-bit console"; everyone calls the SNES a "16-bit console". Or the classic example of the TurboGrafx-16, marketed as a "16-bit console", except its CPU is an 8-bit part; only the graphics hardware and its data path are 16 bits wide.

Or consider the modern 64-bit Intel processors you use today. Do you think you have a full, viable 64-bit pointer? You don't; maybe you already know, but only the lower 48 bits are significant. Check it yourself. On the Intel computers available today you'll never see a program with two pointers that differ only in the upper 16 bits. In fact, I once had to debug something that squirreled metadata away in the upper 16 bits and just stripped them off any time it had to dereference. This fat-pointer trick is something you can do today if you want, because we don't really have a full 64 bits of pointer.

I'm not hugely prescriptivist when it comes to language; the point of words is to say things and have people understand you. So when defining words, I go with how the world uses them. In this case the world hasn't agreed on one consistent technical category, so if you think there is one, you're fighting a losing battle.

If you go by things like "pointer size" or "general-purpose register width" to determine the bitness of a computer, sometimes there are choices. The Super Nintendo, or any 65816-based computer, not only has an 8-bit 6502-style emulation mode; even in native mode, programs are constantly switching back and forth between 8-bit and 16-bit register widths and addressing. So if you go by register width you could call it an 8-bit or a 16-bit computer. Or take Win16 with the segmented memory model on Intel x86: you have near and far pointers, which are 16-bit and 32-bit respectively. Or on modern Intel-based CPUs you have the viable pointer width versus the width actually passed into a dereference. I know I'm mixing a bit of high and low level here. These are different, but I think fairly reasonable, interpretations of what it means to use a pointer.

I'm not saying any of this is a big deal; I'm just saying there can be choices. And usually, when there's a choice, you pick the bigger one, because the bigger number sounds better. Because it's a marketing decision.

Describing a computer by "bitness" is super useful for getting a general feel for the time period it came out in. What order of magnitude is the system memory? What are the screen resolution and color depth? Could it have 3D acceleration? Could it have chiptune sound or something more?

It stops being useful if you have to write an emulator, an operating system, a device driver, a compiler, or even application code. Then you'll want to know the actual behavior: what kind of pointers you have, what you can do with them, and probably the width and interpretation of the native types.

Anyway, I did come across another person, a different dude, who was completely adamant that there was a hard technical definition of the bitness of a computer. You don't want to be like this dude: he couldn't tell me, or articulate to anyone, what it was. He didn't know; he was just certain that there was one.

October 4th, 2024 at 4:10 am | Comments & Trackbacks (0) | Permalink

📅July 12th, 2017

It's in the original RadioShack box and everything, plus instruction books and inserts. I can't believe it, they're practically like new...

This is a 6809-based machine, first released in 1983. It runs BASIC, and the keyboard is built into the case. It can save and load programs from cartridges (called "Program Pak"s). The name TRS stands for Tandy/Radio Shack: Tandy manufactured the hardware, and Radio Shack distributed it. Remember when Radio Shack was two words?

As it happened I did not have one of these growing up, but as a kid we had an Apple IIc, which was from around that era. I have fond memories of learning to program in BASIC on that machine, writing small programs and simple text games, and figuring out how to debug them. The TRS-80 really reminds me of that whole experience. Now in more colors than my former two-color display. I went and cleared the screen to green, blue, and magenta just to make sure.

Testing it out on a gigantic CRT, just because I had a spare jack there.

July 12th, 2017 at 1:31 am | Comments & Trackbacks (0) | Permalink

📅June 26th, 2017

I installed Windows 98 on the "space heater computer"

Is it possible to install 98 on an Intel Core i5 with 4GB of DDR3 RAM?

Turns out, yes. If you spoof it to only enumerate 1GB, plus a bunch of other sketchy edits to system.ini and config.sys.
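For the curious, the usual way to cap what Windows 98 enumerates is the `MaxPhysPage` setting in system.ini; the value is a hex count of 4 KiB pages, so 0x40000 pages works out to 1 GiB. The values below are the commonly cited ones, not necessarily the exact edits I made:

```ini
; system.ini -- cap how much RAM Windows 98 will enumerate.
; MaxPhysPage is a hex count of 4 KiB pages: 0x40000 * 4 KiB = 1 GiB.
[386Enh]
MaxPhysPage=40000

[vcache]
; Also keep the disk cache bounded (value in KiB) so it can't
; exhaust kernel address space on a machine with this much RAM.
MaxFileCache=262144
```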


June 26th, 2017 at 10:30 pm | Comments & Trackbacks (0) | Permalink