About:

As with everything so far, nothing particularly new. It has all been done before - more or less - just not the way I would do it!

This brain-vomit is for a memory-bus interface for a system that relies on in-chip caching and off-chip non-volatile memory for storage. And nothing else besides!


Keep it simple, 'cause I'm stupid!

A fairly simple idea. Possibly there are good reasons that it wasn't done this way. Though just as likely there were no particularly good reasons beyond "it happened the way it did because it did"! (There is actually an awful lot of that in technology, much like biology, or all evolutionary systems, I guess!).

Imagine a CPU, possibly even an SOC (System-On-Chip) or MCM (Multi-Chip Module) with an internal fast SRAM cache. And a primitive kind of MMU (Memory Management Unit), possibly under CPU-control rather than automatic if we are still way back in the 8-bit days.

This memory is divided into 4k blocks which are loaded in from, or written out to, an external memory in 4k chunks over an 8-bit-wide data bus (plus a few extra lines for signal control). Every read request or write carries a header with a 32-bit block address for the source/destination.
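
To make that concrete, here is a rough sketch in C of how one transaction might be framed - the command values, field layout and names are all my own invention, not any fixed spec:

    /* Hypothetical framing of one bus transaction: a command byte, then a
     * 32-bit block address (an 8-bit system would only fill the low 16 bits),
     * then 4096 data bytes in whichever direction the command implies.
     * All names and values here are illustrative, not a fixed spec. */
    #include <stdint.h>

    #define BLOCK_SIZE 4096u

    enum bus_command {
        BUS_CMD_READ_BLOCK  = 0x01,   /* external memory -> cache */
        BUS_CMD_WRITE_BLOCK = 0x02    /* cache -> external memory */
    };

    struct bus_request_header {
        uint8_t  command;        /* one of enum bus_command            */
        uint32_t block_address;  /* which 4k block in external storage */
    };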

In the 8-bit era you would probably only use the two lower bytes, for a 16-bit block address covering 2^(16+12) = 256MB, which ought to be enough for anybody!

There is no evidence Bill Gates ever said the infamous "One megabyte [of memory] should be enough for anyone" line. And the person who is believed to be the source was saying it in a very specific context, namely the assumption that the computer system on which 1MB was enough would be superseded long before 1MB of memory would even be affordable.

Likewise, my claim of 256MB is purely for an 8-bit computer system. And I have left space in the bus protocol for 2^(32+12) = 16TB, which is quite large for a normal home user (not a server, of course) even by 2020s standards. Also keep in mind this is for fast storage - basically for OS, Apps, and in-use data, not for keeping your media library in (though 16TB would hold 12x my current personal media library anyway!).
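
Spelling the arithmetic out (block count times 4k block size), just to show the working:

    /* Capacity arithmetic:
     * 16-bit block address: 2^16 blocks x 2^12 bytes/block = 2^28 bytes = 256 MB
     * 32-bit block address: 2^32 blocks x 2^12 bytes/block = 2^44 bytes = 16 TB  */
    #include <stdio.h>

    int main(void) {
        unsigned long long block = 1ULL << 12;   /* 4k bytes per block */
        printf("16-bit block address: %llu bytes\n", (1ULL << 16) * block);  /* 268435456      */
        printf("32-bit block address: %llu bytes\n", (1ULL << 32) * block);  /* 17592186044416 */
        return 0;
    }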

The astute, or just hopelessly geeky (Hi!), will recognise I have just described a primitive file system with 4k blocks, as plenty of 8/16-bit computers used initially. The more astute (and geeky) might be seeing reflections of an 8-bit implementation of an SPI bus. As I said above: nothing particularly new in concept, just tweaks to implementation to suit my personal hardware-design preference quirks.

You could implement this in a pretty bog-standard 8-bit system. If you leave it under CPU control, it is basically bank-switching with 4k banks and an outside-CPU-memory-reaching DMAC (Direct Memory Access Controller). Custom chippery, but nothing at all difficult for the time.
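
Under CPU control, the software side might be nothing more exotic than poking a few memory-mapped DMAC registers - every address and register name below is made up purely for illustration:

    /* Sketch of a CPU-driven 4k block load via a hypothetical memory-mapped
     * DMAC. None of these addresses or registers belong to any real chip. */
    #include <stdint.h>

    #define DMAC_CMD     (*(volatile uint8_t  *)0xFF00)  /* command register          */
    #define DMAC_BLOCK   (*(volatile uint16_t *)0xFF02)  /* 16-bit external block no. */
    #define DMAC_BANK    (*(volatile uint8_t  *)0xFF04)  /* destination 4k bank       */
    #define DMAC_STATUS  (*(volatile uint8_t  *)0xFF05)  /* bit 0 = busy              */

    #define DMAC_CMD_LOAD  0x01   /* external block -> internal bank */
    #define DMAC_CMD_STORE 0x02   /* internal bank -> external block */

    /* Pull external block 'block' into internal 4k bank 'bank', then wait. */
    static void load_block(uint16_t block, uint8_t bank) {
        DMAC_BLOCK = block;
        DMAC_BANK  = bank;
        DMAC_CMD   = DMAC_CMD_LOAD;
        while (DMAC_STATUS & 0x01) { /* spin until the DMAC finishes */ }
    }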

On a 32-bit system, you may as well make it the official off-chip memory bus and make the on-chip stuff a pretty standard MMU+CACHE+AE (Address Extension) arrangement. 4GB per process, up to 16TB total memory. Bog-standard except that there is no intermediate off-chip RAM between the CACHE and non-volatile storage.
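
The address split is the usual page-table affair; a toy sketch, with the single-level table and the 44-bit result being my assumptions rather than anything fixed:

    /* Toy sketch of the 32-bit address split: 4k pages, a (single-level,
     * purely illustrative) page table mapping each virtual page onto a
     * 32-bit block number, giving 4GB per process out of 2^44 = 16TB total. */
    #include <stdint.h>

    #define PAGE_SHIFT 12u                        /* 4k pages = 4k blocks */
    #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1u)

    static uint32_t page_table[1u << 20];         /* 2^20 pages covers 4GB */

    /* Translate a 32-bit virtual address into a 44-bit storage byte offset. */
    static uint64_t translate(uint32_t vaddr) {
        uint32_t vpage  = vaddr >> PAGE_SHIFT;    /* which 4k page        */
        uint32_t offset = vaddr &  PAGE_MASK;     /* offset within page   */
        uint64_t block  = page_table[vpage];      /* 32-bit block number  */
        return (block << PAGE_SHIFT) | offset;    /* up to 2^44 bytes     */
    }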

This inherently favours small-memory systems, since CACHE memory is always constrained, and until quite recently solid-state storage was too. But I tend to work with small data sets anyway, so that is my preference. No reason there can't be other systems doing it other ways to suit other needs. Nature abhors a monoculture... or something!

In the 80's your off-chip storage options would be a bit limited, being ROMs or battery-backed SRAM, though a primitive form of FLASH memory could likely be cobbled together from 4k EEPROMs. The 4k blocks are the important bit here. You don't need the individual-byte-erasure of EEPROMs, which is where FLASH came from: a technique to erase EEPROMs whole-blocks-at-a-time. (The exact block size varied, and still does; I am just normalising on 4k blocks for simplicity. I like simplicity!) TBH, such a system might even have encouraged the development of FLASH memory technology a bit earlier, since it would establish an explicit use for fast whole-block erasure early on.
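
The practical upshot for the write path is simply erase-then-program at block granularity - the backing array and function names below are stand-ins, not any real device's API:

    /* Whole-block write path: erase the 4k block, then program it. */
    #include <stdint.h>
    #include <string.h>

    #define BLOCK_SIZE 4096u

    static uint8_t external_memory[256 * BLOCK_SIZE];   /* toy 1MB backing store */

    static void flash_erase_block(uint32_t block) {
        memset(&external_memory[block * BLOCK_SIZE], 0xFF, BLOCK_SIZE);  /* erased = all 1s */
    }

    static void flash_program_block(uint32_t block, const uint8_t *data) {
        memcpy(&external_memory[block * BLOCK_SIZE], data, BLOCK_SIZE);
    }

    static void write_block(uint32_t block, const uint8_t *data) {
        flash_erase_block(block);          /* always whole-block, never per-byte */
        flash_program_block(block, data);
    }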

While over the 1980s-2010s bus widths grew and grew, to get more data around in parallel and hence faster, since then buses have shrunk again (see ISA➔EISA➔PCI➔PCIe), as it was eventually found that fewer lanes are easier to keep timing-synchronised and hence really jack up the speed, for far more ultimate bandwidth than wide-parallel can practically manage. 8-bit buses are more-or-less the upper diminishing-returns limit for this, fit nicely into our 8-bit origins, and well match our customary minimum data width of the 8-bit byte.

The Byte, or Binary Term, is not inherently 8 bits, nor has it historically been so consistently. It was the literal '8-bit era' that cemented the 8-bit Byte as the norm.

I would also dual-mode this bus, with an extra pin or two, so it can also serve as a conventional (possibly address-multiplexed) memory bus specifically for interfacing to 8-bit peripheral chips. This has the added advantage of saving all the messing about with 8/16/32-bit data buses that happened in the 8-to-16/32-bit transition: it's 8 bits all the way, baby! If - by the 1990s - you need more peripheral bandwidth than an 8MHz 8-bit bus can give you, you can probably afford a different class of chip - I'm talking home/small-business workstations here, not big-iron servers!
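
The dual-mode switch could be as little as one extra mode line that the bus controller drives before each cycle; a made-up register-level sketch:

    /* Hypothetical dual-mode bus controller: one mode bit picks between
     * block-transfer mode and a plain 8-bit peripheral-bus cycle.  All
     * register names and addresses are invented for illustration. */
    #include <stdint.h>

    #define BUSCTL_MODE (*(volatile uint8_t *)0xFF10)  /* 0 = block mode, 1 = peripheral mode */
    #define BUSCTL_ADDR (*(volatile uint8_t *)0xFF11)  /* multiplexed peripheral address byte */
    #define BUSCTL_DATA (*(volatile uint8_t *)0xFF12)  /* peripheral data byte                */

    /* Read one register of an 8-bit peripheral chip over the same pins. */
    static uint8_t peripheral_read(uint8_t reg) {
        BUSCTL_MODE = 1;              /* drive the mode pin: conventional bus cycle */
        BUSCTL_ADDR = reg;            /* latch the (multiplexed) address            */
        uint8_t value = BUSCTL_DATA;  /* data phase of the cycle                    */
        BUSCTL_MODE = 0;              /* back to block-transfer mode                */
        return value;
    }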

The pay-off is that you get a flat (virtual) memory to store all your data in via a database rather than a filesystem (very Smalltalk-ish!). This would be hidden behind the operating system, so not necessarily functionally much different to what we did/do anyway, but conceptually neater.
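
As a toy illustration of that 'database, not filesystem' feel (everything here is invented, and in reality this would live inside the OS), records just sit at virtual addresses and you look them up by key rather than opening files:

    /* Toy single-level store: records persist because their backing blocks
     * are non-volatile.  Layout and names are purely illustrative. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct record {
        char     key[32];
        uint32_t length;
        uint8_t  payload[4096 - 32 - sizeof(uint32_t)];  /* one record per 4k block */
    };

    static struct record store[256];   /* pretend this maps straight onto non-volatile blocks */

    /* No open()/read()/close(): the "file" is just memory found by key. */
    static struct record *lookup(const char *key) {
        for (size_t i = 0; i < 256; i++)
            if (strncmp(store[i].key, key, sizeof store[i].key) == 0)
                return &store[i];
        return NULL;
    }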