In the same year that Allin Cottrell and I published Towards a New Socialism (TNS), I also brought out a couple of computer science books. The two CS books might seem to have little to do with the book on socialism, but they are linked by an intellectual history of research over the decade that led up to TNS.
Since around 1981 I had been thinking about the problem of socialist calculation from an engineering viewpoint and trying to come up with computer engineering solutions. The other books record part of that history. I bring it up now because I think that some of the engineering solutions I worked on in the 1980s were sufficiently in advance of the then-existing technology that they may still have some technical relevance to a future planned economy.
In particular, I will focus on the persistent store and the worldwide address space machines that I was designing and prototyping.
The Data Curator
From late 1980 to the summer of 1982 I was working on my PhD. The research topic was how to incorporate the concept of data persistence into high-level compiled languages in such a way as to allow it to work in a distributed, networked environment.
Because of this background, and also because I was a Marxist economist, a fellow PhD student from China, a CPC member and in those days still a Maoist, asked if I would be willing to come to work in the planning ministry in Beijing and introduce some of the techniques we had developed in Edinburgh to that ministry. As it happened, although I agreed, my Chinese comrade discovered that while foreign CS experts were being invited to China, the planning ministry was out of bounds for foreigners.
But let me first explain the context of our research in Edinburgh in the early 80s and why it was seen to be relevant.
The dominant paradigm in computing in 1980 divided computer memory into two logically distinct types: Random Access Store (RAM) and Filestore. Data in RAM vanished when you turned the machine off, whereas the Filestore was persistent. The conceptual distinction reflected two different technologies, semiconductor DRAM and rotating magnetic disks. These two technologies are still with us, but persistent semiconductor store (SSD) is rapidly replacing the old rotating disks in new computers.
Today, both RAM and SSD are made of silicon, and both allow rapid random access. But the conceptual legacy of the old disk filing systems persists. Files have to be opened and accessed by read and write system calls, and any data stored in the variables of a programme vanishes when the programme ends.
But this distinction between persistent and volatile memory can in principle be hidden by smart virtual memory technology. The EMAS operating system that we used in Edinburgh in the late 70s and early 80s and the MIT Multics system had a different concept. A user's files were mapped into the address space of their programme and accessed as array data within the programme. Some early interpretive languages like APL and Smalltalk also had the notion of a 'workspace' in which the variables of a programme could persist between login sessions.
Although memory mapped filestore has since been adopted in Windows NT, Windows 10 etc, much current programming still takes place using the older read/write file paradigm.
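The memory-mapped approach can be shown in a minimal POSIX sketch. Here a counter lives in a file mapped into the programme's address space, so a persistent update is an ordinary store rather than a write() call. The file name and the single counter are illustrative assumptions, not anything from EMAS or Multics:

```c
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

/* Map the file, increment the long it holds, unmap, and return the
   new value. The count survives between calls because it lives in
   the file, not in a programme variable. */
long bump_counter(const char *path) {
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd < 0) return -1;
    if (ftruncate(fd, sizeof(long)) < 0) { close(fd); return -1; }

    long *counter = mmap(NULL, sizeof(long), PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
    if (counter == MAP_FAILED) { close(fd); return -1; }

    long value = ++*counter;   /* the store goes straight to the file */

    munmap(counter, sizeof(long));
    close(fd);
    return value;
}
```

Each call sees the value left behind by the previous one, which is precisely the property that the read/write file paradigm forces the programmer to reconstruct by hand.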
Memory mapping of files only really works for array data types. Heap data (instances of classes, graphs, trees and so on) is still non-persistent in most programming languages. A limited form of heap persistence is provided by Java serialisation, but this facility is not as consistent and orthogonal as the one long ago available in Smalltalk.
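Heap persistence is harder than mapping arrays because heap nodes contain machine addresses that are only valid at whatever location the heap happened to be mapped. A toy sketch of one way round this (my own illustration, not the Data Curator's actual mechanism): keep linked nodes inside a single region and link them by offsets from the region's base, so the structure still makes sense if a later run maps the region at a different address.

```c
#include <stdint.h>

#define NIL 0   /* offset 0 holds the header, so 0 can serve as null */

typedef struct { uint32_t next; int32_t value; } Node;   /* next is an offset */
typedef struct { uint32_t head; uint32_t alloc; } Header;

/* Bump-allocate a node inside the region; return its offset. */
static uint32_t alloc_node(uint8_t *region) {
    Header *h = (Header *)region;
    if (h->alloc == 0) h->alloc = sizeof(Header);   /* first use */
    uint32_t off = h->alloc;
    h->alloc += sizeof(Node);
    return off;
}

/* Push a value onto the list; only offsets are stored, never pointers. */
static void push(uint8_t *region, int32_t value) {
    Header *h = (Header *)region;
    uint32_t off = alloc_node(region);
    Node *n = (Node *)(region + off);
    n->value = value;
    n->next = h->head;
    h->head = off;
}

/* Walk the list via offsets; works wherever the region is mapped. */
static int32_t sum(const uint8_t *region) {
    const Header *h = (const Header *)region;
    int32_t total = 0;
    for (uint32_t off = h->head; off != NIL; off = ((const Node *)(region + off))->next)
        total += ((const Node *)(region + off))->value;
    return total;
}
```

Because the region contains no raw pointers, it can be written out, remapped or even copied wholesale and the list remains intact, which is the essential trick behind making compiled-language heaps persistent.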
Our research in the early 80s was designed to allow existing compiled languages like Algol or Pascal to have a persistent heap store. Moreover, we wanted this persistent store to be larger than that provided on the relatively small 16-bit Alto processors on which Xerox PARC had prototyped Smalltalk.
In 1980 commercial personal computers were puny Apple IIs or similar 8-bit machines. There was no World Wide Web, and Ethernet was still just a research system at Xerox PARC. So the Edinburgh CS department developed its own workstation, the APM, and its own locally designed Ethernet.
The Data Curator project succeeded in demonstrating that you could implement a distributed, persistent, object-structured virtual memory for compiled languages. Its first demonstration was for a dialect of Algol from St Andrews: S-algol.
When I was asked to go to China I started thinking about how the technology of persistence could be applied to planning an economy for a country as huge as China. The first point that struck me was that we were going to need a much larger scale of distributed computing than anything we had experimented with. If one looked forward to a future automated Chinese economy, the planning system would need to coordinate data originating in hundreds of millions of computers. It would have to integrate this into a single vast shared database that could coordinate the whole of social production.
Persistent store machines
This implied that we would need a computer architecture with a very much larger shared virtual address space than those available on 1980s computers, which maxed out at a 32-bit virtual address space. Furthermore, this was before the collapse of the USSR, and socialism still seemed to be winning on a world scale, so it struck me as obvious that a design for socialist planning should in principle be extendible to worldwide planning, for a day in the future when the number of computers rivalled the number of people.
The population of the world in those days, around 4 billion, would already have come close to exhausting the roughly 4.3 billion (2^32) addresses of a 32-bit machine if there were one computer per person.
The first machine architecture I designed, the PSM, had a 128-bit address space made up of a 48-bit host number, a 48-bit local object number and an offset of up to 32 bits within the object. That is to say, individual objects could be up to 4 gigabytes in size.
Objects would migrate via a worldwide network from the source machine to any machine that held a copy of the Host/LON combination. This is very similar to the somewhat later concept of a URL used in the WWW, but with the difference that the identifiers were binary rather than textual.
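The 48/48/32 split can be pictured as a pair of 64-bit words. The exact field layout below is my illustrative assumption, not the documented PSM encoding:

```c
#include <stdint.h>

/* A 128-bit global address held as two 64-bit words:
   48-bit host number, 48-bit local object number (LON),
   32-bit offset within the object. */
typedef struct { uint64_t hi, lo; } GlobalAddr;

#define FIELD48 ((1ULL << 48) - 1)

/* hi = host (48 bits) | top 16 bits of LON;
   lo = low 32 bits of LON | offset (32 bits). */
static GlobalAddr pack(uint64_t host, uint64_t lon, uint32_t offset) {
    GlobalAddr a;
    a.hi = ((host & FIELD48) << 16) | ((lon >> 32) & 0xFFFF);
    a.lo = ((lon & 0xFFFFFFFFULL) << 32) | offset;
    return a;
}

static uint64_t addr_host(GlobalAddr a)   { return a.hi >> 16; }
static uint64_t addr_lon(GlobalAddr a)    { return ((a.hi & 0xFFFF) << 32) | (a.lo >> 32); }
static uint32_t addr_offset(GlobalAddr a) { return (uint32_t)a.lo; }
```

With a 32-bit offset field the largest addressable object is 2^32 bytes, which is exactly the 4-gigabyte object limit mentioned above, while the 48-bit host field allows some 280 trillion machines.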
The machine architecture seems, by modern standards, to have a rather sparse register set (shown below).
The sparse set of registers was heavily influenced by the ICL 3900 series. We were in close collaboration with ICL, with a view to the PSM being a successor architecture to the 3900. The single 128-bit accumulator was already there on the 3900. The segment registers we proposed were longer than those on the 3900. The intention was to prototype the PSM by microcode changes to an existing series 3900 machine.
ICL supplied us with an early 3900 machine at Glasgow University, to which the team had moved. But obstacles were placed in the way of accessing the microcode, so the research platform at Glasgow was switched to a new machine that I will describe in a later post.
For socialist calculation to be feasible, certain engineering problems had to be solved. It had to become possible to access what is now termed 'Big Data'. It had to be possible to unify information across millions of computers. It had to be possible to have databases very much larger than could be addressed by the 32-bit computers available at the start of the 1980s.
Today the technologies to do most of these things are well-established thanks to the Web and cloud data servers.
It is arguable, however, that the computing paradigm on which all of this is built, that of volatile RAM combined with a distributed filespace, is less amenable to high-performance computation over distributed graph structures than the sort of address architecture our team was working on in Glasgow in the mid-1980s. With modern VLSI technology it would be feasible to design a class of 128-bit-addressed RISC processors. The ICL register architecture we proposed is clearly obsolete and would have to be replaced by something more modern.