The IBM System/34

The first job I had after college graduation was as a software engineer (junior programmer) at IBM’s development lab in Rochester, MN. I was assigned to work on the communications subsystem of the System/34 minicomputer. All the code was written in assembler language. When I was in college I was taught that high-level languages (at that time, Pascal, Lisp, and FORTRAN were the main ones) were the way of the future. When I got to IBM I was told high-level languages were for wimps. Real programmers write in assembler language!

The System/34 had two processors, the MSP (Main Store Processor) and the CSP (Control Store Processor). IBM was an insular culture at that time, with very little interaction with the rest of the industry. One result was that IBM did a lot of reinventing of wheels. Another was a parallel set of jargon and acronyms. For example, computers didn’t have memory, they had storage. Rather than disk drives there were DASD (direct access storage devices) – pronounced “dazzdy”. You didn’t boot a computer, you IPLed it (Initial Program Load). Thus the Main Store Processor handled programs in the main storage (memory) of the computer.

Basically the MSP ran application programs and the CSP handled I/O. The communications code I worked on ran in the MSP although lower layers of the protocol ran in the CSP. A second I/O processor, the Multi-Line Communications Adapter (MLCA) was added later so that the system could handle 4 modems simultaneously.

Like the CDC 6500 I talked about in a previous post, the MSP ran at 1 MHz, although it was a much simpler processor and nowhere near as powerful. The CSP ran at the blazing speed of 4 MHz, about the same as the original IBM PC.

I was responsible for the SDLC (synchronous data link control) layer of the SNA (Systems Network Architecture) communications system. There were two versions of SDLC, primary and secondary. The primary version was used when the System/34 was the host or master, such as when connecting to workstation terminals. Secondary SDLC was used when the System/34 was a slave or peripheral unit, such as when communicating with an IBM mainframe.
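For readers who never ran into SDLC, the published frame format was a simple one: a flag byte, a station address, a control byte, an optional information field, and a 16-bit frame check sequence. Below is a small sketch in C of that layout and of the poll/final bit that primary and secondary stations used to take turns on the line. The station address and control value are just illustrative examples, and this is not the System/34 code itself.

```c
/*
 * Sketch of the standard SDLC frame layout (not System/34 source):
 *
 *   +------+---------+---------+----------------+-----+------+
 *   | Flag | Address | Control | Information... | FCS | Flag |
 *   | 0x7E | 8 bits  | 8 bits  | 0..N bytes     | 16b | 0x7E |
 *   +------+---------+---------+----------------+-----+------+
 */
#include <stdint.h>
#include <stdio.h>

#define SDLC_FLAG 0x7E          /* opening/closing flag byte 01111110 */

struct sdlc_frame {
    uint8_t  address;           /* secondary station address            */
    uint8_t  control;           /* frame type plus send/receive counts  */
    uint8_t  info[256];         /* information field (I-frames only)    */
    int      info_len;
    uint16_t fcs;               /* CRC-16 frame check sequence          */
};

/* The primary sets the poll bit (bit 4 of the control byte) to invite a
 * secondary to transmit; the secondary answers with the final bit set
 * in the same position. */
static int is_poll_or_final(uint8_t control) { return (control >> 4) & 1; }

int main(void) {
    /* An RR (receiver ready) poll to station 0xC1 -- values chosen for
     * illustration only. */
    struct sdlc_frame poll = { .address = 0xC1, .control = 0x11, .info_len = 0 };
    printf("poll bit set: %d\n", is_poll_or_final(poll.control));
    return 0;
}
```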

I fondly remember programming on the System/34. Since we programmed in assembler I became very proficient in hexadecimal. I could glance at a hex dump and recognize instructions by opcode. When there was a bug the typical debugging procedure was to produce a core dump – the entire memory of the CPU, 16K to 256K, printed out in hexadecimal on green and white striped paper (dubbed watermelon paper). I would use a highlight pen or a red pen to underline various control blocks and pointers, tracking through the machine state until something odd was found. This typically was the point where a rogue pointer caused memory to be overwritten. We fixed bugs by producing a hex patch. You would replace an instruction in your program with a jump instruction pointing to a patch area. All programs had a few hundred bytes set aside for patches. In the patch area you would construct a sequence of instructions correcting the bad behavior, then jump back to the next instruction past the failure point.
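To make the patch mechanism concrete, here is a rough sketch in C. The opcode, addresses, and “corrected” bytes are invented for the illustration – real patches were System/34 machine code punched in hex – but the shape is the same: jump out to the patch area, run the fix, jump back.

```c
/* Illustration only: the opcodes and addresses below are made up,
 * not actual System/34 machine code. */
#include <stdint.h>
#include <string.h>

#define OP_JUMP    0xF0    /* hypothetical 3-byte jump: opcode + 16-bit target */
#define PATCH_AREA 0x0800  /* hypothetical start of the reserved patch area    */

static uint8_t memory[0x1000];   /* stand-in for a 4K slice of main storage */

/* Overwrite the instruction at 'bad_addr' with a jump into the patch area,
 * copy the corrected instructions there, and append a jump back to the
 * instruction that followed the one we replaced. */
static void apply_patch(uint16_t bad_addr, uint16_t bad_len,
                        const uint8_t *fix, uint16_t fix_len)
{
    uint16_t p = PATCH_AREA;

    /* 1. Redirect execution from the failure point into the patch area. */
    memory[bad_addr]     = OP_JUMP;
    memory[bad_addr + 1] = (uint8_t)(p >> 8);
    memory[bad_addr + 2] = (uint8_t)(p & 0xFF);

    /* 2. Lay down the corrected instruction sequence. */
    memcpy(&memory[p], fix, fix_len);
    p += fix_len;

    /* 3. Jump back to the next instruction past the failure point. */
    uint16_t resume = bad_addr + bad_len;
    memory[p]     = OP_JUMP;
    memory[p + 1] = (uint8_t)(resume >> 8);
    memory[p + 2] = (uint8_t)(resume & 0xFF);
}

int main(void)
{
    const uint8_t fix[] = { 0x3C, 0x01, 0x02 };   /* made-up corrected instructions */
    apply_patch(0x0120, 3, fix, sizeof fix);
    return 0;
}
```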

The patches were typed into a keypunch, in hex, producing a set of paper punch cards. You added your cards to the patch deck, which was kept in a cabinet in the lab. When you booted (IPLed) the computer you were working with, you first loaded the latest operating system and then ran the deck of patches through the card reader. Once a week the patches were collected and a new operating system version was created. The patch deck would go away, soon to start growing again with that week’s patches.

The operating system was loaded onto 8″ diskettes. Each diskette held up to 128K, so it only took a handful of diskettes to hold the entire operating system.

The System/34 had a control panel on the side that was a technological delight. There was a set of LEDs that showed various system status indicators. The most dreaded was “machine check”. When that lit up it meant the CPU had crashed. Like the “check engine” light in cars, it offered no details other than “you’re screwed”.

There was also a set of 4 hexadecimal digit LEDs that displayed 16 bits of information, or one “word.” The hex digits could display an address in memory or the contents of that memory location. Under the LEDs were four 16-position rotary switches which could be used to dial in a 16-bit number. For example, to see a specific memory location you dialed in its 16-bit address and pressed a button, which caused the LEDs to display the contents of memory. Or, to patch memory, you dialed in a memory location, set a toggle switch to “address”, and pressed the button. Then you set the toggle switch to “memory”, dialed in the value you wanted to write into memory, and hit the button again. When the machine was running the LEDs flashed merrily, showing the current instruction address. It was mesmerizing and a delight to play with.
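If it helps to picture the examine/patch sequence, here is a toy model of it in C. The names, the word-addressed 64K of storage, and the button behavior are simplifications of mine, not how the hardware actually worked inside.

```c
/* Toy model of the panel procedure: dial a value on the rotary switches,
 * choose "address" or "memory" with the toggle, and press the button.
 * Structure and names are illustrative only. */
#include <stdint.h>
#include <stdio.h>

enum toggle { TOGGLE_ADDRESS, TOGGLE_MEMORY };

struct panel {
    uint16_t    rotary;   /* value dialed on the four 16-position switches */
    enum toggle toggle;   /* what the next button press should mean        */
    uint16_t    address;  /* latched memory address                        */
    uint16_t    leds;     /* value shown on the four hex digit displays    */
};

static uint16_t storage[0x8000];  /* stand-in for 64K of storage, word-addressed here */

/* Pressing the button either latches the dialed address and displays the
 * word stored there, or writes the dialed value into the latched address. */
static void press_button(struct panel *p)
{
    if (p->toggle == TOGGLE_ADDRESS) {
        p->address = p->rotary;
        p->leds    = storage[p->address];
    } else {
        storage[p->address] = p->rotary;
        p->leds             = p->rotary;
    }
}

int main(void)
{
    struct panel p = {0};

    /* Examine location 0x1234. */
    p.rotary = 0x1234; p.toggle = TOGGLE_ADDRESS; press_button(&p);
    printf("contents of 0x1234: %04X\n", p.leds);

    /* Patch 0xBEEF into that location. */
    p.rotary = 0xBEEF; p.toggle = TOGGLE_MEMORY; press_button(&p);
    printf("after patch: %04X\n", storage[0x1234]);
    return 0;
}
```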

The System/34 was a profitable mid-range system. Later in the 70’s IBM Rochester produced another mid-range machine, the System/38. The 38 was a technological marvel with a totally different architecture from the 34. It was also completely incompatible as far as software applications went. In those days, it seemed every IBM computer that came out had a new, unique operating system. Due to the incompatibilities, a gap-bridging machine, the System/36, came out. Later all the System/3x machines merged into the AS/400, which eventually morphed into the iSeries, still in production today.

The System/38 was quite sophisticated, and will be the subject of a future post. One of its main architects was Glenn Henry, an IBM Fellow who went on to be CTO of Dell Computer and then founded a chip design firm, Centaur Technology, which produces low-cost, low-power Intel-compatible CPUs. At the end of 1979 both Glenn and I transferred to IBM’s Austin, TX lab. There was no connection at the time, purely a coincidence, but our paths crossed several times in subsequent years. He was one of the main architects and managers of the RT PC project, IBM’s first RISC-processor, Unix-based engineering workstation. I was a lead programmer in the operating system kernel group. When Glenn went to Dell, he hired me to manage a Unix software product for Dell servers. Both these projects will be subjects of upcoming posts.


4 Responses to “The IBM System/34”

  1. Douin Says:

    Hello, I read your post and it was a pleasure… When I started working, my first job was to repair System/34s, and 25 years later I’m so pleased to go back. Because I never had time to learn and use the SSP, I am trying to find an SSP emulator. Do you know if somebody has done that?
    Thanks,
    Frederic
    From France

  2. Dave McGuire Says:

    Very nice write-up. I just got ahold of a System/34, as well as a System/32. The /34 is close to being able to IPL; I hope to have it up and running soon.

    In your description of the /34’s CE panel, I think you may have gotten it mixed up with that of the /36. The /34’s display is binary, just a row of individual LEDs, not hexadecimal displays.

    -Dave McGuire

    • excitom Says:

      Regarding the display, I believe there were both LEDs and hexadecimal displays. The LEDs were status lights for various things, and there were four hex digits associated with 16-position rotary switches. You could dial in a memory address with the rotary switches and see the contents of that location. Or, you could patch into memory a value from the rotary switches. Given that four hex digits can only address 64K, this only worked for the lower 64K of memory. Newer models could have up to 512K!
