|Manufacturer||Control Data Corporation|
|Release date||September 1964|
|Price||US$2,370,000 (equivalent to $22,360,000 in 2022)|
|Dimensions||Height: 2,000 mm (79 in); cabinet width: 810 mm (32 in); cabinet length: 1,710 mm (67 in); overall width: 4,190 mm (165 in)|
|Weight||about 12,000 lb (6.0 short tons; 5.4 t)|
|Power||30 kW @ 208 V 400 Hz|
|Operating system||SCOPE, KRONOS|
|CPU||60-bit processor @ 10 MHz|
|Memory||Up to 982 kilobytes (131,072 × 60-bit words)|
The CDC 6600 was the flagship of the 6000 series of mainframe computer systems manufactured by Control Data Corporation. Generally considered to be the first successful supercomputer, it outperformed the industry's prior recordholder, the IBM 7030 Stretch, by a factor of three. With performance of up to three megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.
The first CDC 6600s were delivered in 1965 to Livermore and Los Alamos. They quickly became a must-have system in high-end scientific and mathematical computing, with systems being delivered to the Courant Institute of Mathematical Sciences, CERN, the Lawrence Radiation Laboratory, and many others. At least 100 were delivered in total.
A CDC 6600 is on display at the Computer History Museum in Mountain View, California. The only running CDC 6000 series machine has been restored by Living Computers: Museum + Labs.
Main article: Control Data Corporation
CDC's first products were based on the machines designed at Engineering Research Associates (ERA), which Seymour Cray had been asked to update after moving to CDC. After an experimental machine known as the Little Character, in 1960 they delivered the CDC 1604, one of the first commercial transistor-based computers and one of the fastest machines on the market. Management was delighted and made plans for a new series of machines more tailored to business use; these would include, for instance, instructions for character handling and record keeping. Cray was not interested in such a project, and set himself the goal of producing a new machine that would be 50 times faster than the 1604. When asked to complete a detailed report on plans at one and five years into the future, he wrote back that his five-year goal was "to produce the largest computer in the world" ("largest" at that time being synonymous with "fastest"), and that his one-year plan was "to be one-fifth of the way".
Taking his core team to new offices near the original CDC headquarters, they started to experiment with higher-quality versions of the "cheap" transistors Cray had used in the 1604. After much experimentation, they found that there was simply no way the germanium-based transistors could be run much faster than those used in the 1604. The "business machine" that management had originally wanted, now taking shape as the CDC 3000 series, pushed those transistors about as far as they could go. Cray then decided the solution was to work with the then-new silicon-based transistors from Fairchild Semiconductor, which were just coming onto the market and offered dramatically improved switching performance.
During this period, CDC grew from a startup into a large company, and Cray became increasingly frustrated with what he saw as ridiculous management requirements. Things became considerably more tense in 1962 when the new CDC 3600 neared production quality and appeared to be exactly what management wanted, when they wanted it. Cray eventually told CDC's CEO, William Norris, that something had to change, or he would leave the company. Norris felt he was too important to lose, and gave Cray the green light to set up a new laboratory wherever he wanted.
After a short search, Cray decided to return to his home town of Chippewa Falls, Wisconsin, where he purchased a block of land and started up a new laboratory.
Although this process introduced a fairly lengthy delay in the design of his new machine, once in the new laboratory, without management interference, things started to progress quickly. By this time, the new transistors were becoming quite reliable, and modules built with them tended to work properly on the first try. The 6600 began to take form, with Cray working alongside Jim Thornton, system architect and "hidden genius" of the 6600.
More than 100 CDC 6600s were sold over the machine's lifetime. Many of these went to various nuclear weapons laboratories, and quite a few found their way into university computing laboratories. Cray immediately turned his attention to its replacement, this time setting a goal of ten times the performance of the 6600, delivered as the CDC 7600. The later CDC Cyber 70 and 170 computers were very similar to the CDC 6600 in overall design and were nearly completely backwards compatible.
The 6600 was three times faster than the previous record-holder, the IBM 7030 Stretch; this alarmed IBM. Then-CEO Thomas Watson Jr. wrote a memo to his employees on August 28, 1963: "Last week, Control Data ... announced the 6600 system. I understand that in the laboratory developing the system there are only 34 people including the janitor. Of these, 14 are engineers and 4 are programmers ... Contrasting this modest effort with our vast development activities, I fail to understand why we have lost our industry leadership position by letting someone else offer the world's most powerful computer." Cray's reply was sardonic: "It seems like Mr. Watson has answered his own question."
Typical machines of the 1950s and 1960s used a single central processing unit (CPU) to drive the entire system. A typical program would first load data into memory (often using pre-rolled library code), process it, and then write it back out. This required the CPUs to be fairly complex in order to handle the complete set of instructions they would be called on to perform. A complex CPU implied a large one, introducing signalling delays as information flowed between the individual modules making it up. These delays set an upper limit on performance, as the machine could only operate at a cycle speed that allowed the signals time to arrive at the next module.
Cray took another approach. In the 1960s, CPUs generally ran slower than the main memory to which they were attached. For instance, a processor might take 15 cycles to multiply two numbers, while each memory access took only one or two cycles. This meant there was a significant time where the main memory was idle. It was this idle time that the 6600 exploited.
The CDC 6600 used a simplified central processor (CP) that was designed to run mathematical and logic operations as rapidly as possible, which demanded it be built as small as possible to reduce the length of wiring and the associated signalling delays. This led to the machine's (typically) cross-shaped main chassis with the circuit boards for the CPU arranged close to the center, and resulted in a much smaller CPU. Combined with the faster switching speeds of the silicon transistors, the new CPU ran at 10 MHz (100 ns cycle time), about ten times faster than other machines on the market. In addition to the clock being faster, the simple processor executed instructions in fewer clock cycles; for instance, the CPU could complete a multiplication in ten cycles.
Supporting the CPU were ten 12-bit 4 KiB peripheral processors (PPs), each with access to a common pool of 12 input/output (I/O) channels, that handled input and output, as well as controlling what data were sent into central memory for processing by the CP. The PPs were designed to access memory during the times when the CPU was busy performing operations. This allowed them to perform input/output essentially for free in terms of central processing time, keeping the CPU busy as much as possible.
The 6600's CP used a 60-bit word and a ones' complement representation of integers, a representation that later CDC machines would continue to use into the late 1980s, making them the last systems, aside from some digital signal processors, to use it.
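The ones' complement convention can be sketched in a few lines. This is an illustrative model only (the 60-bit width matches the article; the helper names are made up), but it shows the defining property that negation is bitwise inversion, which yields two representations of zero:

```python
# Illustrative model of 60-bit ones' complement integers.
WIDTH = 60
MASK = (1 << WIDTH) - 1

def ones_neg(x):
    """Negate by inverting all 60 bits (ones' complement negation)."""
    return ~x & MASK

def to_int(bits):
    """Interpret a 60-bit ones' complement pattern as a Python int."""
    if bits >> (WIDTH - 1):          # sign bit set: negative value
        return -(~bits & MASK)
    return bits

neg_five = ones_neg(5)
minus_zero = ones_neg(0)             # all bits set: "negative zero"
```

Note that both the all-zeros and all-ones patterns decode to zero, a well-known quirk of ones' complement hardware.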
CDC later offered options as to the number and type of CPs, PPs and channels; e.g., the CDC 6700 had two central processors, a 6400 CP and a 6600 CP.
The entire 6600 machine contained approximately 400,000 transistors.
The CPU could only execute a limited number of simple instructions. A typical CPU of the era had a complex instruction set, which included instructions to handle all the normal "housekeeping" tasks, such as memory access and input/output. Cray instead implemented these instructions in separate, simpler processors dedicated solely to these tasks, leaving the CPU with a much smaller instruction set. This was the first of what later came to be called reduced instruction set computer (RISC) designs.
By allowing the CPU, peripheral processors (PPs) and I/O to operate in parallel, the design considerably improved the machine's performance. Normally, a machine with several processors would also cost a great deal more; key to the 6600's design was making the PPs as simple as possible. The PPs were based on the simple 12-bit CDC 160-A, which ran much slower than the CPU, gathering up data and transmitting it into main memory in high-speed bursts via dedicated hardware.
The 10 PPs were implemented virtually; there was CPU hardware only for a single PP. This CPU hardware was shared, and operated on 10 PP register sets which represented each of the 10 PP states (similar to modern multithreading processors). The PP register "barrel" would rotate, with each PP register set presented in turn to the "slot" which the actual PP CPU occupied. The shared CPU would execute all or some portion of a PP's instruction, whereupon the barrel would rotate again, presenting the next PP's register set (state). Multiple rotations of the barrel were needed to complete an instruction. A complete barrel rotation took 1000 nanoseconds (100 nanoseconds per PP), and an instruction could take from one to five rotations of the barrel to complete, or more if it was a data transfer instruction.
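The barrel-and-slot scheme can be sketched as a round-robin time-slicing of ten register sets through one execution unit. The class and function names below are illustrative, not CDC's terminology, and the model ignores instruction timing:

```python
# Sketch of the "barrel and slot": one shared execution unit ("slot")
# services ten PP register sets ("barrel") in strict round-robin order.

class PPState:
    """One PP's register set: the state carried around the barrel."""
    def __init__(self, pp_id):
        self.pp_id = pp_id
        self.pc = 0        # program counter for this virtual PP

def rotate_barrel(states, steps):
    """Present each register set to the slot in turn; return the order."""
    trace = []
    for step in range(steps):
        pp = states[step % len(states)]   # barrel rotation: next PP state
        pp.pc += 1                        # slot executes (part of) one instruction
        trace.append(pp.pp_id)
    return trace

barrel = [PPState(i) for i in range(10)]
order = rotate_barrel(barrel, 20)
# Each PP reaches the slot exactly once per full rotation of ten steps.
```

With a 100 ns slot time, each virtual PP sees an effective 1000 ns cycle, matching the rotation time given above.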
The basis for the 6600 CPU is what would later be called a RISC design, one in which the processor is tuned to do instructions which are comparatively simple and have limited and well-defined access to memory. The philosophy of many other machines was toward using complicated instructions — for example, a single instruction which would fetch an operand from memory and add it to a value in a register. In the 6600, loading the value from memory required one instruction, and adding it required a second one. While slower in theory due to the additional memory accesses, the fact that in well-scheduled code multiple instructions could be processing in parallel offset this expense. The simplification also forced programmers to be very aware of their memory accesses, and therefore to code deliberately to reduce them as much as possible.
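The contrast can be made concrete with two hypothetical mini-machines (the register and opcode names below are placeholders, not CDC mnemonics): a complex-ISA instruction that fetches and adds in one step, versus the 6600-style split into an explicit load followed by a register-to-register add:

```python
# Illustrative contrast between a "complex" memory-to-register add and
# the 6600's load/store style, where the same work takes two instructions.

memory = {100: 7}
regs = {"X1": 0, "X2": 5}

def add_mem(dst, addr):
    """Complex-ISA style: one instruction fetches from memory and adds."""
    regs[dst] = regs[dst] + memory[addr]

def load(dst, addr):
    """Load/store style, instruction 1: bring the operand into a register."""
    regs[dst] = memory[addr]

def add(dst, a, b):
    """Load/store style, instruction 2: register-to-register add."""
    regs[dst] = regs[a] + regs[b]

load("X1", 100)          # X1 <- mem[100]
add("X2", "X2", "X1")    # X2 <- X2 + X1
```

The result is the same either way; the advantage of the split is that the scheduler can overlap the load's memory latency with unrelated instructions.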
The CDC 6000 series included four basic models: the CDC 6400, the CDC 6500, the CDC 6600, and the CDC 6700. The models of the 6000 series differed only in their CPUs, which were of two kinds, the 6400 CPU and the 6600 CPU. The 6400 CPU had a unified arithmetic unit, rather than discrete functional units. As such, it could not overlap instructions' execution times. For example, in a 6400 CPU, if an add instruction immediately followed a multiply instruction, the add instruction could not be started until the multiply instruction finished, so the net execution time of the two instructions would be the sum of their individual execution times. The 6600 CPU had multiple functional units which could operate simultaneously, i.e., "in parallel", allowing the CPU to overlap instructions' execution times. For example, a 6600 CPU could begin executing an add instruction in the next CPU cycle following the beginning of a multiply instruction (assuming, of course, that the result of the multiply instruction was not an operand of the add instruction), so the net execution time of the two instructions would simply be the (longer) execution time of the multiply instruction. The 6600 CPU also had an instruction stack, a kind of instruction cache, which helped increase CPU throughput by reducing the amount of CPU idle time caused by waiting for memory to respond to instruction fetch requests. The two kinds of CPUs were instruction compatible, so that a program would run the same way on either kind but would run faster on the 6600 CPU. Indeed, all models of the 6000 series were fully inter-compatible. The CDC 6400 had one CPU (a 6400 CPU), the CDC 6500 had two CPUs (both 6400 CPUs), the CDC 6600 had one CPU (a 6600 CPU), and the CDC 6700 had two CPUs (one 6600 CPU and one 6400 CPU).
The Central Processor (CP) and main memory of the 6400, 6500, and 6600 machines had a 60-bit word length. The Central Processor had eight general purpose 60-bit registers X0 through X7, eight 18-bit address registers A0 through A7, and eight 18-bit "increment" registers B0 through B7. B0 was held at zero permanently by the hardware. Many programmers found it useful to set B1 to 1, and similarly treat it as inviolate.
The CP had no instructions for input and output, which were accomplished through the Peripheral Processors (below). No opcodes were specifically dedicated to loading or storing memory; these occurred as a side effect of assignment to certain A registers. Setting A1 through A5 loaded the word at that address into X1 through X5 respectively; setting A6 or A7 stored a word from X6 or X7. No side effects were associated with A0. A separate hardware load/store unit, called the stunt box, handled the actual data movement independently of the operation of the instruction stream, allowing other operations to complete while memory was being accessed, which took eight cycles in the best case.
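The A-register convention can be modelled behaviorally. This is a sketch of the rule just described (set A1–A5 to load, set A6–A7 to store), not a cycle-accurate model, and the class name is invented:

```python
# Behavioral sketch of the 6000-series A-register load/store convention.

class CP6600Model:
    def __init__(self, memory):
        self.mem = memory
        self.A = [0] * 8       # 18-bit address registers (modelled as ints)
        self.X = [0] * 8       # 60-bit operand registers (modelled as ints)

    def set_A(self, i, addr):
        """Assigning to Ai triggers the memory access as a side effect."""
        self.A[i] = addr
        if 1 <= i <= 5:
            self.X[i] = self.mem[addr]   # load: memory word -> Xi
        elif i in (6, 7):
            self.mem[addr] = self.X[i]   # store: Xi -> memory word
        # A0: no side effect

mem = {200: 42}
cp = CP6600Model(mem)
cp.set_A(1, 200)   # loads mem[200] into X1
cp.X[6] = 99
cp.set_A(6, 300)   # stores X6 into mem[300]
```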
The 6600 CP included ten parallel functional units, allowing multiple instructions to be worked on at the same time. This is now known as a superscalar processor design, but it was unique for its time. Unlike most modern CPU designs, functional units were not pipelined; a functional unit would become busy when an instruction was "issued" to it and would remain busy for the entire time required to execute that instruction. (By contrast, the CDC 7600 introduced pipelining into its functional units.) In the best case, an instruction could be issued to a functional unit every 100 ns clock cycle. The system read and decoded instructions from memory as fast as possible, generally faster than they could be completed, and fed them off to the units for processing. The units were:
Floating-point operations were given pride of place in this architecture: the CDC 6600 (and kin) stood virtually alone in being able to execute a 60-bit floating-point multiplication in a time comparable to that for a program branch. A later analysis by Mitch Alsup of James Thornton's book, "Design of a Computer", found that the 6600's floating-point multiply unit was a two-stage pipelined design.
Fixed point addition and subtraction of 60-bit numbers were handled in the Long Add Unit, using ones' complement for negative numbers. Fixed point multiply was done as a special case in the floating-point multiply unit—if the exponent was zero, the FP unit would do a single-precision 48-bit floating-point multiply and clear the high exponent part, resulting in a 48-bit integer result. Integer divide was performed by a macro, converting to and from floating point.
Previously executed instructions were saved in an eight-word cache, called the "stack". In-stack jumps were quicker than out-of-stack jumps because no memory fetch was required. The stack was flushed by an unconditional jump instruction, so unconditional jumps at the ends of loops were conventionally written as conditional jumps that would always succeed.
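The behavior of the eight-word instruction "stack" can be sketched as follows. This is an illustrative model of the rule above (in-stack jump targets avoid a memory fetch; an unconditional jump flushes the stack), with invented names and simplified word-granularity tracking:

```python
# Sketch of the 6600's eight-word instruction stack.

STACK_SIZE = 8

class InstructionStack:
    def __init__(self):
        self.words = []            # addresses of recently fetched words

    def fetch(self, addr):
        """Return where the instruction word comes from."""
        if addr in self.words:
            return "in-stack"      # fast path: no central memory fetch
        self.words.append(addr)
        if len(self.words) > STACK_SIZE:
            self.words.pop(0)      # oldest word falls out of the stack
        return "memory"

    def flush(self):
        """An unconditional jump empties the stack."""
        self.words.clear()

stack = InstructionStack()
first = stack.fetch(100)    # fetched from memory
second = stack.fetch(100)   # loop jump back: already in the stack
stack.flush()               # unconditional jump
third = stack.fetch(100)    # must be fetched from memory again
```

This is why loop-closing jumps were written as always-true conditional jumps: a conditional jump preserved the stack, keeping the loop body on the fast path.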
The system used a 10 MHz clock, with a four-phase signal. A floating-point multiplication took ten cycles, a division took 29, and the overall performance, taking into account memory delays and other issues, was about 3 MFLOPS. Using the best available compilers, late in the machine's history, FORTRAN programs could expect to maintain about 0.5 MFLOPS.
User programs are restricted to use only a contiguous area of main memory. The portion of memory to which an executing program has access is controlled by the RA (Relative Address) and FL (Field Length) registers, which are not accessible to the user program. When a user program tries to read or write a word in central memory at address a, the processor first verifies that a is between 0 and FL-1. If it is, the processor accesses the word in central memory at address RA+a. This process is known as base-bound relocation; each user program sees core memory as a contiguous block of words of length FL, starting at address 0, while in fact the program may be anywhere in physical memory. Using this technique, each user program can be moved ("relocated") in main memory by the operating system, as long as the RA register reflects its position in memory. A user program which attempts to access memory outside the allowed range (that is, with an address not less than FL) triggers an interrupt and is terminated by the operating system. When this happens, the operating system may create a core dump recording the contents of the program's memory and registers in a file, giving the program's developer a means of examining what happened. Note the distinction from virtual memory systems: here, the entirety of a process's addressable space must be in core memory, must be contiguous, and cannot be larger than the real memory capacity.
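Base-bound relocation is simple enough to state as a formula: user address a maps to RA + a when 0 ≤ a < FL, and faults otherwise. A minimal sketch, with invented names:

```python
# Minimal model of RA/FL base-bound relocation.

class AddressFault(Exception):
    """Raised when a user address falls outside the field length."""

def translate(a, RA, FL):
    """Map a user-relative address to a physical address, or fault."""
    if not (0 <= a < FL):
        raise AddressFault(f"address {a} outside field length {FL}")
    return RA + a

# A program loaded at RA=4000 with FL=1000 sees addresses 0..999:
lowest = translate(0, 4000, 1000)     # physical 4000
highest = translate(999, 4000, 1000)  # physical 4999
```

Relocating the program is just a matter of copying its field and changing RA; the program itself never sees physical addresses.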
All but the first seven CDC 6000 series machines could be configured with an optional Extended Core Storage (ECS) system. ECS was built from a different variety of core memory than was used in the central memory. This memory was slower, but cheap enough that it could be much larger, primarily because ECS was wired with only two wires per core (in contrast to five for central memory). Because it performed very wide transfers, its sequential transfer rate was the same as that of central memory. A 6000 CPU could directly perform block memory transfers between a user's program (or operating system) and the ECS unit. Since wide data paths were used, this was a very fast operation. Memory bounds were maintained in a similar manner to central memory, with an RA/FL mechanism maintained by the operating system. ECS could be used for a variety of purposes, including holding user data arrays too large for central memory, holding often-used files, swapping, and even serving as a communication path in a multi-mainframe complex.
To handle the "housekeeping" tasks, which in other designs were assigned to the CPU, Cray included ten other processors, based partly on his earlier computer, the CDC 160-A. These machines, called Peripheral Processors, or PPs, were full computers in their own right, but were tuned to performing I/O tasks and running the operating system. (Substantial parts of the operating system ran on the PPs, leaving most of the power of the Central Processor available for user programs.) Only the PPs had access to the I/O channels. One of the PPs (PP0) was in overall control of the machine, including control of the program running on the main CPU, while the others were dedicated to various I/O tasks; PP9 was dedicated to the system console. When the CP program needed to perform an operating system function, it would put a request in a known location (Reference Address + 1) monitored by PP0. If necessary, PP0 would assign another PP to load any necessary code and handle the request. The PP would then clear RA+1 to inform the CP program that the task was complete.
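The RA+1 handshake amounts to a polled mailbox between the CP program and PP0. The sketch below models only the convention described above (post a request at relative address 1, service it, clear it); the function names and request encoding are invented:

```python
# Sketch of the RA+1 operating-system request handshake.

memory = [0] * 100
RA = 10                         # base of the running program's field

def cp_request(code):
    """CP program posts a system request at relative address 1."""
    memory[RA + 1] = code

def pp0_poll():
    """PP0 polls RA+1; a nonzero word is a pending request."""
    req = memory[RA + 1]
    if req:
        handled = f"handled:{req}"   # PP0 (or a delegated PP) services it
        memory[RA + 1] = 0           # clearing RA+1 signals completion
        return handled
    return None

cp_request(42)
result = pp0_poll()
# memory[RA + 1] is now 0 again: the CP program knows the task is done.
```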
The unique role of PP0 in controlling the machine was a potential single point of failure, in that a malfunction there could shut down the whole machine even if the nine other PPs and the CPU were still functioning properly. Cray fixed this in the design of the successor CDC 7600, in which any of the PPs could be the controller and the CPU could reassign any of them to this role.
Each PP included its own memory of 4096 12-bit words. This memory served both for I/O buffering and for program storage, but the execution units were shared by the ten PPs, in a configuration called the barrel and slot. This meant that the execution units (the "slot") would execute one instruction cycle from the first PP, then one instruction cycle from the second PP, and so on in round-robin fashion. This was done both to reduce costs and because access to CP memory required 10 PP clock cycles: when a PP accessed CP memory, the data was available the next time the PP received its slot time.
In addition to a conventional instruction set, the PPs have several instructions specifically intended to communicate with the central processor:
CRD d: transfers one 60-bit word from central memory, at the address specified by the PP's A register, to five consecutive PP words beginning at address d.
CRM d: similar to CRD, but transfers a block of words whose length was previously stored at location d.
CWD d: the converse of CRD; assembles five consecutive PP words beginning at location d and transfers them to the central memory location specified by register A.
CWM d: the converse of CRM; transfers a block of data beginning at location d to central memory. The central memory address was stored in register A, and the length was stored at location d prior to execution.
RPN: transfers the contents of the central processor's program address register to the PP's A register.
EXN: Exchange Jump; transmits an address from the A register and tells the central processor to perform an exchange jump using the specified address. The exchange jump interrupts the processor, loads its registers from the specified location, and stores the previous contents at the same location, performing a task switch.
The central processor has 60-bit words, while the peripheral processors have 12-bit words. CDC used the term "byte" to refer to the 12-bit entities used by peripheral processors; characters are 6 bits, and central processor instructions are either 15 bits, or 30 bits with a signed 18-bit address field, the latter allowing for a directly addressable memory space of 128K words of central memory (converted to modern terms with 8-bit bytes, this is just under 1 MB). The signed nature of the address registers limited an individual program to 128K words. (Later CDC 6000-compatible machines could have 256K or more words of central memory, budget permitting, but individual user programs were still limited to 128K words of CM.) Central processor instructions had to start on a word boundary when they were the target of a jump or subroutine return jump instruction, so no-op instructions were sometimes required to fill out the last 15, 30 or 45 bits of a word. Experienced assembler programmers could fine-tune their programs by filling these no-op spaces with miscellaneous instructions that would be needed later in the program.
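The packing of 15- and 30-bit instruction parcels into 60-bit words can be sketched as follows. The parcel names below are placeholders, not real CDC mnemonics, and the packer pads with 15-bit no-ops exactly as the paragraph above describes:

```python
# Sketch of packing 15/30-bit instruction parcels into 60-bit words,
# padding with 15-bit no-ops so the next jump target starts a word.

NOOP = "NO"   # hypothetical 15-bit no-op mnemonic

def pack_instructions(parcels):
    """Group (name, width) parcels into 60-bit words, no-op padded."""
    words, current, used = [], [], 0
    for name, width in parcels:
        if used + width > 60:          # parcel doesn't fit: pad out the word
            while used < 60:
                current.append(NOOP)
                used += 15
            words.append(current)
            current, used = [], 0
        current.append(name)
        used += width
    if current:                        # pad and emit the final partial word
        while used < 60:
            current.append(NOOP)
            used += 15
        words.append(current)
    return words

prog = [("SA1", 30), ("FX0", 15), ("FX1", 15), ("JP", 30)]
packed = pack_instructions(prog)
# The first three parcels fill word 1 exactly (30+15+15 = 60);
# the 30-bit jump starts word 2, which is padded with two no-ops.
```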
The 6-bit characters, in an encoding called CDC display code, could be used to store up to 10 characters in a word. They permitted a character set of 64 characters, which is enough for all upper case letters, digits, and some punctuation: certainly enough to write FORTRAN, or print financial or scientific reports. There were actually two variations of the CDC display code character sets in use, 64-character and 63-character. The 64-character set had the disadvantage that the ":" (colon) character would be ignored (interpreted as zero fill) if it were the last character in a word. A later variant, called 6/12 display code, was also used in the Kronos and NOS timesharing systems to allow full use of the ASCII character set in a manner somewhat compatible with older software.
With no byte addressing instructions at all, code had to be written to pack and shift characters into words. The very large words, and comparatively small amount of memory, meant that programmers would frequently economize on memory by packing data into words at the bit level.
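The shift-and-mask packing that such code performed can be sketched directly. The tiny character table below is a hypothetical stand-in for the real CDC display code (it is not the actual encoding), but the 6-bit shifting is the technique the paragraph describes:

```python
# Sketch of packing ten 6-bit characters into one 60-bit word.

# Hypothetical 6-bit character table, illustration only (NOT real
# CDC display code); every code fits in 6 bits (< 64).
CHARSET = ":ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789 +-*/()$=,."

def pack_word(text):
    """Pack up to 10 characters into a single 60-bit integer."""
    assert len(text) <= 10
    word = 0
    for ch in text.ljust(10):          # space-pad to 10 characters
        word = (word << 6) | CHARSET.index(ch)
    return word

def unpack_word(word):
    """Recover the 10 characters by masking off 6 bits at a time."""
    chars = []
    for _ in range(10):
        chars.append(CHARSET[word & 0o77])   # low 6 bits
        word >>= 6
    return "".join(reversed(chars))

w = pack_word("FORTRAN")
text = unpack_word(w)                  # "FORTRAN" plus trailing spaces
```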
Due to the large word size, and with 10 characters per word, it was often faster to process a word's worth of characters at a time rather than unpacking, processing, and repacking them. For example, the CDC COBOL compiler was quite good at processing decimal fields using this technique. These sorts of techniques are now commonly used in the "multi-media" instructions of current processors.
The machine was built in a plus-sign-shaped cabinet with a pump and heat exchanger in the outermost 18 in (46 cm) of each of the four arms. Cooling was done with Freon circulating within the machine and exchanging heat to an external chilled water supply. Each arm could hold four chassis, each about 8 in (20 cm) thick, hinged near the center, and opening a bit like a book. The intersection of the "plus" was filled with cables that interconnected the chassis. The chassis were numbered from 1 (containing all 10 PPUs and their memories, as well as the 12 rather minimal I/O channels) to 16. The main memory for the CPU was spread over many of the chassis. In a system with only 64K words of main memory, one of the arms of the "plus" was omitted.
The logic of the machine was packaged into modules about 2.5 in (64 mm) square and about 1 in (2.5 cm) thick. Each module had a connector (30 pins, two vertical rows of 15) on one edge, and six test points on the opposite edge. The module was placed between two aluminum cold plates to remove heat. The module consisted of two parallel printed circuit boards, with components mounted either on one of the boards or between the two boards. This provided a very dense package, generally impossible to repair but with good heat transfer characteristics, known as cordwood construction.
Operating system support was a sore point for the 6600, with timelines continually slipping. The machines originally ran a very simple job-control system known as COS (Chippewa Operating System), which was quickly "thrown together" based on the earlier CDC 3000 operating system in order to have something running to test the systems for delivery. However, the machines were intended to be delivered with a much more powerful system known as SIPROS (for Simultaneous Processing Operating System), which was being developed at the company's System Sciences Division in Los Angeles. Customers were impressed with SIPROS's feature list, and many had SIPROS written into their delivery contracts.
SIPROS turned out to be a major fiasco. Development timelines continued to slip, costing CDC major amounts of profit in the form of delivery delay penalties. After several months of waiting with the machines ready to be shipped, the project was eventually cancelled. The programmers who had worked on COS had little faith in SIPROS and had continued working on improving COS.
Operating system development then split into two camps. The CDC-sanctioned evolution of COS was undertaken at the Sunnyvale, California software development laboratory. Many customers eventually took delivery of their systems with this software, then known as SCOPE (Supervisory Control Of Program Execution). SCOPE version 1 was, essentially, disassembled COS; SCOPE version 2 included new device and file system support; SCOPE version 3 included permanent file support, EI/200 remote batch support, and INTERCOM time-sharing support. SCOPE always had significant reliability and maintainability issues.
The underground evolution of COS took place at the Arden Hills, Minnesota assembly plant. MACE ([Greg] Mansfield And [Dave] Cahlander Executive) was written largely by a single programmer in the off-hours when machines were available. Its feature set was essentially the same as COS and SCOPE 1. It retained the earlier COS file system, but made significant advances in code modularity to improve system reliability and adaptability to new storage devices. MACE was never an official product, although many customers were able to wrangle a copy from CDC.
The unofficial MACE software was later chosen over the official SCOPE product as the basis of the next CDC operating system, Kronos, named after the Greek god of time. The story goes that Dave Mansfield called the University of Minnesota library and asked for an ancient word meaning "time". He wrote down "Kronos" instead of "Chronos". The main marketing reason for its adoption was the development of its TELEX time-sharing feature and its BATCHIO remote batch feature. Kronos continued to use the COS/SCOPE 1 file system with the addition of a permanent file feature.
An attempt to unify the SCOPE and Kronos operating system products produced NOS (Network Operating System). NOS was intended to be the sole operating system for all CDC machines, a fact CDC promoted heavily. Many SCOPE customers remained software-dependent on the SCOPE architecture, so CDC simply renamed it NOS/BE (Batch Environment) and was able to claim that everyone was thus running NOS. In practice, it was far easier to modify the Kronos code base to add SCOPE features than the reverse.
The assembly plant environment also produced other operating systems which were never intended for customer use. These included the engineering tools SMM for hardware testing and KALEIDOSCOPE for software smoke testing. Another tool commonly used by CDC field engineers during testing was MALET (Maintenance Application Language for Equipment Testing), used to stress test components and devices after repairs or servicing. Testing often used hard disk packs and magnetic tapes deliberately marked with errors to determine whether MALET and the engineer would detect them.
The names SCOPE and COMPASS were used by CDC for both the CDC 6000 series, including the 6600, and the CDC 3000 series:
Main article: CDC 7600
The CDC 7600 was originally intended to be fully compatible with the existing 6000-series machines as well; it started life known as the CDC 6800. But during its design, the designers determined that maintaining complete compatibility with the existing 6000-series machines would limit how much performance improvement they could attain, and decided to sacrifice compatibility for performance. While the CDC 7600's CPU was basically instruction compatible with the 6400 and 6600 CPUs, allowing code portability at the high-level language source code level, the CDC 7600's hardware, especially that of its Peripheral Processor Units (PPUs), was quite different, and the CDC 7600 required a different operating system. This turned out to be somewhat serendipitous, because it allowed the designers to improve on some characteristics of the 6000-series design, such as the latter's complete dependence on Peripheral Processors (PPs), particularly the first (called PP0), to control operation of the entire computer system, including the CPU(s). Unlike the 6600 CPU, the CDC 7600's CPU could control its own operation via a Central Exchange jump (XJ) instruction that swapped all register contents with core memory. In fact, the 6000-series machines were retrofitted with this capability.