Donal Spills the Beans
A Work in Progress
In 1976-77, when the IEEE and ACM, the professional societies for computer scientists and engineers, were planning a special issue of their magazine on computer architecture, John Rollwagen, Cray Research's CEO, gave me the task of writing up the Cray-1. I authored Cray's contribution to this special issue, and it was printed in 1978 without much change, although Gordon Bell, who reviewed my submission, expressed disappointment that I hadn't referenced the way of describing architectures he had published in 1971. An academic would have done so. I had never written an academic paper, and although I later owned a copy of Bell's book on Computer Structures, it was unknown to me at the time. I had been working since I was sixteen and hadn't had time to get a degree, because I got into computer programming in the UK, a portable skill, which eventually led to my being hired by Auerbach Publishers and Consultants, where, as an editor, I covered the emerging market for high-performance computing and supercomputers. In this capacity, I visited the supercomputer startup Cray Research Inc, which had an office in Bloomington, Minnesota, near the airport and within sight of the multi-story building of Control Data, Cray's major competitor, which towered over Cray's single-story office complex.
Seymour Cray's team had been responsible for the CDC 6600 and 7600 lines of computers, which had established CDC's business success. It was from updating my boss Steve Callahan's Auerbach reports on these CDC computers that I had learned about Cray Research. I talked to computer system directors at Babcock & Wilcox, Westinghouse, and other CDC customers. In the main, they all told me they were waiting to acquire Seymour's next machine when it came out. Hence, finding myself nearby by happenstance, I dropped in on the shabby, clearly inexpensive office space that Cray Research occupied at the time.
During my short visit, I don't recall sitting down; rather, I had a short stand-up meeting with Seymour Cray, John Rollwagen, George Hansen, and Noel Stone. George Hansen and Noel Stone were elderly gentlemen, investors in Cray, while John Rollwagen was the much younger VP of Finance. Seymour himself, a dapper chap who looked you straight in the eye, didn't mince words. Perhaps I had blundered into a board meeting, as my reason for being in Minneapolis was not to visit Cray Research but to participate in a marketing event being thrown by Unisys. I later spent much time on the phone with George Grenander, Cray's sales engineer, who explained the architecture very well and enabled me to write a technical but readable description of what was then the world's fastest computer: the Cray-1, the first short-vector supercomputer and a forerunner of today's vector GPUs.
Cray Research wanted reprints, and ComputerWorld, a leading IT newspaper at the time, wanted an article based on the report. This was where a photo of the Cray-1 was captioned with the legend "World's Most Expensive Love Seat." Steve Callahan came up with that memorable description. With that background, having joined Cray Research in 1976 as a technical writer/marketing sort of chap, I was assigned by John Rollwagen the task of writing something for the editors to consider. That non-academic paper has now been cited 1,197 times according to the ACM as of December 2025 (https://dl.acm.org/doi/10.1145/359327.359336).
The IEEE/ACM paper was well received because I'd had access to a transcript of Seymour Cray talking to an audience at Fort Meade, Maryland. I incorporated information gleaned from that lengthy document, a copy of which I was given by Janet Robidaux, Cray's documentation and proposal manager. I found it fascinating because Seymour started off by describing how his career had progressed in terms of the problems he'd encountered when designing machines to meet the US Navy's requirements, work done before he and others had formed their own company, Control Data Corporation. A lengthy excerpt of the transcript follows:
Mr. Cray was one of the founders of CDC and served as a director of that corporation from 1957 to 1965. He was a Senior Vice President at the time of his departure in 1972. While at CDC, Mr. Cray was the principal architect of the CDC 1604, 6600, and 7600. He was responsible for detailed hardware design efforts, and Mr. Cray has gained a reputation as a world leader in the development of large-scale computer systems. From 1950 to 1957 he held several positions at Engineering Research Associates (ERA) and its successor companies, Remington Rand and Sperry Univac (UNIVAC). At ERA, Mr. Cray had design responsibility for a major portion of the 1103 computer (the first commercially successful scientific computer) and management and technical responsibility for the Navy Tactical Data System, which many of you may know. In 1968, he was awarded the W.W. McDowell Award by the American Federation of Information Processing Societies for outstanding contributions in the computer field. In 1972, he was awarded the Harry H. Goode Memorial Award for contributions to the design of large-scale computers and to the development of multi-processing systems. Mr. Cray holds a Bachelor of Electrical Engineering degree and a master's degree in applied mathematics from the University of Minnesota.
He is now with his own company, Cray Research Incorporated, and he has just announced a new computer, the Cray 1. Mr. Cray, we are honored to have you today.
"It is certainly a pleasure to be here with you today. I am both amused and confused to find myself here. I've spent a lifetime establishing a reputation about not talking to anyone. I don't know what happened that day when somebody called me, and I said yes. It was so out of character that I am a little amused and confused. It is, of course a bit of a different situation to start one's own company and feel the responsibility for doing different things so I accepted your invitation here with a great deal of apprehension and pleasure. I do hope that I can tell you a little about the things I have learned along the road and perhaps in discussion I can learn a few things from you.
I'd like to start off by giving you a verbal tour through my technical career. I hope I won't bore you along the road. I think by doing that I can give those of you who are not all that familiar with my particular work some idea of what I do and don't know, and also along the way I hope to tie together some of the evolutionary steps in the computers that I have designed and perhaps give you an idea of how we have gotten the particular machines that we have. I was very fortunate to graduate from the University of Minnesota at that point in computer technology history when there were just the bare beginnings of an industry. It was in 1950-1951. At that time there were just a few, in fact I can only recall two, significant places in the United States that were doing anything related to computing. One was Eckert-Mauchly, where the Univac 1 was the thing, and the other was Engineering Research Associates in St. Paul, Minnesota, and they were busy building a secret machine. I had the good fortune to have an instructor at the University of Minnesota who pointed out this old glider factory down the road a piece where these people were working on this mysterious thing called a digital computer. I had no idea what I was in for there, but I would certainly be on the ground floor of something, and sure enough I have never ceased to marvel at the good fortune of that, because it turned out to be something I enjoyed for a lifetime. The operation of Engineering Research Associates at that time was very small, and I would guess there were 50 or 100 people at most involved. Probably closer to 50. A few of the people, I believe, were from here, the original founders of Engineering Research Associates, and then there were a few new college graduates who didn't know anything about a flip-flop or anything else, let alone Boolean Algebra. I don't recall being taught Boolean Algebra in school. One of my instructors is here though, Art Gannett. He was one of the instructors when I was attending the University of Minnesota. So here I was, a new graduate in Electrical Engineering with a master's degree in applied mathematics, and I was assigned to a project, I believe, that was the forerunner of the 1103. At that time, I think it was called the Atlas II; Atlas 1, I think, was the 1101. Am I right about that, Art? (Response of yes). The Atlas 1 had just been delivered about the time I was employed by ERA, so that was the machine of the day. The successor was just really starting. I don't know how many of you remember back to Atlas 1 or 1101 or whatever we want to call it. One of the things I remember about the 1101 was that it was Naval Task 13, Bureau of Ships, and the 1101, of course, is indeed 13 in binary, and that is how the company established the 1101 series. It started by converting Task 13 into binary, giving 1101, and then it went 1102, 1103. A nice way to start a business.
The 1101 was a magnetic drum machine, and the people had great sport in those days because it was such a challenge to map the drum. They had this marvelous device called an interlace plug board that you could screw into the machine, and you could change the interlace on the drum from 2 to 4 to 8 to 16, depending on what sort of program you wanted to run. With any luck at all, after you executed one instruction off the drum you could get it done before the next instruction came up, and that way you could avoid a drum revolution between each instruction, which speeded things up immensely. So there was a real challenge to get your operands and your instructions off the drum in some sort of pattern. You just had to choose between powers of two on your drum interlaces. That was it. Indeed, it was the marvel of the day, and it used these powerful new vacuum tube circuits. They were a little shaky. The carries tended to get quite small towards the end of a long carry, but they did work, and so I arrived on the scene just as this machine had been delivered and indeed did work, and enthusiasm was running very high, and now we were ready to charge into the future with a second-generation machine called Atlas II.
Well, at that time there was a fellow called Bill Keller who had just arrived from RCA, and he was from the Cathode Ray Tube division of RCA. I kind of blame him for this, I may be doing him an injustice, but he had this thought of using electrostatic storage with these tubes he was familiar with, and this was going to be the great wave of the future because now you didn't have to wait for the drum to turn around; you could store information on the screen of the cathode ray tube. There was this wonderful concept that as the electron beam hit the screen of the tube you got more secondary emission than the original beam, and you could read by putting a metal screen over the face of the tube; you could read the difference in charge and, depending upon whether there was a charge hole or not at that particular point of the screen, you got different kinds of signals, and so, by golly, you could store 1,024 bits of information on the screen of the tube, if you were lucky. Now there were a lot of problems because, of course, the charge dissipated rather quickly, and so in order to keep any information there you had to keep running your beam around and refreshing these spots by reading them off before it was too late and putting them back in again with a nice new charge hole. Then, of course, as we got into the details of it there were these problems that kept coming up, and one of them was called "the read-around problem". Well, the read-around problem amounted to this: if you happened to address the cells next to one that was a little shaky, the splash would fill it up and it would then tend to degrade rather rapidly, so if you read around one of the bits on the screen, that poor fellow in the middle really caught it. It was really hard to read him out a little later on, so there were limitations on the use of the electrostatic storage because of the read-around problem. So, the refreshing rate had to be significantly more often than anticipated because of that problem. Well, there were a few machines successfully made with electrostatic storage. And they did indeed compute, and it was quite a breakthrough at that time. I played a minor role, I think, in that machine. I was so busy learning the business, this strange thing called Boolean Algebra that I'd never even heard of before. All these numbers in the machine were in this strange kind of nomenclature, and so I did a lot of reading. At that time the only things you could read were the "Whirlwind Reports" from MIT. There was a lot of good stuff there. As I recall, in the ERA library there was this one shelf right in the middle of the library that had the Whirlwind reports. They took up about 3 feet of shelf, and they were all big thick books. It was very hard to read them. They had to do with all the modern technology: more powerful vacuum tubes than we had in Atlas 1. The gates were so much more powerful that the carries didn't get smaller as they propagated down the line, and also there were some new conceptual things. There was this wonderful thing, which was news to me, "one's complement arithmetic". This carried through many years of not only ERA history but Remington Rand and Control Data. It all started from Whirlwind 1 ideas.
Anyway, the beginning of my work in this area was in transformer design. The real challenge there was in propagating pulses along the bit paths in the machine arithmetic sections. You had to hook up these vacuum tubes from the plate of one to the grid of the next. About the only way you could do this was to go through a transformer. There was a problem of winding transformers that could propagate narrow pulses, and a narrow pulse in this case was a quarter of a microsecond. Well, a quarter of a microsecond was pretty darn small in those days, and so I thought I had better use everything I knew about mathematics. I spent weeks, no, I spent months, doing Laplace transforms and all the things Art taught me in school, working on how the windings of the transformers should be arranged, how far apart the wires should be, how much paper to put between the layers, and I just worked and worked and worked while everybody else was soldering and putting things together. I was just making volume after volume of paper designs for transformers, and then there came that disillusioning day when I was walking down the hall and I saw this little tiny room, and this older fellow was in there winding transformers. He was just taking cores, Moly Permalloy 479, I do believe, I don't know where I got that out of my memory, but that was what the core material was. It was a very thin metal core material wound in layers, and it turned out to be an inch or so in diameter, sort of oblong, and here he was, just throwing paper on and winding wires, and if it didn't work, he changed it a little bit. He didn't use any mathematics at all. He was doing just great, and they were using his transformers all over the company, and they were very successful, and that was the last time I used mathematics in computer design. From then on, I just threw the wires on and put in a little paper, and if it didn't work right, I'd try a little more paper and a little more wire, and it was ever so much faster. I'm afraid that influenced my whole attitude towards computer design from then on. Sometimes the experimental beats the analytical.
After discovering I shouldn't make a career out of transformer design, I went on and started learning about the logical part of the machine. Sure enough, that was fascinating too, especially the control section. So as Atlas II started coming along, I gradually worked my way up in the organization to the point where I was responsible for designing the control section of the machine, and I worked night and day because that was so exciting. I managed to reduce the vacuum tube count in that machine by a factor of 2 or 3 over the original design, and I was a hero. That was really a breakthrough, and so I was promoted to responsibility for the whole project. There I was in management already, and only a year out of school. So, I thought, "I've really got it now". That project continued for several years, and after building a few electrostatic storage machines we came upon another great discovery, magnetic cores. They were kind of tricky, and it was almost as hard to read those as electrostatic tubes, but not quite. They were at least discrete. With electrostatic storage, you know, there was just this big screen, and as the power supplies changed a little bit the spot drifted around the screen, and you could hardly find the same place twice. At least with the cores you had the wires going through them and it was easier to find them. That was a great breakthrough. It is really incredible looking back on that electrostatic storage, because there were such large voltages involved. You know, the large accelerating potential of the cathode ray tube, and the signal that you got out of that secondary emission was so small. It was like trying to read a very, very tiny signal on top of a great big noise. It's a marvel that those ever worked.
The cores were really quite a great breakthrough. We built very large memories in those days. We built 1,024-word memories out of those cores, and that really made the company. So, at that point ERA sort of went commercial and built 1103 machines with 1,024 words of high-speed memory. I can't remember just what high-speed memory was in those days. The cycle time... 12, yes, it was 12-microsecond memory. That's right, 12-microsecond memory. We kept the drum too, so now we had the memory hierarchy. We had this high-speed core memory with 1,024 words, and then there was the drum, which was the second level you could move things back and forth on. Of course, now the interlace problems on the drum were gone, or almost so, because you could transfer your information from the drum to the core and run your programs out of there real fast. You didn't have to wait for a drum revolution to get your next instruction. Several years went by and the Atlas II machines were quite successful. Sure enough, other people at the agency wanted them, and there were commercial sales. The company really started moving on. Then, as things do happen when a little company is successful, along came Remington Rand and bought it out. Oh, what trauma! All of us who were so proud of our little company found ourselves working in this typewriter company. It was quite emotionally traumatic. For one thing, all the management people were now somewhere on the East Coast, which none of us understood. It just wasn't the same sort of thing anymore, but we worked on nevertheless.
The next machine that came along, let me see if I can get the sequence right here now, after the 1103 there were a few little projects, but the next one I recall as being really significant was Bogart. Now Bogart was quite an experience for me because at this point in time I was beginning to feel like I understood about computers. I was getting a little nervy for a young fellow, and I came out here and talked to a dear old fellow called Joe Eakes. Joe told me about this little computer he had in mind. It was just a little serial machine, and it would be real fun to build. He had in mind calling it Bogart. I went back to St. Paul and scratched my head a little about it. It was just too little and too simple. After a week or so I came back and told Joe, "No. You don't really want that. What you want is a little parallel machine! It's true it's 2 or 3 times as big and expensive as what you said you wanted, but you'll like it anyway". He had enough faith in me to go along with that, and we built this machine still using those little cores, Moly Permalloy, and as far as I know it was really the only magnetic machine that was built in that company, and I'm not too sure about too many other companies. There were some serial machines with shift registers and phasing logic that I worked out, but Bogart was sure one-of-a-kind. I thought it was kind of a little jewel of a machine because it was so simple and so clean, and I believe there were 4 or 5 of them built. They all came here. No one else got involved, because at that time the transistor had been discovered, and again Joe Eakes told me as we were finishing the Bogart project, "You know, I hate to tell you this, but what you're doing is obsolete. You'd better look at what they are doing over here at Philco because they are building these computers out of surface barrier toasted transistors". And I said, "What?" Surface barrier toasted transistors, I think that is what they were called in those days. They were very fragile devices, and if you happened to walk across the floor and touch the leads on one it burned out, because the surface barrier was so delicate. Nevertheless, we did start some projects, and the machine that we built... we built several small experimental machines, and they did indeed work very much nicer than the magnetic machines. But there were tragedies along the way because of that surface barrier toasted deal. One I particularly recall was when we had this machine, we had just completed it, and we had whole bunches of lights and buttons on it that set the registers. Of course, the first thing that we did as we turned on the power, we went down the line and tried to set all the bits in the machine. Well, the ground was not quite right, and every time we pushed a button, we burned out that bit. We went all the way through the machine, and one-half hour later we discovered we had burned out every register in the machine because of the static charge induced by pushing the buttons. There were a few problems with transistors in those days. It kind of discouraged me, actually; I thought that they would never work, but they did have potential and they were very fast. We did go on from there. Now, about that time the Navy was starting a real re-organization, and it was called the Navy Tactical Data System. The idea was to put computers aboard ships and organize all the data gathering and analyzing operations. At the time, Remington Rand played a major role in establishing several projects.
One of them was a systems project for the Navy Tactical Data System. I was put in charge of that program at Remington Rand and did all sorts of marvelous things that went to my head. One thing: I had never been able to get support from the company in the sense of "doing anything" to the building. The building was an old glider factory, designed for making wooden gliders during World War II. I think they were intended to be pulled by airplanes across the English Channel carrying troops. It was just an awful building. None of the walls were finished. It was just bare 2 by 4's everywhere. These partitions had been thrown up in this big old glider factory, and then when NTDS came along, the Navy Tactical Data System, there was money available, and we took one of the little buildings from the old glider factory and converted it into a computing center. Wow! We had a false floor; we had a ceiling with indirect lights. We even had red lights you could turn on. The reason was, the Navy was going to give us a whole bunch of things, including radars on our roof. We were going to take radar data off these things and put it in the computer, do all the analyzing and tracking and all those marvelous things that the Navy likes to do, so it was just a huge project.
The role of Remington Rand in this process was to design a computer that would be so versatile that it could be used in all the various positions aboard ship. It was called the NTDS Unit Computer. The idea was you would use 2, 3, 4, 10, 12 or however many you needed to do all these functions and hook them all together. This was in 1955 or so, I'm guessing. That was very aggressive thinking in those days, and nobody understood the software problems. The project did, in fact, go ahead, and we built a computer that was very simple and really quite successful, and the NTDS Unit Computer, through its various successors, is still being built and delivered today. I recall not too long ago, visiting in Australia, I happened to be involved in a government discussion and, lo and behold, the United States was selling the Australians a bunch of NTDS computers. They were the old ones. The ones I knew about. It was 10 years later or more. So, I know they are still around.
Anyway, I was having some personal problems about then because the company had gotten so big that it seemed almost impossible to get anything done. You couldn't get any prints made anymore in the print department because there were all these rules. You had to get approval for this and approval for that, and I was getting pretty angry about it because I knew what I was doing and I didn't need all this help from the company, so I quit.
I started, with a few other people, principally Bill Norris, a little company called Control Data. Now, as is quite often the case with little companies, Bill Norris and several other people involved didn't have too much idea what the company was going to do. They thought they were going to make some desk calculators, maybe, or data collection devices or something, and I said no. I didn't think we should do that. I thought we should build big computers.
So, it was a little company, and we didn't have much money. Total capitalization was six hundred thousand dollars. I thought we ought to be able to build the world's biggest computer. All we had to do was to find a customer with enough faith, and I thought we could do that. Sure enough, the people that should have been the most angry with us, the Bureau of Ships, because we all kind of bailed out on their NTDS project at Remington Rand, had a heart "as big as all get out". They did, in fact, give us a contract to build a little computer for the Naval Postgraduate School in Monterey, California. So, the 1604 computer was born.
It was an occasion to use the new improved transistors, which had come of age in those few years. So, the 1604 computer was the first thing that I really had complete control over. It was my idea. I did it just the way I wanted. I didn't talk to anybody, and it turned out to incorporate most of those things I had accumulated over those years at Remington Rand. There was this one little problem though. Remington Rand sued us. They claimed that I took my brains with me and that should be worth 1 million dollars, and they wanted their million dollars. My gosh, this was pretty appalling. There was all this talk about trade secrets. No one had ever mentioned trade secrets before. All the years we worked at Remington Rand we always worked on government contracts, CPFF government contracts. I just couldn't imagine, nor could any of the rest of us, how we could acquire trade secrets working on government contracts. That was a very interesting phase that I'll skip over. It was quite an experience to be spending half my time in Federal Court and the other half soldering on this 1604 computer. We were really pretty worried. Here was this big company trying to wipe us out. We had all this work going on to make a case that there were, in fact, no trade secrets. A person should be able to take their brains with them when they leave a company. There was this big breakthrough in the case. We had been giving depositions for Remington Rand, and mine was the biggest. I had several volumes of depositions. There came the day when we were going to appear on the stand in court. In fact, the very day that I was going to appear I really was given a setback, because I saw the Judge and the lawyer for Remington Rand going off to lunch together and I thought, Wow! No way are we going to get out of this one. Of course, I didn't understand that these things are pretty common. It wasn't an hour later that the down became an up. I found out that the lawyer that was going to lunch with the Judge had just bought stock in Control Data. All of a sudden, we figured that things might be alright after all (I hope there aren't any lawyers here), and sure enough, they were. We ended up settling that case out of court that day, and we ended up paying very little money, certainly not enough to have any influence on the course of the company. So, the 1604 proceeded to be built and delivered to Monterey, and then there was a whole family of machines that came from that.
Now I expect many of you know about the 1604, but it was a very straightforward machine. It was the first one that I was involved with that had floating point. This was a pretty new concept then. It had floating-point instructions. It was the beginning of my working, really, on solutions of mathematical problems as distinguished from the old agency kind of problems. Up until that point the machines were primarily logical. We did a lot of other things along the way, and another one that comes to mind is a project called "Clip-Pin", which absolutely boggled my mind at the time.
I am sure that some of you remember Clip-Pin, but attached to a 1604, which we thought was an awful pile of equipment, we hooked up 2 or 3 times as many equally large pieces of equipment, which were primarily memory and streaming devices, and delivered it here, which was the biggest collection of digital equipment, I'm sure, at that time. From then on it seemed like the sky was the limit, and the test was to find out how big a machine you could build. I'm still working on that. To carry on, there was that day when the 1604 had really been successfully proven and several machines had been delivered, and I got sick. I don't often get sick, and I was gone for a week. I stayed home lying in bed feeling miserable. I think I had the flu, and I got to wondering what is it you could do next? After a 1604 is there anything else? I thought it might be time to look at a little machine, so I sketched out the characteristics of a little machine I thought would be sort of good to use with a 1604. It would be sort of a peripheral processor. In that week I really laid out the design of a little machine that was called the 160. It was a 1604 with the 4 lopped off. It turned out to be a successful little satellite, and there were an awful lot of those made, as you know. It was again a nice little core machine. The 1604 and 160 were really the first era of Control Data. Then along about, I guess it was, 1960 we began to think that maybe the floating-point aspect of the 1604 could be expanded upon, and we got involved in discussions of problems with the Atomic Energy Commission. These were essentially hydrodynamic simulation codes, the codes to simulate hydrodynamic events.
The 6600 was a project that Jim Thornton and I started in 1960. It was really a wholly new kind of thing in the sense that we were going to take a large number of peripheral processors like the 160 and interconnect them rather intimately with a central processor and see if we could get all the housekeeping chores out of the mainframe and into these peripheral processors. The 6600 was an attempt to do that in a conceptual way, and we had these 10 peripheral processors and the CPU all in one box. That was a project that was started in St. Paul, in Minneapolis actually. Again, it's strange how these things happen, but Control Data by that time was so big that it was difficult to work in Control Data, and so there came this day, when the 6600 was less than a year old, I believe, when I had a little talk with Bill Norris at a planning meeting. I told him that I had had it. I was going to get out. I was going to go back home to my little town in Wisconsin called Chippewa Falls, and it looked to me like it was either that or get out of the business. He was very understanding. I've been impressed so many times; I guess I didn't give people enough credit. He was very understanding, and he said, "Fine, you just go ahead over to Chippewa Falls and you can build whatever you want to build in a building over there and keep on working", and so I did.
I went over, I bought 80 acres of land along the Chippewa River and built this little laboratory on one parcel and my house on the other, 40 acres apiece. I looked around for some people who wanted to move. Would anyone want to move to Chippewa Falls besides me? Well, indeed there were. There were about 25 people, maybe even 30, that did indeed want to move and work in the woods in Wisconsin. The 6600 program was completed in this new little laboratory in Chippewa Falls. There were a total of 7 of them manufactured there. The total population of that laboratory never went over 40. I thought that was indeed quite an accomplishment, and of course it was done by subcontracting all of the detail work, the assembly work, the stringing of cores, all of the things that did not involve the check-out of the machines or the system, to local vendors. We developed, in the area of Chippewa Falls and Eau Claire, Wisconsin, three or four hundred people who were really working pretty much on that machine, but none of them were directly involved in a management sense. It was quite workable in the sense that I wasn't responsible for that large a number of people, and the technical work could be coordinated without actually getting large numbers of people involved. My threshold is very low on people. I just couldn't hack more than that.... you see, I can't remember names. About 30 or 40 is my limit. To make the story not go over our coffee break time I'll go on and say that the 7600 was started there about 1965, and the intent, of course, was just to upgrade the 6600 by a factor of about 4 and keep somewhat of the same logical design. That project went ahead and was completed, I think, in 1969. I think 1969 was the first delivery of a 7600 to Livermore, California, as the first 6600 had been in '64 perhaps. These two machines were really intended to be floating-point calculators for hydrodynamic simulation, and they were very good at that. I sort of remember a few contacts with the Agency along the way, and one really seems to stand out in my mind. I can't remember the names of the people who were visiting, but as the 6600 was well established and several had been delivered, someone from here came to visit and wondered if maybe this would be any good here. I said, "No, no way. It's a floating-point machine and I know that just isn't the sort of thing you do. Just forget it." Apparently, they believed me, and they went away, and I didn't hear anything for a couple of more years, but somehow or another you did manage to get some of those 6600s and 7600s which, of course, were a good deal better. That really amazed me at the time. The 7600 program was, I thought, the most successful that I had been involved with, and there were 11 of those made in Chippewa Falls. Again, I am pretty proud that the same 40 people made 11 machines, and they did it in a very short period of time. I believe it was a year and a half. There were a whole lot of problems. We built the machines too fast and, of course, they weren't properly checked out, and those early 7600's caused a great deal of trouble for the venturesome customers that got them. It took perhaps 6 to 8 months to recover from building so many machines so fast without really properly checking them out ahead of time.
I do believe that everyone survived in that operation, and the 7600 went on to be a very successful machine. There was one more step at Control Data for me, and that was the 8600 program. We are starting to come up to date now. The 8600 program was intended to get another factor of four in performance above the 7600. The greatest effort in that area was in packaging. It was clear, at that time, that the real limitations on computing speed were not only the speed of the transistors but the physical dimensions of the machine. The 7600 being 11 feet on a side, the wire paths were so long that one could not hope to achieve a factor of 4 greater speed without reducing the physical size very dramatically, so the 8600 was intended to be a box that was no more than three feet in any direction, and that involved crowding the components together into very, very dense packages. There was an aggressive program to make very large physical modules involving 18 printed circuit boards all interconnected rather freewheeling, that is, right through the boards any place you wanted, and a very small number of these modules would indeed fit into a very small cabinet. That program proceeded to the point where modules were built and the electrical characteristics were pretty well established. Then a strange thing happened back at corporate headquarters. The company, I guess really Bill Norris, who is still the company, decided that they wanted to get out of the big computer business and provide services instead, and so the general emphasis and thrust of the company changed a bit. This was the late '60s or 1970, '71. About 1971 is when this really became obvious, and so I was having trouble getting enough support to continue the project. It came again to the point where I decided that, even though I had my own little factory out in the woods, without financial support to proceed there wasn't much of a future.
In the spring of 1972, I left Control Data and started my own little company. Now that was indeed traumatic, because there were those 40 people that I had moved off into the woods. I could see that the "handwriting on the wall" said that they weren't going to stay there without me. I was in no position to hire 40 people with my little company. Nevertheless, I did proceed and started my own company with, I believe, 6 people. Six people that I took with me, and since I had the building on one 40-acre piece and the house on the other 40-acre piece, there was no problem of where to build Cray Research. So, I crowded it over on the other side of my house and built another little building, which is only 8800 sq. ft. and is across the gulley from the other installation.
I started that in '72 and proceeded then, with originally 6, then 12, and now 25 people, to develop not the 8600 but a new machine which had about the same technical goals. There were several things that occurred in that period of time, the period of time in which we were working on the 8600. One of the facts of life that came home, in addition to the question of financial support from the corporation, was that the packaging was indeed a bit aggressive. I think it is technically possible to build modules with 18 printed circuit boards interconnected the way we were doing it, but it was very expensive. The other thing that happened, as this project dragged out primarily because of lack of funds, was that the semiconductor memory business had proceeded to the point where it was indeed competitive with cores. So, in establishing Cray Research and reviewing the technology of the day, the day being 1972, it was possible to make several new decisions as to what to build it out of. The two basic ones were to use integrated circuits instead of discrete components, about 10 years later than everyone else had decided to do that, but I thought it might be safe then, and to use semiconductor memories instead of cores. Well, that one was a bit aggressive. It was a whole new ball game with those two new assumptions. I was so brave that I decided to even change the architecture of the machine a bit and try implementing some vector techniques.
The status of large machines at the time was that there were three real pioneering efforts on vector machines. The oldest was Illiac IV, and Illiac IV, as most of you probably know, was an attempt to tie together a large number of processors and run them kind of like a player piano; in fact, to simulate a grid structure physically with computers arranged in an array. That was one approach. There were competitive approaches, the other principle being streaming data. There were two such machines at this period of time; 1970, 1971 I guess, were really the years when these machines were developed. There was one at TI, the Advanced Scientific Computer, and there was one at Control Data, the Star machine, which Jim Thornton was in charge of at that time. He had left Chippewa earlier than I and gone back to Minneapolis and was proceeding on his own projects. The Star machine, which I understand is a good deal better than the TI machine, although I think essentially they are using the same concept, was one of streaming data in an organized way through arithmetic units and thereby avoiding the need to issue instructions one at a time and index them as the scalar machines have all done. My view of all three of these machines is that they really pioneered the road and got people thinking about vector techniques and how to go about solving problems in that mode. It takes several years, of course, for people just to get their thinking reorganized, and it was a good deal of a struggle for those that seriously considered these machines. Even more of a struggle for those that bought them. It was only in 1972, '73, I think, that people began to understand what the whole business was about. Really, all three of those machines, I think, are a bit marginal in their utility, simply because the basic conceptual ideas weren't well enough established at the time the machines were built for the engineers to have a fair chance of implementing them. The principal problem in the streaming machines was that conceptually they involved streaming from memory to memory. The idea was that you would take a large stream of data out of memory, run it through some computing element, and put it back in memory. The starting and stopping times of the streams were a very significant factor. The stream had to be of substantial length in order for the efficiency of the vector operation to be anything competitive with the scalar machine. That is, it would have to be of the order of a hundred or several hundred or even a thousand elements long before the machine really looked good. It is this essential point that I assumed needed correction. The thing that I'll describe in our next session is the machine design which I proceeded on then, in 1972, which I would call a short vector machine. It takes advantage of all those things learned over 3 or 4 years with the ASC machine and the Star machines, in that vector machines are nice, but you have to compromise in the sense that the general application is not ordinarily amenable to long vectors. Vectors are frequent enough in normal computation, but not in the hundreds and thousands in terms of element length. The Cray 1 machine is intended to work well with short vectors, and the crossover point between choosing to go scalar or vector is actually only 3 elements, which is at least an order of magnitude smaller than on those earlier machines. I'd like to take a coffee break now and then show some slides and describe the machine I am currently building."
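The scalar-versus-vector crossover Seymour describes at the end is simple arithmetic, and it is worth sketching. The little Python model below is purely illustrative, with assumed timings rather than actual Cray-1 or STAR-100 figures: a vector operation pays a fixed startup cost and then delivers one result per step, while scalar code pays a larger cost per element but no startup. A register-to-register design with a small startup wins after only a few elements, while a memory-to-memory streaming design with a large startup needs vectors hundreds of elements long.

# A minimal sketch of the scalar-versus-vector crossover arithmetic described above.
# All timing figures are illustrative assumptions, not actual Cray-1 or STAR-100 numbers.

def vector_time(n, startup, per_element):
    """Time to process n elements in vector mode: a fixed startup cost,
    then one result per per_element step once the pipeline is streaming."""
    return startup + n * per_element

def scalar_time(n, per_element):
    """Time to process n elements one at a time in scalar mode."""
    return n * per_element

def crossover_length(startup, vec_per_element, scalar_per_element):
    """Smallest vector length for which vector mode is strictly faster than scalar mode."""
    n = 1
    while vector_time(n, startup, vec_per_element) >= scalar_time(n, scalar_per_element):
        n += 1
    return n

# A small startup cost (register-to-register style) pays off after a few elements...
print(crossover_length(startup=4, vec_per_element=1, scalar_per_element=3))    # -> 3
# ...while a large startup cost (memory-to-memory streaming style) needs long vectors.
print(crossover_length(startup=300, vec_per_element=1, scalar_per_element=3))  # -> 151

With these made-up numbers the crossover lands at 3 elements in the low-startup case and around 150 in the high-startup case, which is the order-of-magnitude gap Seymour is pointing at.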