Classes of computers
Computers can be classified, or typed, in many ways. Some common classifications of digital computers are summarized below. For others see Category:Classes of computers.
First generation computers used vacuum tubes such as the 6J6 or specially designed tubes, or even mechanical arrangements, and were relatively slow and energy-hungry; the earliest computers were also less flexible in their programmability. Second generation computers used discrete transistors, and so were smaller and consumed less power. Third generation computers use integrated circuits, the main difference between hardware in computers of the 1960s and today being the density of transistors in each IC: beginning with small-scale integration chips such as the transistor-transistor logic (TTL) SN7400 gates with 20 transistors, through medium-scale integration and large-scale integration, to very-large-scale integration (VLSI) with over ten billion transistors in a single IC "chip".

(Illustration: the development of computers, from huge room-sized machines of the 1940s and 1950s to smaller personal computers of the 1980s through the 2010s.)
This list of computer size categories attempts to list commonly used categories of computer by the physical size of the device and its chassis or case, in descending order of size. One generation's "supercomputer" is the next generation's "mainframe", and a "PDA" does not have the same set of functions as a "laptop", but the list still has value, as it provides a ranked categorization of devices. It also ranks some more obscure computer sizes.
A supercomputer is a computer with a high level of performance compared to a general-purpose computer. The performance of a supercomputer is commonly measured in floating-point operations per second (FLOPS) rather than million instructions per second (MIPS). Since 2017, there have been supercomputers that can perform nearly a hundred quadrillion FLOPS. Since November 2017, all of the world's 500 fastest supercomputers have run Linux-based operating systems. Additional research is being conducted in China, the United States, the European Union, Taiwan and Japan to build even faster, more powerful and more technologically superior exascale supercomputers.
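Peak FLOPS figures like those quoted above are usually derived from the machine's hardware specifications. A common back-of-the-envelope formula is sockets × cores per socket × clock rate × floating-point operations per cycle; the sketch below illustrates it with entirely hypothetical numbers, not the specifications of any real system.

```python
# Rough sketch: theoretical peak FLOPS is commonly estimated as
#   sockets * cores_per_socket * clock_hz * flops_per_cycle.
# All figures below are illustrative placeholders.

def peak_flops(sockets: int, cores_per_socket: int,
               clock_hz: float, flops_per_cycle: int) -> float:
    """Theoretical peak floating-point operations per second for one node."""
    return sockets * cores_per_socket * clock_hz * flops_per_cycle

# Hypothetical 2-socket node: 24 cores per socket, 2.5 GHz clock,
# 16 FLOPs per core per cycle (e.g. wide fused multiply-add units).
node = peak_flops(sockets=2, cores_per_socket=24,
                  clock_hz=2.5e9, flops_per_cycle=16)
print(f"{node / 1e12:.2f} TFLOPS per node")  # 1.92 TFLOPS
```

Sustained LINPACK scores such as those on the TOP500 list are measured, not computed this way, and typically come in well below the theoretical peak.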
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.
Supercomputers were introduced in the 1960s, and for several decades the fastest were made by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. The first such machines were highly tuned conventional designs that ran faster than their more general-purpose contemporaries. Through the 1960s, they began to add increasing amounts of parallelism, with one to four processors being typical. From the 1970s, vector processors operating on large arrays of data came to dominate. A notable example is the highly successful Cray-1 of 1976. Vector computers remained the dominant design into the 1990s. Since then, massively parallel supercomputers with tens of thousands of off-the-shelf processors have become the norm.
The US has long been the leader in the supercomputer field, first through Cray's almost uninterrupted dominance of the field, and later through a variety of technology companies. Japan made major strides in the field in the 1980s and 90s, but since then China has become increasingly active in the field. As of November 2018, the fastest supercomputer on the TOP500 supercomputer list is Summit, in the United States, with a LINPACK benchmark score of 143.5 PFLOPS, exceeding the second-place Sierra by around 48.860 PFLOPS. The US has five of the top 10 and China has two. As of June 2018, all the supercomputers on the list combined had broken the 1 exaFLOPS mark.
In 1960 Sperry Rand built the Livermore Atomic Research Computer (LARC), today considered among the first supercomputers, for the US Navy Research and Development Centre. It still used high-speed drum memory, rather than the newly emerging disk drive technology. Also among the first supercomputers was the IBM 7030 Stretch. The IBM 7030 was built by IBM for the Los Alamos National Laboratory, which in 1955 had requested a computer 100 times faster than any existing computer. The IBM 7030 used transistors, magnetic core memory, pipelined instructions, prefetched data through a memory controller and included pioneering random access disk drives. The IBM 7030 was completed in 1961 and, despite not meeting the challenge of a hundredfold increase in performance, it was purchased by the Los Alamos National Laboratory. Customers in England and France also bought the computer, and it became the basis for the IBM 7950 Harvest, a supercomputer built for cryptanalysis.
The third pioneering supercomputer project in the early 1960s was the Atlas at the University of Manchester, built by a team led by Tom Kilburn. He designed the Atlas to have memory space for up to a million words of 48 bits, but because magnetic storage with such a capacity was unaffordable, the actual core memory of Atlas was only 16,000 words, with a drum providing memory for a further 96,000 words. The Atlas operating system swapped data in the form of pages between the magnetic core and the drum. The Atlas operating system also introduced time-sharing to supercomputing, so that more than one program could be executed on the supercomputer at any one time. Atlas was a joint venture between Ferranti and Manchester University and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second.
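The page-swapping scheme described above can be sketched in miniature: a small fast store holds a few pages, and on a miss the resident page used least recently is written back to the larger drum and the requested page loaded in its place. This is a simplified illustration only; the actual Atlas used its own "learning" replacement algorithm rather than the LRU policy shown here, and all names and sizes below are hypothetical.

```python
# Minimal sketch of demand paging in the spirit of the Atlas one-level store.
# "core" is a small fast memory; "drum" is the larger backing store.
# LRU replacement is used here for simplicity (not the Atlas's actual policy).
from collections import OrderedDict

class OneLevelStore:
    def __init__(self, core_pages: int):
        self.core = OrderedDict()   # page number -> contents, in LRU order
        self.drum = {}              # backing store for evicted pages
        self.core_pages = core_pages
        self.faults = 0

    def access(self, page: int):
        if page in self.core:
            self.core.move_to_end(page)          # mark as most recently used
            return self.core[page]
        self.faults += 1
        if len(self.core) >= self.core_pages:    # core full: evict LRU page
            victim, contents = self.core.popitem(last=False)
            self.drum[victim] = contents         # write victim back to drum
        self.core[page] = self.drum.pop(page, f"page-{page}")
        self.core.move_to_end(page)
        return self.core[page]

store = OneLevelStore(core_pages=2)
for p in [0, 1, 0, 2, 1]:   # access pattern with reuse across an eviction
    store.access(p)
print(store.faults)  # 4: pages 0, 1, 2 are cold misses; 1 is evicted, then refetched
```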
The CDC 6600, designed by Seymour Cray, was finished in 1964 and marked the transition from germanium to silicon transistors. Silicon transistors could run faster, and the overheating problem was solved by introducing refrigeration to the supercomputer design. Thus the CDC 6600 became the fastest computer in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market, with one hundred computers sold at $8 million each.
Cray left CDC in 1972 to form his own company, Cray Research. Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, which became one of the most successful supercomputers in history. The Cray-2 was released in 1985. It had eight central processing units (CPUs) and liquid cooling, and the electronics coolant Fluorinert was pumped through the supercomputer architecture. It performed at 1.9 gigaFLOPS and was the world's second fastest after the M-13 supercomputer in Moscow.
Massively parallel designs
Main articles: Supercomputer architecture and Parallel computer hardware
The only computer to seriously challenge the Cray-1's performance in the 1970s was the ILLIAC IV. This machine was the first realized example of a true massively parallel computer, in which many processors worked together to solve different parts of a single larger problem. In contrast with the vector systems, which were designed to run a single stream of data as quickly as possible, in this concept the computer instead feeds separate parts of the data to entirely different processors and then recombines the results. The ILLIAC's design was finalized in 1966 with 256 processors and offered speeds up to 1 GFLOPS, compared to the 1970s Cray-1's peak of 250 MFLOPS. However, development problems led to only 64 processors being built, and the system could never operate faster than about 200 MFLOPS while being much larger and more complex than the Cray. Another problem was that writing software for the system was difficult, and getting peak performance from it was a matter of serious effort.
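The scatter-and-recombine idea described above can be sketched in a few lines: split one large problem into chunks, hand each chunk to a separate worker, then combine the partial results. The sketch below uses Python's `multiprocessing` with four local workers purely as an illustration; a real massively parallel machine distributes the chunks across thousands of physical processors.

```python
# Sketch of the data-parallel concept: scatter chunks of one problem to
# independent workers, compute partial results, then gather and recombine.
# Worker count and problem here are illustrative placeholders.
from multiprocessing import Pool

def partial_sum(chunk):
    """Each 'processor' works only on its own slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    n_workers = 4
    step = len(data) // n_workers
    chunks = [data[i * step:(i + 1) * step] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)   # scatter work to workers
    total = sum(partials)                          # gather and recombine
    print(total == sum(x * x for x in data))       # True
```

The same split/compute/combine shape underlies message-passing programs on machines like the Intel Paragon mentioned below, where the scatter and gather steps become explicit messages between nodes.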
But the partial success of the ILLIAC IV was widely seen as pointing the way to the future of supercomputing. Cray argued against this, famously quipping that "If you were plowing a field, which would you rather use? Two strong oxen or 1024 chickens?" But by the early 1980s, several teams were working on parallel designs with thousands of processors, notably the Connection Machine (CM) that developed from research at MIT. The CM-1 used as many as 65,536 simplified custom microprocessors connected together in a network to share data. Several updated versions followed; the CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.
In 1982, Osaka University's LINKS-1 Computer Graphics System used a massively parallel processing architecture, with 514 microprocessors, including 257 Zilog Z8001 control processors and 257 iAPX 86/20 floating-point processors. It was mainly used for rendering realistic 3D computer graphics. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994, with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor. The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network. The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high-speed two-dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.
Software development remained a problem, but the CM series sparked off considerable research into this issue. Similar designs using custom hardware were made by many companies, including the Evans & Sutherland ES-1, MasPar, nCUBE, Intel iPSC and the Goodyear MPP. But by the mid-1990s, general-purpose CPU performance had improved so much that a supercomputer could be built using them as the individual processing units, instead of using custom chips. By the turn of the 21st century, designs featuring tens of thousands of commodity CPUs were the norm, with later machines adding graphics processing units to the mix.