The very first attempt towards automatic computing was made by Blaise Pascal. He invented a device, consisting of gears and chains, which could perform repeated additions and subtractions. This device was called the Pascaline. Many later attempts were made in this direction; we will not go into the details of these mechanical calculating devices. But we must discuss the innovations of Charles Babbage, the grandfather of the modern computer. He designed two computers:



The Difference Engine: It was based on the mathematical principle of finite differences and was used to perform calculations on large numbers using a formula. It was also used for tabulating polynomial and trigonometric functions, as the sketch below illustrates.
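To make the principle concrete, here is a minimal Python sketch (an illustration, not Babbage's mechanism) of the method of finite differences: once the initial differences of a polynomial are set up, every further value is produced by additions alone, which is exactly the kind of repeated addition the Difference Engine's wheels performed.

```python
def initial_differences(f, degree, x0=0, step=1):
    """Build the leading differences [f(x0), delta f, delta^2 f, ...] of a polynomial."""
    row = [f(x0 + i * step) for i in range(degree + 1)]
    diffs = [row[0]]
    while len(row) > 1:
        row = [b - a for a, b in zip(row, row[1:])]
        diffs.append(row[0])
    return diffs  # the last entry is constant for a polynomial of this degree

def tabulate(diffs, n):
    """Generate n successive values of the polynomial using additions only."""
    d = list(diffs)
    values = []
    for _ in range(n):
        values.append(d[0])
        for i in range(len(d) - 1):   # each level absorbs the level below it
            d[i] += d[i + 1]
    return values

# Example: tabulate x^2 + x + 1 for x = 0..5 -- no multiplication needed
p = lambda x: x * x + x + 1
print(tabulate(initial_differences(p, 2), 6))   # [1, 3, 7, 13, 21, 31]
```

Note that after the initial set-up, the loop uses nothing but addition: this is why the engine could be built entirely from adding wheels.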

The Analytical Engine by Babbage: It was a general purpose computing device which could be used for performing any mathematical operation automatically. It consisted of the following components (a toy sketch of how they work together follows the list):

The Store: A mechanical memory unit consisting of sets of counter wheels.

The Mill: An arithmetic unit which is capable of performing the four basic arithmetic operations.

Cards: There are basically two types of cards:

a. Operation Cards: Selects one of the four arithmetic operations by activating the mill to perform the selected function.

b. Variable Cards: Selects the memory locations to be used by the mill for a particular operation (i.e. the source of the operands and the destination of the results).

Output: Could be directed to a printer or a card punch device.
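The division of labour among these components can be illustrated with a toy Python sketch (the card encoding below is hypothetical, not Babbage's actual format): operation cards tell the Mill what to do, and variable cards tell it which Store locations to read and write.

```python
# The Store: numbered locations (Babbage's counter wheels)
store = {0: 7, 1: 5, 2: 0, 3: 0}

# The Mill: the four basic arithmetic operations
mill = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
    "MUL": lambda a, b: a * b,
    "DIV": lambda a, b: a // b,
}

# Each step pairs an operation card with a variable card
# (source1, source2, destination are Store locations).
cards = [
    ("ADD", (0, 1, 2)),   # store[2] = store[0] + store[1]
    ("MUL", (2, 0, 3)),   # store[3] = store[2] * store[0]
]

for op, (s1, s2, dest) in cards:
    store[dest] = mill[op](store[s1], store[s2])

print(store)   # {0: 7, 1: 5, 2: 12, 3: 84}
```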






The basic features of this Analytical Engine were:
It was a general purpose programmable machine.
It had the provision of automatic sequence control, thus enabling programs to alter the sequence of operations.
The provision of sign checking of results existed.
A mechanism for advancing or reversing the control cards was permitted, thus enabling the execution of any desired instruction. In other words, Babbage had devised conditional and branching instructions.

The Babbage machine is fundamentally the same as a modern computer. Unfortunately, Babbage's work could not be completed. But as a tribute to Charles Babbage, his Difference Engine was completed in the last decade and is now on display at the Science Museum in London.
The next notable attempts towards the computer were electromechanical. Konrad Zuse used electromechanical relays that could be either opened or closed automatically. Thus began the use of binary digits rather than decimal numbers.

Harvard Mark I and the Bug

The next significant effort towards devising an electromechanical computer was made at Harvard University, jointly sponsored by IBM and the Department of the US Navy. Howard Aiken of Harvard University developed a system called the Mark I in 1944. The Mark I was a decimal machine.

Some of you must have heard the term "bug". It is mainly used to indicate errors in computer programs. The term gained currency when, one day, a program in the Mark I did not run properly because a moth had short-circuited the computer. Since then, the moth, or 'bug', has been linked with errors or problems in computer programming. The process of eliminating errors in a program is thus known as 'debugging'.

The basic drawbacks of these mechanical and electromechanical computers were:


Friction and inertia of the moving components limited their speed.
Data movement using gears and levers was quite difficult and unreliable.

The need was therefore for a switching and storing mechanism with no moving parts. The electronic switching technique of the "triode" vacuum tube was then adopted, and hence the first electronic computer was born.


First Generation Computers


It is indeed ironic that scientific inventions of great significance have often been linked with supporting a very sad and undesirable aspect of civilisation, i.e. fighting wars. Nuclear energy would not have been developed as fast if colossal efforts had not been spent towards devising nuclear bombs. Similarly, the first truly general purpose computer was designed to meet a requirement of World War II. The ENIAC (Electronic Numerical Integrator And Computer) was designed in 1945 at the University of Pennsylvania to calculate figures for the thousands of gunnery tables required by the US Army for accuracy in artillery fire. The ENIAC ushered in the era of what is known as first generation computers. It could perform 5000 additions or 500 multiplications per second. It was, however, a monstrous installation: it used about 18,000 vacuum tubes, weighed 30 tons, occupied several rooms, needed a great amount of electricity and emitted excessive heat. The main features of ENIAC can be summarised as:

- ENIAC was a general purpose computing machine in which vacuum tube technology was used.

- ENIAC was based on decimal arithmetic rather than binary arithmetic.

- ENIAC needed to be programmed manually by setting switches and plugging or unplugging cables. Thus, passing a set of instructions to the computer was cumbersome and time-consuming. This was considered the major deficiency of ENIAC.

The trends which were encountered during the era of first generation computers were:

- In first generation computers, control was centralised in a single CPU, and all operations required the direct intervention of the CPU.

- Use of ferrite-core main memory was started during this time.

- Concepts such as the use of virtual memory and index registers (you will know more about these terms later) started.

- Punched cards were used as input devices.

- Magnetic tapes and magnetic drums were used as secondary memory.

- Binary code or machine language was used for programming.

- Towards the end, due to the difficulties encountered in using machine language as the programming language, the use of symbolic language, which is now called assembly language, started.

- The assembler, a program which translates assembly language programs into machine language, was made (a minimal sketch of this translation follows this list).

- Computer was accessible to only one programmer at a time (single user environment).

- Advent of von-Neumann architecture.
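As a small illustration of the assembler mentioned above, here is a sketch using a made-up machine with 8-bit instruction words (the mnemonics and opcodes are invented for illustration): each mnemonic is looked up in an opcode table and emitted as a machine-language word.

```python
# Hypothetical opcode table: 4-bit opcode, 4-bit operand per instruction
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

def assemble(lines):
    """Translate 'MNEMONIC [operand]' lines into 8-bit machine words."""
    words = []
    for line in lines:
        parts = line.split()
        opcode = OPCODES[parts[0]]
        operand = int(parts[1]) if len(parts) > 1 else 0
        words.append((opcode << 4) | (operand & 0x0F))
    return words

program = ["LOAD 2", "ADD 3", "STORE 4", "HALT"]
print([f"{w:08b}" for w in assemble(program)])
# ['00010010', '00100011', '00110100', '11110000']
```

Writing the binary words on the last line by hand is machine language programming; writing the mnemonic form and letting a program do the translation is assembly language programming.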


Second Generation Computers


The second generation of computers started with the advent of transistors. A transistor is a two-state device made from silicon. It is cheaper and smaller than a vacuum tube and dissipates less heat, but it can be utilised in a similar way. Unlike a vacuum tube, a transistor does not require wires, a metal glass capsule or a vacuum; it is therefore called a solid state device. The transistor was invented in 1947 and launched the electronic revolution of the 1950s.

The generations of computers are basically differentiated by the fundamental hardware technology. Each new generation of computers is characterised by greater speed, larger memory capacity and smaller size than the previous generation. Thus, second generation computers were more advanced in terms of the arithmetic and logic unit and the control unit than their counterparts of the first generation. Another feature of the second generation was that by this time high level languages were beginning to be used and provisions for system software were starting.

One of the main computer series of this time was the IBM 700 series. Each successive member of this series showed increased performance and capacity and reduced cost. In this series two main concepts were used: I/O channels, an independent processor for Input/Output, and the multiplexor, a useful routing device.




Third Generation Computers


A single self-contained transistor is called a discrete component. In the 1960s, electronic equipment was made from discrete components such as transistors, capacitors, resistors and so on. These components were manufactured separately and soldered onto circuit boards, which could then be used for making computers. Since a computer can contain around 10,000 of these transistors, the entire mechanism was cumbersome.

Then started the era of microelectronics (small electronics) with the invention of Integrated Circuits (ICs). The use of ICs in computers defines the third generation of computers.

In an integrated circuit, components such as transistors, resistors and conductors are fabricated on a semiconductor material such as silicon. Thus, a desired circuit can be fabricated on a tiny piece of silicon rather than by assembling several discrete components into the same circuit. Hundreds or even thousands of transistors can be fabricated on a single wafer of silicon. In addition, these fabricated transistors can be connected by a process of metallisation to form logic circuits on the same chip on which they were produced.





An integrated circuit is constructed on a thin wafer of silicon which is divided into a matrix of small areas (of the order of a few square millimetres each). An identical circuit pattern is fabricated on each of these areas and the wafer is then broken into chips (refer Figure 3). Each of these chips consists of several gates, which are made using transistors only, and a number of input and output connection points. Each chip can then be packaged separately in a housing to protect it. In addition, this housing provides a number of pins for connecting the chip with other devices or circuits. The pins on these packages can be provided in two ways:


In two parallel rows with 0.1 inch spacing between two adjacent pins in each row. This package is called a dual in-line package (DIP) (refer Figure 4(a)).
In case more than a hundred pins are required, a pin grid array (PGA) is used, where pins are arranged in an array of rows and columns with spacing of 0.1 inch between two adjacent pins (refer Figure 4(b)).




Different circuits can be constructed on different wafers. All these packaged circuit chips can then be interconnected on a printed circuit board to produce complex electronic circuits such as computers.

Initially, only a few gates could be integrated reliably on a chip and then packaged. This initial integration was referred to as small scale integration (SSI). Later, with advances in microelectronics technology, SSI gave way to medium scale integration (MSI), where hundreds of gates were fabricated on a chip. Then came large scale integration (LSI, 1,000 gates) and very large scale integration (VLSI, 1,000,000 gates on a single chip). At present we are entering the era of ultra large scale integration (ULSI), where 100,000,000 components are expected to be fabricated on a single chip. The projection is that in the near future almost 10,000,000,000 components will be fabricated on a single chip.

What are the advantages of having densely packed Integrated Circuits? These are:

Low cost: The cost of a chip has remained almost constant while the chip density (number of gates per chip) is ever increasing. It implies that the cost of computer logic and memory circuitry has been reducing rapidly.

Greater Operating Speed: The higher the density, the closer the logic or memory elements are to one another, which implies shorter electrical paths and hence higher operating speed.

Smaller computers: better portability.

Reduction in power and cooling requirements.

Reliability: The integrated circuit interconnections are much more reliable than soldered connections. In addition, densely packed integrated circuits enable fewer inter-chip connections. Thus, the computers are more reliable.

Some examples of third generation computers are the IBM System/360 family and DEC PDP-8 systems. The third generation computers mainly used SSI chips.

One of the key concepts brought forward during this time was that of a family of compatible computers. This concept was mainly started by IBM with its System/360 family.

A family of computers consists of several models. Each model is assigned a model number; for example, the IBM System/360 family had Models 30, 40, 50, 65 and 75. As we go from a lower model number to a higher one in this family, the memory capacity, processing speed and cost increase. But all these models are compatible in nature, that is, a program written on a lower model can be executed on a higher model without any change; only the time of execution is reduced. The biggest advantage of this family system was the flexibility in selecting a model. For example, if you had a limited budget and processing requirements, you could start with a relatively moderate model such as the Model 30. As your business flourished and your processing requirements increased, you could upgrade your computer with subsequent models such as the 40, 50, 65 or 75, depending on your need. Here, you are not sacrificing the investment in already developed software, as it can be used on these machines also.

Let us summarise the main characteristics of the family. These are:


The instructions on a family are of a similar type. Normally, the instruction set of a lower end member is a subset of that of a higher end member; therefore, a program written on a lower end member can be executed on a higher end member, but a program written on a higher end member may or may not execute on a lower end member (see the sketch after this list).
The operating system used on family members is the same. In certain cases some features can be added in the operating system for the higher end members.
The speed of execution of instruction increases from lower end family members to higher end members.
The number of I/O ports or interfaces increases as we move to higher members.
Memory size increases as we move towards higher members.
The cost increases from lower to higher members.
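The subset relation mentioned in the first characteristic can be sketched in a few lines of Python (the model names and instruction sets below are illustrative, not the actual System/360 sets):

```python
# A lower end member's instruction set is a subset of a higher end member's.
model_30 = {"LOAD", "STORE", "ADD", "SUB"}
model_65 = model_30 | {"MUL", "DIV", "FLOAT_ADD"}   # strict superset

def runs_on(program, instruction_set):
    """A program runs on a model iff it uses only instructions the model has."""
    return set(program) <= instruction_set

prog = ["LOAD", "ADD", "STORE"]      # written for the lower end member
print(runs_on(prog, model_30))       # True
print(runs_on(prog, model_65))       # True: upward compatible
print(runs_on(["MUL"], model_30))    # False: higher end feature missing
```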
But how was the family concept implemented? Well, there were three main features of implementation. These are:


increased complexity of arithmetic logic unit
increase in memory-CPU data paths
simultaneous access of data in higher end members.
The PDP-8 was a contemporary of the System/360 family and was a compact, cheap system from DEC. This computer established the concept of the minicomputer. We will explain more about the term minicomputer later.

The major developments which took place in the third generation can be summarised as:


IC circuits were starting to find application in computer hardware, replacing the discrete transistor component circuits. This resulted in a reduction in the cost and the physical size of the computer.
Semiconductor (Integrated Circuit) memories were starting to augment ferrite core memory as main memory.
The CPU design was made simple and CPUs were made more flexible using a technique called microprogramming (which will be discussed in Block 2).
Certain new techniques were introduced to increase the effective speed of program execution. These techniques were pipelining and multiprocessing. These will be discussed in Block 4.
The operating systems of computers incorporated efficient methods of automatically sharing facilities or resources such as the processor and memory space.
13.5 Later Generations


As discussed earlier, with the growth of micro-electronics the IC technology evolved rapidly. One of the major milestones in this technology was very large scale integration (VLSI), where thousands of transistors can be integrated on a single chip. The main impact of VLSI was that it became possible to produce a complete CPU, a main memory, or another similar device on a single IC chip. This implied that mass production of CPUs, memories etc. could be done at a very low cost. VLSI-based computer architecture is sometimes referred to as the fourth generation of computers. Let us discuss some of the important breakthroughs of VLSI technologies.

Semiconductor Memories:

Initially the IC technology was used for constructing processors, but soon it was realised that the same technology could be used for the construction of memory. The first memory chip was constructed in 1970 and could hold 256 bits. Although the cost of this chip was high, the cost of semiconductor memory has gradually gone down. The memory capacity per chip has increased as: 1K, 4K, 16K, 64K, 256K and 1M bits.

Microprocessors:

Keeping pace with micro-electronics, as more and more components were fabricated on a single chip, fewer chips were needed to construct a single processor. Intel in 1971 achieved the breakthrough of putting all the components on a single chip. A single chip processor is known as a microprocessor. The Intel 4004 was the first microprocessor. It was a primitive microprocessor designed for a specific application. The Intel 8080, which came in 1974, was the first general purpose microprocessor. It was an 8-bit microprocessor. Motorola is another manufacturer in this area. At present 32-bit and 64-bit general purpose microprocessors are already in the market. For example, the Intel 486 is a 32-bit processor; similarly, Motorola's 68020 is a 32-bit microprocessor. The Pentium, announced by Intel in 1993, is a 32-bit microprocessor with a 64-bit data bus. Figure 5 gives the families of INTEL & MOTOROLA microprocessors.





VLSI technology is still evolving: more and more powerful microprocessors and more storage space are now being put on a single chip.

Contemporary computers are characterised as fourth generation, though some researchers classify them as the fifth generation. The boundary between the fourth and the fifth generation is still very blurred; therefore, many researchers believe that we are still in the fourth generation.

One question which we have still not answered is: is there any classification of computers? Well, for quite some time computers have been classified under three main classes. These are:

Microcomputers

Minicomputers

Mainframes

Although with developments in technology the distinction between all these is becoming blurred, it is still important to classify them, as it is sometimes useful to differentiate the key elements and architecture among the classes.

Microcomputers:

A microcomputer's CPU is a microprocessor. The microcomputer originated in the late 1970s. The first microcomputers were built around 8-bit microprocessor chips. What do we mean by an 8-bit chip? It means that the chip can retrieve instructions/data from storage, and manipulate and process 8 bits of data at a time, or we can say that the chip has a built-in 8-bit data transfer path. A few 8-bit microprocessor chips are:

Zilog Z80, MOS 6502, Intel 8080 and MC 6809.

A number of still popular microcomputers use these 8-bit microprocessor chips. To name a few systems: Commodore 64, TRS-80, BBC Microcomputer, etc.

An improvement on 8-bit chip technology was seen in the early 1980s, when a series of 16-bit chips, namely the 8086 and 8088, were introduced by Intel Corporation, each an advancement over its predecessor.

The 8088 is an 8/16-bit chip, i.e. an 8-bit path is used to move data between the chip and primary storage (the external path), but processing is done within the chip using a 16-bit path (the internal path). The 8086 is a 16/16-bit chip, i.e. the internal and external paths are both 16 bits wide. Both these chips can support a primary storage capacity of up to 1 megabyte (MB). The sketch below illustrates the effect of the external path width.
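The difference between the two external path widths can be sketched as follows (a simplified count of data transfers, ignoring instruction fetch and bus timing details):

```python
def bus_transfers(total_bytes, external_path_bits):
    """Bus transfers needed to move total_bytes over the external data path."""
    bytes_per_transfer = external_path_bits // 8
    return -(-total_bytes // bytes_per_transfer)   # ceiling division

block = 64 * 1024                  # a 64 KB block of data
print(bus_transfers(block, 8))     # 65536 transfers on the 8088's 8-bit path
print(bus_transfers(block, 16))    # 32768 transfers on the 8086's 16-bit path
```

The 16-bit external path halves the number of transfers for the same data, which is why the 8086 was the faster of the two chips despite their identical internal paths.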

Intel's 80286 is a 16/32-bit chip, and it can support up to 16 MB of primary storage. Accordingly, the number of secondary storage devices and other peripherals it supports is greater than that of the 8088 or 8086 chips.

Similar to Intel's chip series, there exists another popular chip series from Motorola. The first 16-bit microprocessor of this series was the MC 68000. It is a 16/32-bit chip and can support up to 16 MB of primary storage. An advancement over the 16/32-bit chips are the 32/32-bit chips. Some of the popular 32-bit chips are Intel's 80386 and the MC 68020.

Most of the popular microcomputers are developed around Intel's chips, while most of the minis and superminis are built around Motorola's 68000 series chips. There are, however, new trends developing. With advancements in display and VLSI technology, a microcomputer is now available in a very small size; some of these are laptops, notebook computers, etc. Most of these are the size of a small notebook but have capacity equivalent to that of an older mainframe.

Minicomputer:

The term minicomputer originated in the 1960s when it was realised that many computing tasks do not require an expensive contemporary mainframe computer but can be solved by a small, inexpensive computer. Initial minicomputers were 8-bit and 12-bit machines, but by the 1970s almost all minicomputers were 16-bit machines. The 16-bit minicomputers have the advantages of a larger instruction set and address field, and of efficient storage and handling of text, in comparison to lower-bit machines. Thus, the 16-bit minicomputer was a more powerful machine which could be used in a variety of applications and could support business applications along with scientific applications.

With advancements in technology, the speed, memory size and other characteristics developed, and the minicomputer was used for various stand-alone or dedicated applications. The minicomputer was then used as a multi-user system, which could be used by various users at the same time. Gradually the architectural requirements of minicomputers grew, and a 32-bit minicomputer, called the supermini, was introduced. The supermini had more peripheral devices, a larger memory and could support more users working simultaneously on the computer in comparison to previous minicomputers.

Mainframes:

Mainframe computers are generally 32-bit machines or higher. These are suited to big organisations, to manage high volume applications. A few popular mainframe series are MEDHA, Sperry, DEC, IBM, HP, ICL, etc. Mainframes are also used as central host computers in distributed systems. Libraries of application programs developed for mainframe computers are much larger than those of micro or minicomputers because of their evolution over several decades as families of computers. All these factors and many more make mainframe computers indispensable even with the popularity of microcomputers.

Supercomputers:

At the upper end of the state of the art mainframe machines are the supercomputers. These are amongst the fastest machines in terms of processing speed, and they use multiprocessing techniques, where a number of processors are used to solve a problem. A number of manufacturers dominate the supercomputer market: CRAY (CRAY YMP, CRAY 2), ETA (CDC-ETA 10, ETA 20), IBM (IBM 3090 with vector facility), NEC (NEC SX-3), Fujitsu (VP series) and HITACHI (S series) are some of them. Lately, a range of parallel computing products, which are multiprocessors sharing common buses, have been in use in combination with mainframe supercomputers. Supercomputers are reaching speeds well over 25,000 million arithmetic operations per second. India has also announced its indigenous supercomputer.

Supercomputers are mainly used for weather forecasting, computational fluid dynamics, remote sensing, image processing, biomedical applications, etc. In India, we have one such supercomputer system, the CRAY XMP-14, which is at present being used by the Meteorological Department.