Here is a compilation of term papers on ‘Computer’ for classes 11 and 12. Find paragraphs, and long and short term papers on ‘Computer’, especially written for school, college, and IT students.
Term Paper on Computer
Term Paper Contents:
- Term Paper on the Introduction to Computer
- Term Paper on the Components of Computer
- Term Paper on the Characteristics of Computer
- Term Paper on Computer Language
- Term Paper on the Classification of Computer
- Term Paper on the Working Process of a Computer
- Term Paper on the Timing Mechanism of Computer
- Term Paper on the Development of Hardware
- Term Paper on the Indian Computing Scene
Term Paper # 1. Introduction to Computer:
In 1833, exactly 111 years before the first computer, the Harvard Mark I, was built in the USA, Charles Babbage of Cambridge University, UK, hailed as the Father of the Computer, developed the basic concept of the computer.
The machine, which he called Analytical Engine, was to have five parts, to carry out arithmetical operations:
1. A store to hold the numbers involved in computation,
2. A mill to carry out the arithmetic computation automatically,
3. A control unit to ensure correct execution of instructions,
4. An input device to pass data and instruction to the engine, and
5. An output device to display the results of operation.
Unfortunately, the purely mechanical machine could not be built because the technology available at that time was incapable of producing the gears and wheels of the precision required. Charles Babbage is said to have been born one hundred years ahead of his time.
The basic idea has remained the same; the only addition to the above concept, over the century, has been the incorporation of logical processing ability using electronic circuits, and, in tune with technology, the hardware has become electronic — but that has altered the complete picture, adding tremendous capability to the machine.
The machine which was to do only arithmetical computation can now process data of even non-mathematical type and produce all kinds of information. So, we now define a computer as an electronic device capable of manipulating data, as per predetermined logical instructions, to produce information.
The present architecture, broadly, is as given in Diagram 1:
Term Paper # 2. Components of a Computer:
As you can see from Diagram 1, the basic components of a computer have remained the same, with modern technologies being used in each area. The input is the gateway leading to the computer system, through which the data and the instructions to execute are entered — this is how we communicate to the computer what we want it to do, obviously in a manner which the computer can understand.
The control unit is the boss, which ensures that the instructions are available at a predefined place, the data are also kept where required, and then the instructions are carried out as given. It uses extensively a storage place, called primary storage, for keeping the instructions, the data, a working area for processing, and then the result of processing. Once the processing is completed, as per instructions, it displays the result on the monitor and/or gets it printed — the last two devices being classified as output.
The control unit, to get the computation done, whether arithmetical or logical, takes assistance from another unit called the Arithmetic and Logic Unit, or ALU in short. The control unit and the ALU together, along with another storage place called Registers, are called the Central Processing Unit or CPU.
It is absolutely essential to give the complete instructions in advance, so that these can be executed one after another automatically. All the units are interconnected as required by several paths called buses, through which the data and instructions flow from one place to another. The role of the CPU is now played by microprocessors.
In addition, the computers invariably have some special storage devices attached to them, called secondary storage [Disk Drive] — this being a storage medium of a more or less permanent nature, where the files are stored. These are generally magnetic tapes or disks, which are reusable.
The primary storage or main-memory is in the bad habit of forgetting everything when the computer is switched off, so the secondary storage or auxiliary-memory is needed. Moreover, the size of the main-memory is limited because of a number of factors, and so the secondary memory comes in to supplement it in many cases.
The computer is also called a two-state electronic device, because inside it there are millions of switches which are either on or off — that is, they are either allowing the flow of electric current or blocking it. Combinations of these two-state devices are used to represent the data and the instructions to be carried out — using a special mathematical system called the Binary System, 0s and 1s are used to represent the off and on positions of a two-state device. Instead of manually changing the status of the switches, which was the practice with the earliest computers, the instructions do it now and we get the information required.
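The idea of reading a row of on/off switches as a binary number can be sketched in a few lines of Python (the eight-switch row and its particular pattern are illustrative assumptions, not anything from the text):

```python
# Each switch is a 0 (off) or 1 (on); a row of switches represents a
# number in the binary system. Here we use an eight-switch row.

def switches_to_number(switches):
    """Read a row of on/off switches as a binary number."""
    value = 0
    for state in switches:      # leftmost switch is the most significant
        value = value * 2 + state
    return value

row = [0, 1, 0, 1, 1, 0, 1, 0]  # off, on, off, on, on, off, on, off
print(switches_to_number(row))  # → 90

# Python's built-in base-2 conversion agrees:
assert switches_to_number(row) == int("01011010", 2)
```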
Term Paper # 3. Characteristics of Computer:
The main characteristics of a computer are:
The speed at which the instructions are carried out has gone up in leaps and bounds over the years, making the computer a very fast processing device. The clock speed, which is an important contributor to the speed of processing, has gone up in Personal Computers from 4.77 MegaHertz to about 200 MegaHertz in 15 years.
The lowest level of PC was able to carry out about three-quarters of a million instructions per second, which has now increased to over 100 million instructions per second [MIPS]. Processing times are now being measured in nano- and pico-seconds, which are 10⁻⁹ and 10⁻¹² seconds respectively, as against whole seconds in the earlier computers.
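A quick back-of-envelope check of these speeds (the two figures are taken from the paragraph above; the arithmetic is only illustrative):

```python
# Rough arithmetic on processing speeds mentioned in the text.

early_ips = 0.75e6       # ~three-quarters of a million instructions/second
modern_ips = 100e6       # ~100 million instructions per second (100 MIPS)

time_early = 1 / early_ips    # seconds per instruction on the earliest PC
time_modern = 1 / modern_ips  # seconds per instruction at 100 MIPS

print(f"Early PC:  {time_early * 1e6:.2f} microseconds per instruction")
print(f"100 MIPS:  {time_modern * 1e9:.2f} nanoseconds per instruction")
```

So one instruction that once took over a microsecond now takes on the order of tens of nanoseconds, which is why processing times are quoted in nano-seconds.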
The accuracy of information generated by a computer is directly dependent on the accuracy of the data entered and the instructions given. Once correct data is entered, and of course, if the program is without errors, the results produced will always be consistently correct, no matter what type of processing is done to produce what information. Computers keep track of their various systems with self-checks to ensure that inaccuracies are immediately taken care of. The starting of a computer begins with a self-check of its vital parts.
The computers can carry out any type of processing, provided that processing task can be broken down to a series of logical steps, which the computer can execute.
The computer does not get tired or bored. If any hardware malfunction is caused due to some fault, it immediately points it out, as it has a self-checking mechanism.
The greatest problem with the computer is that it can neither think nor take any decision of its own. There is a saying that computers do all that you ask them to do, but not necessarily what you want them to do. It is up to you to ensure that your computer does what you want to be done.
Generally, the criteria applied for deciding whether a job will be done manually or using computers are as follows:
The computers would be used in such cases as:
1. Volume of Data — if it is large then computers are desirable.
2. Repetitiveness — if the same type of computations are done repeatedly.
3. Complexity — if the job is of very complex nature.
4. Speed Required — if time is a great factor in getting the output.
In all other cases, manual operation would be preferable. The most demanding applications of computers are in the area of satellite control, where almost all of the above four factors are present.
The general term hardware denotes all the equipment with its parts and components attached to the computer, including the computer itself. It comprises all the electronic elements, wires, connectors, disks, tapes, etc., which are physically present in a computer system.
The input and output devices, also called I/O devices, like Keyboard, Video Display Unit, Printer, etc., and the secondary storage devices like disks, tapes, etc., are also called peripherals; as they are within the periphery of the Central Processing Unit, connected via the bus.
The hardware being an inert electronic device, software is needed to bring it to life and control its operations — to input data, process it, and get the output. It includes the programs, the data, and even the manuals containing the details about how to use the programs.
The computer cannot do anything by itself without software, which makes and breaks millions of switches, sometimes called gates, to get the desired output. As you go deeper into the pros and cons of computer programming, you will understand how this tricky business of operating the hardware switches is done by the software.
Term Paper # 4. Computer Language:
We use English, Bengali, Hindi, etc., to communicate among ourselves, conveying our thoughts and ideas. But with computers, the dull-headed idiots, we need some special languages to get our orders carried out. The language which the computer understands directly is called a Low Level Language and it exclusively uses 0s and 1s, indicating whether a particular switch would be off or on, to give the instructions, as well as to represent data and information.
For example, in a PC XT, the instruction 0000 0101 0001 0110 0000 0000 would ask the computer to add 22 to whatever is in a register inside the CPU called AX. Realizing the difficulty of correctly converting everything we understand into such a huge number of 0s and 1s to make the computer understand what we want, a cousin of the Low Level Language called Assembly Language came into existence, which largely uses short English words to facilitate remembering of the basic instructions. For example, the above instruction in Assembly Language would be ADD AX, 22. An Assembler is required to translate the code to the Machine Language for the computer to understand.
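The bit pattern quoted above can be taken apart to see how the 0s and 1s carry the instruction. The sketch below assumes the standard 8086 encoding, in which the first byte 0x05 means "ADD AX, immediate 16-bit value" and the 16-bit value follows with its low byte first:

```python
# Decode the machine-language instruction from the text:
# 0000 0101 0001 0110 0000 0000 = the three bytes 0x05 0x16 0x00.

instruction = 0b000001010001011000000000
raw = instruction.to_bytes(3, "big")   # bytes as written, left to right

opcode = raw[0]                        # 0x05 → ADD AX, imm16 on the 8086
immediate = int.from_bytes(raw[1:3], "little")  # low byte first: 0x0016

assert opcode == 0x05
print(f"ADD AX, {immediate}")          # → ADD AX, 22
```

This is exactly what an Assembler does in reverse: it turns the mnemonic ADD AX, 22 into those three bytes.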
But even this improvement was not very helpful to persons with limited capability, as it requires deep knowledge of the internal system of the computer. So came a number of languages, as a class called High Level Language, which mostly requires no knowledge of the internal system and uses familiar English syntax for instructions. COBOL, FORTRAN, BASIC, PASCAL, C, etc., are different high level languages for giving instructions and data to the computer.
In these cases either an interpreter or a compiler is required to translate these instructions to the respective machine language codes. An interpreter does the translation during execution on a step-by-step basis, whereas a compiler does the complete translation in one go and creates a special file for direct execution in the operating system environment. Although, as far as practicable, common English words have been used in these languages, these HLLs are artificial languages.
Term Paper # 5. Classification of Computers:
The computers can be classified in different ways, depending on the size, the type of inputs and outputs, the technology used, etc., although in many cases the definitions overlap without any clear-cut demarcation.
The generation-wise classification of computers, for instance, was more or less clearly defined with First Generation using valves, the Second Generation using Transistors, and the Third Generation having Integrated Circuits, but, thereafter the terms 4th and 5th Generations are a bit vaguely applied.
i. Input Based Classification:
Depending on the type of data handled, computers are classified as Analog, Digital, or Hybrid. In real life we also have data of two general kinds: one which is countable and the other which is measured, the former being called discrete data and the latter continuous data. For example, the number of persons in a classroom will always be a whole number, like 10, 16, 55, etc.; these are countable. But if you want to measure their weight, depending on the accuracy desired, you can go to any number of decimal places, creating what is known as a continuous series of data.
The wrist-watch with hands pointing to hours, minutes, and seconds is of the non-discrete type and those displaying time in numbers are of the discrete type. Computers can handle both types of data: the one using discrete data is called a Digital Computer and the other using continuous data is called an Analog Computer — the former processes values having discrete properties and the latter deals with continuous variables. There is a third type which uses both types of data and it is called a Hybrid Computer.
These are partly analog and partly digital. For example, a patient monitoring system has to count pulse beats, which is digital, and also monitor the blood-pressure, which is analog. Most of the computers in use are digital computers.
The digital class of computers is mostly used as general purpose machines, capable of dealing with a variety of problems by using different software. For example, using FORTRAN as the programming language, it solves scientific problems, and with COBOL, it deals with commercial applications.
But there are also some special purpose digital computers which are used in Electronic Fund Transfer machines like Automated Teller Machines. There are some digital computers exclusively for Desk Top Publishing [DTP] work, which are called dedicated machines. The analog computers are used in very limited applications, generally in industrial applications.
In general, these are special purpose machines, designed for specific usage like solving differential equations, process control, etc. In analog computers, the input variables are continuous signals analogous to the quantities they represent, and these are programmed by changing circuit paths and components. The hybrid computers are used exclusively as special purpose computers.
ii. Size Wise Classification:
Depending on size, computers are classified as Mainframe, Mini, and Microcomputers. There is another special class of computers of fairly recent origin, called Supercomputers — which are so called because of their fantastic processing speed.
a. Mainframe Computer:
The computers coming under this class are those which were the first to be developed, the Central Processing Unit being housed in a frame. These are large general purpose computers with fast processing speed and high storage capacity, with a number of input/output channels. These machines are housed in a special room, having a number of terminals connected to them; they are also fitted with multi-channel tape-drives and large disk-drives for handling magnetic tapes and disks.
These can do a number of jobs simultaneously, having their own machine-dependent operating system, which is compatible within the same class of machines. The modern mainframes are usually 64-bit machines. You may be surprised to know that the capability of the main-frame computers of the 1960s has been surpassed by the Personal Computers of the mid-1980s. In the 1970s, the majority of the mainframes installed were IBM System/370. The mainframes are extensively used for high volume batch processing operations, management of very large data base systems, as a host in distributed data processing systems, etc.
b. Mini-Computers:
In view of the very high price of the main-frame computers and to fill the gaps left by them, smaller versions with limited capacities started being developed from the 1970s to meet the needs of medium size users, providing multi-user facilities. The size of these computers is usually that of a filing cabinet. Today, the division between mini-computers and micro-computers is so thin that an expert said mini-computers are those computers which are called minicomputers by their manufacturers. Earlier, these were mostly used in specialized application systems, in industry.
The mini-computers now play a significant role in distributed data processing systems as subordinate processors of the main processor. These are also used as File Servers in Local Area Networks. Their fast processing speed is on account of a large cache memory, which acts as a high speed buffer, and they usually provide facilities for a large number of terminal connections. These mini-computers were the first stage in bringing down the cost of computers from a very high level to a moderate level, so that many organizations could afford them. The DEC PDP series is a popular brand of mini-computers.
The earlier mini-computers were all 16-bit machines, and so, when 32-bit processing was introduced, these were called super-mini computers.
c. Micro-Computers:
The real change in the computing scene came in 1981, when IBM launched the 16-bit microprocessor based computers, variously called Microcomputers or Personal Computers, or PCs in short. These are a class of machines for which the sky seems to be the upper limit of development. These are called PCs because individuals can now afford to own a computer, which was unthinkable even fifteen years ago.
The heart of the microcomputer is an Integrated Circuit, a small chip, which contains the Central Processing Unit. It all started with the 8086 chip manufactured by Intel Corporation. A modified version called the 8088 chip is like the 8086 inside, but uses 8-bit devices for outside control on cost considerations. The later versions, with downward compatibility, are the 80286, 80386, 80486, and now the Pentium, instead of 80586. Now there are a number of other manufacturers with compatible chips.
d. Super-Computers:
The Super-Computer is a special class of computers, which are regarded as national assets because of their stupendous capabilities. The story of the battle between the governments of the USA and India over a super-computer called the CRAY XMP is well known. What singles out the super-computers from other classes of computers is the technique of processing, called parallel-processing.
In all other computers, the job is processed in a step by step manner from the beginning to the end and this has its own limitation in terms of processing speed — attempts to get higher speeds result in burn-out of the chip. In super-computers, the computing job is systematically broken down into different sections and each section is processed by different processors simultaneously cutting down the overall computing time.
It is like, instead of constructing a road from one side only, the construction starts at two or more places in patches, ultimately aligning the whole thing to complete the construction successfully. The capability of supercomputers is measured in terms of FLOPS — the number of floating-point computations done per second. These are extremely costly machines and very few countries in the world have the ability to produce them — India is one of them.
Term Paper # 6. Working Process of a Computer:
Let us briefly discuss how a computer does what it is supposed to do. Since the computer is a machine, every storage place and every device attached to the central processing unit has a definite, unique address, by which the CPU refers to it. In almost every part, including inside the CPU, there are storage places, some of them having special names. For example, the storage places inside the microprocessor are generally called Registers, which are given names.
The primary storage place where the data and instructions are stored, before the microprocessor can use them is called Random Access Memory or RAM. The storage place used for temporarily storing data or instructions which are brought from secondary storage to RAM, is called Buffer.
The memory used for storing the output which is shown on the monitor is called Video RAM. Here, all you have to know is that different storage locations with different addresses are used by the control unit of the central processing unit to do the things it is supposed to do.
Hence, the very basic operations of a microprocessor are:
1. Transferring data / instruction from one place to another.
2. Doing arithmetic operations and logical comparisons on data.
3. Ensuring that these operations are carried out properly.
The Transfer Operation Involves:
1. Getting data / instruction from Input Device, like keyboard.
2. Storing these in appropriate places in the RAM.
3. Moving data from RAM to Registers inside the microprocessor.
4. Moving data from Registers to RAM.
5. Moving data from RAM to Video RAM or displaying it.
6. Moving data / instructions from RAM to Secondary Storage.
7. Moving data / instructions from Secondary Storage to RAM.
8. Moving data to output Devices like printer etc.
The process of moving data / instruction involves sending the address along a path called Address Bus to the appropriate location and then either reading from or writing to that location — the data or instructions being sent or received along Data Bus.
Let us say we want a computer to add two numbers, say 12 and 48, and give us the answer.
What instructions are to be given?
1. Get a number from the Keyboard.
2. Store the value entered at address #3452.
3. Get a number from the keyboard.
4. Store the value entered at address #3453.
5. Move the value from address #3452 to Register #1.
6. Move the value from address #3453 to Register #2.
7. Add the contents of Register #2 to that of Register #1.
8. Move the contents of Register #1 to address #6452.
9. Stop.
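The nine steps above can be sketched as a toy Python program. The addresses, register numbers, and input values (12 and 48) come from the text; the dictionaries standing in for RAM and registers, and the list standing in for the keyboard, are illustrative only, not real hardware:

```python
# A toy simulation of the nine-step add program.

ram = {}          # primary storage, indexed by address
registers = {}    # storage places inside the microprocessor

keyboard = iter([12, 48])       # the two numbers we "type in"

ram[3452] = next(keyboard)      # steps 1-2: get a number, store at #3452
ram[3453] = next(keyboard)      # steps 3-4: get a number, store at #3453
registers[1] = ram[3452]        # step 5: move #3452 to Register #1
registers[2] = ram[3453]        # step 6: move #3453 to Register #2
registers[1] += registers[2]    # step 7: the ALU adds the contents
ram[6452] = registers[1]        # step 8: move the result to #6452

print(ram[6452])                # → 60
```

Notice that nothing in the program depends on the particular values 12 and 48; feed it any two numbers and the same addresses and registers produce their sum, which is the point made about the control unit below.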
To execute this set of instructions, called program, the first condition to be fulfilled is that this program must be placed at a specific address in the RAM, so that the control unit will know that these are the instructions to be executed.
If this program was stored as a file named addnum.exe on a disk, then to execute it the contents of this file are to be moved from Disk to RAM, and that is done by the Operating System, like say MS DOS, which places it at the proper location. The instructions are stored serially, one after another, in that location at different addresses, byte wise.
Once the program is in position, the control unit, sends the address of the first instruction on the address bus to RAM and using the Data Bus gets the instruction in a Register inside the microprocessor. The control unit cannot carry out any instruction or operate on any data unless these are brought from RAM to Registers. The process of getting instruction from RAM is technically called Fetch Operation.
The control unit then Decodes the first instruction so that it knows what is to be done. Our first instruction being that of getting a number from the keyboard, the control unit will wait for a number to be typed. Let us say we typed 12. This number comes and gets stored in a Register inside the microprocessor.
After fetching and decoding of the second instruction, the control unit sends the address #3452 along the address bus and then sends the number stored in the register along the data bus to “write” — the number 12 gets stored at #3452. This completes the Execution part. The control unit continuously follows the cycle of Fetch — Decode — Execute, unless the last instruction tells it to stop.
Similarly, based on the third and fourth instructions, another number say 48, is received from keyboard and it gets written [stored] at #3453. By the fifth and sixth instructions, these two numbers get transferred to the registers inside the microprocessor, in the same way using the address and data buses. Incidentally, the control unit sends all its commands along a third path called Control Bus.
As per the seventh instruction, the control unit uses the service of the Arithmetic and Logic Unit to get the instruction executed and after execution, the content of Register #1 becomes 60. Under the eighth instruction this number is sent to an address from where it is displayed on the screen. As per the ninth instruction, the fetch-decode-execute cycle stops.
You must have noticed that the control unit is not concerned with the value of any data that is stored; it only deals with the contents of the locations given by addresses. So once a program is written for adding two numbers, it will add any two numbers, provided the values of the numbers do not exceed the storage capacity.
Moreover, since to the computer, whether data or instructions, everything is given by switch positions of 0 and 1, the address mechanism is used to differentiate between data and instruction, or even a monitor or a printer.
Term Paper # 7. Timing Mechanism of Computer:
To smoothly control the operations of a computer, the control unit uses the service of a clock for timing its operations, the duration being expressed in cycles per second. Hence, the time required to fetch an instruction is called Fetch Cycle and that to decode and execute that instruction is called Execution Cycle. When the timing of both these cycles are added up, we get Instruction Cycle, which is the time required to carry out one instruction.
The term Machine Cycle is used to refer to the time required by the central processing unit to access the RAM or Input / Output addresses. An Instruction Cycle usually needs more than one Machine Cycle, which again may need more than one Clock Cycle.
The Clock Cycle refers to the timing interval between the two pulses generated by the clock. The reason for expressing timing in terms of cycles is that if the clock speed is changed the cycle time will also change, even with number of cycles being same.
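The relationship between these cycles can be put into a few lines of arithmetic. The 4.77 MHz clock speed comes from the text; the counts of clock cycles per machine cycle and machine cycles per instruction are assumed purely for illustration, since real values vary by processor and instruction:

```python
# Timing relationships: Clock Cycle < Machine Cycle < Instruction Cycle.

clock_speed_hz = 4.77e6              # original IBM PC clock: 4.77 MHz
clock_cycle = 1 / clock_speed_hz     # interval between two clock pulses

clocks_per_machine_cycle = 4         # assumed: one RAM or I/O access
machine_cycles_per_instruction = 3   # assumed: fetch + decode + execute

instruction_cycle = (clock_cycle * clocks_per_machine_cycle
                     * machine_cycles_per_instruction)

print(f"Clock cycle:       {clock_cycle * 1e9:.1f} nanoseconds")
print(f"Instruction cycle: {instruction_cycle * 1e6:.2f} microseconds")
```

Doubling the clock speed halves every figure above while the cycle counts stay the same, which is exactly why timings are quoted in cycles rather than seconds.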
Term Paper # 8. Development of Hardware:
The technological development in the hardware systems of computers, especially the class called micro-computers or Personal Computers [PCs], over the last five decades has really been spectacular. The first computer, called the Automatic Sequence Controlled Calculator and popularly known as the Harvard Mark I, was born in August 1944. The computer was 51 feet in length and 8 feet in height, using more than 500 miles of wire.
In that computer, an addition took 0.3 seconds and the time required for a multiplication was 4.5 seconds — much slower than modern pocket calculators. Thereafter, initially with slow progress, the development gathered momentum with the invention of microprocessor chips from around the mid-1970s. Now with advanced technology, the computing speed is measured in nano-seconds, one nano-second being one-billionth of a second [10⁻⁹].
Earlier, there was a system of classifying computers, based on the basic hardware, into different generations. The First Generation (1949 – 1959) computers were built with electronic vacuum tubes. These were slow speed machines with very limited capacity and generated lots of heat. These used to be programmed in machine language only and there was no such thing as an operating system.
The Second Generation (1959 – 1964) computers switched over to transistors, which are solid state devices and do not need any heating to operate. Naturally, the sizes came down and ability increased. These machines also had no operating system, but started using Assembly Language for programming at later stages. These machines were mostly used for scientific and specialized purposes.
The Third Generation (1964 -1974) came up with Integrated Circuits, called ICs in short. This design packed up a number of transistors in a single device, reducing the size and improving the performance considerably. With this generation, Operating Systems came into use and these also used high-level programming languages. The design of the machines moved more towards general purpose usage.
The Fourth Generation (1974 – ????) started using large scale integrated circuits [LSI]. Frankly speaking, there is a wide difference of opinion as to which hardware technology is to be classified as belonging to the fourth generation, because LSIs or VLSIs are also ICs.
As per the majority view we are still in the fourth generation, whereas some feel we are using Fifth Generation technology. Some feel the Fifth Generation computers will have lots of Artificial Intelligence, being closely linked with declarative languages like Prolog.
Emergence of Microcomputer:
The real break-through in computing started with the design and development of microprocessor chips, which were developed by Intel Corporation of USA for industrial control systems — initially these being 4-bit and later 8-bit chips.
The first microprocessor, named the 4004, was built by Intel in 1971. Soon after, Motorola Corporation came out with their own chips. In 1977, Steve Jobs and Steve Wozniak built the first personal computer, the Apple II. However, the credit for making the first Personal Computer is given to IBM Corporation.
In 1980s when IBM started planning for microprocessor based computers, the original idea was to build 8-bit machines with readily available chips. However, as the story goes, Bill Gates of Microsoft Corporation advised to go in for 16-bit machines with the new 8086 chip produced by Intel.
In August 1981, the famous IBM Personal Computer was released, along with a generalized operating system called PC DOS by IBM and MS DOS by Microsoft. It replaced the earlier operating system called CP/M — Control Program / Microcomputer. Though some microcomputers, with 8-bit chips and the CP/M operating system existed before August 1981, the history of Personal Computers starts with IBM Personal Computer.
Since the 16-bit 8086 chip required 16-bit peripherals and external devices, which were yet to develop fully and were costlier than the corresponding 8-bit ones, Intel came out with the 8088 chip, which is 16-bit inside and 8-bit outside and was extensively used in PCs — the first IBM Personal Computer was built with the 8088 chip. The BIOS of the first Personal Computer is dated 24th April, 1981; it was revised on 19th October, 1981, removing some bugs.
The first Personal Computer with DOS Version 1.0 had a single-sided drive with a 320 K capacity and a computing speed of approximately a quarter-million instructions per second, the clock speed being 4.77 MHz. With DOS Version 1.1, the use of double-sided diskettes started.
The next major modification came in early 1983, with BIOS dated as 16th August, 1982 which was called PC XT — the Extended Technology. For the first time, the Personal Computer had a hard disk, with a DOS version of 2.0 dated 8th March, 1983. The capacity of the hard-disk, also called fixed-disk, was 10 MB. The hard-disk operated about 5 times faster than the floppy-disk-drive in data handling.
In the same year IBM came out with a junior version of Personal Computer, called PC Jr, but it did not click and had a natural death in 1984.
In 1985, IBM came out with a renovated Personal Computer using the then latest 80286 chip of Intel and called this machine PC AT — Advanced Technology. The machine had a computing speed of about 5 times that of PC XT and it had a hard-disk of 20 MB.
For the first time, the concept of Virtual Memory developed. The machine could also go beyond the standard limitation of 640 KB RAM — there being two operating modes called Real Mode and Protected Mode. The technique of multi-tasking also came into use.
The first deviation from 16-bit machine was made when the 80386 chip came out with 32-bit structure, which has been followed with 80486, and now 80586, called Pentium.
The DOS Version 3.10 added another feature to the Personal Computer Family called Networking, which has evolved tremendously interconnecting computers in offices, countries, and across international borders. Though DOS is a single-user operating system, it has been greatly influenced by the design of the UNIX operating system and it also retained some peculiarity of the CP/M operating system. The current version of MS DOS is numbered 6.22.
The clock speed, which was 4.77 MHz in the original Personal Computer, has now increased to about 10 MHz in the 8088 chip itself and about 200 MHz in the Pentium.
Graphics is another area, including desk-top publishing, where the impact of Personal Computer is considerable. Now, multi-media systems have come up, providing audio as well as video facilities with PCs.
Term Paper # 9. The Indian Computing Scene:
The role of Indians in the evolution of the mathematical system is now acknowledged because of the contribution of zero to the number system. But our development in science and technology having been thwarted by colonial obstacles, the first significant activity relating to computers started in 1954 at the Tata Institute of Fundamental Research, which resulted in the design of TIFRAC in 1960; this was a first generation computer.
The next recorded attempt was in 1963, a joint collaboration between Jadavpur University and the Indian Statistical Institute, producing a second generation machine in 1966 called ISIJU. The Atomic Energy Commission, however, developed analog computers in 1960 and real-time computers in 1965. The TDC-12 was made operational by 1969 and was subsequently manufactured by the Electronics Corporation of India Ltd. [ECIL].
Apart from some computers for scientific use, the computers for business applications were supplied by IBM of the USA and ICL of the UK, which started trickling down to India. In 1961, there was only one computer in use, the number increasing to 16 in 1969 and reaching 120 in the early 1970s.
The birth of micro-computers in the early 1980s gave a new fillip to the Indian computing industry and currently we are almost at par with developed nations in technical ability — and exceeding them in the super-computing area. Now, all types of computers using the latest technology are made in India, be it micro-, mini-, mainframe-, or super-computer.
India’s ability in supercomputing was disclosed in an article in the Science Reporter of August 1993, written by Biman Basu. Six years earlier, after obtaining with great difficulty a super-computer from the USA called the CRAY XMP-14 for weather prediction, India was refused the sale of a second one, required for the Indian Institute of Science, Bangalore, by the government of the USA. As the story goes, the machine called the CRAY YMP which India wanted to buy still remains unsold, as Indian super-computers now dominate the world market — with higher capability and a cheaper price.
The refusal came as a blessing in disguise and the Indian scientists took up the challenge. In the area of specialized computing and super-computing, the Flosolver was designed by the National Aeronautical Laboratory in 1986.
The latest model, called the Flosolver Mk-3, developed in 1992, is as powerful as the Cray XMP, which has a processing speed of 60 megaflops, but at one-tenth of the cost. The Advanced Numerical Research and Analysis Group [ANURAG] has made a supercomputer called PACE (Processor for Aerodynamic Computation and Evaluation) which has a performance rating of 100 megaflops.
The greatest achievement came from the Centre for Development of Advanced Computing, Pune, which designed and developed PARAM in July 1991, within a period of 36 months — a super-computer with a one-gigaflops rating at a cost of only Rs 2 crores, by far the cheapest super-computer in the world. Already a dozen have been sold in India and 4 exported to developed countries [keeping the Cray YMP unsold].