Here is a compilation of term papers on the ‘Operating System’ for classes 11 and 12 as well as college and IT students. Find paragraphs and long and short term papers on the ‘Operating System’.
Term Paper on the Operating System
Term Paper Contents:
- Term Paper on the Definition of Operating System
- Term Paper on the Functions of Operating System
- Term Paper on Multi-Programming Operating System
- Term Paper on Different Operating Systems
- Term Paper on the Characteristics of Operating System
- Term Paper on the Fundamental Tasks of Operating System
Term Paper # 1. Definition of the Operating System:
In early computers, up to the second generation, individual jobs were manually loaded for processing, executed by the processor, manually unloaded again with some house-keeping operations, and then the next job was manually loaded in turn.
In fact, in those pre-operating-system days, the concept of directly executable programs as we have today simply did not exist. Each program, called a job, had to be run in steps: placing the source code in memory, loading the tape containing the compiler [assembler], running the compiler to get the object code, unloading the compiler tape, loading the linker-editor tape, running the object program, and finally unloading everything and clearing primary storage.
During this period, called set-up time and spent on loading and unloading, the central processing unit remained totally idle. Again, if an error condition developed during processing, it had to be detected and corrected manually, the central processing unit remaining idle during this time as well.
In case of errors, memory dumps had to be taken and given to the programmer for correction, and then the whole process had to be started again. Moreover, there was always a great imbalance between the processing speed of the central processing unit and the speed of input/output operations, leaving the central processing unit largely idle during input/output. Incidentally, in pre-operating-system days, a boot-strap program was used to load each job.
The first step taken to cut down the set-up time was to have skilled operators and to run jobs in batches. During those days, programs were mainly written either in COBOL or FORTRAN, and batching meant running all COBOL programs in one batch and then all FORTRAN programs in another. But the operators were not programmers, so even for simple errors a memory dump had to be taken and given to the respective programmer for correction, considerably delaying jobs.
The next step in development, which started in the late 1950s, was a small monitor program for automatic job sequencing; the Job Control Language [JCL] was born and was used to prepare the job-control cards. These monitor programs were the fore-runners of present-day Operating Systems.
As far as input operations were concerned, these were extremely slow processes, holding up the central processing unit. So, the first step was to read the input cards on smaller satellite computers in off-line mode and then use the resulting tapes on the main computer. The next step was to develop a technique known as buffering.
The concept there was to make the input devices capable of operating independently under the overall control of a monitor program, and to take the choice of the actual device out of the programmers' hands: the programmers were required to specify logical devices, and the monitor decided which physical device to actually use, there being multiple physical devices in the system.
The ability to run a program with different input/output devices is called device independence. The concept of DMA [Direct Memory Access] also developed, with hardware doing what used to be done by software.
Taking all these aspects within its fold for efficient control of the hardware, the class of software called the Operating System developed, which always remains active in main memory and is therefore called a resident program. Since then the Operating System has undergone a number of modifications to make it highly efficient, effective and convenient to the user.
Actually, the different tasks performed by an Operating System are quite complex, and developing an Operating System calls for the highest level of programming expertise. The Operating System always stands as a layer between the application software and the hardware, insulating the former from the latter.
It provides all the required hardware control services to the application programs through simple sub-routines which can be called by these programs, the detailed control being done by the Operating System.
In the progressive development of the Operating System, a major step was to make program processing interactive, as against the batch mode used earlier. In batch processing the user could not interact with the central processing unit, which created a lot of difficulties: the user had to set up control cards and wait for processing to be completed, with a high turnaround time.
In interactive mode, the keyboards of the terminals replaced control cards. Another spectacular development was allowing a number of programs of one or more users to be processed simultaneously, called multi-programming. Currently, most mini- and micro-computers use Operating Systems of interactive mode.
Of course, such improvements further complicated the design of the Operating Systems, which had to incorporate memory management, file control systems, processor and device scheduling, deadlock handling, concurrency control, protection, etc.
The programming language used for interacting with the Operating System is called the command language — the commands given are handled by a special software called command processor or command interpreter.
Under MS DOS, the third system file, called COMMAND.COM, which is not hidden like the other two, is the command interpreter which executes all the internal commands given at the MS DOS prompt, be it for clearing the screen or for loading and executing an application program. This is the only executable file of MS DOS which is never used directly by the user by typing its name at the MS DOS prompt, as is done with other MS DOS executable files like FORMAT.COM.
The program COMMAND.COM provides the command language under MS DOS; the actual syntax of the MS DOS commands, both internal and external, comes under the category of command languages, just as the shell commands of the UNIX Operating System do.
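To make the idea of a command interpreter concrete, here is a minimal sketch in Python of a prompt loop that dispatches internal commands itself and hands anything else over for loading and execution as an external program. The command names and behaviour are simplified illustrations, not the actual logic of COMMAND.COM.

```python
import os
import subprocess

# Internal commands are handled by the interpreter itself; anything else is
# treated as an external program to be loaded and executed.
INTERNAL = {
    "cls": lambda args: print("\033[2J\033[H", end=""),   # clear the screen (ANSI terminals)
    "dir": lambda args: print("\n".join(sorted(os.listdir(args[0] if args else ".")))),
}

def interpret(line: str) -> None:
    parts = line.split()
    if not parts:
        return
    name, args = parts[0].lower(), parts[1:]
    if name in INTERNAL:
        INTERNAL[name](args)                    # internal command
    else:
        try:
            subprocess.run([name, *args])       # external command: run a program file
        except FileNotFoundError:
            print("Bad command or file name")

if __name__ == "__main__":
    while True:
        line = input("A> ")                     # prompt loop, analogous to the MS DOS prompt
        if line.strip().lower() == "exit":
            break
        interpret(line)
```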
Term Paper # 2. Functions of the Operating System:
Some of the functions are given below:
i. I/O Management:
Selecting the appropriate channel for data transfer as required, activating it and handing over control, and giving the channels autonomy for overlapped operation. The transfer of data between primary and secondary storage is left to the control of the DMA controller.
ii. Memory Management:
Allocating and deallocating memory to programs, creating virtual memory on disk drives, swapping programs and data from one place to another in memory, and preventing one program from interfering with another.
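As a concrete illustration of the allocation and deallocation part of this task, the following Python sketch implements a toy first-fit allocator; the program names and sizes are invented, and real memory managers additionally handle protection, paging and swapping.

```python
# A minimal first-fit memory allocator sketch (illustrative only).
class MemoryManager:
    def __init__(self, size):
        # Free list of (start, length) holes; initially one hole covering all memory.
        self.free = [(0, size)]
        self.allocated = {}                # program name -> (start, length)

    def allocate(self, name, need):
        for i, (start, length) in enumerate(self.free):
            if length >= need:             # first hole big enough
                self.allocated[name] = (start, need)
                if length > need:
                    self.free[i] = (start + need, length - need)
                else:
                    del self.free[i]
                return start
        raise MemoryError(f"no hole large enough for {name}")

    def deallocate(self, name):
        start, length = self.allocated.pop(name)
        self.free.append((start, length))  # return the block (coalescing omitted)
        self.free.sort()

mm = MemoryManager(64 * 1024)
print(mm.allocate("editor", 16 * 1024))    # 0
print(mm.allocate("compiler", 32 * 1024))  # 16384
mm.deallocate("editor")
print(mm.free)                             # [(0, 16384), (49152, 16384)]
```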
iii. File Management:
Anything and everything kept in permanent storage is kept by means of a file, which, as far as the user is concerned, can be of any length. As far as the disk drives are concerned, space is allocated to a file in clusters, as and when additional space is required, and the clusters are not necessarily in sequence one after another.
The Operating System not only creates files and allocates space for them, but also keeps quickly accessible records of the file-structure details in the directory area and in the File Allocation Table [FAT].
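The following Python sketch shows, in a very simplified form, how a directory entry and a FAT can chain together the scattered clusters of a file; the cluster numbers and file names are purely illustrative.

```python
# Illustrative sketch of how a File Allocation Table chains a file's clusters.
# The directory entry stores the first cluster; each FAT slot points to the
# next cluster of the file, with a special end-of-file marker.
EOF = -1

# Hypothetical FAT: cluster 2 -> 5 -> 9 -> end (clusters need not be contiguous).
fat = {2: 5, 5: 9, 9: EOF, 3: 4, 4: EOF}

directory = {"REPORT.TXT": 2, "NOTES.TXT": 3}   # file name -> first cluster

def clusters_of(filename):
    """Walk the FAT chain to list every cluster belonging to a file."""
    cluster = directory[filename]
    chain = []
    while cluster != EOF:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(clusters_of("REPORT.TXT"))   # [2, 5, 9]
print(clusters_of("NOTES.TXT"))    # [3, 4]
```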
iv. Job Control:
In case of batch-processing jobs, the Operating System controls the loading, execution and unloading of the jobs, along with initiating activities of input/output operation.
v. Buffering:
A buffer is a specific storage area, created in primary storage or in the data channel or both, where data is stored in transit between input/output devices and main memory. The buffer in the data channel is called a buffer register. Buffering is the temporary storing of data in a buffer during data transfer to compensate for the imbalance between the operating speeds of the central processing unit and the input/output devices.
During transfer from a storage device, the IOCS fills up the buffer, from where the central processing unit processes the data at its normal processing speed. In the case of tape drives, double buffering is sometimes used: one buffer receives data transferred from the tape drive while the second buffer is being read by the central processing unit, data being moved from the first to the second buffer at the appropriate moment. Buffering is directly related to interrupt processing. When a buffer becomes full, an interrupt is generated to bring it to the attention of the kernel, to which control is handed over.
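A minimal sketch of the buffering idea in Python, assuming a simulated reader thread standing in for the I/O side: while the processor consumes one filled buffer, the other is being filled, so the two sides overlap. The record values and buffer size are illustrative.

```python
from queue import Queue
from threading import Thread

def io_reader(records, buffers_out):
    """Simulated I/O side: fills buffers of 4 records and hands them over."""
    buf = []
    for rec in records:
        buf.append(rec)
        if len(buf) == 4:
            buffers_out.put(buf)          # hand the full buffer to the CPU side
            buf = []                      # start filling the "other" buffer
    if buf:
        buffers_out.put(buf)
    buffers_out.put(None)                 # end-of-data marker

def cpu_processor(buffers_in):
    while True:
        buf = buffers_in.get()
        if buf is None:
            break
        for rec in buf:
            print("processing", rec)      # the CPU works at its own speed

buffers = Queue(maxsize=2)                # capacity 2: two buffers in flight
reader = Thread(target=io_reader, args=(range(10), buffers))
reader.start()
cpu_processor(buffers)
reader.join()
```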
vi. Spooling:
It is an acronym for Simultaneous Peripheral Operation On Line. This is another technique, which takes into account the low speed of printers as compared to the speed of the central processing unit. Under this method all printing jobs are serially stored in a dedicated storage area, generally on a direct access storage device, and whenever the central processing unit gets free time, it continues with the printing operation. If the service of the central processing unit is required for a processing job, it stops the printing operation and goes back to program processing.
When a buffer is used with the Input/Output Control System, overlapping takes place, that is, input/output operation and program processing are carried out simultaneously. Buffering overlaps the input/output operation of a job with that same job's program processing, whereas spooling overlaps the input/output operation of one job with the program processing of another job.
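The sketch below captures the spooling idea in Python: jobs append their print output to a spool queue and carry on, and the queued output is printed only when there is spare time. The job names and page counts are made up.

```python
from collections import deque

spool_queue = deque()                      # stands in for the spool area on disk

def spool(job_name, pages):
    spool_queue.append((job_name, pages))  # the producing job finishes immediately

def print_when_idle():
    """Drain one spooled job; called only when the CPU has free time."""
    if spool_queue:
        job, pages = spool_queue.popleft()
        print(f"printing {pages} pages of {job}")

spool("payroll", 12)
spool("ledger", 3)
print_when_idle()                          # printing 12 pages of payroll
print_when_idle()                          # printing 3 pages of ledger
```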
A number of algorithms have been developed to ensure proper scheduling of jobs for using the central processing unit.
Some of these are:
FCFS — First Come First Served,
SJF — Shortest Job First,
RR — Round Robin scheduling etc.
Scheduling of the central processing unit means the process of selecting from among the processes waiting for its service and allocating the central processing unit to them. The different criteria used for scheduling the central processing unit are: CPU utilization, throughput [jobs completed per unit of time], turnaround time, waiting time, response time, etc.
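As an illustration of how such criteria distinguish the algorithms, the following Python sketch computes the average waiting time of the same batch of jobs under FCFS and SJF; the burst times are invented examples.

```python
# Sketch comparing FCFS and SJF on the same batch of jobs, using average
# waiting time as the scheduling criterion.
def schedule(jobs, policy):
    """jobs: list of (name, burst); returns (average waiting time, run order)."""
    order = jobs if policy == "FCFS" else sorted(jobs, key=lambda j: j[1])  # SJF
    clock, waits = 0, {}
    for name, burst in order:
        waits[name] = clock        # time spent waiting before the CPU is allocated
        clock += burst
    return sum(waits.values()) / len(waits), order

jobs = [("J1", 24), ("J2", 3), ("J3", 3)]
print(schedule(jobs, "FCFS"))      # average wait 17.0: J1 first makes the others wait
print(schedule(jobs, "SJF"))       # average wait 3.0: shortest jobs first cuts waiting
```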
Term Paper # 3. Multi-Programming Operating System:
In general, Multi-tasking, Concurrent Programming, or Multi-programming refers to a state where two or more different and independent programs are executed in an interleaved manner by the same processor. Under this system, the Operating System quickly switches control from one program to another, so that each is executed in turn; the users, never being aware of this, feel that the computer belongs exclusively to each of them.
The processor is kept busy by trying to balance the input, output, and processing operations, for which the autonomous nature of input/output control and the technique of buffering are used. In multi-tasking, an input, an output, and a processing job can occur simultaneously, which is called overlapped operation.
The technique adopted in a multi-programming Operating System, to minimize the idleness of the central processing unit and to allow a number of programs to run simultaneously, is to attach different priorities to different jobs. A job requiring a large amount of input/output operation is called I/O bound and is generally given a high priority.
A job requiring a large amount of processing is called CPU bound, and a low priority is attached to it. Program control shifts from one program to another depending on the relative priorities. In addition, different algorithms are available for such job scheduling. Once an input/output control unit completes its operation, it draws the attention of the master program of the Operating System through an interrupt, and the master program passes control from one section to another, allocating central processing time.
In multi-programming, when a job has to wait for something like a tape to be mounted, a command to be typed, or an input/output operation to be completed, the Operating System switches to another job and starts executing it.
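The following Python sketch, using generators as stand-in jobs, illustrates only this switching idea: whenever the running job blocks for I/O, the dispatcher gives the CPU to another ready job instead of letting it idle. The job names and step counts are invented.

```python
# Sketch of the multi-programming idea: when the running job blocks for I/O,
# the dispatcher switches the CPU to another ready job instead of idling.
def job(name, steps):
    for i in range(steps):
        # A job alternately computes and then requests I/O (yielding the CPU).
        print(f"{name}: compute step {i}")
        yield "waiting for I/O"

ready = [job("COBOL-job", 2), job("FORTRAN-job", 2)]

while ready:
    current = ready.pop(0)          # dispatch the job at the head of the ready list
    try:
        next(current)               # run until it blocks for I/O
        ready.append(current)       # it becomes ready again once the I/O completes
    except StopIteration:
        pass                        # job finished; do not requeue it
```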
In time-sharing Operating Systems, the allocation of the central processing unit is based on a fixed time slice supplied by the real-time clock [RTC]. The pulses generated by the RTC at fixed intervals generate interrupts drawing the attention of the master program, which causes another program to be resumed from where it paused last time.
This technique is called time-sharing. Whether the Operating System uses only time-sharing, or job scheduling on the basis of priorities, or both, depends on the design of the particular Operating System.
In a networked system, independent computer systems are inter-connected: the data files and programs reside on a central unit [the file server] but are used by other computers for processing at their own end. Time-sharing, in contrast, applies to a central processing unit which meets the processing requirements of a number of users connected through terminals, none of whom has individual processing capability.
Term Paper # 4. Different Operating Systems:
In the case of large machines, the mini-computers and mainframes, the Operating System is specific to one family of machines. For example, CP/67 [Control Program 67] is for IBM 360/67 mainframes only. But in the case of microcomputers, the Operating Systems are of a general nature, being compatible with a large variety of machines of different makes.
When the microcomputers were 8-bit systems, the most popular Operating System was CP/M [Control Program/Microcomputer]. But when microprocessors graduated to 16-bit systems, giving birth to the Personal Computer, PC DOS with IBM's copyright and MS DOS with Microsoft's copyright, both essentially the same, were born. These are successors to an Operating System called QDOS [Quick and Dirty Operating System], designed and so named by Tim Paterson. It is basically a single-user system, with a limited multi-tasking facility for the PRINT operation.
If the Operating System can do different things apparently simultaneously, it is called a multi-tasking operating system. For example, if you are copying certain files in one operation and also viewing files in another location, it is a case of multi-tasking. In MS DOS, only the printing operation with the PRINT command can work like this.
There are a number of other Operating Systems which are compatible with MS DOS, the most prominent being DR DOS. Others include Multi-DOS of Consortium Technologies, which is multi-tasking and multi-user. Concurrent DOS of Digital Research is, however, multi-tasking only.
Another joint venture of IBM and Microsoft in the area of Operating Systems resulted in the creation of an 80286-based Operating System called OS/2, which is multi-tasking and fully compatible with MS DOS. It provides for 48 MB of RAM, needing 1.5 MB for itself.
CP/M-86 is a development of CP/M for 16-bit machines, but it has been outsmarted by MS DOS.
Among the multi-user operating systems, the most popular is UNIX, which was developed from an Operating System called Multics [Multiplexed Information and Computing Service] at Bell Laboratories, by almost the same team that developed the C language. Since Bell Laboratories made the source code of the original UNIX available, different versions of UNIX have come up, most of which are not compatible with each other. Microsoft came out with an 8088 version of UNIX called Xenix. On VAX machines, it is called Ultrix.
For 32-bit computers, Microsoft has come out with a 32-bit Operating System called Windows NT. For networked systems, the most popular Operating System is Novell NetWare, which operates above the MS DOS or OS/2 platform, having additional facilities over MS DOS. Windows 95 is also an operating system, created by merging MS DOS and Windows facilities.
Some of the operating systems of large computers are: 1100 Executive for the UNIVAC 1100 Series; MCP for the Burroughs B5000, 6000, and 7000 Series; Multics for the Honeywell Series 60 Level 68; Scope for the CDC Cyber 70, Cyber 170, and 6000 Series; TOPS-10 for the DEC System 10; etc.
Term Paper # 5. Characteristics of the Operating System:
Before going into the characteristics of the different types of operating systems, let us try to understand the basics of a computer system vis-a-vis its processing functions. From a simplified view, any program or job carried out using a computer, say payroll accounting, can be broken down into three basic components of Input, Processing, and Output [I-P-O], that is, into three different tasks.
If a single job were carried out serially from beginning to end, it would not be an efficient way of using the computer's ability, especially that of the CPU. So buffering and spooling were introduced to take care of the simple I/O operations through the IOCS by overlapping, and we got what is called a multi-tasking operating system, the serial I-P-O procedure being converted to serial-parallel processing.
Now, this certainly improved efficiency and cut down the total job processing time, but the CPU still remained idle to a considerable extent during I/O operations, having nothing else to do. So, if more than one job is made available to the CPU at a time by loading them into memory, the idleness of the CPU can be reduced by switching its processing service from one job to another, over and above the same kind of multi-tasking operation within each individual job; we then get what is called a multi-programming operating system, each job being a separate program.
Thus, multi-programming implies multi-tasking, but the reverse is not true. The basic objective of multi-programming is to maximize utilisation of the resources of the CPU, especially in batch processing of different programs. Multi-programming operating systems have to schedule the loading as well as the execution of a number of programs.
Again, because of the inherent inflexibility of batch processing, a system of simultaneous interactive processing for a number of users, each with his own program, became a necessity. In order to cater to such a multi-access mode with good response time, the technique of time-sharing came into use, which relied primarily on the system clock.
Later on, the techniques of multi-programming and time-sharing were integrated, developing into modern operating systems, some of which are multi-user operating systems. Time-sharing is basically a technique of organizing the processing operation of a computer system in such a way that a number of end-users can interact with the system simultaneously. Incidentally, multi-programming can also be done with a single job.
In a multi-access operating system the objective is to ensure a quick response time to each user, whereas in a batch-processing system the objective is to maximize the total processing done by the system.
Term Paper # 6. Fundamental Tasks of the Operating System:
i. Job Control Language (JCL):
One of the fundamental tasks of the Operating System in the batch-processing mode is to eliminate the idleness of the central processing unit during the set-up of jobs being loaded one after another, called stacked-job operation. A special type of command language, called Job Control Language [JCL], is used for this purpose; it identifies each job to be processed one after another and conveys the specific requirements of each job to the Operating System, including the user's name/code, account number, authority, the input/output devices to be used, the language compiler/linker to be used, etc.
This type of job processing is called batch processing, with jobs using the same compiler being processed together. The complete operation is controlled by the Job Control Program of the Operating System, the instructions being given in the Job Control Language.
Once a job is processed completely, system control returns to the master program of the Operating System, which then calls the Job Control Program. The Job Control Program gets loaded, takes over control, carries out the instructions for the next job and transfers control to it; on completion, control returns to the master program of the Operating System. This process continues until all the jobs of the batch are fully processed.
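To make the sequencing idea concrete, here is a small Python sketch of a monitor reading a batch of simplified, hypothetical job-control statements; the card syntax shown is invented for illustration and is not real JCL.

```python
# Sketch of automatic job sequencing driven by simplified, made-up
# job-control statements.
deck = [
    "$JOB  user=SMITH account=1234",
    "$COMPILE lang=FORTRAN",
    "$RUN",
    "$END",
    "$JOB  user=JONES account=5678",
    "$COMPILE lang=COBOL",
    "$RUN",
    "$END",
]

def run_batch(cards):
    """The monitor reads control cards and sequences one job after another."""
    for card in cards:
        keyword, _, rest = card.partition(" ")
        if keyword == "$JOB":
            print("setting up job:", rest.strip())
        elif keyword == "$COMPILE":
            print("  loading compiler:", rest.strip())
        elif keyword == "$RUN":
            print("  executing object program")
        elif keyword == "$END":
            print("  job finished, control returns to the monitor")

run_batch(deck)
```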
ii. Device Drivers:
Each input/output device has special characteristics and needs to be controlled in a different manner. To cater to this wide variety of input/output devices there are specific small control programs called device drivers. A device driver is actually a set of procedures that controls a peripheral hardware device.
These device drivers, containing specific input/output routines, are called by the master program of the Operating System when dealing with a specific device for input or output operations, as the case may be.
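The sketch below, in Python, shows the underlying idea of a uniform driver interface behind which each device's specific routines are hidden; the device names and behaviour are illustrative, not those of any real Operating System.

```python
# Sketch of the device-driver idea: each driver hides its device's quirks
# behind the same read/write interface, so the rest of the Operating System
# can stay device independent.
class Driver:
    def read(self, n): raise NotImplementedError
    def write(self, data): raise NotImplementedError

class KeyboardDriver(Driver):
    def read(self, n):
        return input()[:n]                 # line-oriented character device

class PrinterDriver(Driver):
    def write(self, data):
        print("PRINTER>>", data)           # output-only device

drivers = {"KBD": KeyboardDriver(), "PRN": PrinterDriver()}

def os_write(logical_device, data):
    # The kernel picks the right driver; the caller names only a logical device.
    drivers[logical_device].write(data)

os_write("PRN", "end-of-day report")
```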
iii. Input/Output Control System:
A large part of computer processing, especially data processing, involves movement of data from one device to another, from a device to primary storage, or from primary storage to a device. As already stated, the operating speed of these input/output devices is considerably lower than the processing speed of the central processing unit, which would normally keep the latter idle during input/output operations.
In order to ensure that the central processing unit does not remain idle during input/output operations, special Input/Output Control Systems [IOCS] have been developed, which are used by the master program of the Operating System. The input/output operations are fairly simple but take longer because of the inherently lower speed of the I/O devices.
These operations can now be carried out without the direct and continuous supervision of the central processing unit throughout the data movement, avoiding an unnecessary idle state of the central processing unit. The IOCS does exactly this kind of operation. Let us see how it is done.
A channel is a physical path, with its own control and monitoring circuits in a processor, along which data flows between the slow-speed input/output devices and the high-speed central processing unit. The characteristic of these channels is that once they are instructed by the central processing unit to get data from or send data to a particular device, they operate independently, transferring all the data, and when the transfer is complete, they report the status back to the central processing unit.
A computer system, especially a large one, has a number of channels for input as well as output. Whenever a data movement is required, the master program of the Operating System activates the required channel processor and hands over control to it, the central processing unit becoming free to do other processing activities. The channel processor uses buffers, which allows the overlapping of input/output and processing operations, both being done simultaneously. Interrupts play a vital role in informing the central processing unit when a particular data operation is completed.
Basically, the function of the Input/Output Control System is to interpret an I/O request and execute it after locating the source and destination of the data, reporting back to the central processing unit by generating the appropriate interrupt.
In microcomputers, the transfer of data from disk drives to primary storage and vice versa is carried out by the DMA [Direct Memory Access] Controller, implemented by the 8237 chip, which has four channels. In PC ATs, two of these chips are used.
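Here is a small Python sketch of this overall pattern, with a background thread standing in for a channel processor and a callback standing in for the completion interrupt; the device name, timing and byte count are illustrative.

```python
# Sketch of the channel/IOCS idea: the CPU starts a transfer on a channel and
# goes on with other work; the channel signals completion by calling back,
# which stands in for an interrupt.
import threading, time

def start_channel_transfer(device, nbytes, on_complete):
    """Instruct a channel to move data, then return immediately to the caller."""
    def channel_program():
        time.sleep(0.2)                        # the slow device transfers data
        on_complete(device, nbytes)            # "interrupt": report status to the CPU
    threading.Thread(target=channel_program).start()

def interrupt_handler(device, nbytes):
    print(f"interrupt: {nbytes} bytes transferred from {device}")

start_channel_transfer("disk-0", 4096, interrupt_handler)
for step in range(3):
    print("CPU doing other processing, step", step)   # overlapped with the I/O
time.sleep(0.3)                                        # let the demo finish cleanly
```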
iv. Linker:
In each operating system there is invariably a special program called a linker (also called a link-editor), which converts the object program created by a compiler or assembler to executable form, so that the program can be run at the command level of the Operating System.
The link program also links the different object modules of the application program, along with routines used from the library files, to create a single relocatable program for loading and execution under the platform of the Operating System. Since it is not known beforehand at which absolute address of main memory the application program will be loaded during execution, the addresses of data blocks are usually left as relative addresses in the application program, with reference to the beginning of the instruction code block.
When the program is loaded, the relative addresses are appropriately converted to the respective physical addresses by the Operating System so that the program can execute successfully.
Generally, in PC systems, the compilers and assemblers provide their own link programs, which are claimed to be more efficient for that application than the MS DOS link program, which can also be used.
The linker, called LINK in MS DOS, creates a special header in the executable file, which is used by the master program of the Operating System to fix up the physical addresses inside the program file so that it can be executed. The birth of the linker is related to the development of Operating Systems; it did not exist in pre-Operating-System days.
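The following Python sketch illustrates the load-time relocation step described above, assuming a tiny invented program image that carries relative operand addresses and a relocation table; it is a simplified model, not the actual MS DOS executable header format.

```python
# Sketch of load-time relocation: the executable carries relative operand
# addresses plus a table of locations needing fix-up; the loader adds the
# actual load address.
program = {
    "code": [("LOAD", 0x10), ("ADD", 0x14), ("STORE", 0x10)],  # relative operands
    "relocation_table": [0, 1, 2],                             # entries to fix up
}

def load(image, load_address):
    """Convert relative operand addresses to absolute ones at load time."""
    fixed = list(image["code"])
    for i in image["relocation_table"]:
        op, rel = fixed[i]
        fixed[i] = (op, load_address + rel)     # relative -> physical address
    return fixed

print(load(program, 0x2000))
# [('LOAD', 8208), ('ADD', 8212), ('STORE', 8208)]
```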
v. Memory Management:
An application program, or for that matter any program, cannot be executed until and unless the Operating System allocates a need-based storage area in main memory, creates the appropriate headers, stack location, etc., and then hands over control to the program to be executed. After the application program is executed, it returns control to the Operating System, which does a bit of house-keeping and deallocates the memory for use by other programs.
The main memory is the exclusive jurisdiction of the Operating System, which temporarily leases it out to different application programs for as long as they require it. The technique of memory management is highly complex in the case of a multi-programming Operating System, because all the programs have to be accommodated in memory and each in turn has to be handed over to the central processing unit for execution.
Sometimes virtual memory, which is an extension of main memory onto storage disks, is created and used for swapping different programs in turn for execution. It is to be noted that both the central processing unit and the input/output units interact with main memory. In TSR [Terminate and Stay Resident] programs, the memory allocated to a part of the program is not deallocated, so that the program can be revived by the user at any time at the flick of a switch; the memory management routines ensure this.
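A minimal Python sketch of the swapping idea, assuming a main memory that holds only two resident programs and a disk area acting as its extension: when a third program must run, the oldest resident one is swapped out. The slot count and program names are invented.

```python
# Sketch of swapping with a small main memory: when a program must run but
# memory is full, some resident program is swapped out to the disk area and
# the needed one is swapped in.
from collections import OrderedDict

MAIN_MEMORY_SLOTS = 2
resident = OrderedDict()        # programs currently in main memory
on_disk = set()                 # programs swapped out to the disk area

def dispatch(program):
    """Make sure `program` is resident, swapping another out if necessary."""
    if program in resident:
        resident.move_to_end(program)                  # most recently used
    else:
        if len(resident) == MAIN_MEMORY_SLOTS:
            victim, _ = resident.popitem(last=False)   # swap out the oldest
            on_disk.add(victim)
            print("swapped out:", victim)
        on_disk.discard(program)
        resident[program] = True
        print("swapped in: ", program)
    print("executing:", program)

for p in ["editor", "compiler", "payroll", "editor"]:
    dispatch(p)
```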
vi. The Kernel:
At the core of each Operating System is a master program, which is variously called the Executive, Supervisor, Monitor, or Kernel. This part of the Operating System always remains in memory in an active state, controlling the computer in various ways as and when required. The kernel calls other programs of the Operating System whenever required, by loading them into main memory from the direct access storage devices.
In MS DOS, there are three files, called system files, which contain the system software. These three files are IO.SYS, MSDOS.SYS, and COMMAND.COM. Except for a portion of the command processor, called the transient portion, they remain resident in memory, continuously monitoring and controlling the operation of the computer and its peripherals to allow the computer system to be used for problem solving.
MS DOS allows the transient portion to be overwritten by application programs if there is a shortage of space in memory, and reloads it when the execution of the application program ends. When working with MS DOS from floppy drives only, you will find that at the end of many programs there is a prompt to insert a disk containing COMMAND.COM in drive A; this is required to reload the transient portion of the Operating System, which here is MS DOS.
In a multi-user Operating System like UNIX, the kernel creates a shell, a special environment above the kernel layer, which acts as a command interpreter; the most common shells are the Bourne and C shells. MS DOS, since Version 4, has also incorporated a shell environment for carrying out various command-level functions with graphical screens.
The currently quite popular Windows software is not an Operating System but a graphic environment, like a shell, operating above the kernel of MS DOS, where the mouse can be used extensively.