In the previous section, we discussed the services of an operating system. In this section, we will discuss the different properties of an operating system, along with some of its features.
Properties of Operating System
1: Batch Processing
In a batch operating system, the user does not interact with the computer directly. Users prepare their jobs on an offline device, such as punch cards, and submit them to the computer operator for processing. Jobs requiring the same resources are batched together to increase processing speed. The operator sorts the programs left by the users into batches on the basis of similar requirements. The OS performs the following activities related to the interaction of users with the system:
- The OS defines a job as a single unit consisting of a predefined sequence of commands, programs, and data.
- The OS keeps a number of jobs in memory and executes them without any manual intervention.
- Jobs are executed in first-come, first-served order.
- After a job completes, the OS releases the memory it occupied and copies its output into the output spool for later use.
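The activities above can be sketched as a toy first-come, first-served loop. This is a minimal illustration, not a real batch system; the job names and the string output are made up:

```python
from collections import deque

def run_batch(jobs):
    """Execute jobs in first-come, first-served order, copying
    each job's result to the output spool."""
    queue = deque(jobs)          # jobs wait in arrival order
    output_spool = []            # completed output goes to the spool
    while queue:
        job = queue.popleft()    # next job runs with no manual intervention
        output_spool.append(f"{job} done")
    return output_spool

print(run_batch(["job1", "job2", "job3"]))
# → ['job1 done', 'job2 done', 'job3 done']
```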
Advantages of Batch Processing:
- Batch processing shifts much of the operator's work to the computer.
- No manual intervention is needed to start a new job; it begins as soon as the previous one finishes.
Disadvantages of Batch Processing:
- Debugging is difficult, and there is a possibility of a job getting stuck in an infinite loop.
- One job can affect other jobs due to the lack of protection between them.
2: Multitasking
Multitasking is the practical implementation of multi-programming, which is basically a concept. Multiprocessing and multi-threading are two ways to achieve multitasking.
Multitasking, in an operating system, means allowing a user to perform more than one computer task (such as running an application program) at the same time. The operating system keeps track of where you are in each of these tasks and lets you move from one to another without losing information.
Keeping Track:
This property of the operating system allows a process to continue a task from the point where its last execution stopped.
Example of Keeping Track:
Suppose there are two processes and the first one has the CPU. After some time, a context switch occurs and the second process takes control of the CPU. After some more time, another context switch occurs and control goes back to the first process. The first process now resumes from the position (the line of code that was previously executing) at which it gave up the CPU.
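This "resume from where you left off" behavior can be sketched with Python generators, which save their position automatically each time they yield. The process names and the round-robin scheduler below are illustrative, not how a real OS scheduler is written:

```python
def process(name, steps):
    """A toy process that gives up the CPU after each step."""
    for i in range(steps):
        yield f"{name} step {i}"   # position is saved here

def round_robin(procs):
    """Switch between processes, resuming each from its saved position."""
    trace = []
    while procs:
        p = procs.pop(0)
        try:
            trace.append(next(p))  # resume from the last execution point
            procs.append(p)        # context switch: back of the queue
        except StopIteration:
            pass                   # process finished; release it
    return trace

print(round_robin([process("P1", 2), process("P2", 2)]))
# → ['P1 step 0', 'P2 step 0', 'P1 step 1', 'P2 step 1']
```

Note how P1 continues from step 1, not step 0, after P2 has had its turn: no information is lost across the switch.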
What is Context Switching?
In computing, a context switch is the process of storing and restoring the state (more specifically, the execution context) of a process or thread so that execution can be resumed from the same point at a later time.
No Information Loss:
When the process regains control of the CPU, it starts from the same position where it left off. So no part of the process is missed, i.e. every line of code is eventually executed.
Main Theme of Multitasking:
The main theme of multitasking is time-sharing. Time-sharing is a method that allows fast response times for interactive user applications. In time-sharing systems, context switches are performed rapidly, which makes it seem as if multiple processes are executing simultaneously on the same processor. This apparent simultaneous execution of multiple processes is called concurrency.
Preemptive multitasking allows the computer system to more reliably guarantee to each process a regular “slice” of operating time. It also allows the system to deal rapidly with important external events like incoming data, which might require the immediate attention of one or another process. Operating systems were developed to take advantage of these hardware capabilities and run multiple processes preemptively.
Problems with Multitasking:
- Multitasking does not require any explicit inter-process communication mechanism; processes can communicate through any global data structure.
- Any process can modify the global value of a shared variable.
- Any process is free to modify another process's data, to create new processes, to kill a process, or to prevent other processes from running. Using multiple processes even requires special consideration when you exit Lisp (a family of computer programming languages).
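The shared-data hazard in the list above can be illustrated with threads. Without synchronization, concurrent increments of a shared global can be lost; the sketch below (names are made up) uses a lock so that every update survives:

```python
import threading

counter = 0                      # shared global, visible to all threads
lock = threading.Lock()

def bump_locked(n):
    """Increment the shared counter n times, safely."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1         # read-modify-write made atomic by the lock

threads = [threading.Thread(target=bump_locked, args=(50_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # → 100000; without the lock, updates could be lost
```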
3: Multi-programming
Now just relax and think of it as a simple piece of logic. Basically, multi-programming is a logical concept based on the procedures of multi-tasking and pseudo-parallelism combined. Both of these concepts are discussed in this section.
Since CPU speed is very high, the CPU can work on several programs within a single second. This gives the user an illusion of parallelism, i.e. that several processes are being executed at the same time. This rapid switching of the CPU back and forth between programs creates the illusion of parallelism and is termed pseudo-parallelism.
Multi-programming is a form of parallel processing in which several programs are run at the same time on a uni-processor.
Uni-processor but Multi-programming. How?
The first thing that comes to mind is how a single CPU can perform multiple tasks at the same time. Since there is only one processor, there cannot be true simultaneous execution of different programs; instead, the operating system executes part of one program, then part of another, and so on.
Main theme of Multi-programming:
The main idea behind multi-programming is to maximize the use of CPU time so that more tasks can be done in less time. Its ultimate goal is to keep the CPU busy as long as there are processes ready to execute.
Needs for Multi-programming:
Following are the basic needs for multi-programming:
- The OS must be able to load multiple programs into separate areas of main memory and provide the required protection so that one process cannot be modified by another.
- The second problem that needs to be addressed when keeping multiple programs in memory is fragmentation, which arises as programs enter or leave main memory.
- Another issue that needs to be handled is that large programs may not fit in memory all at once; this can be solved by using paging and virtual memory.
Required Protection to Avoid Modification:
This feature ensures that no process can be overwritten by any other process. Every process retains its original contents, and the execution of any other process does not change them.
What is Fragmentation?
Fragmentation is a phenomenon in which storage space is used inefficiently, reducing capacity or performance and often both. In many cases, fragmentation leads to “wastage” of storage space, and in that case the term also refers to the wasted space itself.
What is Paging?
In computer operating systems, paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.
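The arithmetic behind paging is easy to sketch: a virtual address splits into a page number and an offset within that page. The 4 KiB page size below is an illustrative assumption (a common choice, but not universal):

```python
PAGE_SIZE = 4096  # assumed page size: 4 KiB same-size blocks ("pages")

def split_address(vaddr):
    """Split a virtual address into (page number, offset within page)."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

print(split_address(8200))  # → (2, 8): byte 8 of page 2
```

The OS only needs to map page numbers to physical frames; the offset is used unchanged inside the frame.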
The Worst case:
Note that if there are N ready processes and all of them are highly CPU-bound (i.e., they mostly execute CPU tasks with few or no I/O operations), then in the very worst case one program might wait for all the other N-1 to complete before executing.
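The waiting in this worst case is simple to compute under first-come, first-served scheduling: each job waits for the total burst time of everything ahead of it. The burst times below are made-up numbers for illustration:

```python
def waiting_times(bursts):
    """FCFS waiting time for each job: the sum of all earlier bursts."""
    waits, elapsed = [], 0
    for b in bursts:
        waits.append(elapsed)  # time this job spends waiting
        elapsed += b           # this job now occupies the CPU
    return waits

print(waiting_times([10, 10, 10]))
# → [0, 10, 20]: the last of N=3 jobs waits for the other N-1
```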
4: Interactivity
Users should have some ability to interact with the computer system. This relation between the user and the computer is called interactivity. The OS performs the following activities related to the interaction of users with the system:
- Users are provided with an interface to interact with the system.
- The OS manages input and output devices to get input from the user and give output to the user, respectively.
For example, a keyboard is an input device and speakers are output devices.
The user waits for a response after giving input, so the response time of the system should be short enough to give the user immediate output.
5: Real-Time System
We can also refer to real-time systems as data-processing systems. Dedicated and embedded systems are usually known as real-time systems. For more detail, check this article about Types of OS.
What are Dedicated and Embedded Systems?
A dedicated system is one which we use for one task only, like file serving or running a database.
An embedded system is a computer system with a dedicated function within a larger mechanical or electrical system, often with real-time computing constraints.
OS involves following activities related to real-time systems:
- The OS reads from and reacts to sensor data in such systems.
- To guarantee correct performance, the OS has fixed time periods within which it must respond.
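A fixed response period can be sketched as a deadline check. This is only a toy illustration; the 50 ms budget, the function name, and the stand-in processing step are all assumptions, and real real-time systems rely on the scheduler rather than checks like this:

```python
import time

DEADLINE_S = 0.050  # assumed response budget: 50 ms

def handle_sensor_reading(value):
    """Process a reading and report whether the deadline was met."""
    start = time.monotonic()
    result = value * 2                         # stand-in for real processing
    met = (time.monotonic() - start) <= DEADLINE_S
    return result, met

print(handle_sensor_reading(21))  # e.g. (42, True) when within the deadline
```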
6: Distributed Environment
When multiple CPUs or processors work together in a computer system, they form a distributed environment.
The OS performs the following activities related to distributed environments:
- Each processor has its own local memory and clock, which it does not share with the others.
- The OS manages communication between the processors through communication lines.
- The computation is distributed among a number of processors.
7: Spooling
A buffer collects the data of various I/O jobs as it is put into it; this process is known as spooling (simultaneous peripheral operations on-line).
A buffer is a special area in memory or on the hard disk that can be accessed by I/O devices.
The OS performs the following activities related to spooling:
- I/O devices have different data access rates, so the OS manages the spooling of their data.
- The spooling buffer provides a resting area for data while the processes of slower devices are executed. This buffer is managed by the OS.
- Since computers are designed to perform parallel I/O, the OS maintains parallel computation through the spooling process. This can be explained by the example below.
It becomes possible for the computer to read data from a tape, write data to disk, and write out to a tape printer, all while doing its computing task.
Following are the advantages of spooling:
- Spooling uses a disk, which provides a large buffer space.
- Spooling is capable of overlapping the I/O operations of one job with the processor operations of another job.
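The idea can be sketched as a simple queue: fast producers drop jobs into the spool and continue computing, while the slow device drains the spool at its own pace. The file names and function names below are made up for illustration:

```python
from collections import deque

spool = deque()  # stands in for the disk buffer area

def submit(job):
    """Drop a job into the spool; returns at once so the CPU keeps working."""
    spool.append(job)

def printer_drain():
    """The slow device consumes spooled jobs in order, at its own pace."""
    printed = []
    while spool:
        printed.append(spool.popleft())
    return printed

submit("report.txt")
submit("invoice.txt")
print(printer_drain())  # → ['report.txt', 'invoice.txt']
```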