UNIVERSITY OF NORTH BENGAL
B.Sc. Honours 3rd Semester Examination, 2022
CC6 - COMPUTER SCIENCE (32)
OPERATING SYSTEMS
GROUP - A
1. What do you mean by Time-Sharing Operating Systems?
A time-sharing operating system is an operating system design that allows multiple users or processes to concurrently share the same system resources, such as the CPU, memory, and peripherals.
2. What is a safe state and an unsafe state?
A system is in a safe state if there exists at least one ordering of its processes in which every process can obtain its maximum resource needs and run to completion. If no such ordering exists, the system is in an unsafe state; an unsafe state does not guarantee deadlock, but it makes deadlock possible.
3. Differentiate between process and thread.
A process is an instance of a program that is being executed. A thread is the subset of a process and is also known as the lightweight process. A process can have more than one thread, and these threads are managed independently by the scheduler.
4. What do you mean by system calls?
System calls are usually made when a process in user mode requires access to a resource. Then it requests the kernel to provide the resource via a system call.
5. Differentiate between Trap and Interrupt.
The trap is a signal raised by a user program instructing the operating system to perform some functionality immediately. In contrast, the interrupt is a signal to the CPU emitted by hardware that indicates an event that requires immediate attention.
6. What is the main function of the memory-management unit?
A memory management unit translates addresses between the CPU and physical memory. This translation process is also known as memory mapping because addresses are mapped from a logical space into a physical space.
7. What is file system mounting?
File system mounting is the process of making a file system accessible to the operating system. This typically involves attaching the file system to a directory in the existing directory tree, called the mount point. Once mounted, the files and directories of the file system can be accessed as if they were part of the existing directory hierarchy.
8. What is dispatcher?
The dispatcher is a module that gives a process control over the CPU after it has been selected by the short-term scheduler.
GROUP - B
9. Define process. Explain the process states with a suitable diagram.
A process is a program in execution. It is an active entity that evolves as the program runs, with its own address space, program counter, and allocated resources, in contrast to a program, which is a passive set of instructions stored on disk.
Process States:
A process undergoes various states during its execution. The classic process states include:
New: The process is being created.
Ready: The process is waiting to be assigned to a processor. It is waiting in the ready queue.
Running: The process is being executed on the processor. Only one process can be in this state at a time on a single processor.
Blocked (or Waiting): The process is waiting for some event to occur (such as I/O completion or a signal from another process) before it can proceed.
Terminated: The process has finished its execution.
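The five states above form a small state machine. A minimal sketch of its legal transitions, assuming the classic model described (names and the `move` helper are illustrative, not any real kernel API):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# Legal transitions in the classic five-state process model
TRANSITIONS = {
    State.NEW: {State.READY},                                       # admitted
    State.READY: {State.RUNNING},                                   # dispatched
    State.RUNNING: {State.READY, State.BLOCKED, State.TERMINATED},  # preempt / wait / exit
    State.BLOCKED: {State.READY},                                   # awaited event occurred
    State.TERMINATED: set(),
}

def move(current, target):
    # Reject any transition the model does not allow
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```

Note that a blocked process cannot go straight to running: it must re-enter the ready queue and be dispatched again.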
10. What is a PCB? How does the operating system maintain the state of a process using the PCB?
A Process Control Block (PCB) is a data structure used by the operating system to manage information about a process. It contains essential information about a process, allowing the operating system to control and coordinate the execution of processes. The PCB is also known as the Task Control Block or Process Descriptor.
Maintaining Process State using PCB:
Context Switching: PCB facilitates the switch between processes by saving and loading their states during context switches.
State Update: The PCB is regularly updated to reflect the current state of a process as it transitions between states like ready, running, and blocked.
Interrupt Handling: During interrupts, the PCB helps save and restore the state of the interrupted process.
Process Termination: When a process completes, the PCB is updated to indicate the terminated state.
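A minimal PCB can be sketched as a record of these fields. This is a hypothetical, simplified structure for illustration; real kernels store many more fields (scheduling info, memory-management pointers, accounting data):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    # Hypothetical minimal PCB; field names are illustrative.
    pid: int
    state: str = "new"            # new / ready / running / blocked / terminated
    program_counter: int = 0      # saved and restored on a context switch
    registers: dict = field(default_factory=dict)
    open_files: list = field(default_factory=list)

pcb = PCB(pid=42)
pcb.state = "ready"               # admitted to the ready queue
pcb.state = "running"             # dispatcher hands it the CPU
pcb.program_counter = 0x1000      # updated when the process is switched out
```

On a context switch, the kernel saves the CPU registers and program counter into the outgoing process's PCB and loads them from the incoming one, which is exactly how the PCB "maintains" process state.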
11. Can a system detect that some of its processes are starving? If yes, explain how it can. If no, explain how the system can deal with the starvation problem.
Yes, a system can detect if some of its processes are starving. Starvation occurs when a process is unable to access the resources it needs for a prolonged period.
There are a few ways to do this:
Monitor resource usage: The system can monitor the usage of resources such as CPU time, memory, and I/O bandwidth. If a process is consistently being denied access to these resources, it may be starving.
Track process state: The system can track the state of processes to identify those that are blocked or waiting for resources. If a process is blocked or waiting for an excessively long time, it may be starving.
Use priority algorithms: The system can use priority algorithms to ensure that all processes get a fair share of resources. If a process is consistently being given low priority, it may be starving.
Once a system has detected that a process is starving, it can take steps to resolve the issue. These steps may include:
Increasing the priority of the starving process: This will give the process a higher priority when it comes to accessing resources.
Allocating more resources to the starving process: This will give the process more access to the resources it needs.
Terminating other processes: This will free up resources for the starving process.
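One common remedy for starvation is aging: the longer a process waits, the higher its priority becomes. A brief sketch, where the threshold and boost values are arbitrary illustrations (lower number means higher priority):

```python
def age_priorities(procs, threshold=10, boost=1):
    # procs: {pid: {"priority": int, "wait": int}}; lower number = higher priority.
    # Any process that has waited longer than `threshold` ticks gains priority,
    # so it cannot be starved indefinitely.
    for info in procs.values():
        if info["wait"] > threshold:
            info["priority"] = max(0, info["priority"] - boost)
    return procs
```

Run periodically by the scheduler, this guarantees every waiting process eventually reaches the top priority and gets dispatched.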
12. Explain the following:
(a) Paging and segmentation
Paging: Paging is a memory management scheme that involves dividing the physical memory into fixed-size blocks called "frames" and breaking down the logical address space of a process into fixed-size blocks known as "pages." The operating system uses a page table to map between logical and physical addresses. Paging helps in efficient use of memory and eliminates external fragmentation.
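The paging translation described above splits a logical address into a page number and an offset, then swaps the page number for a frame number. A small sketch, where the page size and page-table contents are illustrative assumptions:

```python
PAGE_SIZE = 4096                      # assumed 4 KB pages for illustration

def translate(logical, page_table):
    # Split the logical address into (page number, offset),
    # look the page up in the page table, and rebuild a physical address.
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]
    return frame * PAGE_SIZE + offset

page_table = {0: 5, 1: 2}             # hypothetical page -> frame mapping
print(translate(4100, page_table))    # page 1, offset 4 -> 2*4096 + 4 = 8196
```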
Segmentation: Segmentation is a memory management technique that divides the logical address space of a process into segments, where each segment represents a logical unit (e.g., code, data, stack). Segments can vary in size and are assigned specific attributes. The operating system uses a segment table to map logical addresses to physical addresses. Segmentation allows for a flexible memory allocation strategy and supports dynamic data structures.
(b) Swapping
Swapping is a memory management approach where an entire process or parts of it are moved between the main memory and secondary storage (e.g., disk). This is done to free up space in the main memory and allow the system to execute more processes than the physical memory can accommodate. Swapping is often used when a process is waiting for I/O or to maintain overall system performance. While it increases the effective memory capacity, swapping introduces additional overhead due to the data transfer between the main memory and the disk.
13. What is critical-section problem? How to solve critical-section problem?
The critical-section problem arises when multiple processes or threads access shared data concurrently and the result depends on the order of execution (a race condition), which can corrupt the data. A solution must guarantee that only one task executes in its critical section at a time (mutual exclusion), along with progress and bounded waiting. Common solutions:
Mutex (Mutual Exclusion Lock):
- What it does: A mutex is a lock that at most one task can hold at a time.
- How it works: Before entering the critical section, a task acquires the lock; if another task holds it, the task waits. On leaving, the task releases the lock.
Semaphore:
- What it does: A semaphore is an integer counter that controls how many tasks may enter the critical section.
- How it works: Tasks perform wait() before entering and signal() on leaving. A binary semaphore admits one task at a time; a counting semaphore admits up to N tasks concurrently.
Monitors:
- What it does: A monitor encapsulates shared data together with the procedures that operate on it; only one task can be active inside the monitor at a time.
- How it works: Tasks use condition variables inside the monitor to wait for, and signal, the conditions they need before proceeding.
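A minimal sketch of the mutex solution using Python's `threading.Lock`: four threads update a shared counter, and the lock makes each update a critical section.

```python
import threading

counter = 0
lock = threading.Lock()            # the mutex guarding the critical section

def increment(n):
    global counter
    for _ in range(n):
        with lock:                 # enter critical section (acquire)
            counter += 1           # shared update, now safe from races
        # leaving the with-block releases the lock (exit)

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Without the lock, concurrent read-modify-write sequences could interleave and lose updates, leaving the counter below 40000.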
GROUP - C
14. Explain FCFS, RR and SJF scheduling algorithm with illustrations.
First-Come, First-Served (FCFS) Scheduling: FCFS is a non-preemptive scheduling algorithm that selects processes in the order they arrive in the ready queue. The process that arrives first is the first to be executed.
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1 | 0 | 7 |
| P2 | 2 | 4 |
| P3 | 4 | 1 |
- Execution Order: P1 → P2 → P3
- Turnaround Time (completion − arrival): P1 (7 − 0 = 7), P2 (11 − 2 = 9), P3 (12 − 4 = 8)
- Average Turnaround Time: (7 + 9 + 8) / 3 = 8
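A small sketch that computes FCFS turnaround times (turnaround = completion − arrival), assuming the process list is ordered by arrival:

```python
def fcfs(procs):
    # procs: list of (name, arrival, burst), ordered by arrival time
    t, tat = 0, {}
    for name, arrival, burst in procs:
        t = max(t, arrival) + burst    # completion time of this process
        tat[name] = t - arrival        # turnaround = completion - arrival
    return tat

print(fcfs([("P1", 0, 7), ("P2", 2, 4), ("P3", 4, 1)]))
# {'P1': 7, 'P2': 9, 'P3': 8}
```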
Round Robin (RR) Scheduling: RR is a preemptive scheduling algorithm that allocates a fixed time quantum to each process in the ready queue. If a process's burst time exceeds the quantum, it's moved to the back of the queue to wait for its next turn.
| Process | Burst Time |
|---------|------------|
| P1 | 8 |
| P2 | 4 |
| P3 | 6 |
- Quantum (Time Slice): 2
- Execution Order: P1 → P2 → P3 → P1 → P2 → P3 → P1 → P3 → P1
- Turnaround Time (all processes arrive at time 0, so turnaround equals completion time): P1 (18), P2 (10), P3 (16)
- Average Turnaround Time: (18 + 10 + 16) / 3 ≈ 14.67
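The RR schedule can be simulated directly with a queue; this sketch assumes all processes arrive at time 0, as in the example:

```python
from collections import deque

def round_robin(procs, quantum):
    # procs: list of (name, burst); all assumed to arrive at time 0
    queue = deque(procs)
    t, completion = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        t += run
        if remaining > run:
            queue.append((name, remaining - run))  # back of the queue
        else:
            completion[name] = t                   # process finished at time t
    return completion

print(round_robin([("P1", 8), ("P2", 4), ("P3", 6)], quantum=2))
# {'P2': 10, 'P3': 16, 'P1': 18}
```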
Shortest Job First (SJF) Scheduling: SJF selects the process with the shortest burst time first. It can be preemptive (PSJF) or non-preemptive (NP-SJF).
| Process | Arrival Time | Burst Time |
|---------|--------------|------------|
| P1 | 0 | 6 |
| P2 | 2 | 8 |
| P3 | 4 | 7 |
| P4 | 6 | 3 |
- Execution Order: P1 → P4 → P3 → P2
- Turnaround Time (completion − arrival): P1 (6 − 0 = 6), P4 (9 − 6 = 3), P3 (16 − 4 = 12), P2 (24 − 2 = 22)
- Average Turnaround Time: (6 + 3 + 12 + 22) / 4 = 10.75
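Non-preemptive SJF can be sketched by repeatedly picking the shortest burst among the processes that have already arrived:

```python
def np_sjf(procs):
    # procs: list of (name, arrival, burst); non-preemptive shortest job first
    t, tat, remaining = 0, {}, list(procs)
    while remaining:
        ready = [p for p in remaining if p[1] <= t]
        if not ready:                              # CPU idle until next arrival
            t = min(p[1] for p in remaining)
            ready = [p for p in remaining if p[1] <= t]
        job = min(ready, key=lambda p: p[2])       # shortest burst among ready
        name, arrival, burst = job
        t += burst
        tat[name] = t - arrival                    # turnaround = completion - arrival
        remaining.remove(job)
    return tat

print(np_sjf([("P1", 0, 6), ("P2", 2, 8), ("P3", 4, 7), ("P4", 6, 3)]))
# {'P1': 6, 'P4': 3, 'P3': 12, 'P2': 22}
```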
15. Write short notes on:
(a) File Systems
A file system is a way of organizing and storing data on a computer storage medium, such as a hard drive or SSD. It provides a structure for naming, storing, and retrieving files and directories while managing access permissions and ensuring data integrity.
Key Components:
- Files: Units of data storage, each with a unique name and associated metadata.
- Directories: Containers for organizing and managing files hierarchically.
- File Attributes: Information about a file, including size, creation/modification timestamps, and access permissions.
- File Operations: Actions performed on files, such as reading, writing, deleting, and renaming.
- File System Structures: Different file systems (e.g., FAT, NTFS, ext4) have varying structures and features to optimize storage and retrieval.
Functions:
- Data Organization: Hierarchical arrangement of files and directories for easy navigation.
- Access Control: File systems implement permissions to control who can read, write, or execute files.
- Error Handling: File systems include mechanisms for error detection and correction to ensure data integrity.
- Storage Management: Allocation and deallocation of storage space, and optimization of storage utilization.
(b) Inter-process Communication (IPC)
Inter-process communication refers to the mechanisms through which different processes in an operating system can share information, coordinate their activities, and synchronize their execution. IPC is essential for processes to exchange data and collaborate effectively.
Methods of IPC:
- Shared Memory: Processes share a common portion of memory to communicate. Changes made by one process are visible to others.
- Message Passing: Processes communicate by sending and receiving messages. Messages can be sent using various mechanisms, including direct or indirect communication.
- Pipes and FIFOs (Named Pipes): Pipes allow one-way communication between processes. FIFOs (Named Pipes) are similar but can be used for communication between unrelated processes.
- Sockets: Processes communicate over a network using sockets. It enables IPC between processes on different machines.
- Signals: Processes can use signals to notify each other about events or to handle interruptions.
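As a minimal illustration of the pipe mechanism, an anonymous pipe can be created with `os.pipe()`. For brevity both ends live in one process here; in practice the two ends would be split between related processes (for example, after a fork):

```python
import os

# Create an anonymous pipe: r is the read end, w is the write end.
r, w = os.pipe()

os.write(w, b"ping")      # one side writes into the pipe
os.close(w)               # closing the write end signals end-of-data

data = os.read(r, 1024)   # the other side reads the bytes out
os.close(r)
print(data)  # b'ping'
```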
Importance:
- Concurrency: IPC allows concurrent processes to work together without interfering with each other.
- Resource Sharing: Processes can share resources, such as data or computation results.
- Coordination: IPC facilitates coordination between processes for better system functionality.
- Decoupling: Processes can be designed independently, with IPC enabling them to work together.
16.(a) A computer system has a 36-bit virtual address space with a page size of 8K, and 4 bytes per page table entry.
Q) How many pages are in the virtual address space?
To find the number of pages in the virtual address space, we can use the formula:
Number of Pages = Size of Virtual Address Space / Page Size
Given:
- Virtual address space: 36 bits
- Page size: 8 K (which is 2¹³ bytes because 1 K = 2¹⁰ bytes)
- 4 bytes per page table entry
Calculations:
1. Calculate the size of the virtual address space:
Size of Virtual Address Space = 2³⁶ bytes
2. Calculate the number of pages:
Number of Pages = 2³⁶ / 2¹³
Now, let's simplify this expression:
Number of Pages = 2³⁶⁻¹³ pages
Number of Pages = 2²³ pages
Therefore, there are 2²³ pages in the virtual address space.
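The arithmetic can be checked directly; a quick sketch:

```python
VIRTUAL_BITS = 36
PAGE_SIZE = 8 * 1024            # 8 K = 2**13 bytes

pages = 2**VIRTUAL_BITS // PAGE_SIZE
print(pages)                    # 8388608, i.e. 2**23
```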
Q) What is the maximum size of addressable physical memory in this system?
The maximum physical memory is limited by how large a frame number a page table entry can hold, not by the size of the virtual address space.
Given:
- Page table entry size: 4 bytes = 32 bits
- Page size: 8 K = 2¹³ bytes
Calculations:
Assuming all 32 bits of a page table entry are available to hold a frame number (in practice some bits are reserved for flags such as valid, dirty, and protection):
Maximum Number of Frames = 2³²
Maximum Physical Memory Size = 2³² × 2¹³ bytes = 2⁴⁵ bytes
Therefore, the maximum size of addressable physical memory in this system is 2⁴⁵ bytes (32 TB); if some PTE bits are used for flags, the maximum is correspondingly smaller.
(b) Discuss DMA transfer and DMA controller.
DMA Transfer: Direct Memory Access (DMA) is a method that lets peripherals transfer data directly to or from main memory, bypassing the CPU. By offloading bulk data movement from the CPU, DMA provides increased throughput, reduced CPU overhead, and efficient handling of large data sets.
DMA Controller: The DMA controller is the specialized hardware that manages and coordinates DMA transfers. It manages multiple channels, generates memory addresses, controls the data transfer, arbitrates between competing requests, and raises an interrupt when a transfer completes. This yields efficient resource utilization, parallelism through concurrent transfers, and reduced latency.
17. Write short notes on:
(a) Deadlock prevention
Deadlock prevention is a set of techniques and strategies employed in operating systems to avoid the occurrence of deadlocks—situations where two or more processes are unable to proceed because each is waiting for the other to release a resource.
Techniques:
- Mutual Exclusion: Make resources sharable wherever possible (e.g., read-only files) so that exclusive access is not required.
- Hold and Wait: Require a process to request all the resources it needs at once, or to release its currently held resources before requesting new ones.
- No Preemption: Allow resources to be preempted: if a process holding resources requests one that cannot be granted, its held resources are taken back and reallocated.
- Circular Wait: Impose a total order on resource types and require processes to request resources in increasing order, so a cycle of waiting processes cannot form.
Advantages:
- Proactively avoids deadlocks before they occur.
- Ensures resource utilization without compromising system stability.
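Breaking circular wait through lock ordering can be sketched with two threads: both acquire the locks in the same fixed global order, so a cycle of waiters can never form (the lock names are illustrative).

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

def task(name):
    # Every task acquires the locks in the same global order (a, then b).
    # If one task instead took b then a, the two could deadlock;
    # with a single fixed order, circular wait is impossible.
    with lock_a:
        with lock_b:
            done.append(name)   # critical work

threads = [threading.Thread(target=task, args=(i,)) for i in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(done))  # 2
```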
(b) Semaphores and monitors.
Semaphores: Semaphores are synchronization tools used to control access to a shared resource in a concurrent system.
Types:
- Binary Semaphore: Takes values 0 or 1, used for mutual exclusion.
- Counting Semaphore: Can take multiple values, used for managing multiple resources.
Operations:
- wait() (P): Decrements the semaphore value.
- signal() (V): Increments the semaphore value.
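A brief sketch of a counting semaphore using Python's `threading.Semaphore`; the limit of 2 concurrent holders and the bookkeeping dict are arbitrary illustrations:

```python
import threading

sem = threading.Semaphore(2)       # counting semaphore: at most 2 holders
state = {"active": 0, "peak": 0}
guard = threading.Lock()           # protects the bookkeeping counters

def worker():
    sem.acquire()                  # wait() / P: decrement, block when 0
    with guard:
        state["active"] += 1
        state["peak"] = max(state["peak"], state["active"])
    with guard:
        state["active"] -= 1
    sem.release()                  # signal() / V: increment, wake a waiter

threads = [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(state["peak"] <= 2)  # True: never more than 2 workers inside at once
```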
Monitors: Monitors are higher-level synchronization constructs that encapsulate shared data and the procedures that operate on it.
Features:
- Only one process can be active inside a monitor at a time, ensuring mutual exclusion.
- Processes can use condition variables within a monitor for communication and synchronization.
Advantages:
- Simplifies synchronization and avoids low-level pitfalls associated with semaphores.
- Provides a structured approach to managing shared resources.