OS_NOTE


Scene 1 (0s)

[Audio] OPERATING SYSTEMS UNIT 1 Chapter 1: Operating System Overview

1. Operating System
An OS is a program that controls the execution of application programs and acts as an interface between applications and the computer hardware.

a. Functions of Operating System
- Processor Management: Allocating jobs to processors and ensuring each process gets sufficient time to function properly.
- Memory Management: Allocating and distributing memory to processes, and preventing other processes from consuming memory already allocated.
- File Management: Organizing and protecting data stored in files, including the file directory structure, against unauthorized access.
- Security Measures: Ensuring user data integrity and confidentiality through login protection, firewall activation, system memory protection, and system vulnerability messages.
- Error Detection: Regularly checking the system for external threats and hardware damage, and displaying alerts so appropriate action can be taken.

b. Services provided by OS
1. Program Development Services: Tools for editing, debugging, and compiling programs.
2. Program Execution Services: Loading programs into memory, initializing I/O devices, and managing execution.
3. I/O Device Management: A uniform interface for accessing various I/O devices.
4. File Management Services: Controlled access to files and data structures on storage devices.
5. Error Detection and Response: Handling errors and responding so as to minimize their impact on running applications.

Scene 2 (1m 38s)

[Audio] 2. Serial Processing and its two main problems
Serial processing refers to the mode of operation where users access the computer sequentially, one at a time, as in early computing systems. Its two main problems are:
I. Scheduling Issues: Users reserved fixed time slots, which often wasted processing time; a job either finished early, leaving the machine idle, or ran out of time before completing.
II. Setup Time: Loading and linking a program involved multiple manual steps, which was time-consuming and inefficient, causing delays in execution.

3. Simple Batch Systems
Early computers were expensive, so processor utilization had to be maximized. This led to the development of batch operating systems in the mid-1950s, notably by General Motors for the IBM 701. The central idea behind the simple batch-processing scheme is a piece of software known as the monitor.

4. Time Sharing Systems
Multiprogramming and batch processing were efficient, but some jobs required an interactive mode. Time sharing was developed in the 1960s so that many users could share a single large, costly computer interactively. Today, the need for interactive computing can be met by a dedicated personal computer or workstation.

5. Types of Operating System based on different approaches and design elements
- Microkernel Architecture: Keeps core functionality minimal and runs additional services in user space, enhancing modularity and flexibility.
- Multithreading: Supports multiple threads of execution within a single process, improving resource utilization and responsiveness.
- Symmetric Multiprocessing (SMP): Uses multiple processors to execute tasks simultaneously, enhancing performance and efficiency.
- Distributed Operating Systems: Manage a collection of independent computers and make them appear to users as a single coherent system.
- Object-Oriented Design: Applies object-oriented principles to allow modular extensions and customization without compromising system integrity.

6. Fault Tolerance
Refers to a system's ability to maintain normal operation despite hardware or software faults. It involves some degree of redundancy and aims to increase system reliability. Increased fault tolerance usually comes at a cost, either financial or in performance, so its adoption depends on how critical the resource is.

7. Fault and Categories of Fault
A fault is an erroneous state in hardware or software caused by component failure, operator error, or design flaws, manifesting as a defect in hardware or an incorrect process in software.

Scene 3 (4m 28s)

[Audio] Permanent: A fault that, once it occurs, is always present. It persists until the faulty component is replaced or repaired. Examples include disk head crashes, software bugs, and a burnt-out communications component.
Temporary: A fault that is not present all the time under all operating conditions. Temporary faults can be further classified as:
o Transient: A fault that occurs only once. Examples include bit transmission errors due to impulse noise, power supply disturbances, and radiation that alters a memory bit.
o Intermittent: A fault that occurs at multiple, unpredictable times. An example is a fault caused by a loose connection.

8. How fault tolerance is built into a system by adding redundancy
Spatial (physical) redundancy: Uses multiple components, either functioning in parallel or standing by as backups in case of component failure. Examples include multiple parallel circuitry and backup name servers.
Temporal redundancy: Repeats a function or operation when an error is detected. This is effective against temporary faults but useless against permanent ones. Examples include retransmission in data link control protocols.
Information redundancy: Provides fault tolerance by replicating or coding data so that bit errors can be detected and corrected. Examples include error-control coding circuitry and error-correction techniques in RAID disks.

UNIT 2
Mutual exclusion is a principle that ensures only one process or thread can access a shared resource at a time. This prevents the conflicts and errors that can occur when multiple processes try to use the same resource simultaneously.

Dekker's Algorithm
An algorithm for mutual exclusion between two processes, designed by the Dutch mathematician Dekker. It was the first solution to the critical section problem. There are several versions of the algorithm; the fifth and final version satisfies all the required conditions and is the most efficient of them.
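As a rough sketch (not part of the notes), Dekker's final version can be written in Python with two threads protecting a shared counter. The `worker` function, iteration counts, and counter are illustrative, and the sketch relies on CPython making simple reads and writes of shared variables effectively atomic:

```python
import threading

# Illustrative sketch of Dekker's algorithm (final version) for two threads.
# flag[i] signals that thread i wants to enter; turn breaks ties.
flag = [False, False]
turn = 0
counter = 0  # shared resource protected by the algorithm

def worker(i, iterations):
    global turn, counter
    other = 1 - i
    for _ in range(iterations):
        # Entry section
        flag[i] = True
        while flag[other]:
            if turn == other:
                flag[i] = False          # back off ...
                while turn == other:     # ... and busy-wait until it is our turn
                    pass
                flag[i] = True
        # Critical section: only one thread executes this at a time
        counter += 1
        # Exit section
        turn = other
        flag[i] = False

t0 = threading.Thread(target=worker, args=(0, 5000))
t1 = threading.Thread(target=worker, args=(1, 5000))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 10000 if mutual exclusion held for every increment
```

Without the entry/exit sections, the two unsynchronized `counter += 1` operations could interleave and lose updates; with them, every increment happens inside the critical section.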
Any solution to the critical section problem must ensure the following three conditions:
o Mutual Exclusion
o Progress
o Bounded Waiting

UNIT 3
1) Fixed Partitioning
In fixed partitioning, the available memory is divided into a fixed number of partitions with set boundaries. These partitions are defined when the system is initialized and cannot change in size during execution.

Scene 4 (6m 46s)

[Audio] The operating system (OS) occupies a fixed portion of memory, while the rest is divided into partitions to be used by processes.

2) Dynamic Partitioning
In dynamic partitioning, memory is not divided into fixed sizes. Instead, partitions are created dynamically to match the exact memory requirement of each process at runtime. When a process is loaded, it is allocated just enough memory to fit its size, unlike fixed partitioning, where memory is pre-divided into fixed blocks.

Q3) Fragmentation refers to the inefficient use of memory caused by how memory is allocated and deallocated, which splits free memory into smaller pieces. This makes it difficult to use the available memory efficiently, as some of the free space may be unusable for larger processes.
1. Internal Fragmentation: Occurs when a process is given more memory than it needs. Example: if a process gets an 8 MB block but uses only 5 MB, the remaining 3 MB is wasted inside the block.
2. External Fragmentation: Occurs when free memory is scattered in small blocks after multiple processes have been allocated and removed. Example: there may be enough total free memory, but it is split into small, non-contiguous chunks, making it hard to fit larger processes.

Q4) First-Fit Algorithm: How it works:
o The operating system starts looking for free memory from the beginning.

Scene 5 (8m 13s)

[Audio] o It finds the first block of memory that is big enough for the process.
o Once a suitable block is found, the memory is allocated to the process.
Advantages: Fast, since it stops at the first suitable block. Simple, since the system just scans memory from start to finish.
Disadvantages: Can lead to fragmentation; over time, small unused blocks may be left behind that are too small for future processes.

Q5) Simple Paging is a memory management technique where both the process and the memory are divided into small, fixed-size blocks. How it works:
1. Memory is divided into equal-sized blocks called frames.
2. Processes are divided into equal-sized blocks called pages.
3. When a process is loaded into memory, its pages are placed into available frames.
4. The pages of a process do not have to be stored in consecutive frames, allowing more flexibility.

Q6) Simple Segmentation is a memory management technique where a process is divided into segments based on its logical structure. How it works:
1. Process divided into segments: A process is split into multiple segments, such as code, data, and stack, each of which may vary in size.
2. Each segment is stored separately: Segments are loaded into memory but, unlike pages, they do not have to be of equal size.
3. Segment table: The operating system maintains a segment table that keeps track of where each segment is located in memory.

Q1) 1. FIFO (First-In, First-Out)
How it works: The oldest page in memory is replaced first.
Advantages: Simple to implement.
Disadvantages: Can lead to suboptimal page replacements (e.g., Belady's anomaly).
2. LRU (Least Recently Used)
How it works: The page that has not been used for the longest time is replaced.
Advantages: More efficient than FIFO; better at predicting which pages will be needed next.

Scene 6 (10m 21s)

[Audio] Disadvantages: Requires more complex data structures to track usage.
3. Optimal
How it works: Replaces the page that will not be used for the longest time in the future.
Advantages: Theoretically the best replacement strategy; minimizes page faults.
Disadvantages: Requires future knowledge of the reference string, making it impractical in real systems.

Q2) Virtual Memory Paging
Virtual Memory: A technique that allows a computer to use more memory than is physically available by using disk space.
Paging:
o Divides memory into fixed-size blocks called pages (in virtual memory) and frames (in physical memory).
o When a program needs data, its pages are loaded into available frames.
Page Table:
o A map that connects virtual addresses to physical addresses, showing where each page is stored in memory.
Page Fault:
o Occurs when a program tries to access a page not currently in RAM; the OS loads it from disk.
Replacement Policies:
o When memory is full, the OS decides which page to remove using methods like FIFO, LRU, or Optimal.
Benefits: Eliminates external fragmentation and allows efficient use of memory.
Drawbacks: Can cause performance problems if there are too many page faults (thrashing).

Q3) Virtual Memory Segmentation
Definition: Segmentation is a memory management technique that divides a program's logical address space into multiple segments of variable size, allowing more flexibility than fixed-size pages.
Key points:
1. Multiple Address Spaces: Segmentation allows a program to be viewed as multiple segments, each representing a different logical unit (such as code, data, or stack).
2. Unequal Size: Segments can vary in size, adapting to the needs of different data structures and making it easier to manage growing data.
3. Simplified Management: Programmers can handle large data structures more effectively, as segments can grow or shrink dynamically without adjusting the entire program.

Scene 7 (12m 26s)

[Audio] 4. Sharing and Protection: Segmentation makes it easier to share code and data between processes and to apply access controls to specific segments, enhancing security and modularity.
5. Segment Table: Each process has a segment table containing an entry for each segment, including its starting address in memory and its length. This table is used to translate logical addresses into physical addresses.

Q4) Thrashing
Definition: Thrashing occurs when a system's memory is overcommitted, causing continuous page faults and frequent swapping.
Effect: CPU utilization drops significantly because the system is busy swapping pages instead of executing instructions.
Cause: It typically arises when processes are allocated too few frames, resulting in too many page faults and too little useful work being done by the CPU.

Unit 4
Q1) Long-Term Scheduling:
Purpose: Manages which processes enter the system.
Function: Controls the number of processes in the system (the degree of multiprogramming).
Medium-Term Scheduling:
Purpose: Manages processes in memory.
Function: Decides which processes to keep in memory and which to swap out.
Short-Term Scheduling:
Purpose: Determines which process runs next.
Function: Frequently selects from the ready queue, reacting to system events such as I/O interrupts.

Q2) The objective of short-term scheduling is to decide which ready process executes next, aiming to optimize system performance by allocating processor time efficiently. This scheduler runs frequently and is invoked whenever certain events occur, such as I/O interrupts or system calls. Criteria used to evaluate scheduling algorithms include:
1. User-Oriented Criteria:
o Response time: The time a process takes to respond to input.
2. System-Oriented Criteria:
o Throughput: The number of processes completed in a given time.
o Processor efficiency: Maximizing CPU utilization.
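As an aside not in the notes, the FIFO and LRU page replacement policies described in the earlier questions can be compared by counting page faults on a sample reference string. The helper names and the reference string below are illustrative:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults with FIFO replacement."""
    memory, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.discard(queue.popleft())  # evict the oldest-loaded page
            memory.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults with LRU replacement (OrderedDict keeps recency order)."""
    memory, faults = OrderedDict(), 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)             # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)       # evict the least recently used
            memory[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]  # classic Belady reference string
print(fifo_faults(refs, 3))  # 9
print(fifo_faults(refs, 4))  # 10 -> Belady's anomaly: more frames, more faults
print(lru_faults(refs, 4))   # 8
```

The run on this reference string also demonstrates Belady's anomaly mentioned above: FIFO with 4 frames incurs more faults than with 3, while LRU does not suffer from it.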

Scene 8 (14m 21s)

[Audio] Q3) If purely priority-based scheduling is used in a system, what problems will the system face?
If purely priority-based scheduling is used, the system can face problems such as:
Starvation: Lower-priority processes may never execute if higher-priority processes keep entering the system.
Unfairness: Lower-priority processes may have to wait indefinitely while higher-priority ones are always served first.

Q4) Preemptive and Non-Preemptive Scheduling Algorithms
Non-Preemptive Scheduling: Once a process starts, it runs to completion or until it voluntarily releases the CPU. Example: First Come First Serve (FCFS), where the process that arrives first is executed first.
Preemptive Scheduling: The currently running process can be interrupted and replaced, for example by a higher-priority process or when its time slice expires. Example: Round Robin (RR), where each process gets a fixed time slice and is then preempted to let another process execute.
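As a small sketch beyond the notes, the Round Robin behaviour described above can be simulated with a queue of (name, burst time) pairs. The function name, the assumption that all processes arrive at time 0, and the sample workload are illustrative:

```python
from collections import deque

def round_robin(processes, quantum):
    """Simulate Round Robin on (name, burst_time) pairs, all arriving at t=0.
    Returns the completion time of each process."""
    queue = deque(processes)
    time, finish = 0, {}
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)              # run for at most one time slice
        time += run
        if remaining > run:
            queue.append((name, remaining - run))  # preempted: back of the queue
        else:
            finish[name] = time                    # process completed
    return finish

# Hypothetical workload: three CPU bursts, quantum of 2 time units.
print(round_robin([("P1", 5), ("P2", 3), ("P3", 1)], quantum=2))
# {'P3': 5, 'P2': 8, 'P1': 9}
```

Note how the short job P3 finishes early instead of waiting behind P1, which is exactly the responsiveness benefit of preemption over FCFS.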