Definition:
Software that manages computer hardware and software resources and provides common services for applications.
Types:
- Batch OS: Processes batches of jobs without user interaction.
- Time-Sharing OS: Allows multiple users to interact with programs simultaneously.
- Distributed OS: Manages a group of separate computers and makes them act as one system.
Functions:
- Resource Management: Allocates CPU, memory, and I/O devices to processes.
- Task Management: Handles process scheduling, multitasking, and resource sharing.
- User Interface: Provides a user-friendly way to interact with the system (GUI or command line).
Real-Time OS (RTOS):
An OS designed for real-time applications, where responses must be delivered within a specific time limit.
- Program: A set of instructions designed to complete a specific task. It is a passive entity residing in secondary memory.
- Process: An active entity created during execution and loaded into main memory. It exists for a limited time, terminating after task completion.
- Thread: A single sequence of execution within a process, often called a lightweight process. Threads improve application performance through parallelism.
Key Points:
- Processes are isolated and considered heavyweight, requiring OS intervention for switching.
- Threads share memory within the same process and are lightweight, allowing efficient communication.
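The shared-memory point above can be sketched in Python: every thread of a process sees the same heap, so threads can communicate simply by writing to a common object (the `data` list and `worker` function here are hypothetical names for illustration).

```python
import threading

data = []                              # one heap, visible to every thread

def worker(name):
    data.append(name)                  # no copying, no message passing needed

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(data))  # [0, 1, 2, 3]: every thread wrote into the same list
```

Separate processes, by contrast, would each get an independent copy of `data` and would need OS-mediated IPC to share it.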
- Process Control Block (PCB): Tracks the execution status of a process, containing information such as registers, priority, and execution state.
- Socket: An endpoint for sending and receiving data over a network.
- Shell: An interface for accessing OS services, either via command line or GUI.
- Kernel: The core component of an OS, managing memory, CPU time, and hardware operations. It acts as a bridge between applications and hardware.
Monolithic Kernel:
- Manages system resources and implements user and kernel services in the same address space, making OS execution faster but increasing its size.
Multithreading:
- Multiple threads are executed simultaneously within the same or different parts of a program.
- Lightweight: involves parts of a single process.
- The CPU switches between multiple threads.
- Shares computing resources among the threads of a single process.

Multitasking:
- Several programs (or tasks) are executed concurrently.
- Heavyweight: involves multiple processes.
- The CPU switches between multiple tasks or processes.
- Shares computing resources (CPU, memory, devices) among multiple processes.
Multitasking:
- Performs multiple tasks using a single processor.
- Has only one CPU.
- More economical.
- Allows fast switching between tasks.

Multiprocessing:
- Performs multiple tasks using multiple processors.
- Has more than one CPU.
- Less economical.
- Allows smooth simultaneous processing of tasks.
Different states that a process goes through include:
- New: The process has just been created.
- Ready: The process has all necessary resources and is waiting for CPU assignment.
- Running: The CPU is actively executing the process's instructions.
- Waiting: The process is paused, waiting for an event (such as I/O completion) to occur.
- Terminated: The process has completed execution and is finished.
- Ready Queue: Holds processes that are ready for CPU time.
- Waiting Queue: Holds processes that are waiting for I/O operations.
Inter-Process Communication (IPC):
- Purpose: Allows processes to communicate and share data.
- Techniques: Includes pipes, message queues, shared memory, and semaphores.
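As a minimal sketch of the first technique listed, `os.pipe()` asks the kernel for a unidirectional byte channel with a read end and a write end — the same object shells use for `a | b`. For brevity the two ends are driven from two threads here, but the identical kernel pipe can connect two separate processes; the `producer` name is hypothetical.

```python
import os
import threading

# Ask the kernel for a pipe: (read end, write end) as file descriptors.
read_fd, write_fd = os.pipe()

def producer():
    os.write(write_fd, b"hello through a kernel pipe")
    os.close(write_fd)            # close so the reader sees end-of-stream

t = threading.Thread(target=producer)
t.start()
message = os.read(read_fd, 1024)  # blocks until data arrives in the pipe
t.join()
os.close(read_fd)
print(message.decode())  # hello through a kernel pipe
```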
Dynamic Binding:
- Definition: Linking a function or variable at runtime rather than at compile time.
- Advantage: Flexibility in program behavior and memory use.
Swapping:
- Definition: Moving processes between main memory and disk storage.
- Purpose: Frees up memory for active processes, improving system performance.
Context Switching:
- Definition: Saving the state of the currently running process and loading the saved state of the next process. The process state is stored in the Process Control Block (PCB), allowing the old process to later resume from where it left off.
- Overhead: Increases CPU load but enables multitasking.
- Zombie Process: A terminated process whose entry remains in the process table until the parent acknowledges (reaps) it.
- Orphan Process: A child process whose parent has terminated; it is typically adopted by the init system on Unix-based OSes.
RAID (Redundant Array of Independent Disks):
- Definition: A method of storing data across multiple disks for redundancy or performance.
- Types: Includes RAID 0 (striping), RAID 1 (mirroring), RAID 5 (striping with parity), etc.
- Starvation: When a process does not get the resources it needs for a long time because other processes are prioritized.
- Aging: Gradually increases the priority of waiting processes to prevent starvation.
CPU Scheduling:
- Purpose: Determines the order in which processes access the CPU.
- Types: Includes FCFS (First-Come, First-Served), Round Robin, Priority Scheduling, etc.
- Preemptive: The OS can interrupt a running process and reassign the CPU.
- Non-preemptive: Once a process starts, it runs until completion or until it voluntarily releases the CPU.
- FCFS (First-Come, First-Served): Schedules jobs in the order they arrive in the ready queue. It is non-preemptive: a process holds the CPU until it terminates or performs I/O, so longer jobs can delay shorter ones.
- Convoy Effect: Occurs in FCFS when a long process delays the others queued behind it.
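The convoy effect is easy to see numerically. The sketch below (with a hypothetical `fcfs` helper) computes per-process waiting times for FCFS; one 24-unit job at the front of the queue forces two 3-unit jobs to wait far longer than their own burst times.

```python
def fcfs(arrivals, bursts):
    """Per-process waiting times under First-Come, First-Served.
    Processes are assumed to be listed in arrival order."""
    time, waits = 0, []
    for arrival, burst in zip(arrivals, bursts):
        time = max(time, arrival)      # CPU may idle until the job arrives
        waits.append(time - arrival)   # time spent sitting in the ready queue
        time += burst                  # non-preemptive: run to completion
    return waits

# A 24-unit job ahead of two short jobs illustrates the convoy effect:
print(fcfs([0, 1, 2], [24, 3, 3]))  # [0, 23, 25]
```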
Round Robin:
- Definition: Schedules processes in fixed time slices (quanta), rotating through processes to ensure fair allocation of CPU time and prevent starvation. It is cyclic and does not prioritize any process.
- Advantage: Fair and efficient for time-sharing systems.
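The rotation described above can be sketched with a simple queue-based simulation (the `round_robin` function is a hypothetical illustration, assuming all jobs arrive at time 0):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round Robin scheduling; all jobs arrive at time 0.
    Returns each process's completion time."""
    queue = deque(enumerate(bursts))   # (pid, remaining burst time)
    finish = [0] * len(bursts)
    time = 0
    while queue:
        pid, left = queue.popleft()
        run = min(quantum, left)       # run for at most one time slice
        time += run
        left -= run
        if left:
            queue.append((pid, left))  # unfinished: go to the back of the queue
        else:
            finish[pid] = time
    return finish

print(round_robin([5, 3, 1], quantum=2))  # [9, 8, 5]
```

Note that the shortest job (burst 1) finishes at time 5 instead of waiting behind the 5-unit job, which is exactly the fairness property time-sharing systems want.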
Priority Scheduling:
- Definition: Processes are assigned the CPU based on priority levels.
- Challenge: Risk of starvation for lower-priority processes (mitigated by aging).
Concurrency:
- Definition: Multiple processes appear to run simultaneously.
- Achieved By: Multithreading or multitasking within a single CPU.
Race Condition:
- Definition: Two processes access shared data simultaneously, leading to unexpected results.
- Solution: Use locks or other synchronization mechanisms.
Critical Section:
- Definition: A part of code that accesses shared resources and must not be executed by more than one process at a time.
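A minimal sketch of a critical section, using a mutex-style lock so that the read-modify-write on a shared balance can never interleave (the `withdraw` function and `balance` variable are hypothetical):

```python
import threading

balance = 100
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:                 # entry to the critical section
        current = balance      # read shared state
        current -= amount
        balance = current      # write it back; no other thread may interleave
    # lock released here: exit of the critical section

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 50: all five withdrawals applied, none lost
```

Without the lock, two threads could both read 100, both write 90, and one withdrawal would be lost — the race condition defined above.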
- Mutexes: Allow only one thread at a time, preventing concurrent access.
- Condition Variables: Let threads wait until a certain condition is met before proceeding.
- Semaphores: Allow multiple processes to access a resource, up to a limit.
- File Locks: Restrict access to files to prevent conflicts.
Semaphore:
- Definition: A synchronization tool used in operating systems to manage access to shared resources in multi-threaded or multi-process systems. It keeps a count of available resources and uses two atomic operations, wait() and signal(), to control access.
- Binary Semaphore:
  - Takes only the values 0 or 1.
  - Signals availability of a single resource.
- Counting Semaphore:
  - Can take values greater than 1.
  - Controls access to multiple instances of a resource, such as a pool of connections.
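The connection-pool case can be sketched with a counting semaphore initialized to 3: ten threads contend, but the semaphore's wait/signal pair guarantees at most three are ever inside at once (the `use_connection` function and `peak` tracking are hypothetical scaffolding for the demo).

```python
import threading
import time

pool = threading.Semaphore(3)      # counting semaphore: 3 "connections"
active = 0
peak = 0
lock = threading.Lock()            # guards the two counters themselves

def use_connection():
    global active, peak
    with pool:                     # wait(): blocks once 3 threads are inside
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)           # simulate work while holding a connection
        with lock:
            active -= 1
    # leaving the with-block performs signal(), freeing one slot

threads = [threading.Thread(target=use_connection) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # at most 3, despite 10 competing threads
```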
Binary Semaphore vs. Mutex:
- Binary Semaphore:
  - Signals availability of a shared resource (0 or 1).
  - Uses a signaling mechanism; any thread may perform signal().
  - Can be faster in some signaling scenarios with multiple processes.
  - An integer variable holding 0 or 1.
- Mutex:
  - Provides mutual exclusion via a single lock that only its owner may release.
  - Uses a locking mechanism.
  - Can be slower when frequently contended.
  - An object holding lock state and lock-owner information.
Producer-Consumer Problem:
- Definition: A classic synchronization problem in which producer and consumer processes access shared data through a bounded buffer.
- Solution: Use semaphores or mutexes to control access and prevent race conditions.
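A standard sketch of that solution uses two counting semaphores (free slots and filled slots) plus a mutex for the buffer itself; the buffer size of 4 and the `produced`/`consumed` bookkeeping lists are arbitrary choices for the demo.

```python
import threading
from collections import deque

BUF_SIZE = 4
buffer = deque()
empty = threading.Semaphore(BUF_SIZE)  # counts free slots in the buffer
full = threading.Semaphore(0)          # counts filled slots
mutex = threading.Lock()               # guards the buffer itself

consumed = []

def producer():
    for item in range(10):
        empty.acquire()                # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()                 # signal: one more filled slot

def consumer():
    for _ in range(10):
        full.acquire()                 # wait for an item to appear
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()                # signal: one more free slot

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start(); t2.start()
t1.join(); t2.join()
print(consumed)  # [0, 1, ..., 9]: nothing lost, nothing duplicated
```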
Belady's Anomaly:
- Definition: An increase in page faults when the number of page frames allocated to a process is increased, in certain page replacement algorithms.
- Occurs In: The FIFO (First-In, First-Out) page replacement algorithm.
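The anomaly can be reproduced with the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: under FIFO, going from 3 frames to 4 frames increases the fault count from 9 to 10. A minimal fault counter:

```python
from collections import deque

def fifo_page_faults(reference, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    memory = deque()                   # oldest page sits at the left end
    faults = 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()       # evict the oldest resident page
            memory.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(ref, 3))  # 9
print(fifo_page_faults(ref, 4))  # 10 — more frames, yet more faults
```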
Deadlock:
- Definition: A situation where a set of processes is blocked because each process holds resources while waiting to acquire additional resources held by another process.
- Scenario: Two or more processes are unable to proceed because each is waiting for the other to release resources.
- Common Occurrence: In multiprocessing environments, where it can leave the system unresponsive.
Necessary Conditions for Deadlock:
- Mutual Exclusion: At least one resource must be held in a non-shareable mode.
- Hold and Wait: Processes holding resources may wait for additional resources.
- No Preemption: Resources cannot be forcibly taken from a process; they must be released voluntarily.
- Circular Wait: A set of processes exists in which each process waits for a resource held by the next process in the cycle.
Banker's Algorithm:
- Purpose: A deadlock avoidance algorithm used in resource allocation.
- Method: Checks whether resources can be safely allocated without causing a deadlock, by ensuring the system always remains in a safe state.
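The safety check at the heart of the algorithm can be sketched as follows: repeatedly find a process whose remaining need fits within the currently available resources, let it finish, and reclaim its allocation. If every process can finish this way, the state is safe. The matrices below are a classic textbook example, not data from this document.

```python
def is_safe(available, allocation, need):
    """Banker's safety check: return a safe sequence of process indices,
    or None if the state is unsafe. All vectors are per-resource-type lists."""
    work = list(available)             # resources currently free
    n = len(allocation)
    finished = [False] * n
    sequence = []
    while len(sequence) < n:
        for i in range(n):
            if not finished[i] and all(
                need[i][j] <= work[j] for j in range(len(work))
            ):
                # Process i can run to completion and return its resources.
                work = [work[j] + allocation[i][j] for j in range(len(work))]
                finished[i] = True
                sequence.append(i)
                break
        else:
            return None                # no runnable process remains: unsafe
    return sequence

# Classic textbook state: 5 processes, 3 resource types.
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe([3, 3, 2], allocation, need))  # [1, 3, 0, 2, 4] — safe
```

A request is granted only if the state that would result still passes this check; otherwise the process must wait.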
Deadlock Prevention:
- Ensure at least one necessary condition for deadlock cannot hold:
  - Mutual Exclusion: Allow resource sharing where possible.
  - Hold and Wait: Require all resources to be requested upfront.
  - No Preemption: Permit resource preemption.
  - Circular Wait: Impose a strict ordering on resource allocation.

Deadlock Avoidance:
- Dynamically examine resource allocation to prevent circular wait.
- Use the Banker's Algorithm to determine safe states; deny requests that would lead to an unsafe state.

Deadlock Detection:
- Allow the system to enter a deadlock state, then detect it.
- Use a Wait-for Graph to represent wait-for relationships; a cycle indicates a deadlock.
- Employ a Resource Allocation Graph to check for cycles and determine the presence of deadlock.

Deadlock Recovery:
- Terminate one or more processes involved in the deadlock (abruptly or gracefully).
- Preempt resources from some processes and allocate them to others to break the deadlock.
| Parameter | Logical Address | Physical Address |
|---|---|---|
| Basic | Generated by the CPU. | Located in a memory unit. |
| Address Space | Set of all logical addresses generated by the CPU. | Set of all physical addresses corresponding to the logical addresses. |
| Visibility | Visible to the user. | Not visible to the user. |
| Generation | Created by the CPU. | Computed by the Memory Management Unit (MMU). |
Memory Management Unit (MMU):
- Definition: Hardware that translates logical addresses to physical addresses.
Primary Memory:
- Usage: Temporary data storage while the computer is running.
- Access Speed: Faster, as it is directly accessible by the CPU.
- Nature: Volatile; data is lost when power is turned off.
- Cost: More expensive, due to semiconductor technology.
- Capacity: Typically ranges from 16 to 32 GB, suitable for active tasks.
- Examples: RAM, ROM, and cache memory.
Secondary Memory:
- Usage: Permanent data storage, retaining information long-term.
- Access Speed: Slower; not directly accessible by the CPU.
- Nature: Non-volatile; retains data even when power is off.
- Cost: Less expensive, often using magnetic or optical technology.
- Capacity: Can range from a few hundred gigabytes to several terabytes.
- Examples: Hard disk drives, floppy disks, and magnetic tapes.
Cache Memory:
- Definition: Small, fast memory located close to the CPU for quick access to frequently used data.
- Caching: Uses a smaller, faster memory to store copies of data from frequently used main-memory locations. Various independent caches within a CPU store instructions and data, reducing the average time needed to access main memory.
Direct-Mapped Cache:
- Fixed Location: Each block maps to exactly one cache location.
- Simplicity: Simpler and faster due to the fixed placement.

Fully Associative Cache:
- Flexible Location: Any block can be placed in any cache line, providing more flexibility.
- Efficiency: Better cache utilization, but more complex to implement.
Internal Fragmentation:
- Definition: Occurs when allocated memory blocks are larger than a process requires, wasting space within the allocated memory.
- Characteristics:
  - Fixed-sized memory blocks are allocated to processes.
  - The difference between allocated and required memory is wasted.
  - Arises when memory is divided into fixed-sized partitions.
- Solution: Best-fit block allocation to minimize wasted space.
External Fragmentation:
- Definition: Happens when free memory is scattered in small, unusable fragments, preventing the allocation of large contiguous memory blocks.
- Characteristics:
  - Variable-sized memory blocks are allocated to processes.
  - Unused spaces between allocated blocks are too small for new processes.
  - Arises when memory is divided into variable-sized partitions.
- Solution: Compaction, paging, and segmentation to reorganize memory and reduce fragmentation.
Defragmentation / Compaction:
- Definition: The process of rearranging memory to reduce fragmentation.
- Compaction: Collects fragments of available memory into contiguous blocks by moving programs and data in memory or on disk, optimizing memory usage.
Spooling:
- Definition: Temporarily storing data for devices to access when they are ready, such as print jobs.
- Meaning: Spooling stands for Simultaneous Peripheral Operations OnLine: jobs are placed in a buffer (in memory or on disk) where a device can access them when it is ready.
- Purpose: Helps manage the differing data-access rates of devices, ensuring efficient data processing.
Overlays:
- Definition: Loading only the required part of a program into memory, unloading it when done, and loading a new part as needed.
- Purpose: Efficiently manages memory by keeping only the necessary parts of a program in memory at any given time.
- Page Table: Maps logical pages to physical frames, enabling the memory management unit (MMU) to translate addresses.
- Frame: A fixed-size block of physical memory into which pages are loaded.
- Page: A fixed-size block of logical memory that is mapped to a frame in physical memory.
Paging:
- Definition: A memory management technique for non-contiguous allocation that divides logical and physical memory into fixed-size blocks, called pages and frames respectively.
- Purpose:
  - Avoids external fragmentation.
  - Simplifies memory management by using fixed-size blocks.
- Operation: Fetches process pages into main-memory frames as needed, making efficient use of memory resources.
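The address translation the MMU performs can be sketched in a few lines: split the logical address into a page number and an offset, look the page up in the page table, and recombine with the frame number. The 4 KiB page size and the tiny `page_table` mapping are hypothetical values for illustration.

```python
PAGE_SIZE = 4096                       # 4 KiB pages, a common choice
page_table = {0: 5, 1: 9, 2: 3}        # page number -> frame number (made up)

def translate(logical_address):
    """Split a logical address into (page, offset) and map it to a frame."""
    page = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    if page not in page_table:
        raise LookupError(f"page fault: page {page} is not in memory")
    frame = page_table[page]
    return frame * PAGE_SIZE + offset  # physical address

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 36868
```

A lookup for a page missing from the table corresponds to the page fault described later in these notes.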
Segmentation:
- Definition: Divides memory into segments based on logical units such as functions, objects, or data structures.
- Features:
  - Segments are variable-sized, reflecting the logical structure of programs.
  - Provides a more natural view of memory for programmers.
- Purpose: Enhances memory organization by grouping related data and code, improving access and management.
Paging Characteristics:
- Invisible to the Programmer: Memory management is handled by the OS and MMU, not visible in the programming model.
- Fixed-Size Pages: Memory is divided into uniform page sizes, simplifying allocation.
- Procedures and Data: Cannot be separated, as both are stored in fixed-size blocks.
- Virtual Address Space: Allows the virtual address space to exceed physical memory, supporting virtual memory.
- Performance: Faster memory access compared to segmentation.
- Fragmentation: Causes internal fragmentation due to fixed page sizes.
Segmentation Characteristics:
- Visible to the Programmer: Programmers work with segments that correspond to logical units in the code.
- Variable-Size Segments: Segments can differ in size, matching the logical structure of the program.
- Procedures and Data: Can be separated, allowing more flexible memory organization.
- Address Spaces: Breaks programs, data, and code into independent spaces, enhancing modularity.
- Performance: Slower memory access compared to paging, due to variable sizes.
- Fragmentation: Causes external fragmentation as free memory becomes scattered.
Page Fault:
- Definition: Occurs when a program accesses a page that is not currently in physical memory.
- Handling: The OS fetches the required page from secondary memory (e.g., disk) into physical memory, potentially causing a temporary pause in execution.
Virtual Memory:
- Definition: A memory management technique that creates the illusion of a large contiguous address space.
- Features:
  - Extends physical memory using disk space.
  - Allows more programs to run simultaneously.
  - Stores data in pages for efficient memory use.
  - Provides memory protection to ensure process isolation.
  - Managed through methods like paging and segmentation.
  - Acts as temporary storage alongside RAM for processes.
- Purpose: Enhances system performance by using available memory efficiently and supporting multitasking.
Multiprogramming:
- Multiple Programs: Allows multiple programs to run on a single processor.
- Addresses Underutilization: Keeps the CPU busy with multiple jobs, tackling underutilization of the CPU and main memory.
- Coordination: Coordinates the execution of several programs at once.
- Continuous Execution: The main objective is to keep processes running at all times, improving CPU utilization by organizing multiple jobs for continuous execution.
Demand Paging:
- Definition: Loads pages into memory only when they are needed, i.e., when a page fault occurs.
- Operation:
  - Pages are fetched from secondary memory into physical memory on demand.
  - Reduces memory usage by loading only the necessary pages.
- Purpose: Optimizes memory usage and improves system performance by avoiding loading entire processes into memory upfront.
Page Replacement Algorithms:
- Purpose: Manage how pages are swapped in and out of physical memory when a page fault occurs.
- LRU (Least Recently Used):
  - Replaces the page that has not been used for the longest time.
  - Keeps track of page usage over time to make informed replacement decisions.
- FIFO (First-In, First-Out):
  - Replaces the oldest page in memory.
  - Simple to implement, but can perform suboptimally and is subject to Belady's anomaly.
- Optimal:
  - Replaces the page that will not be used for the longest period in the future.
  - Provides the best performance, but is impractical to implement since future requests are unknown.
- LFU (Least Frequently Used):
  - Replaces the page with the lowest access frequency.
  - Tracks the number of accesses over time to determine replacements.
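As one concrete instance of the algorithms above, LRU can be sketched with a recency-ordered list: hits move a page to the most-recent end, and evictions take the least-recent end (the `lru_page_faults` helper and the sample reference string are illustrative).

```python
def lru_page_faults(reference, frames):
    """Count page faults under LRU replacement using a recency list."""
    memory = []                        # most recently used page at the end
    faults = 0
    for page in reference:
        if page in memory:
            memory.remove(page)        # hit: refresh this page's recency
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)          # evict the least recently used page
        memory.append(page)
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]
print(lru_page_faults(ref, 3))  # 8 faults with 3 frames
```

A production implementation would use a hash map plus a doubly linked list (or an ordered dictionary) to make each access O(1) instead of scanning the list.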
Thrashing:
- Definition: Excessive swapping between memory and disk, leading to a severe system slowdown.
- Occurrence: Happens when the computer spends more time handling page faults than executing useful work, degrading performance.
- Cause: A high page-fault rate due to insufficient physical memory, causing frequent swapping.
- Impact:
  - Longer service times.
  - Reduced system efficiency.
  - Potential system unresponsiveness.
- Fragmentation is a critical concept in memory management, with internal and external fragmentation requiring different solutions like best-fit allocation, compaction, and paging.
- Defragmentation and compaction are essential for optimizing memory usage, ensuring that memory is used efficiently.
- Spooling and overlays enhance resource management by buffering data and loading only necessary program parts.
- Understanding paging, segmentation, and their differences is vital for effective memory management and system performance.
- Virtual memory and demand paging enable efficient use of physical memory, supporting multitasking and large applications.
- Page replacement algorithms like LRU, FIFO, Optimal, and LFU are crucial for maintaining system performance by managing memory efficiently.
- Thrashing is a severe issue that occurs due to high page fault rates, emphasizing the importance of adequate memory management.
- Multiprogramming aims to maximize CPU utilization by running multiple programs simultaneously, addressing resource underutilization.