## Chapter 1: Introduction to Operating System Concepts
- What is an operating system? The operating system is a program that efficiently manages the resources (hardware) of a computer system. The OS is in charge of efficiently managing several resources, such as the CPU, main memory, secondary storage, and I/O devices, so that users can work simultaneously and complete their desired tasks.
- > The operating system provides an environment in which the user can execute programs in an efficient and convenient manner.
- **User View:** an OS designed for ease of use, with some attention to performance and little attention to resource utilization.
- Resource utilization becomes important when multiple users connect to a minicomputer or mainframe through terminals.
- **System View:** the operating system is the resource allocator, managing CPU time, memory space, I/O devices, and file-storage space. The operating system can also act as a control program, controlling the various I/O devices and user programs.
- > Main memory is a **volatile** storage device that loses its contents when power is turned off or otherwise lost.
- Although there is no universally accepted definition of an OS, the one program running at all times on the computer is usually called the kernel. Besides the kernel there are two other types of programs: **system programs**, which are associated with the OS but are not part of the kernel, and **application programs**, which include all programs not associated with the operation of the system.
- The occurrence of an event is usually signaled by an **interrupt**. Software-generated interrupts are referred to as system calls or monitor calls.
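- As a small, user-level analogy only (a hedged sketch assuming a POSIX system): the program below registers a handler for an asynchronous software event, a signal, loosely mirroring the way the kernel dispatches interrupts to interrupt handlers.

```c
/* User-level analogy to interrupt handling: react to an asynchronous
 * event (SIGINT, sent by Ctrl-C) with a registered handler. */
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_signal = 0;

static void on_sigint(int signo) {
    (void)signo;
    got_signal = 1;   /* just record the event; do real work outside the handler */
}

int main(void) {
    signal(SIGINT, on_sigint);   /* register the handler for SIGINT */
    while (!got_signal)
        pause();                 /* sleep until a signal arrives */
    printf("caught SIGINT, exiting cleanly\n");
    return 0;
}
```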
- A **device driver** provides the OS with an interface to the device through the **device controller**. The device controller moves data between the peripheral devices that it controls and its local buffer storage.
- The advantages of multi-processor systems: increased throughput, economy of scale, and increased reliability.
- Two kinds of multi-processor systems:
+ **Asymmetric MP**: each processor is assigned a task and there is a boss processor which controls the other processors.
+ **Symmetric MP (SMP)**: each processor performs all tasks within the OS.
- **Clustered Systems:** can be structured asymmetrically or symmetrically. Such systems provide high availability and high-performance computing. Clustered systems are useful for achieving parallelization, as in the case of map/reduce programs.
- **Distributed Lock Manager (DLM):** a function used in clustering technology to supply access control and locking to ensure that no conflicting operations occur.
- **Job pool:** storage for jobs, usually kept on disk rather than in main memory because the pool is typically too large to fit in main memory.
- **Multiprogramming** is essential to avoid leaving the CPU idle; it allows maximum and efficient use of the CPU since some job is always executing.
- **Time sharing (multitasking)** - an extension of multiprogramming in which the system provides an interface through which the user can interact with the computer while programs run.
- > A program loaded into memory and executing is called a process.
- **Job scheduling** - the technique a system uses to decide which jobs to bring into memory for execution. This is important in time-shared OSs, where multiple users submit multiple jobs that all require CPU time. Managing the jobs in memory also requires memory management. CPU scheduling involves efficiently deciding which job from the list of READY jobs will run first.
- Reasonable response time can be achieved by **swapping** or **virtual memory**.
- Privileged instructions are those that are illegal to execute in user mode; they may only run in kernel mode. Examples of such instructions include switching to kernel mode, I/O control, timer management, and interrupt management.
- > System calls provide the means for a user program to ask the operating system to perform tasks reserved for the operating system on the user program's behalf.
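- A hedged sketch (assuming a POSIX environment): the program below uses the `write()` system call to ask the kernel to perform I/O on its behalf; the actual device access happens in kernel mode.

```c
/* Minimal system-call example: write() traps into the kernel, which performs
 * the privileged I/O on the user program's behalf. */
#include <string.h>   /* strlen() */
#include <unistd.h>   /* write(), STDOUT_FILENO */

int main(void) {
    const char *msg = "Hello from user mode\n";
    if (write(STDOUT_FILENO, msg, strlen(msg)) < 0)
        return 1;     /* the kernel reported an error */
    return 0;
}
```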
- OS for Process management (a minimal sketch of process creation follows this list):
+ Scheduling processes and threads on the CPUs
+ Creating and deleting both user and system processes
+ Suspending and resuming processes
+ Providing mechanisms for process synchronization
+ Providing mechanisms for process communication
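- A minimal sketch (assuming a POSIX system) of the process-management responsibilities above: the parent creates a child with `fork()`, the child runs another program with `execlp()`, and the parent suspends itself in `waitpid()` until the child terminates.

```c
/* Process creation, execution, and waiting on a POSIX system. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>   /* waitpid() */
#include <unistd.h>     /* fork(), execlp(), _exit() */

int main(void) {
    pid_t pid = fork();              /* create a new process */
    if (pid < 0) {
        perror("fork");
        exit(EXIT_FAILURE);
    } else if (pid == 0) {
        /* child: replace its image with the "ls" program */
        execlp("ls", "ls", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        _exit(EXIT_FAILURE);
    } else {
        /* parent: block until the child completes */
        int status;
        waitpid(pid, &status, 0);
        printf("child %d finished\n", (int)pid);
    }
    return 0;
}
```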
- OS for Memory Management (a minimal sketch of allocation and deallocation follows this list):
+ Keeping track of which parts of memory are currently being used and who is using them
+ Deciding which processes and data to move into and out of memory
+ Allocating and deallocating memory space as needed
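- A minimal sketch (assuming a Linux-like system where `MAP_ANONYMOUS` is available) of allocating and deallocating memory: the process requests a region from the OS with `mmap()` and returns it with `munmap()`.

```c
/* Ask the kernel for memory and give it back, mirroring the OS's
 * allocate/deallocate responsibility. */
#include <stdio.h>
#include <sys/mman.h>   /* mmap(), munmap() */

int main(void) {
    size_t len = 4096;   /* one page, assuming a typical 4 KB page size */
    /* request an anonymous, private, read/write region from the kernel */
    char *region = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (region == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    region[0] = 'x';                        /* the memory is now usable */
    printf("page mapped at %p\n", (void *)region);
    munmap(region, len);                    /* return the memory to the OS */
    return 0;
}
```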
- An OS can be viewed from several points of view: its **services**, its **interface**, and its **components and interconnections**.
- The CPU cannot execute anything stored in secondary storage (e.g., magnetic tapes and hard disks). The CPU only executes what is in main memory. The program stored on the hard disk must be brought into memory first in order for the CPU to access and execute it.
- How is CPU time shared among users in a multiprogramming system? This is something the OS must decide; managing the memory blocks is likewise handled by the operating system.
- Main memory and secondary storage have different access mechanisms. The **device driver** is responsible for fetching instructions or data from secondary storage and passing them to main memory, so that they can be accessed and executed by the CPU.
- Input/output devices - the devices used to move information or instructions into or out of a computer system (e.g., keyboard and monitor). Hard disks also act as input devices, since they contain data blocks that are read into main memory for execution. Similarly, they serve as output devices when the output of an executed program is saved to them. Essentially, secondary storage can be considered a special type of I/O device.
- How does the OS operate in a distributed computing system?
- The part of the OS that manages the CPU is also known as the **process manager**. A **process** is a program in execution. The OS manages the CPU so that it can be shared by more than one process at a time. The CPU cannot guarantee the immediate execution of a job; every job has a **waiting time** and a **turnaround time**. The waiting time is the time taken for the CPU to start executing a job after it is first submitted, *(ti - t0)*, where *ti* is the time the CPU starts executing the job and *t0* is the time the job was submitted. The turnaround time is the time taken for the CPU to finish executing a job after it is first submitted, *(tp - t0)*.
- Ideally, the user wants the waiting and turnaround times to be as small as possible, even though the CPU may be busy executing other tasks.
- Usually, jobs are scheduled to execute in a **"first come, first served"** manner, i.e., in their sequence of arrival. The OS controls what is executed next, but not the order in which jobs arrive, which is determined by the users. By scheduling jobs according to how long they take rather than the order in which they arrive, the average waiting time per job can be reduced significantly. That is, the OS tries to optimize for the best average waiting time and chooses that schedule for the jobs specified by the user(s). Jobs with shorter execution times run first, while jobs with longer times run last. This type of scheduling is commonly known as **"Shortest Job First"**.
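- A minimal sketch of the comparison above, using three hypothetical jobs (the burst times are made-up values) and the waiting-time and turnaround-time definitions given earlier, with all jobs submitted at *t0 = 0*:

```c
/* Compare average waiting and turnaround times for the same three jobs
 * scheduled in arrival order (FCFS) and in shortest-job-first order (SJF). */
#include <stdio.h>

static void report(const char *name, const int burst[], int n) {
    int clock = 0, total_wait = 0, total_turn = 0;
    for (int i = 0; i < n; i++) {
        int wait = clock;                 /* job i starts when the CPU frees up */
        int turnaround = clock + burst[i];/* finish time minus submission (t0 = 0) */
        total_wait += wait;
        total_turn += turnaround;
        clock += burst[i];
    }
    printf("%s: avg waiting = %.2f, avg turnaround = %.2f\n",
           name, (double)total_wait / n, (double)total_turn / n);
}

int main(void) {
    int fcfs[] = {24, 3, 3};   /* jobs in arrival order */
    int sjf[]  = {3, 3, 24};   /* same jobs, shortest first */
    report("FCFS", fcfs, 3);   /* avg waiting = (0 + 24 + 27) / 3 = 17 */
    report("SJF ", sjf, 3);    /* avg waiting = (0 + 3 + 6) / 3 = 3 */
    return 0;
}
```

- Compiling and running this (e.g., `gcc sched.c && ./a.out`, file name assumed) shows the average waiting time dropping from 17 to 3 time units once the shortest jobs run first, which is exactly the improvement Shortest Job First aims for.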