-
Operating System design is concerned with the management of processes and threads:
- Multiprogramming
- Multiprocessing
- Distributed Processing
-
Concurrency refers to:
any form of interaction among processes or threads.
-
Concurrency includes these 6 items:
- communication among processes/threads
- sharing of, and competition for, system resources
- cooperative processing of shared data
- synchronization of process/thread activities
- organized CPU scheduling
- solving deadlock and starvation problems
-
Concurrency arises in the same way at different levels of execution streams:
multiprogramming
--interaction between multiple processes running on one CPU (pseudoparallelism)
-
Concurrency arises in the same way at different levels of execution streams:
multithreading
--interaction between multiple threads running in one process
-
Concurrency arises in the same way at different levels of execution streams:
multiprocessors
--interaction between multiple CPUs running multiple processes/threads (real parallelism)
-
Concurrency arises in the same way at different levels of execution streams:
multicomputers
--interaction between multiple computers running distributed processes/threads
-
Concurrency arises in 3 different contexts:
- Multiple Applications
- Structured Applications
- Operating System Structure
-
Multiple Applications context:
multiprogramming was invented to allow processing time to be shared among active applications
-
Structured Applications context:
extension of modular design and structured programming
-
Operating System Structure context:
OS themselves implemented as a set of processes or threads
-
atomic operation (concurrency):
A function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation.
-
atomic operation: two points:
- The sequence of instructions is guaranteed to execute as a group, or not execute at all, having no visible effect on system state.
- Atomicity guarantees isolation from concurrent processes.
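For illustration (an assumption; the cards do not name a language): C11 exposes such indivisible operations directly:
    #include <stdatomic.h>

    atomic_int counter = 0;     /* shared by all threads */

    void increment(void)
    {
        /* One indivisible read-modify-write: no other thread can
           observe an intermediate state between the load, the add,
           and the store. */
        atomic_fetch_add(&counter, 1);
    }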
-
critical section (concurrency):
A section of code within a process that requires access to shared resources and that must not be executed while another process is in a corresponding section of code.
-
deadlock (concurrency):
A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.
-
livelock (concurrency):
A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work.
-
mutual exclusion (concurrency):
The requirement that when one process is in a critical section that accesses shared resources, no other process may be in a critical section that accesses any of those shared resources.
-
race condition (concurrency):
A situation in which multiple threads or processes read and write a shared data item and the final result depends on the relative timing of their execution.
-
starvation (concurrency):
A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is ready to proceed, it is never chosen.
-
Interleaving and overlapping
- can be viewed as examples of concurrent processing
- both present the same problems
-
Uniprocessor:
the relative speed of execution of processes cannot be predicted
-
Uniprocessor: relative speed depends on:
- the activities of other processes
- the way the OS handles interrupts
- scheduling policies of the OS
-
Difficulties of concurrency (list 3)
- Sharing of global resources
- Difficult for the OS to manage the allocation of resources optimally
- Difficult to locate programming errors as results are not deterministic and reproducible
-
A race condition occurs when
multiple processes or threads read and write shared data items
-
In a race condition the final result depends on
the order of execution.
-
In a race condition the “loser” of the race is...
the process that updates last and will determine the final value of the variable
-
Consequential Race Condition: Lucky CPU Scheduling Example:
-
Consequential Race Condition: Unlucky CPU Scheduling
-
Consequential Race Condition: Unlucky CPU Scheduling --> Local variables
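A hypothetical sketch of the lost-update race behind these cards (the original scheduling diagrams are not reproduced):
    int count = 0;                /* shared global, updated by two threads */

    void worker(void)
    {
        /* "count++" is really three steps: load count; add 1; store.
           Lucky scheduling:   each thread runs all three steps without
           interruption, so the final value is correct.
           Unlucky scheduling: both threads load the same old value and
           one increment is lost.
           A local variable would be immune: each thread would have its
           own copy on its own stack, so nothing is shared. */
        count++;
    }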
-
How to avoid race conditions:
Find a way to keep the instructions together, which means reining in the interleaving and returning to "indivisible" blocks of execution.
-
Indivisible blocks of code are __?__
critical regions.
-
A critical region is a section of code that may be...
executed by only one process or thread at a time.
-
Critical regions are not necessarily the...
- same region of memory or section of code in different processes
-
Critical regions can be protected from concurrent access by.....
padding them with entrance and exit gates.
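A minimal sketch of the gate idea, here using a POSIX mutex as the gates (one possible mechanism; the cards do not prescribe one):
    #include <pthread.h>

    pthread_mutex_t gate = PTHREAD_MUTEX_INITIALIZER;
    int shared = 0;                    /* resource inside the region */

    void update(void)
    {
        pthread_mutex_lock(&gate);     /* entrance gate */
        shared++;                      /* critical region: one thread at a time */
        pthread_mutex_unlock(&gate);   /* exit gate */
    }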
-
Design and management issues raised by the existence of concurrency:
- be able to keep track of various processes
- allocate and de-allocate resources for each active process
- protect the data and physical resources of each process against interference by other processes
- ensure that the functioning of processes and their results are independent of processing speed
-
3 basic interactions of concurrency:
processes unaware of each other—
they must use shared resources independently, without interfering, and leave them intact for the others
-
3 basic interactions of concurrency:
processes indirectly aware of each other--
they work on common data and build some result together via the data (“stigmergy” in biology)
-
3 basic interactions of concurrency:
processes directly aware of each other—
they cooperate by communicating, e.g., exchanging messages
-
Processes unaware of each other: Influence that one process has on the other:
- Results of one process are independent of the actions of the others
- Timing of the process may be affected.
-
Processes indirectly aware of each other: Influence that one process has on the other:
- Results of one process may depend on information from others
- Timing of the process may be affected.
-
Processes directly aware of each other: Influence that one process has on the other:
- Results of one process may depend on information from others
- Timing of the process may be affected.
-
Potential control problems for processes unaware of each other:
- Mutual Exclusion
- Deadlock (renewable resource)
- Starvation
-
Potential control problems for processes indirectly aware of each other:
- Mutual Exclusion
- Deadlock (renewable resource)
- Starvation
- Data coherence
-
Potential control problems for processes directly aware of each other:
- Deadlock (consumable resource)
- Starvation
-
Concurrent processes come into conflict when they are
competing for use of the same resource.
-
Examples of resources that would possibly be competed over:
- I/O devices
- memory
- processor time
- clock
-
In the case of competing processes three control problems must be faced:
- the need for mutual exclusion
- deadlock
- starvation
-
Two points of Cooperation Among Processes by Sharing
- Writing must be mutually exclusive
- Critical sections are used to provide data integrity
-
Cooperation Among Processes by Communication: Messages are passed -- __?__ is not a control requirement
mutual exclusion
-
Cooperation Among Processes by Communication: Two processes may send messages to each other while another process waits for a message, so it's possible to have __?__.
starvation
-
6 Rules: Chart of Mutual Exclusion:
- 1) Mutual exclusion inside: Only one process at a time may be allowed in a critical region.
- 2) No exclusion outside: A process stalled in a non-critical region may not exclude other processes from their critical regions.
- 3) No indefinite occupation: A critical region may only be occupied for a finite amount of time.
- 4) No indefinite delay when barred: A process may only be excluded for a finite amount of time (no deadlock or starvation).
- 5) No delay when about to enter: A critical region free of access may be entered immediately.
- 6) Non-deterministic scheduling: No assumption should be made about the relative speeds of processes
-
Mutual Exclusion by Busy Waiting: disabling hardware interrupts (Good/Bad?)
-
What's so bad about disabling hardware interrupts to achieve mutual exclusion?
- Nothing guarantees that the user process will ever exit the critical region.
- Meanwhile, the CPU cannot interleave any other task, even ones unrelated to this race condition.
- The critical region becomes physically indivisible, when only logical indivisibility is needed.
- This approach does not work on multiprocessors.
-
Mutual Exclusion: Does the disabling hardware interrupts approach work? YES/NO?
YES, but it's a foolish approach because there are too many pitfalls.
-
Mutual Exclusion: Disabling Interrupts two positives:
- works on a uniprocessor system
- disabling interrupts guarantees mutual exclusion
-
Mutual Exclusion: Disabling Interrupts two negatives:
- the efficiency of execution could be noticeably degraded
- this approach will not work in a multiprocessor architecture
-
Mutual Exclusion by Busy Waiting: simple lock variable (Good/Bad?)
- BAD
-
Mutual Exclusion: why is the simple lock variable approach NOT a good one?
- It suffers from the very flaw that needs to be avoided: a race condition.
-
Mutual Exclusion: Simple Lock approach:
- The problem comes from the small gap between testing that the lock is OFF and setting it. Another thread can get scheduled in between these two actions and fall into the gap, so both threads find the lock OFF, both set it, and both enter.
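A minimal sketch of the flawed lock variable (hypothetical C; the names are illustrative), with the fatal gap marked:
    int lock = 0;          /* 0 = OFF, 1 = ON; shared by all threads */

    void enter_region(void)
    {
        while (lock == 1)
            ;              /* busy-wait until the lock appears OFF */
        /* THE GAP: a second thread scheduled right here also sees
           lock == 0; both then set the lock and both enter.       */
        lock = 1;
    }

    void leave_region(void)
    {
        lock = 0;
    }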
-
Monitor: Synchronization is achieved by the use of __?__ __?__ that are contained within the monitor and accessible only within the monitor
condition variables
-
Condition variables are operated on by two functions:
cwait(c) and csignal(c)
-
Monitor: cwait(c):
cwait(c): suspend execution of the calling process on condition c
-
Monitor: csignal(c):
csignal(c): resume execution of some process blocked after a cwait on the same condition
-
Draw out a correct bounded buffer solution using Semaphores.
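One correct sketch, assuming POSIX semaphores and an arbitrary buffer size N (initialize s to 1, n to 0, e to N, e.g. with sem_init):
    #include <semaphore.h>

    #define N 10
    int buffer[N];
    int in = 0, out = 0;     /* next slot to fill / to empty     */

    sem_t s;                 /* mutex for the buffer,  init to 1 */
    sem_t n;                 /* number of items,       init to 0 */
    sem_t e;                 /* number of empty slots, init to N */

    void producer(int item)
    {
        sem_wait(&e);        /* wait for an empty slot  */
        sem_wait(&s);        /* enter critical section  */
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&s);
        sem_post(&n);        /* one more item available */
    }

    int consumer(void)
    {
        int item;
        sem_wait(&n);        /* wait for an item        */
        sem_wait(&s);
        item = buffer[out];
        out = (out + 1) % N;
        sem_post(&s);
        sem_post(&e);        /* one more empty slot     */
        return item;
    }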
-
Draw a correct version of the Infinite Buffer Producer Consumer solution using Semaphores.
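A sketch of the infinite-buffer version; with an unbounded buffer the "empty slots" semaphore disappears (produce/append/take/consume are hypothetical helpers, declared but not defined here):
    #include <semaphore.h>

    int produce(void);           /* hypothetical helpers */
    void append(int item);
    int take(void);
    void consume(int item);

    sem_t s;                     /* buffer mutex, init to 1 */
    sem_t n;                     /* item count,   init to 0 */

    void producer_step(void)
    {
        int item = produce();
        sem_wait(&s);
        append(item);            /* unbounded buffer: no "e" semaphore */
        sem_post(&s);
        sem_post(&n);
    }

    void consumer_step(void)
    {
        int item;
        sem_wait(&n);            /* block until at least one item exists */
        sem_wait(&s);
        item = take();
        sem_post(&s);
        consume(item);
    }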
-
Monitors: Mesa semantics vs Hoare semantics:
- With Mesa semantics, you must always re-check the condition after wait (use "while"), because another process could sneak in before the awakened one resumes.
- With Hoare semantics, the signaled process runs immediately, so "while" can be changed to "if".
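A minimal sketch with POSIX condition variables, which follow Mesa semantics (the "ready" flag is a hypothetical condition):
    #include <pthread.h>

    pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
    int ready = 0;

    void wait_until_ready(void)
    {
        pthread_mutex_lock(&m);
        while (!ready)                  /* Mesa: must re-check after waking */
            pthread_cond_wait(&c, &m);  /* releases m, sleeps, re-acquires  */
        /* with Hoare semantics, "while" could safely be "if" */
        pthread_mutex_unlock(&m);
    }

    void make_ready(void)
    {
        pthread_mutex_lock(&m);
        ready = 1;
        pthread_cond_signal(&c);
        pthread_mutex_unlock(&m);
    }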
-
The semantics of executing a wait in the monitor are that:
several processes can be waiting “inside” the monitor at any given time but only one is executing
- wait queues are internal to the monitor
- there can be multiple wait queues
-
Write a readers and writers solution where readers have priority (semaphores):
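A sketch of the classic readers-have-priority semaphore solution (the actual read/write steps are left as comments; init both semaphores to 1):
    #include <semaphore.h>

    int readcount = 0;
    sem_t x;        /* protects readcount, init to 1 */
    sem_t wsem;     /* writer exclusion,   init to 1 */

    void reader(void)
    {
        sem_wait(&x);
        if (++readcount == 1)
            sem_wait(&wsem);   /* first reader locks out writers */
        sem_post(&x);

        /* ... read the shared data ... */

        sem_wait(&x);
        if (--readcount == 0)
            sem_post(&wsem);   /* last reader lets writers back in */
        sem_post(&x);
    }

    void writer(void)
    {
        sem_wait(&wsem);
        /* ... write the shared data ... */
        sem_post(&wsem);
    }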
-
What is the major drawback of the readers-have-priority design for readers/writers?
- Once one reader has access, other readers can access the file without having to acquire the wsem semaphore
- Can lead to writer starvation
-
Readers/Writers problem: Example where the writer has priority.
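A sketch of the writers-priority variant, following the usual five-semaphore structure (all semaphores initialized to 1; data access left as comments):
    #include <semaphore.h>

    int readcount = 0, writecount = 0;
    sem_t x, y;       /* protect readcount / writecount        */
    sem_t z;          /* at most one reader queues on rsem     */
    sem_t rsem;       /* held by writers to bar new readers    */
    sem_t wsem;       /* one writer (or the readers) at a time */

    void reader(void)
    {
        sem_wait(&z);
        sem_wait(&rsem);
        sem_wait(&x);
        if (++readcount == 1) sem_wait(&wsem);
        sem_post(&x);
        sem_post(&rsem);
        sem_post(&z);

        /* ... read the shared data ... */

        sem_wait(&x);
        if (--readcount == 0) sem_post(&wsem);
        sem_post(&x);
    }

    void writer(void)
    {
        sem_wait(&y);
        if (++writecount == 1) sem_wait(&rsem);  /* first writer bars readers */
        sem_post(&y);

        sem_wait(&wsem);
        /* ... write the shared data ... */
        sem_post(&wsem);

        sem_wait(&y);
        if (--writecount == 0) sem_post(&rsem);
        sem_post(&y);
    }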
-
Message Passing: Indirect Addressing: 3 points:
- Messages are sent to a shared data structure consisting of queues that can temporarily hold messages
- Queues are referred to as mailboxes
- One process sends a message to the mailbox and the other process picks up the message from the mailbox.
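A sketch of indirect addressing using POSIX message queues as the mailbox (the queue name "/mbox" is an arbitrary assumption; on Linux, link with -lrt):
    #include <fcntl.h>
    #include <mqueue.h>
    #include <string.h>

    void send_to_mailbox(void)
    {
        mqd_t mb = mq_open("/mbox", O_WRONLY | O_CREAT, 0600, NULL);
        const char *msg = "hello";
        mq_send(mb, msg, strlen(msg) + 1, 0);   /* post to the mailbox    */
        mq_close(mb);
    }

    void receive_from_mailbox(void)
    {
        char buf[8192];                         /* >= the queue's msgsize */
        mqd_t mb = mq_open("/mbox", O_RDONLY);
        mq_receive(mb, buf, sizeof buf, NULL);  /* pick the message up    */
        mq_close(mb);
    }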
-
Message Passing: Direct Addressing:
- Send primitive includes a specific identifier of the destination process
- Receive primitive can be handled in one of two ways:
- (1) require that the process explicitly designate a sending process --effective for cooperating concurrent processes
- (2) implicit addressing --source parameter of the receive primitive possesses a value returned when the receive operation has been performed
-
Requirements for Mutual Exclusion:
- Must be enforced.
- A process that halts must do so without interfering with other processes.
- No deadlock or starvation.
- A process must not be denied access to a critical section when there is no other process using it.
- No assumptions are made about relative process speeds or number of processes
- A process remains inside its critical section for a finite time only.
-
Concurrency: multiprogramming (short defn):
interaction between multiple processes running on one CPU (pseudoparallelism)
-
Concurrency: multithreading (short defn):
interaction between multiple threads running in one process
-
Concurrency: multiprocessors (short defn):
interaction between multiple CPUs running multiple processes/threads (real parallelism)
-
Concurrency: multicomputers (short defn):
interaction between multiple computers running distributed processes/threads
-
mutual exclusion (defn):
The requirement that when one process is in a critical section that accesses shared resources, no other process may be in a critical section that accesses any of those shared resources.
-
atomic operation (defn):
A function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation.
-
deadlock (defn):
A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.
-
livelock (defn):
A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work.
-
race condition (defn):
A situation in which multiple threads or processes read and write a shared data item and the final result depends on the relative timing of their execution.
-
starvation (defn):
A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.
-