Operating System Organization Chapter 5 CSE 422s

  1. Operating System design is concerned with the management of processes and threads:
    • Multiprogramming
    • Multiprocessing
    • Distributed Processing
  2. Concurrency refers to:
    any form of interaction among processes or threads.
  3. Concurrency includes these 6 items:
    • communication among processes/threads
    • sharing of, and competition for, system resources
    • cooperative processing of shared data
    • synchronization of process/thread activities
    • organized CPU scheduling
    • solving deadlock and starvation problems
  4. Concurrency arises in the same way at different levels of execution streams:
    multiprogramming
    --interaction between multiple processes running on one CPU (pseudoparallelism)
  5. Concurrency arises in the same way at different levels of execution streams:
    multithreading
    --interaction between multiple threads running in one process
  6. Concurrency arises in the same way at different levels of execution streams:
    multiprocessors
    --interaction between multiple CPUs running multiple processes/threads (real parallelism)
  7. Concurrency arises in the same way at different levels of execution streams:
    multicomputers
    --interaction between multiple computers running distributed processes/threads
  8. Concurrency arises in 3 different contexts:
    • Multiple Applications
    • Structured Applications
    • Operating System Structure
  9. Multiple Applications context:
    multiprogramming was invented to allow processing time to be shared among active applications
  10. Structured Applications context:
    extension of modular design and structured programming
  11. Operating System Structure context:
    operating systems themselves are implemented as a set of processes or threads
  12. atomic operation (concurrency):
    A function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation.
  13. atomic operation: two points:
    • The sequence of instructions is guaranteed to execute as a group, or not execute at all, having no visible effect on system state.
    • Atomicity guarantees isolation from concurrent processes.
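    A minimal C11 sketch of this guarantee (the counter name and function are illustrative, not from the source): a plain increment compiles to a load/add/store sequence that another thread can interleave with, while atomic_fetch_add performs the whole read-modify-write indivisibly.

        #include <stdatomic.h>

        atomic_int hits = 0;   /* shared counter */

        void record_hit(void) {
            /* A plain `hits++` on a non-atomic int is load, add, store: three
               steps another thread could interleave with. atomic_fetch_add
               executes the read-modify-write as one indivisible operation,
               so no other thread ever observes an intermediate state. */
            atomic_fetch_add(&hits, 1);
        }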
  14. critical section (concurrency):
    A section of code within a process that requires access to shared resources and that must not be executed while another process is in a corresponding section of code.
  15. deadlock (concurrency):
    A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.
  16. livelock (concurrency):
    A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work.
  17. mutual exclusion (concurrency):
    The requirement that when one process is in a critical section that accesses shared resources, no other process may be in a critical section that accesses any of those shared resources.
  18. race condition (concurrency):
    A situation in which multiple threads or processes read and write a shared data item and the final result depends on the relative timing of their execution.
  19. starvation (concurrency):
    A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is ready to proceed, it is never chosen.
  20. Interleaving and overlapping
    • can be viewed as examples of concurrent processing
    • both present the same problems
  21. Uniprocessor
    the relative speed of execution of processes cannot be predicted
  22. Uniprocessor depends on:
    • activities of other processes
    • the way the OS handles interrupts
    • scheduling policies of the OS
  23. Difficulties of concurrency (list 3):
    • Sharing of global resources
    • Difficult for the OS to manage the allocation of resources optimally
    • Difficult to locate programming errors as results are not deterministic and reproducible
  24. A race condition occurs when
    multiple processes or threads read and write data items
  25. In a race condition the final result depends on
    the order of execution.
  26. In a race condition the “loser” of the race is...
    the process that updates last and will determine the final value of the variable
  27. Consequential Race Condition: Lucky CPU Scheduling Example:
    (image not preserved)
  28. Consequential Race Condition: Unlucky CPU Scheduling:
    (image not preserved)
  29. Consequential Race Condition: Unlucky CPU Scheduling --> Local variables:
  30. (image not preserved; a sketch of the same effect follows below)
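    The lost images presumably showed "lucky" and "unlucky" interleavings of an unprotected shared-variable update; a sketch of the same effect, assuming POSIX threads and an illustrative shared counter:

        #include <pthread.h>
        #include <stdio.h>

        int balance = 0;                    /* shared, unprotected */

        void *deposit(void *arg) {
            for (int i = 0; i < 100000; i++)
                balance++;                  /* load, add, store: preemption between
                                               these steps silently loses updates */
            return NULL;
        }

        int main(void) {
            pthread_t a, b;
            pthread_create(&a, NULL, deposit, NULL);
            pthread_create(&b, NULL, deposit, NULL);
            pthread_join(a, NULL);
            pthread_join(b, NULL);
            /* Lucky scheduling prints 200000; unlucky scheduling prints less. */
            printf("balance = %d\n", balance);
            return 0;
        }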
  31. How to avoid race conditions:
    Find a way to keep the instructions together, which means reverting from too much interleaving and returning to "indivisible" blocks of execution.
  32. Indivisible blocks of code are __?__
    critical regions.
  33. A critical region is a section of code that may be...
    executed by only one process or thread at a time.
  34. critical regions are not necessarily the
    • same region of memory or section of code in different processes:
    • (image not preserved)
  35. Critical regions can be protected from concurrent access by.....
    bracketing them with entrance and exit gates.
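    One concrete form of those gates, assuming POSIX threads (the names are illustrative): the lock call is the entrance gate and the unlock call is the exit gate around the critical region.

        #include <pthread.h>

        pthread_mutex_t gate = PTHREAD_MUTEX_INITIALIZER;
        int shared_count = 0;

        void enter_update_leave(void) {
            pthread_mutex_lock(&gate);    /* entrance gate: blocks until region is free */
            shared_count++;               /* critical region: one thread at a time */
            pthread_mutex_unlock(&gate);  /* exit gate: admits the next waiting thread */
        }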
  36. Design and management issues raised by the existence of concurrency:
    • be able to keep track of various processes
    • allocate and de-allocate resources for each active process
    • protect the data and physical resources of each process against interference by other processes
    • ensure that the processes and outputs are independent of the processing speed
  37. 3 basic interactions of concurrency:
    processes unaware of each other--
    they must use shared resources independently, without interfering, and leave them intact for the others
  38. 3 basic interactions of concurrency:
    processes indirectly aware of each other--
    they work on common data and build some result together via the data (“stigmergy” in biology)
  39. 3 basic interactions of concurrency:
    processes directly aware of each other--
    they cooperate by communicating, e.g., exchanging messages
  40. Processes unaware of each other: Influence that one process has on the other:
    • Results of one process independent of the action of the others
    • Timing of processes may be affected.
  41. Processes indirectly aware of each other: Influence that one process has on the other:
    • Results of one process may depend on information from others
    • Timing of processes may be affected.
  42. Processes directly aware of each other: Influence that one process has on the other:
    • Results of one process may depend on information from others
    • Timing of processes may be affected.
  43. Potential control problems for processes unaware of each other:
    • Mutual Exclusion
    • Deadlock (renewable resource)
    • Starvation
  44. Potential control problems for processes indirectly aware of each other:
    • Mutual Exclusion
    • Deadlock (renewable resource)
    • Starvation
    • Data coherence
  45. Potential control problems for processes directly aware of each other:
    • Deadlock (consumable resource)
    • Starvation
  46. Concurrent processes come into conflict when they are
    competing for use of the same resource.
  47. Examples of  resources that would possibly be competed over:
    • I/O devices
    • memory
    • processor time
    • clock
  48. In the case of competing processes three control problems must be faced:
    • the need for mutual exclusion
    • deadlock
    • starvation
  49. Two points of Cooperation Among Processes by Sharing
    • Writing must be mutually exclusive
    • Critical sections are used to provide data integrity
  50. Cooperation Among Processes by Communication: Messages are passed -- __?__ is not a control requirement
    mutual exclusion
  51. Cooperation Among Processes by Communication: Two processes sending messages to each other while another process waits for a message, so it's possible to have __?__.
    starvation
  52. 6 Rules: Chart of Mutual Exclusion:
    • 1) Mutual exclusion inside: Only one process at a time may be allowed in a critical region.
    • 2) No exclusion outside: A process stalled in a non-critical region may not exclude other processes from their critical regions.
    • 3) No indefinite occupation: A critical region may only be occupied for a finite amount of time.
    • 4) No indefinite delay when barred: A process may only be excluded for a finite amount of time (no deadlock or starvation).
    • 5) No delay when about to enter: A critical region free of access may be entered immediately.
    • 6) Non-deterministic scheduling: No assumption should be made about the relative speeds of processes
  53. Mutual Exclusion by Busy Waiting: simple lock variable implementation (Good/Bad?)
    (image not preserved)
  54. Mutual Exclusion by Busy Waiting: disabling hardware interrupts (Good/Bad?)
    (image not preserved)
  55. Whats so bad about disabling hardware interrupts to achieve mutual exclusion?
    • Nothing to guarantee that the user process is going to ever exit the critical region.
    • Meanwhile, the CPU cannot interleave any other task, even unrelated to this race condition.
    • The critical region becomes one physically indivisible block, not merely a logically indivisible one.
    • This approach does not work on multiprocessors.
  56. Mutual Exclusion: Does the disabling hardware interrupts approach work? YES/NO?
    YES, but it is a foolish approach because there are too many pitfalls.
  57. Mutual Exclusion: Disabling Interrupts two positives:
    • works on uni-processor system
    • disabling interrupts guarantees mutual exclusion
  58. Mutual Exclusion: Disabling Interrupts two negatives:
    • the efficiency of execution could be noticeably degraded
    • this approach will not work in a multiprocessor architecture
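    For reference, this is roughly what the technique looks like inside the Linux kernel (a sketch, not something a user process can do; local_irq_save/local_irq_restore mask interrupts on the current CPU only, which is exactly why the approach fails on multiprocessors):

        #include <linux/irqflags.h>

        void update_shared_state(void)
        {
            unsigned long flags;

            local_irq_save(flags);      /* disable interrupts on this CPU */
            /* critical region: no interrupt handler can preempt us here,
               but a thread on ANOTHER CPU can still run concurrently */
            local_irq_restore(flags);   /* restore previous interrupt state */
        }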
  59. Mutual Exclusion by Busy Waiting: simple lock variable (Good/Bad?)
    • BAD:
    • (image not preserved; see the sketch below)
  60. Mutual Exclusion: why is the simple lock variable approach NOT a good one?
    • suffers from the very flaw that needs to be avoided: a race condition.
  61. Mutual Exclusion: Simple Lock approach:
    • The problem comes from the small gap between testing that the lock is OFF and setting the lock.
    • It is possible that another thread gets scheduled in between these two actions and falls into the gap.
    • So they both find the lock OFF and then they both set it and enter.
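    A sketch of the flawed lock and of one standard fix (names are illustrative): C11's atomic_flag_test_and_set fuses the test and the set into a single indivisible step, closing the gap described above.

        #include <stdatomic.h>

        /* BROKEN: a gap exists between the test and the set */
        volatile int lock = 0;                 /* 0 = OFF, 1 = ON */

        void broken_acquire(void) {
            while (lock == 1) { /* spin */ }   /* step 1: test that lock is OFF */
            /* another thread can be scheduled right here and also see lock == 0 */
            lock = 1;                          /* step 2: set; both threads enter */
        }

        /* CORRECT: test and set happen as one atomic operation */
        atomic_flag flag = ATOMIC_FLAG_INIT;

        void acquire(void) {
            while (atomic_flag_test_and_set(&flag)) { /* spin until 0 -> 1 succeeds */ }
        }

        void release(void) {
            atomic_flag_clear(&flag);
        }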
  62. Monitor: Synchronization is achieved by the use of __?__ __?__ that are contained within the monitor and accessible only within the monitor
    condition variables
  63. Condition variables are operated on by two functions:
    • cwait(c)
    • csignal(c)
  64. Monitor: cwait(c):
    cwait(c): suspend execution of the calling process on condition c
  65. Monitor: csignal(c):
    csignal(c): resume execution of some process blocked after a cwait on the same condition
  66. Draw out a correct bounded buffer solution using Semaphores.
    (image not preserved; see the sketch below)
  67. Draw a correct version of the Infinite Buffer Producer/Consumer solution using Semaphores.
    (image not preserved; see the sketch below)
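    The images can't be recovered, but the standard bounded-buffer solution they presumably showed looks like this; a sketch using POSIX semaphores, with the buffer size N and the item type as assumptions. For the infinite-buffer version, simply drop the `empty` semaphore: the producer never blocks.

        #include <semaphore.h>

        #define N 10
        int buffer[N];
        int in = 0, out = 0;

        sem_t empty;   /* counts free slots;   initialize to N */
        sem_t full;    /* counts filled slots; initialize to 0 */
        sem_t mutex;   /* guards the buffer;   initialize to 1 */

        /* Initialization, e.g. in main():
           sem_init(&empty, 0, N); sem_init(&full, 0, 0); sem_init(&mutex, 0, 1); */

        void producer(int item) {
            sem_wait(&empty);              /* block if no free slot */
            sem_wait(&mutex);              /* enter critical region */
            buffer[in] = item;
            in = (in + 1) % N;
            sem_post(&mutex);              /* leave critical region */
            sem_post(&full);               /* announce one more filled slot */
        }

        int consumer(void) {
            sem_wait(&full);               /* block if nothing to consume */
            sem_wait(&mutex);
            int item = buffer[out];
            out = (out + 1) % N;
            sem_post(&mutex);
            sem_post(&empty);              /* announce one more free slot */
            return item;
        }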
  68. Monitors: Mesa semantics vs. Hoare semantics:
    • For Mesa semantics, you always need to re-check the condition after wait (use "while") -- another process could sneak in between the signal and the waiter resuming.
    • For Hoare semantics, the signaled process runs immediately, so the "while" can be changed to an "if".
  69. The semantics of executing a wait in the monitor are that:
    several processes can be waiting "inside" the monitor at any given time, but only one is executing

    • wait queues are internal to the monitor
    • there can be multiple wait queues
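    POSIX condition variables follow Mesa semantics, so the wait sits in a "while" loop exactly as card 68 prescribes; a sketch of cwait/csignal in pthreads terms (the counter and names are illustrative):

        #include <pthread.h>

        pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
        pthread_cond_t nonempty = PTHREAD_COND_INITIALIZER;  /* condition c */
        int count = 0;

        void consume_one(void) {
            pthread_mutex_lock(&m);               /* enter the monitor */
            while (count == 0)                    /* Mesa: re-test after every wakeup */
                pthread_cond_wait(&nonempty, &m); /* cwait(c): release m and sleep */
            count--;                              /* safe: condition just re-verified */
            pthread_mutex_unlock(&m);             /* leave the monitor */
        }

        void produce_one(void) {
            pthread_mutex_lock(&m);
            count++;
            pthread_cond_signal(&nonempty);       /* csignal(c): wake one waiter */
            pthread_mutex_unlock(&m);
        }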
  70. Write a readers and writers solution where readers have priority (semaphores):
    (image not preserved; see the sketch below)
  71. What is the major drawback (minus) of the readers-have-priority design for readers/writers?
    • Once one reader has access, other readers can access the file without having to acquire the wsem semaphore
    • Can lead to writer starvation
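    The lost image presumably showed the classic semaphore solution; a sketch in that style, which also makes the starvation drawback above visible: wsem is held by the reader population as a whole, so writers wait as long as readers keep arriving.

        #include <semaphore.h>

        sem_t x;          /* protects readcount;            initialize to 1 */
        sem_t wsem;       /* gate for writers/first reader; initialize to 1 */
        int readcount = 0;

        void reader(void) {
            sem_wait(&x);
            if (++readcount == 1)       /* first reader locks out writers */
                sem_wait(&wsem);
            sem_post(&x);

            /* READ: any number of readers may be here at once */

            sem_wait(&x);
            if (--readcount == 0)       /* last reader readmits writers */
                sem_post(&wsem);
            sem_post(&x);
        }

        void writer(void) {
            sem_wait(&wsem);
            /* WRITE: exclusive access */
            sem_post(&wsem);
        }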
  72. Readers/Writers problem: example where the writer has priority.
  73. Message Passing: Indirect Addressing: 3 points/steps of:
    • Messages are sent to a shared data structure consisting of queues that can temporarily hold messages
    • Queues are referred to as mailboxes
    • One process sends a message to the mailbox and the other process picks up the message from the mailbox.
  74. Message Passing: Direct Addressing:
    • Send primitive includes a specific identifier of the destination process
    • Receive primitive can be handled in one of two ways:
    • (1) require that the process explicitly designate a sending process --effective for cooperating concurrent processes
    • (2) implicit addressing --source parameter of the receive primitive possesses a value returned when the receive operation has been performed
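    One concrete mailbox mechanism, assuming POSIX message queues (the queue name "/mbox" and the message size are illustrative; link with -lrt on older glibc):

        #include <mqueue.h>
        #include <fcntl.h>
        #include <string.h>
        #include <sys/types.h>

        static struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 256 };

        /* Sender: addresses the mailbox, not the receiving process */
        void send_to_mailbox(const char *text) {
            mqd_t mq = mq_open("/mbox", O_WRONLY | O_CREAT, 0644, &attr);
            mq_send(mq, text, strlen(text) + 1, 0);   /* priority 0 */
            mq_close(mq);
        }

        /* Receiver: picks up whatever message is waiting in the mailbox;
           the buffer must be at least mq_msgsize (here 256) bytes */
        ssize_t receive_from_mailbox(char buf[256]) {
            mqd_t mq = mq_open("/mbox", O_RDONLY | O_CREAT, 0644, &attr);
            ssize_t n = mq_receive(mq, buf, 256, NULL);
            mq_close(mq);
            return n;
        }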
  75. Requirements for Mutual Exclusion:
    • Mutual exclusion must be enforced.
    • A process that halts must do so without interfering with other processes.
    • No deadlock or starvation.
    • A process must not be denied access to a critical section when there is no other process using it.
    • No assumptions are made about relative process speeds or number of processes
    • A process remains inside its critical section for a finite time only.
  76. Concurrency: multiprogramming (short defn):
    interaction between multiple processes running on one CPU (pseudoparallelism)
  77. Concurrency: multithreading (short defn):
    interaction between multiple threads running in one process
  78. Concurrency: multiprocessors (short defn):
    interaction between multiple CPUs running multiple processes/threads (real parallelism)
  79. Concurrency: multicomputers (short defn):
    interaction between multiple computers running distributed processes/threads
  80. mutual exclusion (defn):
    The requirement that when one process is in a critical section that accesses shared resources, no other process may be in a critical section that accesses any of those shared resources.
  81. atomic operation (defn):
    A function or action implemented as a sequence of one or more instructions that appears to be indivisible; that is, no other process can see an intermediate state or interrupt the operation.
  82. deadlock (defn):
    A situation in which two or more processes are unable to proceed because each is waiting for one of the others to do something.
  83. livelock (defn):
    A situation in which two or more processes continuously change their states in response to changes in the other process(es) without doing any useful work.
  84. race condition (defn):
    A situation in which multiple threads or processes read and write a shared data item and the final result depends on the relative timing of their execution.
  85. starvation (defn):
    A situation in which a runnable process is overlooked indefinitely by the scheduler; although it is able to proceed, it is never chosen.