Readers-Writers Problem solution using semaphores
Two semaphores: one (rw_mutex) that the writer acquires and releases for exclusive access, another (mutex) used only by the readers to protect their bookkeeping, plus a variable (read_count) that keeps track of how many readers are active.
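A minimal C sketch of this, assuming POSIX semaphores; rw_mutex, mutex, and read_count are the names used above, sem_init calls and the actual read/write work are left out:

#include <semaphore.h>

sem_t rw_mutex;      /* writer's exclusive access; initialize to 1 */
sem_t mutex;         /* protects read_count;       initialize to 1 */
int read_count = 0;  /* number of readers currently reading */

void writer(void) {
    sem_wait(&rw_mutex);      /* wait until no readers and no other writer */
    /* ... perform the write ... */
    sem_post(&rw_mutex);
}

void reader(void) {
    sem_wait(&mutex);
    read_count++;
    if (read_count == 1)
        sem_wait(&rw_mutex);  /* first reader locks out writers */
    sem_post(&mutex);
    /* ... perform the read ... */
    sem_wait(&mutex);
    read_count--;
    if (read_count == 0)
        sem_post(&rw_mutex);  /* last reader lets writers back in */
    sem_post(&mutex);
}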
Dining Philosopher Problem Using Monitor
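No code was written on this card; below is a hedged sketch that approximates the monitor solution in C with a pthread mutex playing the role of the monitor lock and one condition variable per philosopher. pickup/putdown/test follow the usual state-based monitor solution:

#include <pthread.h>

enum { THINKING, HUNGRY, EATING } state[5];
pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;  /* the "monitor" lock */
pthread_cond_t self[5];                          /* initialize each with pthread_cond_init() */

static void test(int i) {
    /* philosopher i may eat only if hungry and neither neighbor is eating */
    if (state[i] == HUNGRY &&
        state[(i + 4) % 5] != EATING &&
        state[(i + 1) % 5] != EATING) {
        state[i] = EATING;
        pthread_cond_signal(&self[i]);
    }
}

void pickup(int i) {
    pthread_mutex_lock(&m);
    state[i] = HUNGRY;
    test(i);
    while (state[i] != EATING)            /* wait until allowed to eat */
        pthread_cond_wait(&self[i], &m);
    pthread_mutex_unlock(&m);
}

void putdown(int i) {
    pthread_mutex_lock(&m);
    state[i] = THINKING;
    test((i + 4) % 5);                    /* maybe let the left neighbor eat */
    test((i + 1) % 5);                    /* maybe let the right neighbor eat */
    pthread_mutex_unlock(&m);
}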
Dining philosopher Problem using semaphores
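A hedged sketch of the semaphore version in C: one semaphore per chopstick, each initialized to 1. Note this naive version can deadlock if all five philosophers grab their left chopstick at the same time:

#include <semaphore.h>

sem_t chopstick[5];   /* each initialized to 1 with sem_init() */

void philosopher(int i) {
    while (1) {
        /* think */
        sem_wait(&chopstick[i]);            /* pick up left chopstick */
        sem_wait(&chopstick[(i + 1) % 5]);  /* pick up right chopstick */
        /* eat */
        sem_post(&chopstick[i]);            /* put down left chopstick */
        sem_post(&chopstick[(i + 1) % 5]);  /* put down right chopstick */
    }
}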
Bounded Buffer Consumer and Producer Code, Using semaphores
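A hedged C sketch using POSIX semaphores: empty counts free slots (initialized to BUFFER_SIZE), full counts filled slots (initialized to 0), and mutex (initialized to 1) protects the buffer; int items are used for brevity:

#include <semaphore.h>

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
sem_t empty;   /* free slots,   initialize to BUFFER_SIZE */
sem_t full;    /* filled slots, initialize to 0 */
sem_t mutex;   /* buffer lock,  initialize to 1 */

void producer(int item) {
    sem_wait(&empty);                    /* wait for a free slot */
    sem_wait(&mutex);
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE;
    sem_post(&mutex);
    sem_post(&full);                     /* one more filled slot */
}

int consumer(void) {
    sem_wait(&full);                     /* wait for a filled slot */
    sem_wait(&mutex);
    int item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    sem_post(&mutex);
    sem_post(&empty);                    /* one more free slot */
    return item;
}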
(1) occurs when two or more processes are stuck in a state where they are waiting for each other to release a resource, and none of them can move forward or complete their tasks.
Deadlock
Semaphore implementation with no busy waiting
typedef struct{
int value;
struct process *list;
}semaphore;
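Using the struct above, the usual textbook wait()/signal() sketch blocks the caller instead of spinning. This stays at the textbook pseudocode level: block(), wakeup(), and the list operations are assumed kernel primitives, so it is not directly compilable:

void wait(semaphore *S) {
    S->value--;
    if (S->value < 0) {
        /* add this process to S->list */
        block();                 /* suspend the calling process */
    }
}

void signal(semaphore *S) {
    S->value++;
    if (S->value <= 0) {
        /* remove a process P from S->list */
        wakeup(P);               /* move P to the ready queue */
    }
}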
Implementation of Semaphores
Mutex lock acquire() and release() code
while(true){
acquire();
//critical section
release();
}
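A sketch of the textbook acquire()/release() built on a shared flag; in practice the test and the assignment in acquire() must be performed atomically (e.g., with a hardware instruction), otherwise two threads could both see the lock as free:

int available = 1;        /* 1 = lock is free */

void acquire(void) {
    while (!available)
        ;                 /* busy wait (spinlock) */
    available = 0;        /* must happen atomically with the test above */
}

void release(void) {
    available = 1;
}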
atomic variable increment() function
using compare_and_swap()
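A sketch: increment() keeps retrying a compare_and_swap() until no other thread changed the value in between. The compare_and_swap() shown here only illustrates the semantics; real code would use an atomic hardware/compiler intrinsic:

/* illustrative only: a real CAS is a single atomic hardware instruction */
int compare_and_swap(int *value, int expected, int new_value) {
    int temp = *value;
    if (*value == expected)
        *value = new_value;
    return temp;
}

void increment(int *v) {
    int temp;
    do {
        temp = *v;                                           /* read current value */
    } while (temp != compare_and_swap(v, temp, temp + 1));   /* retry if it changed */
}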
Bounded Waiting mutual exclusion using compare_and_swap
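The standard bounded-waiting entry/exit protocol for process i, sketched in C; waiting[] and lock are shared, N is a hypothetical process count, and compare_and_swap() is assumed atomic as in the sketch above:

enum { N = 5 };      /* hypothetical number of processes */
int waiting[N];      /* all entries start at 0 */
int lock = 0;

void process(int i) {
    while (1) {
        waiting[i] = 1;
        int key = 1;
        while (waiting[i] && key == 1)
            key = compare_and_swap(&lock, 0, 1);   /* spin until lock acquired or handed over */
        waiting[i] = 0;

        /* critical section */

        int j = (i + 1) % N;
        while (j != i && !waiting[j])              /* find the next waiting process */
            j = (j + 1) % N;
        if (j == i)
            lock = 0;                              /* nobody waiting: release the lock */
        else
            waiting[j] = 0;                        /* hand the lock directly to process j */

        /* remainder section */
    }
}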
These are variables that support atomic operations, meaning that read, write, or update actions occur without interruption from other threads or processes. They help achieve synchronization by ensuring safe concurrent access to shared data.
Atomic Variable
Some CPUs provide special instructions, like Test-and-Set, Compare-and-Swap, or Fetch-and-Add, that allow atomic (uninterruptible) read-modify-write operations. These instructions can be used to implement locks or synchronization primitives.
Hardware instructions:
(1): These are special instructions that enforce a specific order of memory operations, preventing reordering and ensuring data consistency between different CPU cores or threads. They help synchronize shared data access.
Memory Barrier
3 Hardware Solutions to Synchronization
1. Memory barrier
2. Hardware Instruction
3. Atomic Variable
Peterson's critical-section solution code (works well for 2 processes but doesn't scale)
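A sketch for two processes i and j = 1 - i; flag[] and turn are shared. (On modern hardware it also needs memory barriers to work reliably.)

int flag[2] = {0, 0};   /* flag[k] == 1 means process k wants to enter */
int turn;

void process(int i) {
    int j = 1 - i;                  /* the other process */
    while (1) {
        flag[i] = 1;
        turn = j;                   /* politely give the other process the turn */
        while (flag[j] && turn == j)
            ;                       /* busy wait while the other wants in and it's its turn */

        /* critical section */

        flag[i] = 0;

        /* remainder section */
    }
}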
What are the 3 conditions a solution to the critical-section problem must satisfy?
Mutual Exclusion - Only one process can use the critical section at a time.
Progress - Processes wanting to enter the critical section will eventually get a turn.
Bounded Waiting - After a process requests entry, there is a bound on the number of times other processes may enter the critical section before it is granted access.
Producer and Consumer Race condition problem code
In the shared-counter version, the producer waits while counter == BUFFER_SIZE and the consumer waits while counter == 0. The race condition arises because both update the shared counter (counter++ and counter--) without synchronization.
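A sketch of the racy version (int items for brevity): each of counter++ and counter-- is really a load, modify, store, so an unlucky interleaving of the two threads can leave counter wrong.

#define BUFFER_SIZE 10

int buffer[BUFFER_SIZE];
int in = 0, out = 0;
int counter = 0;        /* shared; updated by BOTH threads -> race condition */

void producer(void) {
    int next_produced;
    while (1) {
        /* produce an item in next_produced */
        while (counter == BUFFER_SIZE)
            ;                          /* buffer full: do nothing */
        buffer[in] = next_produced;
        in = (in + 1) % BUFFER_SIZE;
        counter++;                     /* not atomic: load, add, store */
    }
}

void consumer(void) {
    int next_consumed;
    while (1) {
        while (counter == 0)
            ;                          /* buffer empty: do nothing */
        next_consumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        counter--;                     /* not atomic: races with counter++ */
    }
}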
The entire program does NOT need to be in memory to execute
True
A hardware device that, at run time, maps virtual addresses to physical addresses
Memory Management Unit
(1) the set of all logical addresses generated by a program; (2) the set of all physical addresses corresponding to those logical addresses
Logical address space
Physical address space
Multi-Step processing of a user program
Memory Protection
We need to make sure that a process can only access the addresses that have been allocated to it (enforced, for example, with base and limit registers).
Earliest Deadline First (EDF) scheduling: what do I have to look for?
Write down all the deadlines. Whenever a deadline point is reached, ask which of the ready processes has the earliest deadline. If the currently running process has a later deadline than the one that just arrived, the current process gets preempted.
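Worked example (hypothetical numbers): P1 has period 50 and burst 25; P2 has period 80 and burst 35. At t = 0 both are ready and P1's deadline (50) is earlier, so P1 runs first, then P2. At t = 50 P1 arrives again with deadline 100, which is later than P2's deadline of 80, so P2 keeps the CPU (no preemption) and P1 runs once P2 finishes.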
Rate-Monotonic formulas
If (1) total CPU utilization <= (2) the worst-case utilization bound for N processes,
then rate-monotonic scheduling is guaranteed to schedule all the processes (every deadline is met).
(1): U = sum over i of C_i / P_i, where C_i is the execution (burst) time and P_i is the period of process i
(2): N(2^(1/N) - 1)
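Worked example (hypothetical numbers): P1 has period 50 and burst 20; P2 has period 100 and burst 35. U = 20/50 + 35/100 = 0.40 + 0.35 = 0.75, and the bound for N = 2 is 2(2^(1/2) - 1) ≈ 0.828. Since 0.75 <= 0.828, rate-monotonic scheduling is guaranteed to meet both deadlines.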
Soft-Real Time Vs Hard-Real Time
Soft real-time: critical tasks get the highest priority, but there is no guarantee as to when they will be scheduled.
Hard real-time: a task must be serviced by its deadline.
(1) - when a thread accesses memory, it may need to wait for the data to become available from memory.
(1) -- Memory stall
(1) – each processor is self-scheduling; either all processes are in a common ready queue, or each processor has its own private queue of ready processes
Symmetric Multiprocessing
(1)– only one processor accesses the system data structures, alleviating the need for data sharing
Asymmetric multiprocessing
Multi-level Queue
The ready queue is partitioned into separate queues, e.g.:
Foreground (interactive)
Background (batch)
Once the CPU has been allocated to a process, the process keeps the CPU until it voluntarily releases the CPU. Is this a case of preemptive or non-preemptive scheduling?
Non-Preemptive
CPU scheduling decisions may take place under 4 circumstances; for which of them do we know for sure that scheduling is nonpreemptive (the CPU waits for the process to give it up)?
(1) When a process moves from running to ready
(2) When a process moves from waiting to ready
(3) When a process terminates
(4) When a process moves from running to waiting
Only (3) and (4) are guaranteed to be nonpreemptive; scheduling at (1) and (2) only happens under preemptive scheduling.
Threads code: anything you remember from the syntax
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

int sum = 0;                         /* shared data, protected by lock */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *runner(void *param);           /* thread start routine */

int main(int argc, char *argv[]) {
    pthread_attr_t attr;
    pthread_t thread[2];

    pthread_attr_init(&attr);                          /* default attributes */
    pthread_create(&thread[0], &attr, runner, argv[1]);
    pthread_create(&thread[1], &attr, runner, argv[1]);
    pthread_join(thread[0], NULL);                     /* wait for both threads */
    pthread_join(thread[1], NULL);
    printf("%d\n", sum);
    return 0;
}

void *runner(void *param) {
    int num = atoi(param);            /* upper bound passed in as a string */
    pthread_mutex_lock(&lock);        /* protect the shared sum */
    for (int i = 0; i <= num; i++)
        sum += i;
    pthread_mutex_unlock(&lock);
    pthread_exit(0);
}
Multi-threaded Server Architecture
What do threads share (1) and what do they have of their own (2)?
(1) data, code, OS resources (e.g., open files)
(2) stack, registers
What is a Socket in Client server Communication
An endpoint for communication: the concatenation of an IP address (host) and a port number
Example: 161.25.19.8:1625 where 1625 is the port # and 161.25.19.8 is the IP address.
Ports below 1024 are well-known ports
Buffering: Messages attached to links can be implemented in different ways
*Rendezvous: 0 capacity, the sender must wait for the receiver
*Bounded capacity: finite length of n messages; if the link is full, the sender must wait
*Unbounded capacity: the sender never waits
Interprocess Message Passing
send(message)
receive(message)
Direct Message
send(P, message) // Send message to process P
receive(Q, message) // Receive a message from process Q
Indirect messaging: uses a mailbox as an intermediary; mailboxes have unique IDs and act much like a socket
send(A, message) // Send message to mailbox A
receive(A, message) // Receive message from mailbox A
Bounded Buffer Shared Memory Solution The Consumer Code
#define BUFFER_SIZE 10
typedef struct {
...
}item;
item buffer[BUFFER_SIZE];
int in=0;
int out=0;
item next_consumed;
while (true) {
    while (in == out)
        ;                              /* do nothing -- buffer is empty */
    next_consumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    /* consume the item in next_consumed */
}
Bounded Buffer Shared Memory Solution The producer Code
#define BUFFER_SIZE 10
typedef struct {
...
}item;
item buffer[BUFFER_SIZE];
int in=0;
int out=0;
item next_produced;
while (true) {
    /* produce an item in next_produced */
    while ((in + 1) % BUFFER_SIZE == out)
        ;                              /* do nothing -- buffer is full */
    buffer[in] = next_produced;
    in = (in + 1) % BUFFER_SIZE;
}
Cooperating processes need IPC (Interprocess Communication). What are the 2 models of IPC?
Shared memory: requires synchronization; good for moving larger amounts of data
Message passing: useful for exchanging smaller amounts of data
How many ways can parent and child processes run?
Either they run concurrently,
or the parent waits for the child to finish running.
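A minimal sketch of both cases using fork()/wait(); the child runs concurrently (here it just exec's ls as an example) while the parent chooses to wait for it:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();               /* create a child process */

    if (pid < 0) {                    /* fork failed */
        fprintf(stderr, "fork failed\n");
        return 1;
    } else if (pid == 0) {            /* child: runs concurrently with the parent */
        execlp("/bin/ls", "ls", NULL);
    } else {                          /* parent: here it chooses to wait for the child */
        wait(NULL);
        printf("Child complete\n");
    }
    return 0;
}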
Parent and child processes have 3 resource-sharing options:
They share all resources
The parent shares a subset of its resources with the child
They do not share any resources
Medium-term scheduler
Can be added if the degree of multiprogramming needs to be decreased
selects which processes should be brought into the ready queue
Long Term Scheduler
Which scheduling queue is the set of processes waiting for an I/O device?
Device queue
Which scheduling queue is the set of all processes residing in main memory, ready and waiting to execute?
Ready queue
Which scheduling queue is the set of all processes in the system?
Job queue
Context Switch Diagram
What information related to the process does it hold?
Process state,
program counter,
CPU registers,
CPU scheduling info: priorities,
memory-management info: the memory allocated to this process,
I/O status information
Holds information associated with each process
PCB: Process Control Block
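A hypothetical C sketch of what a PCB might contain; the field names and sizes are illustrative only, not taken from any real kernel:

struct pcb {
    int   pid;                 /* process identifier */
    int   state;               /* new, ready, running, waiting, terminated */
    void *program_counter;     /* address of the next instruction */
    long  registers[16];       /* saved CPU registers (illustrative size) */
    int   priority;            /* CPU scheduling information */
    void *memory_base;         /* memory-management information */
    long  memory_limit;
    int   open_files[16];      /* I/O status information */
};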
What are the process states
New: the process is being created
Ready: the process is waiting to be assigned to a processor
Waiting: the process is waiting for some event to occur
Running: instructions are being executed on the processor
Terminated: the process has finished its execution
When does a program become a process
When its executable file is loaded into memory
A program in execution; the execution must progress in a sequential fashion
Process
Linux system structure
Mnemonic: fi - ne - block - CPU - Sch - meMa - chaDe (file systems, networking, block devices, CPU scheduler, memory manager, character devices)
Microkernel System structure:
After many years of research it became clear which functionality is essential to the kernel; everything else can be moved out and executed in user mode.
What are the 5 Structure(Architecture) Possible for OSes
1 --> Monolithic
2 --> Layered
3 --> Modular
4 --> Hybrid
5 --> Microkernel
System call parameter passing
There are 3 main ways of passing parameters to system calls:
1 --> Simplest: Passing the parameter directly to the registers, but it might happen that we have more parameters than registers.
2--> Storing the parameters in a block or table in memory, and then passing the address of that memory block to the register
3--> Using a stack, pushing the parameters onto the stack by the user program, and the stack getting popped by the OS.
System Call Sequence to Copy from one file to another
AWAAWAOICILRWUCWT
Acquire Input file name
Write Prompt to screen
Accept Input
Acquire the Output file name
Write Prompt on the screen
Accept Input
Open input file
if the file doesn't exist, abort
Create output file
if file exists abort
Loop
Read from the input file
Write to the output file
Until read fails
Close output file
Write completion message
Terminate
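A hedged C sketch of the open/loop/close portion of this sequence using POSIX calls; O_EXCL makes the create step fail if the output file already exists, matching the "abort" cases above:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int copy_file(const char *src, const char *dst) {
    int in = open(src, O_RDONLY);
    if (in < 0) { perror("input file"); return -1; }               /* doesn't exist: abort */

    int out = open(dst, O_WRONLY | O_CREAT | O_EXCL, 0644);
    if (out < 0) { perror("output file"); close(in); return -1; }  /* already exists: abort */

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof buf)) > 0)   /* loop until read fails / end of file */
        write(out, buf, n);

    close(in);
    close(out);
    printf("copy complete\n");                    /* write completion message */
    return 0;
}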
Virtualization
Allows OSes to run applications within other OSes. It is a technique used in computing to create a virtual version of something, such as a computer, an operating system, or an application. This virtual version behaves like the real thing, but it is actually a software-based representation of it.
Computing Environment: Cloud Computing
Delivers computing, storage, and even apps as a service across a network
Computing Environment:
Peer-to-Peer Computing
Clients (peers) communicate directly with one another and share data that way; there is no dedicated server, and each node can act as both client and server.
Computing Environment:
Client Server
Server --> Network -->(Branches to ) many clients
Clustered System
Much like a multiprocessor system, but made of multiple computers working together that share the same storage via a storage-area network (SAN)
(3) a fixed-size group of bytes, typically determined by the CPU's architecture
Word
Used for high-speed I/O devices that enable data transfer at near-memory speeds. It allows a device controller to transfer data blocks from buffer storage to main memory without CPU intervention, reducing overhead. DMA generates one interrupt per block instead of per byte, improving efficiency.
Direct Memory Access (DMA)
Transition From user to Kernel Mode
---(1)--- operation, consisting of user mode and kernel mode, helps protect the OS and system components. A hardware-provided ---(2)--- differentiates between user and kernel code execution. Privileged instructions are executable only in kernel mode; system calls switch to kernel mode and return to user mode afterward.
(1): Dual-mode
(2): mode bit
Process scheduling queueing diagram (there are 2 versions)
Start with Ready Queue ---> CPU
Process State Graph
Interrupt Driven I/O Cycle Diagram
* Selects a process from the processes in memory that are ready to execute
* Allocates the CPU to that process
* Determines which process to run next, based on scheduling algorithm and process priority levels
CPU Scheduler
special instruction to invoke the OS, by generating an interrupt, to perform an OS-related service
Or
Interface to the services provided by OS
System Calls
* Contains the address of the instruction to be executed next by the CPU
* A hardware CPU register
* The interrupt hardware architecture automatically saves PC value whenever an interrupt occurs
Program counter
*Stores important system information about each process
*Includes process state, program counter, CPU registers, and memory management information
*Used by the operating system to manage and schedule processes
PCB, process control block
* Identifies the type of interrupt
* Used as an index into the interrupt vector
* Looks up the address of the service routine for each interrupt
Interrupt number
A system bus connects the major components of a computer system, including:
data bus: to carry information
address bus: to determine where it should be sent or read from
control bus: to determine its operation
Device controller informs CPU that it has finished its operation by
causing an -------
interrupt
The device driver for each device moves data from/to ---(1)---
to/from ---(2)---
(1) main memory
(2) local buffers
Computer System Organization: One or more CPUs and device controllers connect through a common bus providing access to shared memory. What is the graph?
“The one program running at all times on the computer”; everything else is either a system program or an application program
is the kernel.
Some computers have little or no user interface; we find them in cars, microwaves, etc.
Embedded system
Computer System Diagram
Computer System Components
1. Hardware: provides basic computing resources like CPU, memory, and I/O devices.
2. Operating system: controls and coordinates the use of the hardware among various applications and users.
3. Application programs: define the ways in which the system resources are used to solve computing problems like word processors, compilers, web browsers, database systems, and video games.
4. Users: refer to people, machines, and other computers that interact with the system.