Process

Breaking down a sequential algorithm:

  1. Human programmer: Decomposition and scheduling
  2. OS and libraries: Some scheduling and mapping of tasks on cores

A single process: its own address space (code, data, heap, stack), its own registers and program counter, and OS resources such as open files.

Multiprogramming/Multitasking processes

Switching between process contexts incurs overhead: the OS must save one process's state and restore another's. This time is wasted from the application's point of view.

Communication between processes is expensive: each process has its own address space, so data must be passed through the kernel (e.g., pipes or sockets) or through explicitly set-up shared memory.

To support multiprogramming, the OS provides mechanisms for creating processes and switching between them.

Unix system calls

#include <iostream>
#include <unistd.h>
using namespace std;

int main() {
    pid_t child_pid = fork();   // after fork(), both parent and child run this code
    if (child_pid == 0) {
        cout << "i'm in child" << endl;
    } else {
        cout << "i'm the parent with child " << child_pid << endl;
    }
}
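
As noted above, processes do not share memory, so communication has to go through the kernel. A minimal sketch of inter-process communication using a pipe (error handling omitted; the message text is only illustrative):

#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    int fds[2];
    pipe(fds);                        /* fds[0] = read end, fds[1] = write end */

    if (fork() == 0) {                /* child: send a message through the pipe */
        const char *msg = "hello from child";
        write(fds[1], msg, strlen(msg) + 1);
        return 0;
    }

    char buf[64];
    read(fds[0], buf, sizeof buf);    /* parent: blocks until the child writes */
    printf("parent received: %s\n", buf);
    return 0;
}

Every byte crosses the kernel here, which is part of why communication between processes costs more than sharing memory between threads.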


Threads

Multiple independent control flows in one process

A thread is a sequential execution stream within a process; each thread has its own program counter, registers, and stack, but shares the process's address space with the other threads.

Creating a thread does not require copying the address space; only a new stack and register state need to be set up.

This corresponds to a shared-memory architecture: all threads in a process share data without any extra setup.

User-level threads are created and scheduled by a thread library in user space; the kernel is unaware of them.

Kernel threads are created and scheduled by the OS.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

void *print_message_function( void *ptr );

int main() {
    pthread_t thread1, thread2;
    char *message1 = "Thread 1";
    char *message2 = "Thread 2";
    int iret1, iret2;

    /* Create independent threads, each of which will execute the function */
    iret1 = pthread_create( &thread1, NULL, print_message_function, (void*) message1);
    iret2 = pthread_create( &thread2, NULL, print_message_function, (void*) message2);

    /* NOTE: only the main thread runs here; the two new threads are in print_message_function */

    /* Wait till threads are complete before main continues. Unless we */
    /* wait we run the risk of executing an exit which will terminate  */
    /* the process and all threads before the threads have completed. */
    pthread_join( thread1, NULL);
    pthread_join( thread2, NULL);
    printf("Thread 1 returns: %d\n", iret1);
    printf("Thread 2 returns: %d\n", iret2);
    exit(0);
}

void *print_message_function( void *ptr ) {
    char *message = (char *) ptr;
    printf("%s \n", message);
    return NULL;
}

Number of threads: usually about 2x the number of cores. At some point the overhead of creating and switching threads exceeds the time saved. A good question: where is that inflection point?
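
A common starting point is to derive the thread count from the core count at run time. A small sketch for POSIX systems (the 2x factor is only the rule of thumb above; finding the real inflection point still requires measuring):

#include <stdio.h>
#include <unistd.h>

int main(void) {
    long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* number of cores currently online */
    long nthreads = 2 * cores;                   /* rule-of-thumb starting point */
    printf("cores: %ld, threads to try: %ld\n", cores, nthreads);
    return 0;
}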

Synchronization

When two concurrent threads access a shared variable, the accesses must be controlled to prevent incorrect behaviour (a race condition).

Race Condition

  1. Two or more threads/processes access a shared resource without synchronization
  2. At least one of them modifies the resource

The fix is to put the conflicting accesses inside a critical section, so only one thread executes them at a time.
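
A minimal sketch of the problem: two threads increment a shared counter without synchronization, so the final value is usually below the expected total (the counts here are only illustrative). Compile with -pthread:

#include <pthread.h>
#include <stdio.h>

long counter = 0;                        /* shared resource */

void *increment(void *unused) {
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* read-modify-write: not atomic */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);  /* typically lower */
    return 0;
}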

Mutex (Mutual exclusion)

A mutex creates larger atomic blocks: the code between lock and unlock cannot be interleaved with another thread executing the same critical section.
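
Applied to the counter example above, a pthread mutex protects each increment; a sketch:

#include <pthread.h>
#include <stdio.h>

long counter = 0;
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *increment(void *unused) {
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);       /* enter the critical section */
        counter++;                       /* only one thread at a time runs this */
        pthread_mutex_unlock(&lock);     /* leave the critical section */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);  /* now reliably 2000000 */
    return 0;
}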

Critical section solutions must provide

  1. Mutual exclusion [Safety]
  2. Progress [Liveness]
  3. Bounded waiting (no starvation) [Liveness]
  4. Performance

Mechanisms

Problems

Deadlock: every process is waiting for a resource held by another waiting process, so none of them can proceed.

Starvation: one process never makes progress because other processes keep acquiring the resource it needs.

Livelock: processes keep changing state in response to each other but make no progress, like two people in a corridor who repeatedly step aside to let each other pass.
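
A classic way deadlock arises is two threads taking the same two locks in opposite order. A sketch (the lock and function names are only illustrative):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

void *worker1(void *arg) {
    pthread_mutex_lock(&lock_a);         /* holds A ... */
    pthread_mutex_lock(&lock_b);         /* ... and waits for B */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *worker2(void *arg) {
    pthread_mutex_lock(&lock_b);         /* holds B ... */
    pthread_mutex_lock(&lock_a);         /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);              /* may never return if both threads block */
    pthread_join(t2, NULL);
    printf("no deadlock this run\n");
    return 0;
}

Acquiring the locks in the same global order in every thread removes the circular wait.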

Patterns (to solve classical synchronization problems)

NEVER WAIT IN THE CRITICAL SECTION: a thread that blocks while holding the lock stops every other thread that needs the lock.
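
The usual way to follow this rule with pthreads is a condition variable: pthread_cond_wait atomically releases the mutex while the thread sleeps and re-acquires it on wakeup, so the lock is never held while waiting. A sketch of a consumer waiting for a producer (the flag and the function names are only illustrative):

#include <pthread.h>
#include <stdio.h>

pthread_mutex_t lock  = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t  ready = PTHREAD_COND_INITIALIZER;
int data_available = 0;

void *producer(void *arg) {
    pthread_mutex_lock(&lock);
    data_available = 1;                      /* produce inside the critical section */
    pthread_cond_signal(&ready);             /* wake a waiting consumer */
    pthread_mutex_unlock(&lock);
    return NULL;
}

void *consumer(void *arg) {
    pthread_mutex_lock(&lock);
    while (!data_available)                  /* re-check the condition after every wakeup */
        pthread_cond_wait(&ready, &lock);    /* releases the lock while sleeping */
    printf("consumed\n");
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&c, NULL, consumer, NULL);
    pthread_create(&p, NULL, producer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}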