Category: Notes

  • Barrier Synchronization (Part 2/2)


Part 1 of barrier synchronization covers my notes on the first couple of types of synchronization barriers: the naive centralized barrier and the slightly more advanced tree barrier. This post is a continuation and covers the three other barriers: the MCS barrier, the tournament barrier, and the dissemination barrier.

    Summary

In the MCS tree barrier, there are two separate data structures that must be maintained. The first data structure (a 4-ary tree, each node containing a maximum of four children) handles the arrival of the processes, and the second data structure handles the signaling and waking up of all the other processes. In a nutshell, each parent node holds pointers to its children's structures, allowing the parent process to wake up the children once everyone has arrived.

The tournament barrier constructs a tree too, and at each level two processes compete against one another. These competitions, however, are fixed: the algorithm predetermines which process will advance to the next round. The winners percolate up the tree, and at the topmost level, the final winner signals and wakes up the loser. This waking up of the losers repeats at each lower level until all nodes are woken up.

The dissemination protocol reminds me of a gossip protocol. With this algorithm, all nodes detect convergence (i.e. all processes arrived) once every process receives a message from all other processes (this is the key takeaway); a process receives one (and only one) message per round. The communication complexity of this algorithm is O(n log n): there are ⌈log₂ n⌉ rounds, and during each round n messages are sent, one from each node to its ordained neighbor.

    The algorithms described thus far share a common requirement: they all require sense reversal.

    MCS Tree Barrier (Binary Wakeup)

    MCS Tree barrier with its “has child” vector

    Summary

Okay, I think I understand what's going on. There are two separate data structures that need to be maintained for the MCS tree barrier. The first data structure handles the arrival (this is the 4-ary tree) and the second (a binary tree) handles the signaling and waking up of all the other processes. The reason why the latter works so well is that, by design, we know the position of each of the nodes, and each parent holds pointers to its children, allowing it to easily signal the wake up.
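Here's how I'd sketch the per-processor record and barrier routine in C. The field names are my own, not the paper's, and it's deliberately simplified: plain volatile flags, no memory barriers, and initialization (child_not_ready[i] = have_child[i], wakeup_sense starting opposite the first sense) is assumed to happen elsewhere.

```c
#define N_CHILDREN 4

/* One record per processor; both trees are laid out statically,
   so every spin location is fixed. */
typedef struct mcs_node {
    int have_child[N_CHILDREN];               /* which arrival slots are real        */
    volatile int child_not_ready[N_CHILDREN]; /* arrival (4-ary): cleared by children */
    volatile int *parent_slot;                /* my slot in parent's child_not_ready */
    volatile int wakeup_sense;                /* wakeup (binary): parent flips this  */
    volatile int *wakeup_child[2];            /* my two wakeup children, or NULL     */
} mcs_node;

void mcs_barrier(mcs_node *me, int sense)
{
    /* Arrival: wait for all my 4-ary-tree children, then report upward. */
    for (int i = 0; i < N_CHILDREN; i++)
        while (me->child_not_ready[i])
            ;
    for (int i = 0; i < N_CHILDREN; i++)          /* re-arm for the next episode */
        me->child_not_ready[i] = me->have_child[i];
    if (me->parent_slot) {
        *me->parent_slot = 0;                     /* tell my parent I've arrived */
        while (me->wakeup_sense != sense)         /* wakeup: spin on my own flag */
            ;
    }
    for (int i = 0; i < 2; i++)
        if (me->wakeup_child[i])
            *me->wakeup_child[i] = sense;         /* wake my binary-tree children */
}
```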

    Tournament Barrier

    Tournament Barrier – fixed competitions. Winner holds the responsibility to wake up the losers

    Summary

Construct a tree, and at the lowest level are all the nodes (i.e. processors); processors compete with one another in pairs, although each round is fixed, fixed in the sense that the winner is predetermined. The spin location is statically determined at every level.

    Tournament Barrier (Continued)

    Summary

    Two important aspects: arrival moves up the tree with match fixing. Then each winner is responsible for waking up the “losers”, traversing back down. Curious, what sort of data structure? I can see an array or a tree …
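Sketching an answer to my own question: a pair of flag arrays indexed by round and process seems like enough. Here's a rough C sketch (the layout is my own guess, not the lecture's actual data structure; it assumes N is a power of 2, ignores memory barriers, and reuse would need sense reversal):

```c
#define N 8                      /* number of processes (assumed power of 2) */
#define ROUNDS 3                 /* log2(N) */

volatile int arrive[ROUNDS][N];  /* loser tells the fixed winner it arrived */
volatile int wakeup[ROUNDS][N];  /* winner releases its loser               */

void tournament_barrier(int i)
{
    int k;
    /* Arrival: climb the tree; at round k the match is fixed by bit k of i. */
    for (k = 0; k < ROUNDS; k++) {
        int mask = 1 << k;
        if (i & mask) {                  /* predetermined loser this round */
            arrive[k][i ^ mask] = 1;     /* notify my (fixed) winner       */
            while (!wakeup[k][i])        /* spin at a static location      */
                ;
            wakeup[k][i] = 0;            /* re-arm my wakeup flag          */
            break;                       /* losers stop climbing           */
        }
        while (!arrive[k][i])            /* predetermined winner: wait for */
            ;                            /* my loser to arrive             */
        arrive[k][i] = 0;                /* re-arm the arrival flag        */
    }
    /* Wakeup: walk back down, releasing my loser at each level I won. */
    while (--k >= 0)
        wakeup[k][i ^ (1 << k)] = 1;
}
```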

    Tournament Barrier (Continued)

    Summary

Lots of similarity with the sense-reversing tree algorithm

    Dissemination Barrier

    Dissemination Barrier – gossip like protocol

    Summary

Ordered communication: like a well-orchestrated, gossip-like protocol. Each process will send a message to its ordained peer during that "round". But I'm curious: do we need multiple rounds?

    Dissemination Barrier (continued)

    Summary

Gossip in each round differs in the sense that the ordained neighbor changes each round k: P_i sends to P_((i + 2^k) mod n). Will probably need to read up on the paper to get a better understanding of the point of the rounds ..
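A small C sketch of those rounds as I understand them (names are mine; simplified with plain volatile flags, and a real implementation would add sense reversal so the barrier is reusable):

```c
#define N 8                      /* number of processes (assumed)      */
#define ROUNDS 3                 /* ceil(log2(N))                      */

volatile int flags[ROUNDS][N];   /* flags[k][j]: P_j got its round-k message */

void dissemination_barrier(int i)
{
    for (int k = 0; k < ROUNDS; k++) {
        int peer = (i + (1 << k)) % N;   /* P_i -> P_((i + 2^k) mod N)   */
        flags[k][peer] = 1;              /* "send" my one message        */
        while (!flags[k][i])             /* wait for the one message due */
            ;                            /* to me this round             */
        flags[k][i] = 0;                 /* reset (sense reversal needed */
    }                                    /* for safe reuse)              */
}
```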

    Quiz: Barrier Completion

    Summary

Key point here that I just figured out is this: every processor needs to hear from every other processor. So, it's ⌈log₂ N⌉ rounds, the ceiling needed since N may not be a power of 2 (still not sure what that means exactly)

    Dissemination Barrier (continued)

    Summary

All barriers need sense reversal. The dissemination barrier is no exception. This barrier technique works for NCC (non-cache-coherent) machines and clusters. Every round has n messages, and there are log(n) rounds. Total communication is O(n log n) because n messages must be sent every round, no exception.

    Performance Evaluation

    Summary

The most important question to ask when choosing an algorithm and evaluating performance is: what is the trend? Not exact numbers, but trends.

  • Barrier Synchronization (Part 1/2)


As mentioned previously, there are different types of synchronization primitives that we operating system designers offer. If, as an application designer, you need to ensure only one thread can access a piece of shared memory at a time, use a mutual exclusion synchronization primitive. But what about a different scenario in which you need all threads to reach a certain point in the code, and only once all threads reach that point do they continue? That's where barrier synchronization comes into play.

This post covers two types of barrier synchronization. The first is the naive, centralized barrier and the second is the tree barrier.

In a centralized barrier, we basically have a global count variable, and as each thread enters the barrier, it decrements the shared count variable. After decrementing the count, threads hit a predicate and branch: if the count is not zero, the thread enters a busy spin loop, spinning while the count is greater than zero. However, if the counter equals zero after decrementing, that means all threads have arrived at the barrier.

Simple enough, right? Yes it is, but the devil is in the details because there's a subtle bug, a subtle edge case. It is entirely possible (based off of the code snippet below) that when the last thread enters the barrier and decrements the count, all the other threads suddenly move beyond the barrier (since the count is no longer greater than zero) before the last thread ever gets to reset the count back to N, the number of threads.

    How to avoid this problem? Simple: add another while loop that guarantees that the threads do not leave the barrier until the counter gets reset. Very elegant. Very simple.
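Here's a minimal sketch of that counting barrier in C, mirroring the lecture's logic as I understand it (C11 atomics, with N assumed to be the thread count); the second spin loop is the fix:

```c
#include <stdatomic.h>

#define N 4                                     /* number of threads (assumed) */

atomic_int count = N;

void counting_barrier(void)
{
    if (atomic_fetch_sub(&count, 1) == 1) {     /* I am the last to arrive  */
        atomic_store(&count, N);                /* reset for the next use   */
    } else {
        while (atomic_load(&count) != 0)        /* spin until all arrive    */
            ;
        while (atomic_load(&count) != N)        /* the fix: don't leave the */
            ;                                   /* barrier until the reset  */
    }                                           /* lands                    */
}
```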

    One way to optimize the centralized barrier is to introduce a sense reversing barrier (as I described in “making sense of the sense reversing barrier”).

The next type of barrier is a tree barrier. The tree barrier groups multiple processes together at multiple levels (the number of levels is log n, where n is the number of processors), each group maintaining its own count and local sense variables. The benefit? Each group spins on its own lock sense. Downside? The spin location is dynamic, not static, and can impede performance on NUMA architectures.

    Centralized Barrier


    Summary

    Centralized barrier synchronization is pretty simple: keep a counter that decrements as each thread reaches the barrier. Every thread/process will spin until the last thread arrives, at which point the last thread will reset the barrier counter so that it can be used later on

    Problems with Algorithm

    Summary

Race condition: while the last thread updates the counter, all other threads move forward

    Counting Barrier

    Summary

Such a simple and elegant solution by adding a second spin loop (still inefficient, but neat nonetheless). Next up: the sense reversing barrier algorithm

    Sense Reversing Barrier


    Summary

One way to optimize the centralized barrier is to introduce a sense reversing barrier. Essentially, each process maintains its own local "sense" that flips from 0 to 1 (or 1 to 0) each time a synchronization barrier is needed. This local variable is compared against a shared flag, and only when the two are equal can all the threads/processes proceed past the current barrier and move on to the next

    Tree Barrier


    Summary

    Group processes (or threads) and each group has its own shared variables (count and lock sense). Before flipping the lock sense, the final process needs to move “up to the next level” and check if all other processors have arrived at the next level. Things are getting a little more spicy and complicated with this type of barrier

    Tree Barrier (Continued)

    Summary

With a tree barrier, a process arrives at its group (with its count and lock sense), decrements the count variable, and then checks the lock sense variable. If the lock sense is not equal, then spin. If it is the last process in its group, it moves up to the next level.

    Tree Barrier (continued)

    Summary

Once the last process reaches the root, it is responsible for waking up the lower levels, traversing back down the tree. At each level, it flips the lock sense flag

    Tree Barrier (Continued)

    Summary

As always, there's a trade-off or hidden downside with this implementation: the spin location is not statically determined. This dynamic assignment may be problematic, especially on NUMA (non-uniform memory access) architectures, because a process may be spinning on a remote memory location. But my question is: are there any systems that do not offer coherence?

  • Making sense of the “sense reversing barrier” (synchronization)


What's the deal with a sense reversing barrier? Even after watching the lectures on the topic, I was still confused as to how a single flag toggling between two values (true and false) could communicate whether or not all processes (or threads) are running in the same critical section. This concept completely baffled me.

Trying to make sense of it all, I did some "research" (aka a Google search) and landed on a great article titled Implementing Barriers [1]. The article contains source code written in C that helped demystify the topic.

    Sense Reversal Barrier. Credit: (Nkindberg 2013)


    The article explains how each process must maintain its own local sense variable and compare that variable against a shared flag. That was the key, the fact that each process maintains its own local variable, separate from the shared flag. That means, each time a process enters a barrier, the local sense toggles, switching from 1 to 0 (or from 0 to 1), depending on the variable’s last value. If the local sense variable and the shared flag differ, then the process (or thread) must wait before proceeding beyond the current barrier.

For example, let's say process 0 initializes its local sense variable to 0. The process enters the barrier, flipping its local sense from 0 to 1. Once the process reaches the end of the critical section, it compares the shared flag (also initialized to 0) with its local sense variable, and since the two are not equal, the process waits until all other processes complete.
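In the spirit of the article's C code, here's a minimal sketch assuming C11 atomics and a fixed thread count N (variable names are mine):

```c
#include <stdatomic.h>

#define N 4                        /* number of threads (assumed)  */

atomic_int count = N;              /* threads yet to arrive        */
atomic_int flag  = 0;              /* the shared sense flag        */

void barrier(int *local_sense)     /* each thread starts its local_sense at 0 */
{
    *local_sense = 1 - *local_sense;           /* flip my sense on entry   */
    if (atomic_fetch_sub(&count, 1) == 1) {    /* I'm the last to arrive   */
        atomic_store(&count, N);               /* reset for the next use   */
        atomic_store(&flag, *local_sense);     /* release everyone         */
    } else {
        while (atomic_load(&flag) != *local_sense)
            ;                                  /* spin until flag matches  */
    }
}
```

The last arriver is the one that flips the shared flag to match everyone's new local sense, and that flip is what releases the spinners.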

    References

    1. Nkindberg, Laine. 2013. “Implementing Barriers.” Retrieved September 17, 2020 (http://15418.courses.cs.cmu.edu/spring2013/article/43).
  • Synchronization notes (part 2/2) – Linked Based Queuing lock


In part 1 of synchronization, I talked about the more naive spin locks and other approaches that offer only marginally better performance by adding delays or spinning on cached reads (i.e. avoiding bus traffic) and so on. Of all the locks discussed thus far, the array based queuing lock offers low latency, low contention and low waiting time. And best of all, this lock offers fairness. However, there's no free lunch, and here we trade memory for performance: each lock contains an array with N entries, where N is the number of physical CPUs. With a high number of CPUs, that may be wasteful (i.e. when not all processors contend for a lock).

That's where linked based queuing locks come in. Instead of allocating a contiguous block of memory upfront, reserved for each CPU competing for the lock, the linked based queuing lock dynamically allocates a node on the heap and maintains a linked list. That way, we only allocate the structure as requests trickle in. Of course, we'll need to be extremely careful with maintaining the linked list structure and ensure that we atomically update the list, using instructions such as fetch_and_store during the lock operation.

Most importantly, we (as the OS designers implementing this lock) need to carefully handle the edge cases, especially removing the "latest" node (i.e. setting the last pointer to NULL) while another node is being allocated. To overcome this classic race condition, we need to rely on a new (to me at least) atomic operation: compare_and_swap. This instruction will be called like this: compare_and_swap(latest_node, me, nil). If the lock's latest-node pointer still matches me, then set the pointer to NIL. Otherwise, we know that there's a new node in flight and we'll have to handle the edge case.
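Putting the two operations together, here's a minimal C sketch assuming C11 atomics (struct and function names are mine, not the paper's):

```c
#include <stdatomic.h>
#include <stddef.h>

typedef struct qnode {
    struct qnode *_Atomic next;
    atomic_int locked;                 /* 1 = spin, 0 = my turn          */
} qnode;

typedef struct {
    qnode *_Atomic tail;               /* last waiter in line, or NULL   */
} mcs_lock;

void mcs_acquire(mcs_lock *L, qnode *me)
{
    atomic_store(&me->next, NULL);
    /* fetch_and_store: atomically swing the tail to me, get the old tail */
    qnode *prev = atomic_exchange(&L->tail, me);
    if (prev != NULL) {                    /* lock is held: queue up      */
        atomic_store(&me->locked, 1);
        atomic_store(&prev->next, me);     /* link myself behind prev     */
        while (atomic_load(&me->locked))   /* spin on my own node only    */
            ;
    }
}

void mcs_release(mcs_lock *L, qnode *me)
{
    qnode *succ = atomic_load(&me->next);
    if (succ == NULL) {
        qnode *expected = me;
        /* compare_and_swap: if I'm still the tail, empty the queue */
        if (atomic_compare_exchange_strong(&L->tail, &expected, NULL))
            return;
        while ((succ = atomic_load(&me->next)) == NULL)
            ;                              /* a new request is in flight  */
    }
    atomic_store(&succ->locked, 0);        /* signal my successor         */
}
```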

    Anyways, check out the last paragraph on this page if you want a nice matrix that compares all the locks discussed so far.

    Linked Based Queuing Lock

    Summary

Similar approach to Anderson's array based queuing lock, but using a linked list (the authors are MCS: Mellor-Crummey and Scott). Basically, each lock has a qnode associated with it, the structure containing a got_it and next. When I obtain the lock, the qnode points to me and I can proceed into the critical section

    Link Based Queuing Lock (continued)

    Summary

The lock operation (i.e. Lock(L, me)) requires two things to happen atomically, and to achieve this, we'll use a fetch_and_store operation, an instruction that retrieves the previous address and then stores mine

    Linked Based Queuing Lock

    Summary

When calling unlock, the code will remove the current node from the list and signal the successor, achieving similar behavior to the array based queue lock. But still an edge case remains: a new request is forming while the current thread is setting the last pointer to NIL

    Linked Based Queuing Lock (continued)

    Summary

Learned about a new primitive atomic operation called compare_and_swap, a useful instruction for handling the edge case of updating the dummy node's next pointer when a new request is forming

    Linked Based Queuing Lock (continued)

    Summary

Hell yea. I correctly anticipated the answer when the professor asked: what will the thread spin on if compare_and_swap fails?

    Linked Based Queuing Lock (continued)

    Summary

Take a look at the papers to clarify the synchronization algorithm on a multi-processor. Lots of primitives are required, like fetch_and_store and compare_and_swap. If these hardware primitives are not available, we'll have to implement them

    Linked Based Queuing Lock (continued)

    Summary

Pros and cons with the linked based approach. Space complexity is bounded by the number of dynamic requests, but the downside is the maintenance overhead of the linked list. Another potential downside is if there are no underlying primitives for compare_and_swap, etc.

    Algorithm grading Quiz

    Comparison between various spin locks

    Summary

If the amount of contention is low, then use spin with delay plus exponential backoff. If it's highly contended, then use a lock with statically assigned spin locations (i.e. a queuing lock).

  • Synchronization (Notes) – Part 1


    I broke down the synchronization topic into two parts and this will cover material up to and including the array based queuing lock. I’ll follow up with part 2 tomorrow, which will include the linked list based queuing lock.

There are a couple of different types of synchronization primitives: mutual exclusion and barrier synchronization. Mutual exclusion ensures that only one thread (or process) at a time can write to a shared data structure, whereas barrier synchronization ensures that all threads (or processes) reach a certain point in the code before continuing. The rest of this summary focuses on the different ways we as operating system designers can implement mutual exclusion and offer it as a primitive to system programmers.

    To implement a mutual exclusion lock, three instructions need to be bundled together: reading, checking, writing. These three operations must happen atomically, all together and at once. To this end, we need to leverage the underlying hardware’s instructions such as fetch_and_set, fetch_and_increment, or fetch_and_phi.
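As a minimal sketch (assuming C11 atomics; the names are mine), the bundled read-check-write amounts to a single atomic exchange, i.e. test_and_set:

```c
#include <stdatomic.h>

atomic_int lock = 0;                      /* 0 = free, 1 = held */

void acquire(void)
{
    /* test_and_set: atomically read the old value and write 1 */
    while (atomic_exchange(&lock, 1) == 1)
        ;                                 /* old value was 1: someone holds it */
}

void release(void)
{
    atomic_store(&lock, 0);
}
```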

    Whatever way we implement the lock, we must evaluate our implementation with three performance characteristics:

    • Latency (how long to acquire the lock)
• Waiting time (how long a single thread must wait before acquiring the lock)
• Contention (how long it takes to acquire the lock when multiple threads compete)

In this section, I'll list each of the implementations, ordered from most basic to most advanced

• Naive Spin Lock – a simple while loop that calls the test_and_set atomic operation. Downside? Lots of bus traffic for cache invalidation or cache updating. What can we do instead?
• Caching Spin Lock – Almost identical to the previous implementation, but instead of spinning on test_and_set in a while loop, we spin on a read, followed by a test_and_set. This reduces the noisy bus traffic of test_and_set (which, again, bypasses the cache and writes to memory, every time)
• Spinlock with delay – Adds a delay to each test_and_set. This reduces spurious checks and ensures that not all threads perform test_and_set at the same time
• Ticket Lock – Each new request that arrives obtains a ticket. This methodology is analogous to how a deli hands out tickets to its customers; it is fair but still signals all other threads to wake up
• Array based queuing lock – An array that is N in size (where N is the number of processors). Each entry in the array can be in one of two states: has-lock and must-wait. Fair and efficient. But trades off space: each lock gets its own array with N entries, where N is the number of processors. Can be expensive for machines with thousands of processors and potentially wasteful if not all of them contend for the lock (see the sketch after this list)
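Here's a minimal sketch of that array based queuing lock in C (assuming C11 atomics; the names are mine, and a real implementation would pad each slot to its own cache line so every processor spins on a distinct line):

```c
#include <stdatomic.h>

#define N 8                       /* number of processors (assumed) */
enum { MUST_WAIT, HAS_LOCK };

typedef struct {
    atomic_int flags[N];          /* init: flags[0] = HAS_LOCK, rest MUST_WAIT */
    atomic_int queue_last;        /* init: 0; next place in line               */
} array_lock;

void acquire(array_lock *L, int *myplace)
{
    /* fetch_and_increment: grab my place in the circular queue */
    *myplace = atomic_fetch_add(&L->queue_last, 1) % N;
    while (atomic_load(&L->flags[*myplace]) == MUST_WAIT)
        ;                         /* spin on my own slot only */
}

void release(array_lock *L, int myplace)
{
    atomic_store(&L->flags[myplace], MUST_WAIT);            /* re-arm my slot */
    atomic_store(&L->flags[(myplace + 1) % N], HAS_LOCK);   /* pass it along  */
}
```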

    Lesson Summary

    Summary

There are two types of locks: exclusive and shared locks. An exclusive lock guarantees that one and only one thread can access/write data to a location. A shared lock allows multiple readers, while guaranteeing no other thread is writing at that time.

    Synchronization Primitives

    Summary

    Another type of synchronization primitive (instead of mutual exclusion) is a barrier. A barrier basically ensures that all threads reach a certain execution point and only then can the threads advance. Similar to real life when you get to a restaurant and you cannot be seated until all your party arrives. That’s an example of a barrier.

    Programmer’s Intent Quiz

    Summary

We can write code that enforces a barrier with a simple while loop (but it's inefficient) using atomic reads and writes

    Programmer’s Intent Explanation

    Summary

    Programmer’s Intent Quiz

Because of the atomic reads and writes offered by the instruction set, we can easily use a flag to coordinate between processes. But are these instructions sufficient for creating a mutual exclusion lock primitive?

    Atomic Operations

    Summary

The instruction set ensures that each read and write operation is atomic. But if we need to implement a lock, there are three operations (read, check, write) that need to be bundled together into a single instruction by the hardware. This grouping can be done with instructions like test_and_set, fetch_and_increment, or (generically) fetch_and_phi

    Scalability Issues with Synchronization

    Summary

What are the three types of performance issues with implementing a lock? Which of the three are under the control of the OS designer? The three scalability issues are as follows: latency (time to acquire the lock), waiting time (how long a thread waits until it acquires the lock), and contention (how long it takes, when all threads are competing to acquire the lock, for the lock to be given to a single thread). The first and third (i.e. latency and contention) are under the control of the OS designer. The second is not: waiting time is largely driven by the application itself, because the critical section might be very long.

    Naive Spinlock (Spin on T+S)

    Summary

Why is the spin lock considered naive? How does it work? Although the professor did not explicitly call out why the spin lock is naive, I can guess why (and my guess will be confirmed in the next slide, during the quiz). Basically, each thread is eating up CPU cycles by performing the test_and_set. Why not just signal them instead? That would be more efficient. But, as usual, nothing is free, so there must be a trade off in terms of complexity or cost. Returning to how the naive spin lock works: it's basically a while loop around test_and_set (semantically guaranteeing only a single thread will receive the lock).

    Problems with Naive Spin Lock


    Summary

The spin lock is naive for three reasons: too much contention (when the lock is released, all threads, maybe thousands, will jump at the opportunity); it does not exploit the cache (test_and_set by nature must bypass the cache and go straight to memory); and it disrupts useful work (only one process can make forward progress)

    Caching Spinlock

    Caching Spin lock (spin on read)

    Summary

To take advantage of the cache, processors will first spin on while(L == locked), which reads from the cache. Once the lock is released, all processors will then proceed with test_and_set. Can we do better than this?
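A minimal sketch of that spin-on-read pattern (again assuming C11 atomics; names are mine):

```c
#include <stdatomic.h>

atomic_int lock = 0;                             /* 0 = free, 1 = held */

void acquire_spin_on_read(void)
{
    for (;;) {
        while (atomic_load(&lock) == 1)          /* spin on a (cached) read */
            ;
        if (atomic_exchange(&lock, 1) == 0)      /* looks free: now race on */
            return;                              /* the actual test_and_set */
    }
}
```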

    Spinlocks with Delay

    Summary

Any sort of delay will improve the performance of a spin lock compared to the naive solution

    Ticket Lock

    Summary

A fair algorithm that uses a deli or restaurant analogy: people who come before will get the lock before people who come after. This is great for fairness, but performance is still lacking in the sense that when a thread releases its lock, an update will be sent across the entire bus. I wonder: can this even be avoided, or is it just the reality of the situation?

    Spinlock Summary

    Array-based queuing lock

    Summary

A couple of options so far: 1) spin on read with test_and_set (no fairness), 2) test_and_set with delay (no fairness), 3) ticket lock (fair but noisy). How can we do better? How can we signal one and only one thread to wake up and attempt to gain access to the lock? Apparently, I'll learn this shortly, with queuing locks. This lesson reminds me of a conversation I had with my friend Martin about avoiding a busy while() loop for lock contention, a conversation we had maybe a year or so ago (maybe even two years ago)

    Array Based Queuing Lock

    Summary

Create an array that is N in size (where N is the number of processors). Each entry in the array can be in one of two states: has-lock and must-wait. Only one entry can ever be in the has-lock state. I'm proud that I can understand the circular queue (just by looking at the array) with my understanding of the mod operator. But even so, I'm having a hard time understanding how we'll use this data structure in real life.

    Array Based Queuing Lock (continued)

    Summary

Lock acquisition looks like: fetch_and_increment on queue_last, followed by while(flags[myplace mod N] == must_wait). Still not sure how this avoids generating the cache bus traffic described earlier…

    Array-based queue lock

    Summary

Array-based queuing lock (mod)

Using mod, we basically update the "next" entry in the array and set it to "has_lock"

  • Shared Memory Machine Model (notes)


    You need to take away the following two themes for shared memory machine model:

• Difference and relationship between cache coherence (dealt with in hardware) and memory consistency (the model the operating system and programmer rely on)
    • The different memory machine models (e.g. dance hall, symmetric multiprocessing, and distributed shared memory architecture)

Cache coherence is the promise delivered by the underlying hardware architecture. The hardware employs one of two techniques: write invalidate or write update. In the former, when a memory address gets updated by one of the cores, the system sends a message on the bus to invalidate the cache entry stored in all the other private caches. In the latter, the system instead updates all the private caches with the correct data. Either way, the mechanism by which the caches are maintained is an implementation detail hidden from the operating system.

Although the OS has no insight into how the hardware delivers cache coherence, the OS does rely on cache coherence to build memory consistency, the hardware and software working in harmony.

    Shared Memory Machine Model

    Summary

Shared memory model – Dance Hall Architecture
Symmetric multiprocessing – each processor's time to access the memory is the same
Shared Memory Machine Model – each CPU has its own private cache and its own memory, although they can access each other's addresses

There are three memory architectures: dance hall (CPUs on one side of the interconnect, memories on the other), SMP (from the perspective of each CPU, the time to access memory is the same), and distributed shared memory (each CPU has its own nearby memory, so some cache and some memory is faster to access for a given CPU). The lecture doesn't go much deeper than this, but I'm super curious about the distributed architecture.

    Shared Memory and Caches

    Summary

Because each CPU has its own private cache, we may run into the cache coherence problem. This situation can occur if the caches tied to each CPU contain different values for the same memory address. To reason about this, we need a memory consistency model.

    Quiz: Processes

    Summary

The quiz tests us by offering two processes, each using shared memory, each updating different variables. Then the quiz asks us what the possible values for the variables are. Apart from the last one, all are possible; the last one would break the intuition of a memory consistency model (discussed next)

    Memory consistency model

    Summary

Introduces the sequential consistency memory model, a model that exhibits two characteristics: program order and arbitrary interleaving. This is analogous to someone shuffling a deck of cards.

    Sequential Consistency and cache coherence (Quiz)

    Summary

    Cache Coherence Quiz – Sequential Consistency

Which of the following are possible values for the given instructions?

    Hardware Cache Coherence

    Summary


There are two strategies for maintaining cache coherence: write invalidate and write update. In the former, the system bus broadcasts an invalidate message if one of the cores modifies an address in its private cache. In the latter, the system bus sends an update message to each of the caches, each cache updating its copy with the correct data. Obviously, the lecture oversimplifies the intricacies of maintaining a coherent cache (if you want to learn more, check out the high performance computing architecture lectures, or maybe future modules in this course will cover this in more detail)

    Scalability

    Summary

Scalability – expectations with more processors. Pros: exploit parallelism. Cons: increased overhead

Adding processors increases parallelism and improves performance. However, some of that gain is lost to the additional overhead of maintaining the bus (another example of making trade offs and how nothing is free).

  • Memory Virtualization (notes)


The operating system maintains a per process data structure called a page table, creating a protection domain and hardware address space: another virtualization technique. The page table maps virtual page numbers to physical frame numbers (PFN).

The same virtualization technique is adopted by hypervisors (e.g. VMWare). They too have the responsibility of mapping the guest operating system's "physical address" space (from the perspective of the guest VM) to the underlying "machine page numbers". These physical-address-to-machine-page-number mappings are maintained by the guest operating system in a paravirtualized system, and by the hypervisor in a fully virtualized environment.

In both fully virtualized and paravirtualized environments, memory mapping is done efficiently. In the former, the guest VM traps and, in response, the hypervisor performs its mapping translation. But in a paravirtualized environment, the hypervisor leans on the guest virtual machine much more, via a balloon driver. In a nutshell, the balloon driver pushes the responsibility of handling memory pressure from the hypervisor to the guest machine. The balloon driver, installed on the guest operating system, inflates when the hypervisor wants the guest OS to free memory. The beauty of this approach is that if the guest OS is not memory constrained, no swapping occurs: the guest VM just removes an entry from its free list. When the hypervisor wants to increase the memory for a VM, it signals (through the same private communication channel) the balloon driver to deflate, prompting the guest OS to page in and increase the size of its memory footprint.

Finally, the hypervisor employs another memory technique known as oblivious page sharing. With this technique, the hypervisor runs a hashing algorithm in a background process, hashing the contents of each page. If two or more virtual machines contain the same page, then the hypervisor creates a single "copy-on-write" page, reducing the memory footprint.
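To make that concrete for myself, here's a toy C sketch of the hash-then-full-compare flow; every name and detail here is hypothetical, not VMware's actual code:

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define PAGE_SIZE 4096

struct page { uint8_t bytes[PAGE_SIZE]; int copy_on_write; };

static uint64_t page_hash(const struct page *p)   /* FNV-1a over the contents */
{
    uint64_t h = 1469598103934665603ULL;
    for (size_t i = 0; i < PAGE_SIZE; i++)
        h = (h ^ p->bytes[i]) * 1099511628211ULL;
    return h;
}

#define TABLE_SIZE 1024
static struct page *table[TABLE_SIZE];            /* content hash -> candidate */

void try_share(struct page *p)
{
    uint64_t h = page_hash(p);
    struct page *candidate = table[h % TABLE_SIZE];
    if (candidate && candidate != p &&
        memcmp(candidate->bytes, p->bytes, PAGE_SIZE) == 0) {
        /* The hash was only a hint; the full compare confirms the match.
           Share one frame and mark both mappings copy-on-write. */
        candidate->copy_on_write = 1;
        p->copy_on_write = 1;
        /* ...point p's owners at candidate and reclaim p's frame... */
    } else {
        table[h % TABLE_SIZE] = p;    /* remember this page as a candidate */
    }
}
```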

    Memory Hierarchy


    Summary

The main thorny issue of virtual memory is translating virtual addresses to physical ones; caches are physically tagged, so they are not a major source of issues.

    Memory Subsystem Recall

    Summary

    A process has its own protection domain and hardware address space. This separation is made possible thanks to virtualization. To support memory virtualization, we maintain a per process data structure called a page table, the page table mapping virtual addresses to physical frame numbers (remember: page table entries contain the PFN)

    Memory Management and Hypervisor

    Summary

    The hypervisor has no insight into the page tables for the processes running on the instances (see Figure). That is, Windows and Linux serve as the boundary, the protection domain.

    Memory Manager Zoomed Out

    Summary

Although the virtual machines' operating systems think they are allocating contiguous memory, they are not: the hypervisor must partition the physical address space among multiple instances, and not all of the operating system instances can start at the same physical memory address (i.e. 0x00).

    Zooming Back in

    Summary

    Hypervisor maintains a shadow page table that maps the physical page numbers (PPN) to the underlying hardware machine page numbers (MPN)
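A toy sketch of how the two mappings compose (types and names are mine, purely illustrative):

```c
/* Composing the guest's VPN -> PPN mapping with the hypervisor's
   PPN -> MPN shadow mapping. */
typedef unsigned long vpn_t, ppn_t, mpn_t;

#define NPAGES 1024

ppn_t guest_page_table[NPAGES];      /* maintained by the guest OS         */
mpn_t shadow_page_table[NPAGES];     /* PPN -> MPN, kept by the hypervisor */

mpn_t translate(vpn_t vpn)
{
    ppn_t ppn = guest_page_table[vpn];   /* guest's view: VPN -> PPN */
    return shadow_page_table[ppn];       /* hypervisor: PPN -> MPN   */
}
```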

    Who keeps the PPN MPN Mapping

    Summary

For full virtualization, the PPN->MPN mapping lives in the hypervisor. But in paravirtualization, it might make sense for it to live in the guest operating system.

    Shadow page table

    Zooming back in: shadow page table

    Summary

    Similar to a normal operating system, the hypervisor contains the address of the page table (stored in some register) that lives in memory. Not much different than a normal OS, I would say

    Efficient Mapping (Full Virtualization)

    Summary

In a fully virtualized guest operating system, the hypervisor will trap calls and perform the translation from the virtual page number (VPN) to the machine page number (MPN), bypassing the guest OS entirely

    Efficient Mapping (Para virtualization)

    Summary

    Dynamically Increasing Memory

    Summary

    What can we do when there’s little to no physical/machine memory left and the guest OS needs more? Should we steal from one guest VM for another? Or can we somehow get a guest VM to voluntarily free up some of its pages?

    Ballooning

    VMWare’s technique: Ballooning

    Summary

The hypervisor installs a driver in the guest operating system, the driver serving as a private channel that only the hypervisor can access. The hypervisor inflates the balloon driver, signaling the guest OS to page out to disk. And it can also signal the balloon driver to deflate, prompting the guest OS to page in.

    Sharing memory across virtual machines

    Summary

One way to achieve memory sharing is to have the guest operating system cooperate with the hypervisor. The guest virtual machine signals, to the hypervisor, that a page residing in the guest OS can be marked as copy-on-write, allowing other guest virtual machines to share the same page. But when the page is written to, the hypervisor must copy that page. What are the trade offs?

    VM Oblivious Page Sharing

    VMWare and Oblivious Sharing

    Summary

VMWare ESX maintains a data structure that maps content hashes to pages, allowing the hypervisor to get a "hint" of whether or not a page can be shared. If the content hash matches, then the hypervisor performs a full content comparison (more on that in the next slide)

    Successful Match

    Summary

The hypervisor process runs in the background, since this is a fairly intensive operation, and will update guest VMs' pages as "copy-on-write", only running when the hypervisor is lightly loaded

    Memory Allocation Policies

    Summary

So far, the discussion has focused on mechanisms, not policies. Given memory is such a precious resource, a fair policy is the dynamic idle-adjusted shares approach. A certain percentage (50% in the case of ESX) of memory will be taken away if it's idle, a tax if you will.

  • Introduction to virtualization (notes)


As system designers, our goal is to design a "black box" system that creates the illusion that our users have full and independent access to the underlying hardware. This is merely an abstraction, since we are building multi-tenant systems with many applications and many virtual guest machines running on a single piece of hardware, all at the same time.

To this end, we build what is called a "hypervisor" (code that runs directly on the physical hardware), the software supporting the multiple guest machines that run on top of it. The guest operating system can either be a fully virtualized operating system (which has no clue it's a virtual guest, so the underlying OS binary is the same as if you were to install it on a physical server) or a paravirtualized operating system (which is aware of the fact that it is virtualized, similar to how the "hosts" in Westworld gain awareness that they are in fact robots)

    Quiz: Virtualization

    Summary

The concept of virtualization is prolific. We saw it in the 60s and 70s when IBM invented the VM/370. We also see it in cloud computing and modern data centers.

    Platform virtualization

    Summary

As aspiring operating system designers, we want to be able to build the "black box" that the applications ride on top of, the black box presenting an illusion of an entire independent hardware system, when really it is not.

    Utility Company

    Summary

End users' resource usage is bursty, and we want to amortize the cost of the shared hardware resources. End users get access to large available resources

    Hypervisors


    Summary

    Inside the black box, there are two types: native and hosted. Native is bare metal, the hypervisors running directly on the hardware. Hosted, on the other hand, runs as a user application on the Host operating system

    Connecting the dots

    Summary

Concepts of virtualization date back as far as the 70s, when IBM first invented it with the IBM VM/370. Fast following were microkernels, OS extensibility, and SimOS (late 90s), and then most recently Xen + VMWare (in the 2000s). Now we are looking at virtualizing the data center.

    Full Virtualization


    Summary

With full virtualization, the underlying OS binaries are untouched; no changes are required of the OS code itself. To make this work, the hypervisor needs to employ some strategies for handling system calls that silently fail

    Para Virtualization

    Summary

Paravirtualization can directly address some of the issues (like silently failing calls) that happen with full virtualization. But the OS needs to be modified; at the same time, it can take advantage of optimizations like page coloring.

    Quiz: What percentage of guest OS code may need modification

    Quiz – Guest OS changes < 2%

    Summary

Less than 2% of the guest OS code needs modification to support paravirtualization, very minuscule (proof by construction, using Xen)

    Para Virtualization (continued)

    Summary

With paravirtualization, not many code changes are needed; it almost sounds like a no-brainer (to me)