Author: mattchung

  • Synchronization (Notes) – Part 1

    Synchronization (Notes) – Part 1

    I broke down the synchronization topic into two parts and this will cover material up to and including the array based queuing lock. I’ll follow up with part 2 tomorrow, which will include the linked list based queuing lock.

    There are a couple of different types of synchronization primitives: mutual exclusion and barrier synchronization. Mutual exclusion ensures that only one thread (or process) at a time can write to a shared data structure, whereas barrier synchronization ensures that all threads (or processes) reach a certain point in the code before continuing. The rest of this summary focuses on the different ways we as operating system designers can implement mutual exclusion and offer it as a primitive to system programmers.

    To implement a mutual exclusion lock, three operations need to be bundled together: reading, checking, and writing. These three operations must happen atomically, as one indivisible step. To this end, we need to leverage the underlying hardware’s atomic instructions such as test_and_set, fetch_and_increment, or (generically) fetch_and_phi.

    Whatever way we implement the lock, we must evaluate our implementation with three performance characteristics:

    • Latency (how long it takes to acquire the lock)
    • Waiting time (how long a single thread must wait before acquiring the lock)
    • Contention (how long it takes to acquire the lock when multiple threads compete)

    In this section, I’ll list each of the implementations, ordered from most basic to most advanced:

    • Naive Spin Lock – a simple while loop that calls the test_and_set atomic operation. Downside? Lots of bus traffic for cache invalidation or cache updating. What can we do instead?
    • Caching Spin Lock – Almost identical to the previous implementation, but instead of spinning on test_and_set, we spin on a read, only attempting test_and_set once the lock looks free. This reduces the noisy bus traffic of test_and_set (which, again, bypasses the cache and writes to memory every time)
    • Spinlock with delay – Adds a delay to each test_and_set. This reduces spurious checks and ensures that not all threads perform test_and_set at the same time
    • Ticket Lock – Each new request that arrives obtains a ticket. This methodology is analogous to how a deli hands out tickets to its customers; it is fair but still signals all other threads to wake up
    • Array based queuing lock – An array that is N in size (where N is the number of processors). Each entry in the array can be in one of two states: has-lock and must-wait. Fair and efficient. But it trades off space: each lock gets its own array with N entries, where N is the number of processors. This can be expensive for machines with thousands of processors and potentially wasteful if not all of them contend for the lock

    Lesson Summary

    Summary

    There are two types of locks: exclusive and shared. An exclusive lock guarantees that one and only one thread can access/write data at a location. A shared lock allows multiple readers, while guaranteeing that no other thread is writing at that time.

    Synchronization Primitives

    Summary

    Another type of synchronization primitive (instead of mutual exclusion) is a barrier. A barrier basically ensures that all threads reach a certain execution point, and only then can the threads advance. It’s similar to real life, when you get to a restaurant and you cannot be seated until your entire party arrives. That’s an example of a barrier.

    Programmer’s Intent Quiz

    Summary

    We can write code that enforces a barrier with a simple while loop (but it’s inefficient) using atomic reads and writes

    Programmer’s Intent Explanation

    Summary

    Programmer’s Intent Quiz

    Because of the atomic reads and writes offered by the instruction set, we can easily use a flag to coordinate between processes. But are these instructions sufficient for creating a mutual exclusion lock primitive?

    Atomic Operations

    Summary

    The instruction set ensures that each read and write operation is atomic. But if we need to implement a lock, there are three operations (read, check, write) that need to be bundled together into a single instruction by the hardware. This grouping can be done with one of several instructions: test_and_set, fetch_and_increment, or (generically) fetch_and_phi

    Scalability Issues with Synchronization

    Summary

    What are the three types of performance issues with implementing a lock? Which two of them are under the control of the OS designer? The three scalability issues are as follows: latency (time to acquire the lock), waiting time (how long a thread waits until it acquires the lock), and contention (how long it takes, when all threads are competing to acquire the lock, for the lock to be given to a single thread). The first and third (i.e. latency and contention) are under the control of the OS designer. The second is not: waiting time is largely driven by the application itself because the critical section might be very long.

    Naive Spinlock (Spin on T+S)

    Summary

    Why is the spin lock considered naive? How does it work? Although the professor did not explicitly call out why the spin lock is naive, I can guess why (and my guess will be confirmed in the next slide, during the quiz). Basically, each thread is eating up CPU cycles by performing the test_and_set. Why not just signal them instead? That would be more efficient. But, as usual, nothing is free, so there must be a trade off in terms of complexity or cost. But returning to how the naive spin lock works, it’s basically a while loop around test_and_set (semantically guaranteeing that only a single thread will receive the lock).

    Problems with Naive Spin Lock


    Summary

    The spin lock is naive for three reasons: too much contention (when the lock is released, all threads, maybe thousands, will jump at the opportunity), it does not exploit the cache (test_and_set by nature must bypass the cache and go straight to memory), and it disrupts useful work (only one process can make forward progress while the rest burn cycles)

    Caching Spinlock

    Caching Spin lock (spin on read)

    Summary

    To take advantage of the cache, processors first spin on while(L == locked), which is served from their private caches. Once the lock is released, all processors then proceed with test_and_set. Can we do better than this?

    Spinlocks with Delay

    Summary

    Any sort of delay will improve the performance of a spin lock when compared to the naive solution.

    Ticket Lock

    Summary

    A fair algorithm that uses a deli or restaurant analogy: people who come before will get the lock before people who come after. This is great for fairness, but performance is still lacking in the sense that when a thread releases the lock, an update is sent across the entire bus. I wonder: can this even be avoided, or is that just the reality of the situation?

    Spinlock Summary

    Array-based queuing lock

    Summary

    A couple of different options so far: 1) spin on read with test_and_set (no fairness) 2) test_and_set with delay (no fairness) 3) ticket lock (fair but noisy). How can we do better? How can we signal one and only one thread to wake up and attempt to gain access to the lock? Apparently, I’ll learn this shortly, with queueing locks. This lesson reminds me of a conversation I had with my friend Martin around avoiding a busy while() loop for lock contention, a conversation we had maybe a year or so ago (maybe even two years ago)

    Array Based Queuing Lock

    Summary

    Create an array that is N in size (where N is the number of processors). Each entry in the array can be in one of two states: has-lock and must-wait. Only one entry can ever be in the has-lock state. I’m proud that I can understand the circular queue (just by looking at the array) with my understanding of the mod operator. But even so, I’m having a hard time understanding how we’ll use this data structure in real life.

    Array Based Queuing Lock (continued)

    Summary

    Lock acquisition looks like: fetch_and_increment (on queue_last) followed by a while(flags[myplace mod N] == must_wait). Still not sure how this avoids generating cache bus traffic as described earlier…

    Array-based queue lock

    Summary

    Array-based queuing lock (mod)

    Using mod, we basically update the “next” entry in the array and set it to “has_lock”

  • Honoring my body’s internal alarm clock & Daily Review – Day ending in 2020/09/14

    Honoring my body’s internal alarm clock & Daily Review – Day ending in 2020/09/14

    This morning my body woke me up later than usual. After a few blinks, I squeezed the corner of my Casio G-Shock watch, the green background lighting up and shining the time: 05:55 AM. Ugh. About an hour later than I wanted to wake up.

    On one hand, I’m bummed because I won’t be able to squeeze in as much uninterrupted time before work but on the other hand, my body and brain probably needed the extra sleep. Otherwise, why “sleep in” ?  I try to honor and listen to my body’s signals, another reason why over the past 5 years I’ve stopped setting an alarm clock and instead permitted my body to wake up naturally, whenever my body is ready.

    Oh well. Let’s get cracking.

    Yesterday

    Writing

    Best parts of my day

    • My co-worker unintentionally making me chuckle. During my team’s daily stand up meeting yesterday, I had asked my co-workers how they were coping with all the smoke blanketing the Seattle skies. And my co-worker’s response caught me by surprise. She said that back home in India, there’s always a thick cloud of smoke resting above their heads and really, the wildfire smoke in Seattle reminds her of home.

    Mental and Physical Health

    • Yesterday I resumed my ritual of switching back and forth between sitting and standing (thank you Jarvis standing desk). At the top of every hour, I try to hit the imaginary pause button and stretch my hamstrings by reaching for the ground with my finger tips. Not much exercise but every little bit counts.

    Graduate School

    • Started watching the barrier synchronization lectures. It’s so amazing how deep I am diving into computer science. I really do enjoy learning how an OS designer (maybe me someday) implements primitives such as mutual exclusion and barrier synchronization.
    • Started updating virtual CPU scheduler code to support the notion of “convergence”. The idea is that if the underlying physical CPU utilization only ever so slightly deviates (the exact percentage is yet to be determined), then the scheduler should leave the workload as is and not try to re-balance it.

    Work

    • Scheduled an ad-hoc meeting with a principal engineer that took place that day, the two of us troubleshooting a fuzzing failure, the worst kind of failure: one that cannot be reproduced.
    • Read a design proposal written up by a different principal engineer who evaluated different types of data structures. Realized that the data structure that I had envisioned us using will not meet performance requirements for IPv6 traffic

    Family

    • Took dogs on a very short walk of about 15 minutes at the local park. I skipped walking them over the weekend because of the thick smoke but the two puppies were getting a bit restless so I compromised, sacrificing a (hopefully) very tiny bit of our long term health in order to get them the physical stimulation they needed for the rest of the day

    Today

    Quote of the day

    Have you ever felt out of place in your place?

    I pulled that quote from a rap line from the song Breathing (Part 2) by Hamzaa featuring Wretch 32 & Ghetts.

    Most of my life, I always felt “out of place”. One instance of being out of place was during my teenage years and being one of the only (or perhaps the only) Vietnamese kids attending Calabasas high school, a high school where mostly everyone is white and where everyone thinks they are white.

     

    Writing

    • Publish “Synchronization” notes
    • Publish daily review (this one that I’m writing right here)

    Mental and Physical Health

    • Won’t be running in this smoke so instead, let’s spend 2 minutes (really, that’s all) throughout the day to get the heart pumping. Maybe some jumping jacks or some push ups. And of course, stretching the hamstrings and hip rotators while working behind my standing desk

    Graduate School

    • Fix memory leak with my recent additions to the CPU scheduler
    • Start writing up documentation that will accompany the submission for project 1
    • Watch 15-20 minutes worth of lectures from the barrier synchronization series

    Work

    • Meetings and meetings and meetings (not a really fun day): sprint planning, Asians@ planning, development meetings, and an interview debrief

    Miscellaneous

    • Hair cut at 4:00 PM. This will be the second hair cut this year, the last one taking place in June. Obviously I’m trying to minimize unnecessary interactions with other people but damn, I look like a shaggy dog with my heavy and coarse hair weighing me down.
  • Shared Memory Machine Model (notes)

    Shared Memory Machine Model (notes)

    You need to take away the following two themes for shared memory machine model:

    • The difference and relationship between cache coherence (dealt with by the hardware) and memory consistency (handled by the operating system)
    • The different memory machine models (e.g. dance hall, symmetric multiprocessing, and distributed shared memory architecture)

    Cache coherence is a promise delivered by the underlying hardware architecture. The hardware employs one of two techniques: write invalidate or write update. In the former, when a memory address gets updated by one of the cores, the system sends a message on the bus to invalidate the cache entry stored in all the other private caches. In the latter, the system instead updates all the private caches with the correct data. Either way, the mechanism by which the caches are maintained is an implementation detail that is not exposed to the operating system.

    Although the OS has no insight into how the hardware delivers cache coherence, the OS does rely on cache coherence to build memory consistency, the hardware and software working in harmony.

    Shared Memory Machine Model

    Summary

    Shared memory model – Dance Hall Architecture
    Symmetric multiprocessing – each processor’s time to access the memory is the same
    Distributed shared memory – each CPU has its own private cache and its own memory, although they can access each other’s addresses

    There are three memory architectures: dance hall (CPUs on one side of an interconnect, memory modules on the other), SMP (from the perspective of each CPU, access time to memory is the same as for every other CPU), and distributed shared memory (some cache and some memory is faster to access for a given CPU). The lecture doesn’t go much deeper than this but I’m super curious about the distributed architecture.

    Shared Memory and Caches

    Summary

    Because each CPU has its own private cache, we may run into a cache coherence problem: the caches tied to each CPU can end up containing different values for the same memory address. To reason about what values a program can legally observe, we need a memory consistency model.

    Quiz: Processes

    Summary

    The quiz tests us by offering two processes, each using shared memory, each updating different variables. Then the quiz asks what the possible values for the variables are. Apart from the last one, all are possible; the last one would break the intuition of a memory consistency model (discussed next)

    Memory consistency model

    Summary

    Introduces the sequential consistency memory model, a model that exhibits two characteristics: program order and arbitrary interleaving. This is analogous to someone shuffling a deck of cards.

    Sequential Consistency and cache coherence (Quiz)

    Summary

    Cache Coherence Quiz – Sequential Consistency

    Which of the following are possible values for the following instructions?

    Hardware Cache Coherence

    Summary

    Hardware Cache Coherence

    There are two strategies for maintaining cache coherence: write invalidate and write update. In the former, the system bus broadcasts an invalidate message when one of the cores modifies an address in its private cache. In the latter, the system sends an update message to each of the caches, each cache updating its private copy with the correct data. Obviously, the lecture oversimplifies the intricacies of maintaining a coherent cache (and if you want to learn more, check out the high performance computing architecture lectures, or maybe future modules in this course will cover this in more detail)

    Scalability

    Summary

    Scalability – expectation with more processors. Pros: Exploit Parallelism. Cons : Increased overhead

    Adding processors increases parallelism and can improve performance. However, performance also takes a hit from the additional overhead of maintaining the bus (another example of making trade offs and how nothing is free).

  • Losing 2 hours searching for a website bookmark & Weekly Review: week ending in 2020/09/06

    Losing 2 hours searching for a website bookmark & Weekly Review: week ending in 2020/09/06

    My weekly review that normally takes place first thing in the morning on Sundays was completely derailed this time around, all because I couldn’t find the URL to a website that I had sworn I bookmarked for my wife’s birthday present. I ended up coughing up two hours of searching: searching directly on Reddit’s website (where I was 100% confident I stumbled upon the post), searching through 6 months of my Firefox browser history, and searching through 20 or so pages of Google results.

    I ultimately found the page after some bizarre combination of keywords using Google, the result popping up on the 6th page of Google results (I would share the URL with you but I want to keep it tucked away for the next two weeks until my wife’s birthday, or at least until her present arrives and I gift it to her).

    How about you — when you stumble on something interesting on the internet, what steps do you take to make sure that you can successfully retrieve the page again in the future? Do you simply bookmark the page using your browser’s built in bookmark feature? Do you tag the entry with some unique or common label? Or do you store it away in some third party bookmarking service like Pinboard? Or maybe you archive the entire contents of the page offline to your computer using DevonThink? Or something else?

    So many options.

    Ultimately, I don’t think the tool itself really matters: I just need to save the URL in a consistent fashion.

    Writing

    Family and Friends


    • Got around to finally calling my Grandma and video chatting with her so that she could see Elliott, who has grown exponentially over the last couple months
    • Signed off on tons of paper work for the new house and pulled the trigger on selling a butt load of my Amazon stocks that will cover the down payment and the escrow costs that we’re going to get hit with on September 30th (my wife’s birthday)
    • Packed about 5 more boxes worth of our belongings (e.g. books, clothing, kitchen goods)

    Music

    • Recorded about 5 different melodies and harmonies using the voice memo app on my iPhone, moving the recordings off my phone and sending them to my MacBook using AirDrop
    • Attended my (zoom) bi-weekly guitar lesson with Jared, the lessons focusing on three areas: song writing (creative aspect), jamming (connecting with other musicians, mainly my little brother), developing a deeper understanding of the guitar (mastery).

    Mental and Physical Health

    Graduate School

    • I’d estimate I put roughly 15 hours into graduate school this week, reading research papers, writing code for project 1 (i.e. writing a virtual CPU scheduler and memory coordinator) and of course watching the Udacity lectures.
    • For the development project, the majority of my time gets eaten up trying to grok the API documentation for libvirt. In second place would be debugging crashes in my code (which is why I always riddle my code with assert statements, a practice I picked up working at Amazon).
    • I really enjoyed watching and taking notes for this past week’s lectures. I’m taking the class at the perfect time in my career and in my graduate studies, after taking graduate operating systems and after taking high performance computing architecture. Both these courses prepared me well and provided me the foundation necessary to more meaningfully engage with the lectures. What I mean by this is that instead of passively watching and scribbling down notes, I tend to frequently click on the video to pause the stream and try to anticipate what the professor is about to say or try to answer the questions he raises. This active engagement helps the material stick better.

    Organization

    Brother label maker
    • Tossed out the cheap $25.00 label maker from Target and instead invested in a high quality Brother PTD600V label maker. Well worth the investment.
    • Culled my e-mail inbox, dropping the unread count from hundreds down to zero (will need to perform same activity this week)

    Work

    • Wrapped up my design for a new feature, getting sign off from the technical leadership team at work. The only open action item will be to benchmark the underlying Intel DPDK library against IPv6 lookups (which I think I already have data for)
  • Remembering September 11 & Daily Review (day ending on 09/11/2020)

    Remembering September 11 & Daily Review (day ending on 09/11/2020)

    Yesterday was September 11. On this day, every year, Americans grieve and we are all reminded of the unforgettable day back in 2001 when the New York twin towers collapsed to the ground after being struck by the hijacked planes.

    I sure remember the day.

    I was about 12 years old at the time and on that weekday morning — like every other morning — I was sitting cross-legged in front of our living room television, eating cereal and watching cartoons (“Recess”, the best cartoon ever) before walking to school as a sixth grader. While balancing a spoonful of cereal and milk into my mouth, the channel on the CRT television switched unexpectedly to live news, news that was streaming the planes nose diving into the New York twin towers, bringing the towers to their knees. As a child, I didn’t understand the implication of it all and I just remember furrowing my eyebrows and shrugging my shoulders, shutting off the television and heading to school.

    The days following September 11 were unforgettable: there was an uptick of both subtle and not-so-subtle racism against Muslims.

    Back then, my best friend’s name was Osama, and I recall an incident that still makes my blood boil 20 years later. He and I, along with 20 or so other innocent children, were packed in the classroom, all of us waiting for our substitute teacher to arrive (not sure why exactly our teacher was absent that day). The teacher for the day, a white male aged about 40-50 years old, was taking roll call, working his way down the list of students’ names on the clipboard resting in his hands.

    As he was running his finger down the list, he paused on Osama’s name, slowly lifting his gaze. He then spat out a flagrant racist comment, asking whether or not my 12 year old friend was a jihadist. We students were stunned; confusion rippled throughout the room. And poor Osama, his head down in shame.

    Being his best friend, I took it upon myself and marched out of the room, heading full speed towards the principal’s office. After arriving at his office, I explained the situation. What happened afterwards gets a bit fuzzy, but I do recall never seeing that substitute teacher again.

    This story reminds me of the importance of speaking up for others, something I wrestle with these days. Lately, I bite my tongue because, as an adult, I realize that it’s easy these days to offend people, and I’m constantly evaluating each unique situation, taking in the context and trying to determine whether or not me speaking up for someone is warranted. Eh, it’s a never ending learning process.

    Yesterday

    Writing

    Best parts of my day

    • Singing and playing guitar during lunch break with the entire wolf pack. Sang my acoustic rendition of “Punching in a Dream”
    • Watching an episode of “The Boys” with Jess while eating dinner.  We both found the episode to be unnecessarily violent (no spoilers).

    Mental and Physical Health

    • Sprinted full speed up and down the hill for about 2 minutes, all while wearing a mask (not only to protect myself against COVID-19, but to prevent breathing in the wildfire smoke blanketing the entire Pacific Northwest). Apparently, cotton masks do not block smoke particles, so I inhaled some amount of smoke anyway (I deserved my wife reprimanding me for running under these conditions)

    Graduate School

    • Finished the “balancing” aspect of the memory optimizer. My initial code was riddled with bugs, the program dropping the memory too fast and too much, causing the underlying guest operating systems to (presumably) swap and crash

    Work

    • Back to back meetings. Mostly administrative, a few with some value.
    • This was one of the rare (very rare) days where I ate lunch at my desk. I don’t want to make that a habit: I cherish lunch time, the one hour of the day where my wife and I get to (sort of) peacefully eat lunch with our daughter.
    • During sprint planning, our scrum master (a colleague on my team) was driving the conversation and asking during our retrospective how we could “improve” our velocity. I shared with her and everyone else that although I am always up for improving our performance and striving to deliver, I wanted to call out the big elephant in the room: we’re in the midst of a pandemic. Things are not okay. Things are not normal
    • Cherry-picked some of my git commits into other feature branches that our team will be deploying over the next few weeks

    Today

    Writing

    • Publish notes on CPU and device virtualization
    • Publish daily review (this one that I’m writing right here)
    • Publish the terminal output from the memory coordinator test cases and their expected outcomes

    Music

    • Upload all the little melodies and harmonies captured on my iPhone.

    Organization

    • Review OmniFocus’s forecast tab to get a sense of what meetings I have this week and any items that are due soon

    Mental and Physical Health

    Poor air quality due to wildfire smoke
    • Stay inside as much as possible and limit outdoor activity (will only walk the dogs) due to wildfire smoke. In lieu of outside exercise, I’ll throw down some push ups, some pull ups (with the door pull up bar) and some light hamstring stretches

    Graduate School

    • Finish the “optimizer” portion of my memory coordinator
    • Finish the lecture on “Synchronization” (fascinating and challenging topic that reminds me of high performance computing architecture course, the concepts very similar)

    Family

    • Pack up the house into our cardboard moving boxes
    • Bathe Elliott for our night time routine
    • Sing and play guitar during lunch again (what a treat that was yesterday)
    • Sign the final real estate contract for the new house that we are buying in Renton
    • Attempt to sell some of my Amazon stock since we need the cash for our down payment on the house (not sure if stocks can be sold over the weekend but let’s try and find out)
    • Follow up with our landlord over text since they did not respond to the e-mail I sent regarding ending our lease, since we are moving
  • CPU and Device Virtualization (notes)

    CPU and Device Virtualization (notes)

    This post is a continuation of virtualization. In the previous post, I talked about memory virtualization. This post instead discusses CPU and device virtualization.

    Ultimately, as system designers, one of our chief aims is to provide the illusion to the underlying guest operating systems that they each individually own the underlying hardware: the CPU and IO devices. But how will the guest OS transfer control back and forth to and from the hypervisor?

    In a fully virtualized environment (i.e. the guest OS has no awareness that it is virtualized), the guest operating system communicates with the hypervisor in the standard way: through traps. Unfortunately, this leaves little room for innovation, which is why a para virtualized environment (i.e. the guest OS contains specific code to communicate with the hypervisor) optimizes the communication by sharing a ring buffer (i.e. a producer and consumer model)

    Through this shared buffer, the guest VM and hypervisor send messages to one another, the messages containing pointers (not raw data) to buffers allocated by either the guest operating system or by the hypervisor. This efficient method of communication avoids the need to copy buffers between the two entities.

    Introduction

    Summary

    With CPU virtualization, there are two main focuses: giving the illusion to the guest that it owns the CPU, and having the hypervisor field events (but what the hell is a discontinuity?)

    CPU Virtualization

    Summary

    Two techniques for CPU scheduling: proportional share (commensurate with use) and fair share (everyone gets the same slice of the pie).

    Device Virtualization Intro

    Summary

    Want to give guests the illusion that they own the IO devices

    Device Virtualization

    Device Virtualization: full virtualization vs Para

    Summary

    In a fully virtualized guest OS, there’s little room to innovate for device IO since the mechanism is really just trap and emulate (by the hypervisor). In contrast, para virtualization offers lots of opportunities for optimization, including utilizing shared buffers. However, we need to figure out how to transfer control back and forth between the guest OS and the hypervisor

    Control Transfer

    Summary

    In a full virtualization environment, control transfers from guest to hypervisor through a trap, whereas in para virtualization, the guest OS sends hypercalls. However, in both full and para virtualization, the hypervisor communicates with the guest OS via software interrupts.

    Data Transfer

    Summary

    Data Transfer in para virtualization uses a ring buffer for producer/consumer model

    Data transfer in a para virtualization system is fascinating: the guest VM and the hypervisor share an IO circular buffer. The guest VM produces data and the hypervisor consumes it, the guest VM maintaining a pointer to the position of the data. And in the response path, the hypervisor maintains a pointer, on the same IO circular buffer, that tracks the responses (with the guest VM tracking its own private pointer)

    Control and Data Transfer in Action

    Summary

    Control and Data Transfer in Action – ring buffers for transmitting and receiving packets

    Pretty amazing how the hypervisor and guest VM work together to avoid copying data between address spaces. To this end, the guest VM transmits data by maintaining a circular buffer and passing pointers to the hypervisor, so no data is copied (just pointers). The same approach applies in the opposite direction, when the hypervisor receives a packet and then forwards it to the guest.

    Disk IO Virtualization

    Summary

    Disk Virtualization – ring buffer

    Pretty much identical to data transfer in network IO virtualization, except that Xen offers a reorder barrier to support higher level semantics like write ahead logging (a topic that seems really interesting to me: I’d be curious how to build a database from the ground up)

    Measuring Time

    Summary

    The fundamental tenet is utility computing, so we need a way to accurately bill users

    Xen and guests

    Summary

    Xen and guest – paravirtualization

    Xen offers para virtualization and VMWare offers full virtualization (i.e. the guest VM has no clue). Regardless, virtualization focuses on protection and flexibility, trading off performance

  • Memory coordinator test cases (and expected outcome)

    Memory coordinator test cases (and expected outcome)

    I’m getting ready to begin developing a memory coordinator for project 1 but before I write a single line of (C) code, I want to run the provided test cases and read the output of the tests so that I get a better grip on the memory coordinator’s actual objective. I’ll refer back to these test cases throughout the development process to gauge whether I’m off trail or heading in the right direction.

    Based on the below test cases, their output, and their expected outcome, I think I should target balancing the unused memory amount. That said, I now have a new set of questions beyond the ones I first jotted down prior to starting the project:

    • What specific function calls do I need to make to increase/decrease memory?
    • Will I need to directly inflate/deflate the balloon driver?
    • Does the coordinator need to inflate/deflate the balloon driver across every guest operating system (i.e. domain) or just ones that are underutilized?
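    Whatever the libvirt mechanics turn out to be, the policy itself can be sketched independently. Below is a hypothetical version of the "balance unused memory" idea: the names and thresholds (`LOW_MB`, `HIGH_MB`, `STEP_MB`) are my own guesses, not anything from the assignment, and real code would apply the returned delta via libvirt calls rather than just computing it.

    ```c
    #include <assert.h>

    /* Hypothetical coordinator policy: guests whose unused memory falls below
     * LOW_MB get more actual memory; guests sitting above HIGH_MB give some
     * back; guests in between are left alone. All thresholds are assumptions. */
    #define LOW_MB   150.0
    #define HIGH_MB  300.0
    #define STEP_MB   50.0

    /* Returns the MB adjustment to a guest's actual memory:
     * positive = grow its allocation, negative = reclaim from it. */
    static double adjust(double unused_mb) {
        if (unused_mb < LOW_MB)  return +STEP_MB;  /* starved: give memory */
        if (unused_mb > HIGH_MB) return -STEP_MB;  /* flush: take memory  */
        return 0.0;                                /* in band: leave alone */
    }
    ```

    Run against the Test 1 output above, this rule would keep feeding aos_vm1 as its unused memory drains toward 150 MB while slowly reclaiming from the idle VMs sitting above 300 MB, which matches the expected outcome.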

    Test 1

    The first stage

    1. The first virtual machine consumes memory gradually, while others stay inactive.
    2. All virtual machines start from 512MB.
    3. Expected outcome: The first virtual machine gains more and more memory, and others give out some.

    The second stage

    1. The first virtual machine starts to free memory gradually, while others stay inactive.
    2. Expected outcome: The first virtual machine gives memory back to the host, and depending on policy, others may or may not gain memory.

    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [257.21484375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.125]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.36328125]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [324.55859375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [246.1953125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12890625]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.12109375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [235.17578125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12890625]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [224.15625]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12890625]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [212.7734375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [201.75390625]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [190.61328125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [179.3515625]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [168.33203125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [157.3125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]

    Test 2

    The first stage

    1. All virtual machines consume memory gradually.
    2. All virtual machines start from 512MB
    3. Expected outcome: all virtual machines gain more and more memory. At the end each virtual machine should have similar memory balloon size.

    The second stage

    1. All virtual machines free memory gradually.
    2. Expected outcome: all virtual machines give memory resources to host.

    -------------------------------------------------- 
    Memory (VM: aos_vm1) Actual [512.0], Unused: [71.7578125] 
    Memory (VM: aos_vm4) Actual [512.0], Unused: [76.765625] 
    Memory (VM: aos_vm2) Actual [512.0], Unused: [73.5625] 
    Memory (VM: aos_vm3) Actual [512.0], Unused: [74.09765625]
    -------------------------------------------------- 
    Memory (VM: aos_vm1) Actual [512.0], Unused: [76.50390625] 
    Memory (VM: aos_vm4) Actual [512.0], Unused: [65.98828125] 
    Memory (VM: aos_vm2) Actual [512.0], Unused: [62.69921875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [63.078125]
    -------------------------------------------------- 
    Memory (VM: aos_vm1) Actual [512.0], Unused: [65.484375] 
    Memory (VM: aos_vm4) Actual [512.0], Unused: [66.4453125] 
    Memory (VM: aos_vm2) Actual [512.0], Unused: [69.015625] 
    Memory (VM: aos_vm3) Actual [512.0], Unused: [66.5390625]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [65.3984375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [63.19921875]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [68.2109375]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [66.71875]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [347.85546875]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [345.90234375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [347.515625]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [347.25390625]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [347.85546875]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [345.90234375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [347.515625]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [347.25390625]

    Test 3

    A comprehensive test

    1. All virtual machines start from 512MB.
    2. All consume memory.
    3. A, B start freeing memory, while at the same time (C, D) continue consuming memory.
    4. Expected outcome: memory resource moves from A, B to C, D.

    Memory (VM: aos_vm1) Actual [512.0], Unused: [72.13671875]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [78.59375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [72.21484375]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [74.3125]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [77.609375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [67.6953125]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [78.4140625]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [63.29296875]
  • Daily Review – Day ending in 2020/09/10

    Daily Review – Day ending in 2020/09/10

    We often talk about work life balance, separating the two major parts of our lives. On some level, I agree with the philosophy, believing that work is work and life is life. But at least for me, what happens at work bleeds into my personal life, and vice versa.

    When I have a shitty day at work, I feel despondent and mope around after hours and that impacts the mood for my wife and my daughter and my two dogs. In contrast, when I end the day on a good note, I tend to radiate with happiness. For example, yesterday I was really pleased with the work that I produced (i.e. finished delivering a design for a new feature) and as a result, when I closed the lid of my laptop at 05:00 pm sharp (this is the time Elliott and I bathe together), my energy levels were high and I was able to stay completely mentally and emotionally — not just physically — present with my daughter during our night time routine.

    Yesterday

    Writing

    Best parts of my day

    • Listening to my veterinarian deliver news (over the phone) that both of the puppies (they’ll always be puppies in my heart) are healthy. The little bump in Metric’s ear was just a benign cyst that they simply popped.
    • Eating a kick ass lunch: Jess whipped together an aesthetically pleasing (and equally tasty) lunch consisting of roasted cauliflower drizzled with pesto sauce and glazed pasta. All of this topped off with a blueberry pancake for a (lunch) dessert.
    • Pushing my design document over the finish line. No major objections from the technical leadership team with moving forwards with the project that will need to launch by Q1 2021
    • Watching an episode of “The Community” with Jess while eating dinner. Jess didn’t have to cut the dinner short since Elliott didn’t wake up, so I consider that a little victory

    Mental and Physical Health

    • Yesterday was extremely busy at work, so I’m not going to knock myself for this, but looking back, I should’ve taken just 5 minutes out of the day to run up and down the hill to get blood flowing through my body, which (surprise surprise) helps with mental health.

    Graduate School

    • Skimmed the first two research papers on memory virtualization published by VMWare
    • Fixed a silly segfault in my memory_coordinator program. I had dynamically allocated memory on the heap (i.e. with malloc) for my array of arrays but had miscalculated the number of bytes, passing the wrong type to sizeof.
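    For my own future reference, here's the bug pattern in miniature (hypothetical reconstruction, not my actual project code). Allocating the row-pointer array with `sizeof(int)` instead of `sizeof(int *)` undersizes the buffer on any platform where pointers are wider than int (8 vs 4 bytes on x86-64), and later writes run off the end.

    ```c
    #include <assert.h>
    #include <stdlib.h>

    /* Allocate a rows x cols matrix as an array of row pointers. */
    static int **alloc_matrix(size_t rows, size_t cols) {
        /* WRONG: malloc(rows * sizeof(int))  -- too small to hold pointers */
        int **m = malloc(rows * sizeof(int *));  /* RIGHT: pointer-sized slots */
        if (m == NULL) return NULL;
        for (size_t r = 0; r < rows; r++) {
            m[r] = malloc(cols * sizeof(int));   /* here sizeof(int) IS correct */
            if (m[r] == NULL) return NULL;       /* (leaks on failure; fine for a sketch) */
        }
        return m;
    }
    ```

    The idiom `malloc(rows * sizeof *m)` sidesteps the whole class of bug, since `sizeof *m` tracks the pointed-to type automatically.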

    Work

    • Edited my design document, incorporating numbers and figures from AWS Networking’s document that I had asked them to put together
    • Presented my design document (for the second round) to the technical leadership team within my organization (i.e. Blackfoot)

    Family

    • Moved us an inch closer towards finalizing the home loan, collecting documents and proof (e.g. lease contract, monthly mortgages, property tax) to provide for the underwriter
    • Took both puppies to the Vet to get checked up. Nothing major to report back: thank goodness.

    Today

    Song of the day

    Discovered Hamzaa while listening to Spotify, the song showing up in my “Discover Weekly” playlist. I’ll definitely cover this song with my acoustic guitar.

    Writing

    • Publish notes on memory virtualization
    • Publish daily review (this one that I’m writing right here)
    • Publish the terminal output from the memory coordinator test cases and their expected outcomes

    Organization

    • Review OmniFocus’s forecast tab to get a sense of what meetings I have this week and any items that are due soon

    Mental and Physical Health

    • Throw on shoes and run up and down our hill for 5 minutes (seriously better than nothing)

    Graduate School

    Work

    • Meetings and meetings and meetings (not a really fun day) of sprint planning, Asians@ planning and development meetings, interview debrief

    Family

    • Continuing packing the house with Jess. We’re a little more than 2 weeks away until we need to finish packing up the entire house and loading up the U-Haul
    • Today’s aim is to fill up 2 more packing boxes (and label them using my new Brother label maker) and tag them with a unique ID (I’m trying out a new system that I call “The Global Index” … will report back on this in a separate post)
  • Memory Virtualization (notes)

    Memory Virtualization (notes)

    The operating system maintains a per process data structure called a page table, creating a protection domain and hardware address space: another virtualization technique. This page table maps the virtual address space to physical frame number (PFN).

    The same virtualization technique is adopted by hypervisors (e.g. VMWare). They too have the responsibility of mapping the guest operating system’s “physical address” space (from the perspective of the guest VM) to the underlying “machine page numbers”.  These physical address to machine page number mappings are maintained by the guest operating system in a para virtualized system, and maintained by the hypervisor in a fully virtualized environment.

    In both fully virtualized and para virtualized environments, memory mapping is done efficiently. In the former, the guest VM traps into the hypervisor, which performs the mapping translation. But in a para virtualized environment, the hypervisor leans on the guest virtual machine much more, via a balloon driver. In a nutshell, the balloon driver pushes the responsibility of handling memory pressure from the hypervisor to the guest machine. The balloon driver, installed on the guest operating system, inflates when the hypervisor wants the guest OS to free memory. The beauty of this approach is that if the guest OS is not memory constrained, no swapping occurs: the guest VM just removes entries from its free list. When the hypervisor wants to increase the memory for a VM, it signals (through the same private communication channel) the balloon driver to deflate, prompting the guest OS to page in and increase the size of its memory footprint.
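    The inflate/deflate behavior above can be modeled in a few lines of C. This is my own toy simplification (not VMware code): inflating the balloon consumes free-list pages first, and only a shortfall forces the guest to swap.

    ```c
    #include <assert.h>

    /* Toy balloon model: free_mb is the guest's free list, swapped_mb counts
     * pages the guest was forced to page out, balloon_mb is pinned memory. */
    struct guest {
        double free_mb;
        double swapped_mb;
        double balloon_mb;
    };

    /* Inflate: eat free pages first; only the shortfall causes swapping. */
    static void inflate(struct guest *g, double mb) {
        double from_free = mb < g->free_mb ? mb : g->free_mb;
        g->free_mb    -= from_free;
        g->swapped_mb += mb - from_free;  /* guest picks its own victims */
        g->balloon_mb += mb;
    }

    /* Deflate: return pinned pages to the guest's free list. */
    static void deflate(struct guest *g, double mb) {
        if (mb > g->balloon_mb) mb = g->balloon_mb;
        g->balloon_mb -= mb;
        g->free_mb    += mb;  /* guest pages back in on demand */
    }
    ```

    The model captures the key property: a guest with slack free memory absorbs inflation without any disk traffic, and swapping decisions stay with the guest OS, which knows which pages are cold.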

    Finally, the hypervisor employs another memory technique known as oblivious page sharing. With this technique, a background process in the hypervisor runs a hashing algorithm, computing a hash of the contents of each page. If two or more virtual machines contain an identical page, the hypervisor keeps a single “copy-on-write” page, reducing the memory footprint.
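    The hash-then-verify structure of page sharing is easy to sketch. This is a simplification with toy page sizes and a generic FNV-1a hash (I don't know which hash ESX actually uses): the point is that a hash match is only a hint, because collisions are possible, so a full byte-for-byte comparison gates the actual copy-on-write sharing.

    ```c
    #include <assert.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 64  /* tiny pages for the sketch; real pages are 4 KiB */

    /* FNV-1a over the page contents: the cheap "hint" hash. */
    static uint64_t page_hash(const uint8_t *page) {
        uint64_t h = 1469598103934665603ull;
        for (int i = 0; i < PAGE_SIZE; i++) {
            h ^= page[i];
            h *= 1099511628211ull;
        }
        return h;
    }

    /* A hash match is only a hint (collisions happen), so a full comparison
     * is the authoritative check before marking the pages copy-on-write. */
    static int can_share(const uint8_t *a, const uint8_t *b) {
        if (page_hash(a) != page_hash(b)) return 0;  /* fast reject */
        return memcmp(a, b, PAGE_SIZE) == 0;         /* authoritative */
    }
    ```

    In the common case the hashes differ and the expensive `memcmp` never runs, which is what makes scanning every page tractable in the background.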

    Memory Hierarchy

    Memory Hierarchy

    Summary

    The main thorny issue of virtual memory is translating virtual addresses to physical addresses; caches are physically tagged, so they’re not a major source of issues.

    Memory Subsystem Recall

    Summary

    A process has its own protection domain and hardware address space. This separation is made possible thanks to virtualization. To support memory virtualization, we maintain a per process data structure called a page table, which maps virtual page numbers to physical frame numbers (remember: page table entries contain the PFN)

    Memory Management and Hypervisor

    Summary

    The hypervisor has no insight into the page tables for the processes running on the instances (see Figure). That is, Windows and Linux serve as the boundary, the protection domain.

    Memory Manager Zoomed Out

    Summary

    Although the virtual machines’ operating systems think they are allocated contiguous memory, they are not: the hypervisor must partition the physical address space among multiple instances, and not all of the operating system instances can start at the same physical memory address (i.e. 0x00).

    Zooming Back in

    Summary

    Hypervisor maintains a shadow page table that maps the physical page numbers (PPN) to the underlying hardware machine page numbers (MPN)
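    The two-level structure can be sketched directly: compose the guest's VPN -> PPN table with the hypervisor's PPN -> MPN table, caching the result so later lookups skip both. This is a toy with made-up mappings, not real hypervisor code.

    ```c
    #include <assert.h>

    /* Toy two-level translation: the guest page table maps VPN -> PPN, the
     * hypervisor's table maps PPN -> MPN, and the shadow page table caches
     * the composed VPN -> MPN so hardware never sees the intermediate PPN. */
    #define NPAGES  4
    #define INVALID -1

    static int guest_pt[NPAGES]  = { 2, 0, 3, 1 };  /* VPN -> PPN (made up) */
    static int hyper_pt[NPAGES]  = { 7, 5, 6, 4 };  /* PPN -> MPN (made up) */
    static int shadow_pt[NPAGES] = { INVALID, INVALID, INVALID, INVALID };

    /* On a shadow miss, compose the two tables and fill the shadow entry. */
    static int translate(int vpn) {
        if (shadow_pt[vpn] == INVALID)
            shadow_pt[vpn] = hyper_pt[guest_pt[vpn]];
        return shadow_pt[vpn];  /* later lookups hit the shadow directly */
    }
    ```

    The cost of the double lookup is paid once per page; after that, the shadow entry behaves like an ordinary page table entry that hardware can walk.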

    Who keeps the PPN MPN Mapping

    Summary

    For full virtualization, the PPN->MPN mapping lives in the hypervisor. But in para virtualization, it might make sense for it to live in the guest operating system.

    Shadow page table

    Zooming back in: shadow page table

    Summary

    Similar to a normal operating system, the hypervisor contains the address of the page table (stored in some register) that lives in memory. Not much different than a normal OS, I would say

    Efficient Mapping (Full Virtualization)

    Summary

    In a fully virtualized guest operating system, the hypervisor will trap calls and perform the translation from virtual page number (VPN) to machine page number (MPN), bypassing the guest OS entirely

    Efficient Mapping (Para virtualization)

    Summary

    Dynamically Increasing Memory

    Summary

    What can we do when there’s little to no physical/machine memory left and the guest OS needs more? Should we steal from one guest VM for another? Or can we somehow get a guest VM to voluntarily free up some of its pages?

    Ballooning

    VMWare’s technique: Ballooning

    Summary

    The hypervisor installs a driver in the guest operating system, the driver serving as a private channel that only the hypervisor can access. The hypervisor inflates the balloon driver, signaling the guest OS to page out to disk. It can also deflate the balloon, signaling the guest OS to page in.

    Sharing memory across virtual machines

    Summary

    One way to achieve memory sharing is to have the guest operating system cooperate with the hypervisor. The guest virtual machine signals to the hypervisor that a page residing in the guest OS should be marked as copy-on-write, allowing other guest virtual machines to share the same page. But when the page is written to, the hypervisor must copy that page. What are the trade-offs?

    VM Oblivious Page Sharing

    VMWare and Oblivious Sharing

    Summary

    VMWare ESX maintains a data structure that maps content hashes to pages, allowing the hypervisor to get a “hint” of whether or not a page can be shared. If the content hash matches, then the hypervisor will perform a full content comparison (more on that in the next slide)

    Successful Match

    Summary

    The hypervisor runs this scan as a background process, since it is a fairly intensive operation, marking matching guest VM pages as “copy-on-write”; it only runs when the hypervisor is lightly loaded

    Memory Allocation Policies

    Summary

    So far, the discussion has focused on mechanisms, not policies. Given that memory is such a precious resource, a fair policy might be the dynamic idle-adjusted shares approach: a certain percentage (50% in the case of ESX) of a VM’s memory will be taxed away if it is idle, a tax if you will.
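    One simplified way to read the tax (my own reading; ESX's actual shares-based formula is more involved): charge each VM for its active memory in full and for its idle memory at the tax rate, then reclaim first from whoever carries the largest idle charge.

    ```c
    #include <assert.h>

    /* Simplified idle-memory tax: a VM's "charge" counts active memory in
     * full and idle memory at the tax rate. A VM hoarding idle memory thus
     * carries a higher charge and becomes the first reclamation target.
     * The 0.5 rate mirrors the 50% figure from the lecture. */
    static double charged_mb(double active_mb, double idle_mb, double tax) {
        return active_mb + tax * idle_mb;
    }
    ```

    For example, a VM with 100 MB active and 200 MB idle at a 50% tax is charged as if it were using 200 MB, so hoarding idle memory is no longer free.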

  • Daily Review – Day ending in 2020/09/09

    Daily Review – Day ending in 2020/09/09

    After freeing a spider into the front yard, I noticed that the coffee mug (in which I had trapped the spider) was decorated with a beautiful web that the spider must’ve spun overnight.

    Almost every other day, I spot a spider dancing across the white walls of my bedroom. Instead of squishing them to death like I did when I was a little boy, I catch them and release them into the front or back yard, trapping them underneath a cup and sliding an 8×11 piece of paper underneath.

    Yesterday

    Writing

    Best part of my day

    Received such a thoughtful and warm e-mail message from one of my recent subscribers, a classmate of mine. We had connected over LinkedIn; I originally reached out to him, thanking him for subscribing and asking what drew him to my blog. In his response, he sent me a touching message saying how he finds it inspirational that I’m able to share my stories on my blog and be vulnerable.

    Mental Health

    BLM (black lives matter) Poster hanging up on window

     

    My therapist and I unexpectedly spent the entire 50-minute session untangling the tension between him and me, tension that developed when I brought up that the most recent invoice he handed me included a charge for a session that I had cancelled.

    I had originally wanted to kick off the session with some good news with my therapist, share some moments that had brightened my days. But as mentioned above, this conversation got derailed because of the incorrect invoice.

    Cancelling my sessions happens regularly. Unfortunately, I cancel every 5 weeks due to the cadence in which I am “on call” for work. History shows that, on my current team at work, I will get paged: sometimes at 12:00 AM, sometimes at 3:00 AM, sometimes at 10:00 PM. You just never know. As a result, I give advance notice and cancel my therapy sessions to avoid getting dragged out midway through to handle an operational issue.

    Under normal circumstances when cancelling a session, I take on the responsibility of making up for the session and finding another slot. But given that I’m a new father and we’re in the midst of the pandemic, I had asked him about 6 months ago if, instead of trying to find another day to come into therapy (which is expensive in terms of time and just plain infeasible with my schedule) to make up a missed session, he could just not charge me and we’d simply skip one session once every 5 weeks.

    Long story short, I felt blindsided that he would suddenly charge me when for the past 6 months he had not been charging me. For the remainder of the session, we basically worked through the conflict, talking about why he did what he did, how that made me feel, how it impacts our relationship and trust (short and long term), and so on.

    Ironically, the emotional skills that I’ve developed for dealing with confrontation were learned right there in his office. That’s been a big part of my therapy over the last four years, learning how to take a stand and deal with conflict facing me, instead of always defaulting to my gut reaction: fleeing the situation. Of course, there are some situations in which some people simply cross boundaries and the conversation just needs to end. However, in most situations, conflict is not always a bad thing and in some ways, can nourish and grow a relationship.

    Graduate School

    Began working on the memory coordinator (part 2 of project 1), the assignment due in roughly 2 weeks (on September 21).

    Work

    Met for an hour over (Amazon) Chime with a colleague working in AWS Networking, an organization that we partner with — well they are sort of our customer — to deliver networking features for EC2 networking.

    Participated in a ticket bash, all of us focusing on closing tickets — some human cut and some auto-cut by monitoring system — that crept into our queue over the last couple months

    Today

    Writing

    Publish notes taken from watching yesterday’s lectures (or lectures from a few days ago)

    Organization

    Review OmniFocus’s forecast tab to get a sense of what meetings I have this week and any items that are due soon

    Graduate School

    Work

    • Update Quip document in preparation for design review meeting (next item below)
    • Lead the second design meeting for a document that I put together for the new feature we’ll be launching in Q1 2021.
    • Meet with my manager for our weekly 1:1

    Family

    • Take Metric and Mushroom both to the veterinarian at 11:15 today. Metric’s pointy German Shepherd (right) ear has sort of a little hole in it, as if some sort of insect or parasite carved out a pimple-sized home. Mushroom developed some sort of allergic reaction on a small patch of her back, in the location where I had applied the flea medication (the same one I’ve been applying every month for the last couple of years).
    • Pour some of the smoothie that I make every morning into a sippy cup for Elliott and into a big-girl cup for Jess
    • Finish collecting paperwork (i.e. statements for my rental property, etc) to nudge the underwriting process along