According to the OMSCentral reviews for the advanced operating systems course, the midterm exams are nearly identical to the previous semester's exams, and former students strongly suggest rote memorization as the primary study method. In my opinion, these types of tests do not really serve as a great litmus test for evaluating a student's understanding. Nonetheless, I'll prepare for the exam by going over the material in three passes using spaced repetition and active recall and testing.
Pass One – Guess
Step through each question from the Spring 2020 midterm, attempting to answer them myself without peeking at the answers. This helps me gauge my understanding and honestly evaluate any gaps I need to fill.
Pass Two – Comparison against answer key
After attempting to answer the questions on my own in pass one, I will compare my answers to the solution guide and reflect on what I got right and what I got wrong.
Pass Three – Spaced repetition
Anki screenshot – question 1a from the midterm (advanced operating systems, Spring 2020)
After the first two passes, I’ll copy the questions (and their solutions) into my digital flash cards (i.e. Anki) and then start ramping up using spaced repetition.
References
Spaced repetition and testing are scientifically proven to help knowledge acquisition: https://artofmemory.com/wiki/Spaced_Repetition_and_Recall
If you are an Online Master of Science in Computer Science (OMSCS) student at Georgia Tech enrolled in the advanced operating systems (AOS) course, you might want to check out the notes I've taken for the lectures by clicking on the advanced operating systems category on my blog. For each video lecture, I've written down a summary and the key takeaway. These notes may help you out by giving you a quick overview of the topics or by helping you decide which sections to revisit or skip.
At the time of this writing, the write-ups start from the refresher course (e.g. memory systems) and run all the way up until the end of parallel systems (i.e. the last lectures included as part of the midterm).
Being fully present as a parent all the time seems like an impossible feat. Although I'd like to think that I'm always present with my daughter, I do find myself sometimes mentally checking out. For example, yesterday during lunch, Jess reminded me that I should be in the here and now instead of scrolling on my iPhone, searching for some funny video (found on Reddit) that I wanted her to watch (I did end up finding it, and it's a video of a failed attempt at shuffling).
On a separate note, I've been really enjoying doodling. If that's something you are interested in, I'd highly recommend checking out Cathy Wu's courses on Skillshare. So far, I've watched these short lessons (between 10-30 minutes long) when winding down from a long day of parenting, work, and studying for graduate school. They combine helpful exercises, and I must say that they are helping me unlock my creativity and reminding me that I too can draw:
Published notes on remote procedure call (from the perspective of the operating system)
Parenting and family matters
Jogged to Maple Leaf Park (maybe a mile away) while pushing Elliott in her stroller, and when the two of us arrived, I led her to the playground and swung her on the swings. After maybe two minutes of swinging back and forth, I carried her over to the kitty slide and held her underneath her armpits as she slid down the slide for the first time. She loved it and had a blast. But what she really enjoyed the most was sitting cross-legged on the wood chips and watching all the other little kids running around. Now, I normally don't watch Elliott during the day, but Jess had an important meeting at 4:00 PM, so I figured it would be helpful if I watched Elliott so Jess could focus her entire attention on that video call with no interruptions and without feeling bad about propping Elliott in front of the television for an hour.
What I am grateful for
Good health. Something so simple is so easy to forget. That is, until we are sick. Although I've packed on a little of that COVID weight, some extra flub sagging around my belly, I'm still grateful that overall there's nothing majorly concerning with my health. This is a good reminder to continue eating a plant-based diet and maybe cut down on the Oreo cookies (yes, they really are vegan).
What I learned
To build high performance parallel systems, we want to limit sharing of global data structures. By reducing sharing, we limit locking, an expensive operation.
Heavy use of the typedef keyword with enums creates cleaner C code.
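As a quick illustration of the typedef-plus-enum point above, here's a minimal sketch (with made-up scheduler names, not code from work):

```c
#include <stdio.h>
#include <string.h>

/* Without the typedef, every declaration would repeat the `enum`
 * keyword; with it, the enum reads like a first-class type. */
typedef enum {
    SCHED_FCFS,
    SCHED_FIXED_PROCESSOR,
    SCHED_LAST_PROCESSOR,
    SCHED_MIN_INTERVENING
} sched_policy_t;

/* Function signatures stay short and self-documenting. */
static const char *policy_name(sched_policy_t policy)
{
    switch (policy) {
    case SCHED_FCFS:            return "first come first serve";
    case SCHED_FIXED_PROCESSOR: return "fixed processor";
    case SCHED_LAST_PROCESSOR:  return "last processor";
    case SCHED_MIN_INTERVENING: return "minimum intervening";
    }
    return "unknown";
}
```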
Work
Built a prototype for a new feature that I'm delivering. The next step for me is to benchmark the solution to ensure that the underlying algorithm scales.
Thoughts
Just under two years ago, I was not writing C code (neither in my personal leisure time nor in my professional life), and now I'm loving the language, using it to build and prototype features for networking devices at work. Not only that, but developing the skill makes taking advanced operating systems during graduate school so much easier. So the two (academia and industry) feed into one another, a loop of learning and improvement (I like the way that sounds).
The key takeaway for scheduling is that, as OS designers, we want to follow this mantra: "keep the caches warm". Following this principle will ensure that the scheduler performs well.
There are many different scheduling algorithms, including first come first serve (FCFS, which focuses on fairness), fixed processor (the thread runs on the same processor every time), last processor scheduling (a processor selects the thread that last ran on it, defaulting to choosing any thread), and minimum intervening (which checks what other threads ran in between, focusing on cache affinity). One modification to minimum intervening is minimum intervening with a queue (since threads sitting in the queue may pollute the cache as well, we'll want to choose the minimum between the two).
One major point about the different scheduling algorithms is that there's no single correct answer: we need to look at trends and at the overhead. The latter option (i.e. minimum intervening with queues) seems like the best solution, but what about hidden costs like keeping track of the internal data structures?
Regardless of which of the above policies is chosen, the OS designer must exercise caution during implementation. Although designing the policy with a single global queue may work for a small system with a handful of processors or threads, imagine a system with dozens of processors and hundreds of threads: what then? Perhaps build affinity-based local queues instead, each queue with its own policy (nice for flexibility too).
Finally, for performance, think about how to avoid polluting the cache. Ensure that the combined memory footprint of the scheduled threads (whether frugal threads, hungry threads, or a combination of the two) does not exceed the L2 cache. Determining which threads are frugal or hungry requires the operating system to profile the processes, which adds overhead to the OS. So we'll want to minimize profiling.
Scheduling First Principles
Summary
The mantra is always the same: "keep the caches warm". That being said, how do we (as OS designers), when designing our schedulers, choose which thread or process should run next?
Quiz: Scheduler
Summary
How should the scheduler choose the next thread? All of the answers are suitable. The remainder of the lecture will focus on the "thread whose memory contents are in the CPU cache". So … what sort of algorithm can we come up with to determine whether a processor's cache still holds a thread's contents? I can imagine a few naive solutions.
Memory Hierarchy Refresher
Memory Hierarchy Refresher – L1 cache costs 1-2 cycles, L2 cache about 10 cycles, and memory about 100 cycles
Summary
Going from cache to memory is a heavy price to pay, around two orders of magnitude: L1 cache takes 1-2 cycles, L2 around 10 cycles, and memory around 100 cycles.
Cache affinity scheduling
Summary
Ideally, a thread gets rescheduled on the same processor it ran on before. However, this might not be possible due to intervening threads polluting the cache.
Scheduling Policies
Summary
Different types of scheduling policies include first come first serve (focuses on fairness, not affinity), fixed processor scheduling (every thread runs on the same processor every time), last processor scheduling (a processor picks the thread that last ran on it, falling through to picking any thread), and minimum intervening (the most sophisticated, but it requires state tracking).
Minimum Intervening Policy
Summary
For a given thread (say Ti) on a given processor, the affinity is the number of other threads that ran on that processor in between Ti's executions.
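A minimal sketch of how a scheduler might act on that definition (the bookkeeping array and names are my own invention, not the lecture's): track, per processor, how many other threads have run since Ti last ran there, and schedule Ti where that count is smallest.

```c
#define NUM_PROCS 4

/* intervening[p] = number of other threads that have run on
 * processor p since Ti last executed there. A lower count means a
 * warmer cache, i.e. higher affinity for Ti. */
static int best_processor_for(const int intervening[NUM_PROCS])
{
    int best = 0;
    for (int p = 1; p < NUM_PROCS; p++)
        if (intervening[p] < intervening[best])
            best = p;
    return best;
}
```

With counts of {5, 3, 1, 7}, processor 2 wins: only one thread has touched its cache since Ti last ran there.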
Minimum Intervening Plus Queue Policy
Summary
Minimum Intervening with queue – choose minimum between intervening number and queue size
An attribute of a good OS is to quickly make a decision and get out of the way, which is why we might want to employ the minimum intervening scheduler (with limits). Separately, we want our scheduler to take the queued threads into account, not just the affinity, since it's entirely possible for affinity to be low for a CPU while its queue contains other threads that will pollute the cache. So minimum intervening plus queue takes both the affinity and the queued threads into account.
Summarizing Scheduling Policies
Summary
Scheduling policies can be categorized into processor centric (which thread should a particular processor choose to maximize the chance that its cache contents remain relevant) and thread centric (what is the best decision for a particular thread with respect to its execution). Thread centric: the fixed processor and last processor scheduling policies. Processor centric: minimum intervening and minimum intervening plus queue.
Quiz: Scheduling Policy
Summary
With minimum intervening with queues, we select the processor that has the minimum value between the number of intervening threads and the number of items in the queue.
Implementation Issues
Implementation Issues – instead of having a single global queue, one queue per CPU
Summary
One way to implement scheduling is to use a global queue. But this may be problematic for systems with lots of processors. Another approach is to have one queue per processor, where each queue can have its own policy (e.g. first come first served, fixed processor, minimum intervening). Within the queue itself, a thread's position is determined by: base priority (assigned when the thread first launched), age (how long the thread has been around), and affinity.
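The lecture doesn't give an exact formula for combining those three factors, so here's a hedged sketch with an equal (and entirely made-up) weighting:

```c
/* A thread's position in its per-processor queue is determined by
 * base priority, age, and affinity. The equal weighting below is an
 * assumption for illustration, not the lecture's formula. */
typedef struct {
    int base_priority; /* assigned when the thread first launched */
    int age;           /* how long the thread has been around */
    int affinity;      /* how warm this processor's cache is for it */
} thread_t;

static int queue_priority(const thread_t *t)
{
    return t->base_priority + t->age + t->affinity;
}
```

A real scheduler would likely weight age more heavily over time so that low-priority threads cannot starve.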
Performance
Summary
How do we evaluate the scheduling policies? We can look at it from the system's point of view: throughput, that is, the number of threads that get executed per unit of time. From another viewpoint, user centric, we care about response time (end to end time) and variance (i.e. deviation of end to end times). With that in mind, first come first serve would perform poorly due to high variance, given that the policy does not distinguish one thread from another.
Performance continued
Performance of scheduler – throughput (system centric) vs response time + variance (user centric)
Summary
A minimum intervening policy may not be suitable for all workloads. Although the policy may work well for light to medium workloads, it may not perform very well when the system is under stress, because the caches will get polluted. In this case, fixed processor scheduling may be more performant. So there's no one size fits all. Also, we may want to introduce delays in scheduling, a technique that works in both synchronization and file systems (which I will learn later on in the course, apparently).
Cache Affinity and Multicore
Summary
Hardware can switch out threads seamlessly without the operating system's knowledge. But there's a partnership between hardware and the OS. The OS tries to ensure that the working set lives in either the L1 or the L2 cache; missing in these caches and going to main memory is expensive: again, around two orders of magnitude.
Cache Aware Scheduling
Cache aware scheduling – categorize threads into frugal vs hungry threads. Make sure the combined address space of the two groups does not exceed the size of the L2 cache
Summary
For cache aware scheduling, we categorize threads into two groups: cache hungry and cache frugal. Say we have 16 hardware threads; during profiling, we want to group them so that the combined working set size of the hungry and frugal threads we schedule stays under the L2 cache size. But the OS must be careful with profiling and monitoring — it should not heavily interfere. In short, the overhead needs to be minimal.
I’m (re)learning how to doodle! I’d like to incorporate art and sprinkle sketches into the notes that I scribble down while studying for graduate school. Also, I just miss drawing, an activity I used to do a lot as a little boy. But somewhere between then and becoming an adult, I lost my way, losing touch with that part of my artistic side.
Although it makes sense to just grab a pen (or pencil) and give myself permission to let the creativity flow out, my instinct was to do some research online and find a doodling course or the top doodling books. Now, I did end up signing up for a 30 minute online recorded course produced by local Seattle artist Cathy Wu, and I did purchase two e-books authored by Mike Rohde, whom I discovered via Sacha Chua's blog. However, at the end of the day, I did end up doodling (so did my wife) over dinner instead of watching television like we normally do during dinner.
Watched Elliott at 6:00 AM for about an hour so Jess (a tired mom) could squeeze in an extra hour of sleep. During this early morning, Elliott and I kept each other company while I packed up and unscrewed the wooden Ikea desk downstairs. I was originally using my drill to unscrew it, but Elliott let out a little pout that signaled to me that the drill was too loud. So I ended up switching to a Phillips screwdriver, which took me probably twice as long to disassemble the table, but who cares.
Witnessed poor Elliott throwing up mountains of avocado and blackberries, her poor body. She hasn’t thrown up that amount before (and hasn’t thrown up in general for the past 5 months).
Picked up two loaves of challah from The Grateful Bread for Jess since it's Rosh Hashanah weekend. I had called in to place a hold on challah, but none were available. I ended up driving to the bakery anyway, and lo and behold, they had just freshly baked three loaves!
Swung by Broadfork cafe and scooped us all up some vegan lunch: Egyptian lentil soup (probably my favorite soup ever).
Created a melody on the ukulele and sang lyrics to the book “You must never touch a porcupine”
Elliott, Jess, and I spent (at least) an hour laying out on the lawn of University Village (thank goodness for their fake grass, otherwise I wouldn't be able to sprawl out) after picking up dinner from Veggie Grill (as I type this out, I realize how often we are dining out, but whatever, we're in the process of packing and moving homes in two weeks)
Walked the dogs at Magnuson Park.
What I am grateful for
Metric being the best dog. Ever. Yesterday I took the dogs to the park with Elliott while Jess received in-home physical therapy. The park was packed (everyone distancing themselves and wearing masks, of course, apart from the 1-2 people who think they are above everyone else), and not too far from the fenced entrance was a group of children, about three or four of them, between the ages of 6-10. Metric rushed to their little circle and greeted them, her long nose brushing up against their elbows for a little hello. Then for the next 10 minutes, while holding Elliott, I watched as the kids tossed a light green softball for Metric to fetch, and watched Metric retrieve the ball and return it, slobbering it out at their feet. Over and over. The kids loved it. In fact, one of the kids jogged over to their mom and yanked on her jacket, asking if they could take the dog home. The little girl's mom whispered something about German Shepherds.
What I learned
Concept of cache affinity scheduling
Learned what hardware multithreading is. Basically, it allows the hardware to switch out the thread that's currently running on its CPU, avoiding the need to get the OS involved.
Where is my money going?
iPhone 11 Pro Portrait mode of Metric
(2) Watermarked PDFs on Sketch note taking by Mike Rohdes
The iPhone 11 Pro. I debated this purchase for over a week, feeling guilty about spending this amount of money — on a stinking phone. But given that I haven't upgraded my phone in almost 5 years, and given that I take lots of photos of Elliott and the two dogs, I figured it's a solid investment worth making.
App for drawing (pretty relaxing and beautiful)
Thoughts
Learning about CPU affinity reminds me of the scheduling algorithm that I came up with for a large project at work. We implemented a "sticky" algorithm, but really it was an affinity-based algorithm similar to what I'm learning in OS. It's cool to look back and say "I sort of got it right" without fully understanding or knowing the theoretical roots, relying on intuition instead.
A more sophisticated algorithm may not always be preferable. Maintaining more state (trading off memory) may not be ideal. It's a trade off, and it explains why we may want to limit the amount of metadata we store, especially when a system may run thousands of CPUs (although I've never worked with any of those systems before).
Remote procedure call (RPC) is a framework offered within operating systems (OS) for developing client/server systems; it promotes good software engineering practices and logical protection domains. But without careful consideration, RPC calls (unlike simple procedure calls) can be cost prohibitive in terms of the overhead incurred when marshaling data from client to server (and back).
Out of the box and with no optimization, an RPC costs four memory copy operations: client to kernel, kernel to server, server to kernel, and kernel to client. On the second copy operation, the kernel makes an upcall into the server stub, which unmarshals the data marshaled by the client. We OS designers need a way to reduce this overhead.
To this end, we will reduce the number of copies by using a shared buffer space that gets set up by the kernel during binding, when the client initializes a connection to the server.
RPC and Client Server Systems
The difference between remote procedure calls (RPC) and simple procedure calls
Summary
We want the protection and want the performance: how do we achieve that?
RPC vs Simple Procedure Call
Summary
An RPC call happens at run time (not compile time), and there's a ton of overhead. Two traps are involved: a call trap from the client and a return trap from the server. There are also two context switches: one from the client to the server, and then one from the server (when it's done) back to the client.
Kernel Copies Quiz
Summary
For every RPC call, there are four copies: from the client address space into kernel space, from the kernel buffer to the server, from the server back to the kernel, and finally from the kernel back to the client (for the response).
Copying Overhead
The (out of the box) overhead of RPC
Summary
Client/server RPC calls require the kernel to perform four copies in total, two in each direction. The RPC framework needs to emulate the stack: client stack (RPC message) -> kernel -> server -> server stack, and the same thing backwards.
Making RPC Cheap
Making RPC cheap (binding)
Summary
The kernel is involved in setting up communication between the client and the server. The kernel makes an upcall into the server, checking whether the client is bona fide. If validation passes, the kernel creates a PD (a procedure descriptor) that contains the following three items: the entry point (probably a pointer, I think), the stack size, and the number of calls (that the server can support simultaneously).
Making RPC Cheap (Binding)
Summary
The key takeaway here is that the kernel performs the one-time operation of setting up the binding, allocating shared buffers (as I had correctly guessed) and authenticating the client. The shared buffers basically contain the arguments on the stack (and presumably I'll find out soon how data flows back). Separately, I learned a new term: "upcalls".
Making RPC Cheap (actual calls)
Summary
An (argument) shared buffer can only contain values passed by value, not by reference, since the client and server cannot access each other's address spaces. The professor also mentioned something about the client thread executing in the address space of the server, an optimization technique, but I'm not really following.
Making RPC Cheap (Actual Calls) continued
Summary
Stack arguments are copied from the "A stack" (i.e. the shared buffer) to the "E stack" (execution stack). I still don't understand the entire concept of "doctoring" and "redoctoring": I will need to read the research paper, or at least skim it.
Making RPC Cheap (Actual Calls) Continued
Summary
Okay, the concept is starting to make sense. Instead of the kernel copying data, the new approach is for the kernel to step back and allow the client (in user space) to copy data (no serialization, since the semantics are well understood between client and server) into shared memory. So now there's no more kernel copying, just two copies: marshal and unmarshal. The marshal copy moves data from client to server, and the other copy moves it from server back to client.
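A toy sketch of that two-copy idea (the struct layout and names are hypothetical, not from the LRPC paper): because client and server agree on the argument layout ahead of time, marshaling is just a plain memcpy into the shared buffer, with no kernel involvement and no serialization step.

```c
#include <string.h>

/* Pretend the kernel mapped this buffer into both the client's and
 * the server's address spaces at bind time. Only pass-by-value data
 * can live here, since neither side can follow the other's pointers. */
typedef struct {
    int opcode;
    int arg1;
    int arg2;
} rpc_args_t;

/* One copy in: the client marshals arguments into the shared A-stack. */
static void marshal(void *shared_buf, const rpc_args_t *args)
{
    memcpy(shared_buf, args, sizeof(*args));
}

/* One copy out: the other side pulls the arguments (or results) back. */
static void unmarshal(const void *shared_buf, rpc_args_t *args)
{
    memcpy(args, shared_buf, sizeof(*args));
}
```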
Making RPC Cheap Summary
Making RPC calls cheap (summary)
Summary
Explicit costs with the new approach: 1) the client trap and validating the BO (binding operation), 2) switching the protection domain from client to server, and 3) the return trap to go back into the client address space. But there are also implicit costs, like loss of locality.
RPC on SMP
Summary
We can exploit multiple CPUs by keeping the caches warm, dedicating processors to servers.
RPC on SMP Summary
Summary
The entire gist is this: make RPC cheap so that we can promote good software engineering practices and leverage the protection domains that RPC offers.
Another installment of my weekly reviews. I think the practice of carving out around 30 minutes on Sunday to look back at the previous week and look forward to the next week helps me in several ways. First, the posts help me recognize the tiny little victories that I often neglect, and they also help me appreciate everything in my life that is easy to take for granted (like a stable marriage, healthy children, a roof over our heads, and not having to worry about my next paycheck). Second, these weekly rituals tend to reduce my anxiety, giving me some sense of control over an upcoming week that will, of course, not go according to plan.
How static asserts are a great way to perform sanity checks during compilation and to ensure that your data structures fit within the processor's cache lines
Learned about a new type of data structure called an n-ary tree (used for the MCS tree barrier)
Read C code at work that helped sharpen my data structure skills: I saw first hand how, in production code, we trade space for performance by creating an index that bounds the range for a binary search
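The production code itself isn't mine to share, so here's a sketch of the general technique as I understood it: spend a little memory on a coarse bounds index so the binary search only ever scans one bucket's window. The bucket width and all names are made up.

```c
#include <stddef.h>

#define STEP 100     /* bucket width: keys 0-99 in bucket 0, etc. */
#define NBUCKETS 11  /* buckets 0..9 for keys 0..999, plus a sentinel */

/* bounds[b] = first position in sorted[] whose key >= b*STEP.
 * This is the extra space we trade for a narrower search range. */
static void build_bounds(const int *sorted, size_t n, size_t bounds[NBUCKETS])
{
    size_t i = 0;
    for (int b = 0; b < NBUCKETS; b++) {
        while (i < n && sorted[i] < b * STEP)
            i++;
        bounds[b] = i;
    }
}

/* Binary search, but only within the key's bucket window. */
static int bounded_search(const int *sorted, const size_t bounds[NBUCKETS],
                          int key)
{
    size_t lo = bounds[key / STEP];
    size_t hi = bounds[key / STEP + 1];
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (sorted[mid] == key)
            return (int)mid;
        if (sorted[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return -1; /* not found */
}
```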
Family and Friends
Taught Elliott how to shake her head (i.e. say "no"). Probably the biggest mistake this week, since she's constantly shaking her head (even when she's trying to say "yes"). Does teaching her how to touch her shoulders balance out teaching her how to say no?
Finally was able to book Mushroom an appointment to get groomed (with COVID-19, Petsmart grooming was closed for a time). I felt pretty bad and even tried cutting parts of her fur myself, because patches of her hair were getting tangled, and she was itching at them and giving herself heat rashes
Chased Elliott around the kitchen while trying to feed her spoonfuls of broccoli and potato that Jess cooked up for her. I believe the struggle of Elliott not sitting still during lunch is karma, payback for all the times when I was her age and made my parents chase me around
Watched several episodes of Fresh Off the Boat while eating dinner with Jess throughout the week. We are really enjoying this show, and I find the humor relatable as a Vietnamese American man who grew up around the same time frame as the show.
Video chatted with some old familiar faces and these social interactions were actually the best parts of my day. I need to do this more: reach out to people and just play catch up
Packed, packed, packed, and sorted out administrative stuff like printing out statements proving the transaction from Morgan Stanley to Wells Fargo, obtaining home insurance, and reading through contracts for the new house that we'll be moving into in less than 2 weeks
Did not exercise at all, really, because of the wildfire smoke blanketing the Pacific Northwest. Luckily, yesterday the weather cleared up, so I will take advantage of the fresh air
Got a haircut. Originally, I categorized getting a haircut under the miscellaneous section, but upon reflection, I find that grooming oneself and taking care of our physical appearance (not in vain) actually positively impacts our mental health (or negatively, if we don't take care of ourselves). It really is easy to let oneself go, especially during the pandemic.
Graduate School
Submitted Project 1 on virtual CPU scheduler and memory coordinator
Learned a ton from watching lectures on mutual exclusion and barrier synchronization (see section above on What I Learned)
Finished watching most of the lecture series on parallel systems but still need to finish Scheduling and Shared Memory Multiprocessor OS
Music
Guitar lesson with Jared last Sunday. We focused on introducing inversions as a way to spice up my songwriting.
Part 1 of barrier synchronization covers my notes on the first couple of synchronization barriers, including the naive centralized barrier and the slightly more advanced tree barrier. This post is a continuation and covers three other barriers: the MCS barrier, the tournament barrier, and the dissemination barrier.
Summary
In the MCS tree barrier, there are two separate data structures that must be maintained. The first data structure (a 4-ary tree, each node containing a maximum of four children) handles the arrival of the processes, and the second data structure (a binary tree) handles the signaling and waking up of all the other processes. In a nutshell, each parent node holds pointers to its children's structures, allowing the parent process to wake up its children once every process has arrived.
The tournament barrier constructs a tree too, and at each level two processes compete against one another. These competitions, however, are fixed: the algorithm predetermines which process will advance to the next round. The winners percolate up the tree, and at the topmost level the final winner signals and wakes up the loser. This waking up of the losers repeats at each lower level until all nodes are woken up.
The dissemination protocol reminds me of a gossip protocol. With this algorithm, all nodes detect convergence (i.e. all processes arrived) once every process has received a message from every other process (this is the key takeaway); a process receives one (and only one) message per round. The communication complexity of this algorithm is O(n log n): there are ceil(log2 n) rounds, and during each round n messages are sent, one from each node to its ordained neighbor.
The algorithms described thus far share a common requirement: they all require sense reversal.
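Since every one of these barriers leans on sense reversal, here's a minimal sketch of the idea using the simple centralized barrier from Part 1 (C11 atomics; a real implementation would add backoff and cache-line padding):

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct {
    atomic_int remaining;  /* threads yet to arrive this episode */
    int total;
    atomic_bool sense;     /* flips once per barrier episode */
} barrier_t;

static void barrier_init(barrier_t *b, int n)
{
    atomic_init(&b->remaining, n);
    b->total = n;
    atomic_init(&b->sense, false);
}

/* Each thread keeps its own local_sense; reversing it every episode
 * is what lets the barrier be reused without a separate reset phase. */
static void barrier_wait(barrier_t *b, bool *local_sense)
{
    *local_sense = !*local_sense; /* sense reversal */
    if (atomic_fetch_sub(&b->remaining, 1) == 1) {
        /* Last arriver: reset the count and release everyone. */
        atomic_store(&b->remaining, b->total);
        atomic_store(&b->sense, *local_sense);
    } else {
        while (atomic_load(&b->sense) != *local_sense)
            ; /* spin until the global sense matches ours */
    }
}
```

Each thread would call `barrier_wait(&b, &my_sense)` once per phase, with `my_sense` initialized to `false`.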
MCS Tree Barrier (Binary Wakeup)
MCS Tree barrier with its “has child” vector
Summary
Okay, I think I understand what's going on. There are two separate data structures that need to be maintained for the MCS tree barrier. The first data structure (the 4-ary tree) handles arrival, and the second (a binary tree) handles the signaling and waking up of all the other processes. The reason the latter works so well is that, by design, we know the position of each of the nodes, and each parent contains pointers to its children, allowing it to easily signal the wake up.
Tournament Barrier
Tournament Barrier – fixed competitions. The winner holds the responsibility of waking up the losers
Summary
Construct a tree; at the lowest level are all the nodes (i.e. processors), and each processor competes with another, although the round is fixed, fixed in the sense that the winner is predetermined. The spin location is statically determined at every level.
Tournament Barrier (Continued)
Summary
Two important aspects: arrival moves up the tree with match fixing. Then each winner is responsible for waking up the "losers", traversing back down. Curious what sort of data structure is used; I can see an array or a tree …
Tournament Barrier (Continued)
Summary
There are lots of similarities with the sense reversing tree algorithm.
Dissemination Barrier
Dissemination Barrier – gossip like protocol
Summary
Ordered communication: like a well orchestrated, gossip-like protocol. Each process sends a message to its ordained peer during that "round". But I'm curious: do we need multiple rounds?
Dissemination Barrier (continued)
Summary
The gossip in each round differs in the sense that the ordained neighbor changes, based on Pi -> P((i + 2^k) mod n) for round k. I will probably need to read up on the paper to get a better understanding of the point of the rounds …
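That neighbor formula is easy to sanity check in code. This sketch just computes each round's ordained partner and the ceil(log2 n) round count:

```c
/* In round k, process i sends its one message to process
 * (i + 2^k) mod n. */
static int partner(int i, int k, int n)
{
    return (i + (1 << k)) % n;
}

/* Number of rounds needed for every process to (transitively) hear
 * from every other: ceil(log2(n)). */
static int num_rounds(int n)
{
    int rounds = 0;
    while ((1 << rounds) < n)
        rounds++;
    return rounds;
}
```

For n = 5, process 0 talks to 1, then 2, then 4 over the three rounds; the ceiling matters precisely because 5 is not a power of 2.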
Quiz: Barrier Completion
Summary
The key point here that I just figured out is this: every processor needs to hear from every other processor. So it's log2(N) rounds with a ceiling, since N may not be a power of 2.
Dissemination Barrier (continued)
Summary
All barriers need sense reversal, and the dissemination barrier is no exception. This barrier technique works for NCC machines and clusters. Every round has n messages, and there are ceil(log2 n) rounds, so the total communication complexity is O(n log n), because n messages must be sent every round, no exception.
Performance Evaluation
Summary
The most important question to ask when choosing and evaluating a barrier is: what is the trend? Not exact numbers, but trends.
My wife's Apple iPhone X saves the images she captures with her camera in the .heic format, a relatively new file format that compresses high quality images, and I needed a way to convert these files to .jpeg (which apparently dates back to 1992) so that they can be uploaded to my blog and eventually rendered by your browser.
After some googling, I discovered that my MacBook ships with the sips (scriptable image processing system) command line tool, and it provides all the functionality I need. Each image that sips processes takes about a second, so that's not too bad when bulk converting photos.
SIPs Command
$ sips -s format jpeg --resampleWidth <resolution> <source> --out <destination>
Example
Below is an example of the command that I ran to convert images for my blog, resampling each image to 1200 pixels in width.
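Here's what the invocation looks like in practice (the filenames are placeholders, not the actual photos I converted), plus a loop for bulk converting a folder of HEIC files:

```shell
# Single file: HEIC in, 1200px-wide JPEG out.
sips -s format jpeg --resampleWidth 1200 IMG_0001.HEIC --out IMG_0001.jpg

# Bulk convert every HEIC file in the current directory.
for f in *.HEIC; do
  sips -s format jpeg --resampleWidth 1200 "$f" --out "${f%.HEIC}.jpg"
done
```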
Hooray! Today is the first day in a couple of weeks that the air quality is considered good, at least according to the EPA. I'm so pleased and so grateful for clean air, because my wife and daughter have not left the house since the wildfires started a week ago (or was it two weeks — I've lost all concept of time since COVID hit), and today marks the first day that we, as an entire family, can go for a walk at a local park (it's the little things in life) and breathe in that fresh, crisp Pacific Northwest air. Of course, we'll still be wearing masks, but hey, better than staying cooped up inside.
Yesterday
What I learned yesterday
Static assertions on C structures. This type of assertion fires off not at run time but at compile time. By asserting on the size of a structure, we can ensure that it is sized correctly. This sanity check can be useful in situations such as ensuring that your data structure will fit within your cache lines.
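A small example of that compile-time size check (the struct is hypothetical; 64 bytes is a typical x86 cache line size):

```c
#include <assert.h>  /* provides the static_assert macro in C11 */
#include <stdint.h>

/* A struct we intend to fit in exactly one 64-byte cache line:
 * 8 + 8 + 48 = 64 bytes. */
struct cache_entry {
    uint64_t key;
    uint64_t value;
    uint8_t  payload[48];
};

/* If anyone grows this struct, the build fails immediately,
 * rather than the struct silently spilling into a second cache line
 * at run time. */
static_assert(sizeof(struct cache_entry) <= 64,
              "cache_entry must fit in one 64-byte cache line");
```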
Writing
Published my daily review, which I had to recover in WordPress since I had accidentally deleted the revision
Best parts of my day
Video chatting at the end of the work day with a colleague whom I used to work with in Route 53, the organization I left almost two years ago. It's nice to speak to a familiar face and just shoot the shit.
Graduate School
Finished lectures on barrier synchronization (super long but intellectually stimulating material)
Started watching lectures on lightweight RPC (remote procedure calls)
Submitted my project assignment
Met with a classmate of mine from advanced operating systems; the two of us video chatted over Zoom, describing our approaches to the Project 1 assignment
Work
Finished adding a simple performance optimization feature that takes advantage of the 64 byte cache lines, packing some cached structs with additional metadata squeezed into the third cache line.
Miscellaneous
Got my teeth cleaned at the dentist. What an unusual experience. Being in the midst of the pandemic for almost 8 months now, I've forgotten what it feels like to talk to someone up close while not wearing a mask (of course, the dentist and dental hygienist were wearing masks), so at first I felt a bit anxious. These days, any sort of appointment (medical or not) is a calculated risk that we must all decide on for ourselves.
Today
Writing
Publish Part 1 of Barrier Synchronization notes
Publish this post (my daily review)
Review my writing pipeline
Mental and Physical Health
Slip on my Freebird shoes and jog around the neighborhood for 10 minutes. Need to take advantage of the unpolluted air that cleared up (thank you, rain)
Swing by the local Cloud City coffee house and pick up a bottle of their in-house chai so that I can blend it with oat milk at home.
Review my tasks and projects and break down the house move into little milestones
Family
Take Mushroom to her grooming appointment. Although I put a stopgap measure in place so that she stops itching and wounding herself, the underlying issue is that she needs a haircut, since her hair tends to develop knots.
Walk the dogs at either Magnuson or Marymoor Park. Because of the wildfires, everyone (dogs included) has been pretty much stuck inside the house.
Pack pack pack. 2 weeks until we move into our new home in Renton. At first, we were very anxious and uncertain about the move. Now, my wife and I are completely ready and completely committed to the idea.