Tag: aos project 1

  • Memory coordinator test cases (and expected outcome)

    I’m getting ready to begin developing a memory coordinator for project 1, but before I write a single line of C code, I want to run the provided test cases and read their output so that I get a better grip on the memory coordinator’s actual objective. I’ll refer back to these test cases throughout the development process to gauge whether I’m off trail or heading in the right direction.

    Based on the test cases below, their output, and their expected outcomes, I think I should target balancing the amount of unused memory across the virtual machines. That being said, I now have a new set of questions beyond the ones I first jotted down before starting the project (I take a rough first stab at the first one in the sketch after this list):

    • What specific function calls do I need to make to increase/decrease memory?
    • Will I need to directly inflate/deflate the balloon driver?
    • Does the coordinator need to inflate/deflate the balloon driver across every guest operating system (i.e. domain) or just ones that are underutilized?
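
    My best guess at that first question — and it is only a guess at this point — is that virDomainSetMemory() is the call that indirectly drives the balloon driver, with virDomainSetMemoryStatsPeriod() used to keep the guest’s memory statistics fresh. Here’s a minimal sketch assuming a qemu:///system connection and the aos_vm1 domain name that shows up in the test output below (this is just the plumbing, not my coordinator):

    /* Sketch: grow/shrink one guest's memory via libvirt. Sizes are in KiB. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        /* "aos_vm1" is just the first domain name from the test output. */
        virDomainPtr dom = virDomainLookupByName(conn, "aos_vm1");
        if (!dom) {
            virConnectClose(conn);
            return 1;
        }

        /* Ask the balloon driver to refresh its statistics every second. */
        virDomainSetMemoryStatsPeriod(dom, 1, VIR_DOMAIN_AFFECT_LIVE);

        /* Shrink the guest to 400 MiB (the argument is in KiB); growing it back
         * is the same call with a larger value, capped by virDomainGetMaxMemory(). */
        if (virDomainSetMemory(dom, 400 * 1024) < 0)
            fprintf(stderr, "failed to set memory for aos_vm1\n");

        virDomainFree(dom);
        virConnectClose(conn);
        return 0;
    }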

    Test 1

    The first stage

    1. The first virtual machine consumes memory gradually, while the others stay inactive.
    2. All virtual machines start from 512MB.
    3. Expected outcome: The first virtual machine gains more and more memory, and the others give some up.

    The second stage

    1. The first virtual machine starts to free its memory gradually, while the others stay inactive.
    2. Expected outcome: The first virtual machine gives memory back to the host, and depending on policy the others may or may not gain memory (I sketch one such policy right after the output below).

    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [257.21484375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.125]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.36328125]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [324.55859375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [246.1953125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12890625]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.12109375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [235.17578125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12890625]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [224.15625]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12890625]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [212.7734375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [201.75390625]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [190.61328125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [179.3515625]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [168.33203125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [157.3125]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [343.12109375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [328.2421875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [325.15234375]
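
    Reading these numbers, a coordinator that satisfies this test probably boils down to a loop like the sketch below: poll each domain’s unused memory, give more actual memory to whichever guest is running low, and reclaim from guests sitting on a big cushion. The watermarks and step size are placeholders I made up for illustration; a real policy would also need floors and ceilings (e.g. virDomainGetMaxMemory()) and a smarter way of deciding who donates to whom.

    /* Sketch of one threshold-based coordinator pass (placeholder numbers,
     * not a final policy): starved guests grow, idle guests shrink. */
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    #define LOW_WATERMARK_KB   (100 * 1024)  /* hypothetical: "starving" below 100 MB unused */
    #define HIGH_WATERMARK_KB  (300 * 1024)  /* hypothetical: "wasteful" above 300 MB unused */
    #define STEP_KB            (50 * 1024)   /* hypothetical: adjust in 50 MB steps */

    static void coordinate_once(virConnectPtr conn)
    {
        virDomainPtr *domains = NULL;
        int n = virConnectListAllDomains(conn, &domains,
                                         VIR_CONNECT_LIST_DOMAINS_ACTIVE);

        for (int i = 0; i < n; i++) {
            virDomainMemoryStatStruct stats[VIR_DOMAIN_MEMORY_STAT_NR];
            int m = virDomainMemoryStats(domains[i], stats,
                                         VIR_DOMAIN_MEMORY_STAT_NR, 0);

            unsigned long long unused = 0, actual = 0;  /* both reported in KiB */
            for (int j = 0; j < m; j++) {
                if (stats[j].tag == VIR_DOMAIN_MEMORY_STAT_UNUSED)
                    unused = stats[j].val;
                if (stats[j].tag == VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON)
                    actual = stats[j].val;
            }

            if (unused < LOW_WATERMARK_KB)
                virDomainSetMemory(domains[i], actual + STEP_KB);   /* starving: grow */
            else if (unused > HIGH_WATERMARK_KB && actual > STEP_KB)
                virDomainSetMemory(domains[i], actual - STEP_KB);   /* idle: shrink */

            virDomainFree(domains[i]);
        }
        free(domains);
    }

    Calling coordinate_once() on an interval (with the stats period enabled first) is the shape I have in mind; whether this simple rule actually produces the expected outcome above is exactly what I’ll be testing.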

    Test 2

    The first stage

    1. All virtual machines consume memory gradually.
    2. All virtual machines start from 512MB.
    3. Expected outcome: all virtual machines gain more and more memory. By the end, each virtual machine should have a similar memory balloon size.

    The second stage

    1. All virtual machines free memory gradually.
    2. Expected outcome: all virtual machines give memory resources back to the host.

    -------------------------------------------------- 
    Memory (VM: aos_vm1) Actual [512.0], Unused: [71.7578125] 
    Memory (VM: aos_vm4) Actual [512.0], Unused: [76.765625] 
    Memory (VM: aos_vm2) Actual [512.0], Unused: [73.5625] 
    Memory (VM: aos_vm3) Actual [512.0], Unused: [74.09765625]
    -------------------------------------------------- 
    Memory (VM: aos_vm1) Actual [512.0], Unused: [76.50390625] 
    Memory (VM: aos_vm4) Actual [512.0], Unused: [65.98828125] 
    Memory (VM: aos_vm2) Actual [512.0], Unused: [62.69921875]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [63.078125]
    -------------------------------------------------- 
    Memory (VM: aos_vm1) Actual [512.0], Unused: [65.484375] 
    Memory (VM: aos_vm4) Actual [512.0], Unused: [66.4453125] 
    Memory (VM: aos_vm2) Actual [512.0], Unused: [69.015625] 
    Memory (VM: aos_vm3) Actual [512.0], Unused: [66.5390625]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [65.3984375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [63.19921875]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [68.2109375]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [66.71875]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [347.85546875]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [345.90234375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [347.515625]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [347.25390625]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [347.85546875]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [345.90234375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [347.515625]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [347.25390625]

    Test 3

    A comprehensive test

    1. All virtual machines start from 512MB.
    2. All consume memory.
    3. A and B start freeing memory, while C and D continue consuming memory.
    4. Expected outcome: memory resources move from A and B to C and D.

    Memory (VM: aos_vm1) Actual [512.0], Unused: [72.13671875]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [78.59375]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [72.21484375]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [74.3125]
    --------------------------------------------------
    Memory (VM: aos_vm1) Actual [512.0], Unused: [77.609375]
    Memory (VM: aos_vm4) Actual [512.0], Unused: [67.6953125]
    Memory (VM: aos_vm2) Actual [512.0], Unused: [78.4140625]
    Memory (VM: aos_vm3) Actual [512.0], Unused: [63.29296875]
  • Papers to read for designing and writing up the C memory coordinator

    Below are some memory management research papers that my classmate shared with the rest of us on Piazza [1]. Quickly scanning the papers, I think the material will point me in the right direction and paint a clearer picture of how I might want to approach writing my memory coordinator. I do wonder, though, whether I should just experiment on my own for a bit and take a similar approach to the first part of the project, where I wrote a naive round-robin CPU scheduler. We’ll see.

    Recommended readings

    References

    [1] https://piazza.com/class/kduvuv6oqv16v0?cid=221
  • A snapshot of my understanding before tackling the memory coordinator

    Now that I’ve finished writing the vCPU scheduler for project 1, I’m moving on to the second part of the project, the “memory coordinator,” and here I’m posting a similar piece to a snapshot of my understanding of project 1, the motivation being that I take for granted what I learn throughout graduate school and rarely celebrate these little victories.

    This post will focus on my unknowns and knowledge gaps as they relate to the memory coordinator that we have to write in C using libvirt. According to the project requirements, the memory coordinator should:

    dynamically change the memory of each virtual machine, which will indirectly trigger balloon driver.

    Questions I have

    As I chip away at the project, more questions will inevitably pop up, and when they do I’ll capture them in a separate blog post. So here is my baseline set of questions about what a memory coordinator needs to do (the sketch after the list shows the statistics libvirt can report):

    • What are the relevant memory statistics that should be collected?
    • Will the resident set size (RSS) be relevant to the decision-making algorithm? Or is it irrelevant?
    • What are the upper and lower bounds of memory consumption that trigger the balloon driver to page memory out or in?
    • Will the balloon driver only trigger for virtual machines that are memory constrained?
    • Does the hypervisor’s memory footprint matter (I’m guessing yes, but to what extent)?
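
    For the first two questions, libvirt reports a handful of per-domain balloon statistics through virDomainMemoryStats() — unused, available, the actual balloon size, and the RSS I asked about — so a reasonable first step is to dump them all and watch which ones move under load. A small sketch of that follows; it assumes the stats collection period has already been enabled with virDomainSetMemoryStatsPeriod(), and dump_memory_stats() is just a name I picked:

    /* Print every memory statistic libvirt reports for one domain, so I can
     * see which ones are worth feeding into the coordinator's decisions. */
    #include <stdio.h>
    #include <libvirt/libvirt.h>

    static void dump_memory_stats(virDomainPtr dom)
    {
        virDomainMemoryStatStruct stats[VIR_DOMAIN_MEMORY_STAT_NR];
        int n = virDomainMemoryStats(dom, stats, VIR_DOMAIN_MEMORY_STAT_NR, 0);

        for (int i = 0; i < n; i++) {
            switch (stats[i].tag) {
            case VIR_DOMAIN_MEMORY_STAT_UNUSED:
                printf("unused:         %llu KiB\n", stats[i].val);
                break;
            case VIR_DOMAIN_MEMORY_STAT_AVAILABLE:
                printf("available:      %llu KiB\n", stats[i].val);
                break;
            case VIR_DOMAIN_MEMORY_STAT_ACTUAL_BALLOON:
                printf("actual balloon: %llu KiB\n", stats[i].val);
                break;
            case VIR_DOMAIN_MEMORY_STAT_RSS:
                printf("rss:            %llu KiB\n", stats[i].val);
                break;
            default:
                printf("tag %d:         %llu\n", stats[i].tag, stats[i].val);
            }
        }
    }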
  • A naive round robin CPU scheduler

    A couple of days ago, I spent maybe an hour whipping together a very naive CPU scheduler for project 1 in advanced operating systems. This naive scheduler pins each of the virtual CPUs in a round-robin fashion, not taking utilization (or any other factor) into consideration. For example, say we have four virtual CPUs and two physical CPUs; the scheduler will assign virtual CPU #0 to physical CPU #0, virtual CPU #1 to physical CPU #1, virtual CPU #2 to physical CPU #0, and virtual CPU #3 to physical CPU #1.

    This naive scheduler is far from fancy — really the code just performs a mod operation to wrap around the number of physical CPUs (avoiding an index error) and a left bit shift to populate the bit map, as in the sketch below — but it performs surprisingly well based on the monitoring results (below) that measure the utilization of each physical CPU.
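
    In code, that round-robin pass is roughly the following (not my exact submission — error handling is stripped out). It assumes what the tests use: one vCPU per VM and at most 8 physical CPUs, so a single-byte bitmap built with a left shift is enough.

    /* One pass of the naive scheduler: vCPU #0 of the i-th active domain is
     * pinned to pCPU (i mod number-of-pCPUs). */
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    static void pin_round_robin(virConnectPtr conn)
    {
        virNodeInfo node;
        virNodeGetInfo(conn, &node);          /* node.cpus = number of physical CPUs */

        virDomainPtr *domains = NULL;
        int n = virConnectListAllDomains(conn, &domains,
                                         VIR_CONNECT_LIST_DOMAINS_ACTIVE);

        for (int i = 0; i < n; i++) {
            /* The mod wraps around the pCPU count; the shift sets that
             * pCPU's bit in the one-byte affinity bitmap. */
            unsigned char cpumap = 1u << (i % node.cpus);
            virDomainPinVcpu(domains[i], 0 /* vCPU #0 */, &cpumap, 1);
            virDomainFree(domains[i]);
        }
        free(domains);
    }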

    Of course, my final scheduler will pin virtual CPUs to physical CPUs more intelligently, taking the actual workload (i.e. time in nanoseconds) of each virtual CPU into consideration. But as always, I wanted to avoid premature optimization — jumping straight to some fancy algorithm published in a research paper — and I’m glad I started with a primitive scheduler that, for the most part, distributes the work evenly. The exception is the fifth test (which generates uneven workloads), the only test in which the naive scheduler produces a lopsided mapping.

    With this basic prototype in place, I should be able to come up with a more sophisticated algorithm that takes the virtual CPU utilization into consideration.

    Test Case 1

    In this test case, you will run 8 virtual machines that all start pinned to pCPU0. The vCPU of each VM will process the same workload.

    Expected Outcome

    Each pCPU will exhibit an equal balance of vCPUs given the assigned workloads (e.g., if there are 4 pCPUs and 8 vCPUs, then there would be 2 vCPUs per pCPU).

    --------------------------------------------------
    0 - usage: 103.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []
    --------------------------------------------------
    0 - usage: 99.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []
    --------------------------------------------------
    0 - usage: 49.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 47.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 50.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 49.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------
    0 - usage: 60.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 65.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 61.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 61.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------

    Test 2

    In this test case, you will run 8 virtual machines that start with 4 vCPUs pinned to pCPU0 and the other 4 vCPUs pinned to pCPU3. The vCPU of each VM will process the same workload.

    Expected Outcome

    Each pCPU will exhibit an equal balance of vCPUs given the assigned workloads.

    --------------------------------------------------
    0 - usage: 102.0 | mapping ['aos_vm1', 'aos_vm4', 'aos_vm5', 'aos_vm3']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 101.0 | mapping ['aos_vm8', 'aos_vm6', 'aos_vm2', 'aos_vm7']
    --------------------------------------------------
    0 - usage: 50.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 53.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 51.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 53.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------
    0 - usage: 102.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 100.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 95.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 99.0 | mapping ['aos_vm6', 'aos_vm7']

    Test Case 3

    In this test case, you will run 8 virtual machines that start with an already balanced mapping of vCPU->pCPU. The vCPU of each VM will process the same workload.

    Expected Outcome

    No vCPU->pCPU mapping changes should occur since a balanced state has already been achieved.

    --------------------------------------------------
    0 - usage: 63.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 60.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 59.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 58.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------
    0 - usage: 57.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 60.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 60.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 61.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------
    0 - usage: 57.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 59.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 59.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 60.0 | mapping ['aos_vm6', 'aos_vm7']

    Test Case 4

    In this test case, you will run 8 virtual machines that start with an equal affinity to each pCPU (i.e., the vCPU of each VM is equally likely to run on any pCPU of the host). The vCPU of each VM will process the same workload.

    Expected Outcome

    Each pCPU will exhibit an equal balance of vCPUs given the assigned workloads.

    3 - usage: 60.0 | mapping ['aos_vm3', 'aos_vm7']
    --------------------------------------------------
    0 - usage: 57.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 61.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 58.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 59.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------
    0 - usage: 59.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 60.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 60.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 61.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------

    Test Case 5

    In this test case, you will run 8 virtual machines that start with an equal affinity to each pCPU (i.e., the vCPU of each VM is equally likely to run on any pCPU of the host). Four of these vCPUs will run a heavy workload and the other four vCPUs will run a light workload.

    Expected Outcome

    Each pCPU will exhibit an equal balance of vCPUs given the assigned workloads.

    --------------------------------------------------
    0 - usage: 50.0 | mapping ['aos_vm3']
    1 - usage: 70.0 | mapping ['aos_vm2', 'aos_vm7']
    2 - usage: 142.0 | mapping ['aos_vm1', 'aos_vm4', 'aos_vm6']
    3 - usage: 85.0 | mapping ['aos_vm8', 'aos_vm5']
    --------------------------------------------------
    0 - usage: 88.0 | mapping ['aos_vm1', 'aos_vm7']
    1 - usage: 87.0 | mapping ['aos_vm8', 'aos_vm4']
    2 - usage: 53.0 | mapping ['aos_vm5']
    3 - usage: 119.0 | mapping ['aos_vm6', 'aos_vm2', 'aos_vm3']
    --------------------------------------------------
    0 - usage: 182.0 | mapping ['aos_vm1', 'aos_vm5', 'aos_vm3', 'aos_vm7']
    1 - usage: 36.0 | mapping ['aos_vm8']
    2 - usage: 54.0 | mapping ['aos_vm4']
    3 - usage: 70.0 | mapping ['aos_vm6', 'aos_vm2']
    --------------------------------------------------
    0 - usage: 100.0 | mapping ['aos_vm1', 'aos_vm5']
    1 - usage: 73.0 | mapping ['aos_vm8', 'aos_vm2']
    2 - usage: 99.0 | mapping ['aos_vm4', 'aos_vm3']
    3 - usage: 74.0 | mapping ['aos_vm6', 'aos_vm7']
    --------------------------------------------------
  • Advanced Operating Systems (Project 1) – monitoring CPU affinity before launching my own scheduler

    Project 1 requires that we write a CPU scheduler and a memory coordinator. Right now I’m focusing my attention on the former; the objective for this part of the project is to write some C code that pins virtual CPUs to physical CPUs based on utilization statistics gathered with the libvirt library (I was able to clear up some of my own confusion by doodling the bitmap data structure that gets passed in as a pointer — see the sketch below). We then launch our executable binary, and its job is to maximize utilization across all the physical cores.
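
    The monitoring itself comes down to virDomainGetVcpus(), which fills in both each vCPU’s cumulative cpuTime (in nanoseconds — the raw material for a utilization percentage) and the affinity bitmap I doodled. Here is a rough sketch for a single-vCPU domain; it skips the delta/interval math that turns cpuTime into the percentages shown in the output, and show_vcpu_info() is just a name I made up.

    /* Read vCPU #0's cumulative CPU time and its pCPU affinity for one domain. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    static void show_vcpu_info(virConnectPtr conn, virDomainPtr dom)
    {
        virNodeInfo node;
        virNodeGetInfo(conn, &node);              /* node.cpus = number of pCPUs */

        int maplen = VIR_CPU_MAPLEN(node.cpus);   /* bytes needed for the bitmap */
        unsigned char *cpumap = calloc(maplen, 1);
        virVcpuInfo info;

        if (virDomainGetVcpus(dom, &info, 1, cpumap, maplen) == 1) {
            printf("%s: vCPU0 cpuTime=%llu ns, on pCPU %d, allowed pCPUs:",
                   virDomainGetName(dom), info.cpuTime, info.cpu);
            for (unsigned int p = 0; p < node.cpus; p++)
                if (VIR_CPU_USABLE(cpumap, maplen, 0, p))
                    printf(" %u", p);
            printf("\n");
        }
        free(cpumap);
    }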

    But before launching the scheduler, I want to see what the current scheduler (or lack thereof) is doing in terms of spreading load across the physical CPUs. At a glance, it looks like a very naive scheduler (or no scheduler at all) is running, given that all the guest operating systems’ virtual CPUs are pinned to a single physical CPU:

    0 - usage: 102.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []
    --------------------------------------------------
    0 - usage: 100.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []
    --------------------------------------------------
    0 - usage: 101.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []
    --------------------------------------------------
    0 - usage: 101.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []
    --------------------------------------------------
    0 - usage: 102.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []
    --------------------------------------------------
    0 - usage: 100.0 | mapping ['aos_vm1', 'aos_vm8', 'aos_vm4', 'aos_vm6', 'aos_vm5', 'aos_vm2', 'aos_vm3', 'aos_vm7']
    1 - usage: 0.0 | mapping []
    2 - usage: 0.0 | mapping []
    3 - usage: 0.0 | mapping []