Author: mattchung

  • Caretaking

    Every Wednesday morning, I attend therapy. Not the physical kind, where one goes to rehabilitate some injury inflicted by blunt trauma. Rather, I sit in a cushy sofa chair positioned six or so feet away from Roy, my psychotherapist, and I voice not only what I’ve been thinking over the past week—the primary focus of cognitive behavioral therapy—but, most importantly, how I’m feeling.

    Oh yes, the feelings.

    After listening to countless stories, week after week, Roy discovered a recurring pattern, a behavior of mine that tends to bleed through my day-to-day life: taking on the role of a caretaker.  From speaking up for a co-worker when they are unable to speak for themselves, to feeling embarrassed for someone when they feel awkward. And for the last six months, I had assumed that role was a positive characteristic—something to be proud of. Something I should pin to my shirt and flaunt to others. But recently, just a few days ago, I looked up the definition on Google and eventually stumbled upon an article that defines caretaking as:

    Caretaking is a dysfunctional, learned behavior …

    Wait—what? A dysfunctional, learned behavior? How in the world can caretaking be a dysfunctional behavior, let alone a negative trait?

    As it turns out, my definition of caretaking is diametrically opposed to the real definition. Caretaking is one of many behaviors that fall under codependency, a group of behaviors that lead to unhealthy relationships. Someone who exhibits these behaviors is called a codependent, and may display one or more of the following:

    • think and feel responsible for other people—for other people’s feelings, thoughts, actions, choices, wants, needs, well-being, lack of well-being, and ultimate destiny
    • anticipate other people’s needs and wonder why others don’t do the same for them
    • feel safest when giving and feel insecure and guilty when somebody gives to them
    • find it easier to feel and express anger about injustices done to others, rather than injustices done to themselves
    • abandon their routine to respond to or do something for someone else
    • not know what they want and need or, if they do, tell themselves what they want and need is not important
    • feel different from the rest of the world
    • fear rejection and take things personally

    This is not a complete list. Those are just a few codependent characteristics that ring true for me, characteristics of mine that I’ve always considered part of me.

    Can you relate to any of them?

    If so, I recommend you pick up this book: Codependent No More – How to Stop Controlling Others and Start Caring for Yourself.

    Coupled with therapy, reading the book has really helped me not only become aware of and better understand my anxieties and angers and frustrations, but also recognize the source of all those feelings: taking care of everyone around me instead of taking care of the person who needs it the most.

    Me.

     

  • Be a man

    He wears a mask, and his face grows to fit it – George Orwell

    Are you a masculine man?

    How does one even define masculinity?  By the American, western definition, a masculine man is someone who carries a heavy beard on his chiseled chin, speaks in a deep Clint-Eastwood voice, commands respect from those around him, seductively winks at women from across the bar, enjoys drinking a six-pack of Pabst Blue Ribbon, and controls and suppresses his emotions, never revealing his feelings.  An alpha male.  A homophobe.  By those standards, I rate pretty low on the macho scale:  my chin grows two whiskers about every three weeks, the tone of my voice falls within the falsetto range, when I wink it looks like a nervous tic, I quit drinking beer and hard alcohol for close to three years (part of my recovery), and since I started seeing a therapist (apparently, still taboo these days), I’m much more in touch with my emotions, crying more than all my childhood years. Combined.

    But what made me even think about masculinity?

    To be honest, I’ve never really paused and contemplated my masculinity, let alone put words on (digital) paper.  However, I recently streamed a movie on Amazon Prime called “The Mask You Live In,” a documentary recommended by some of my wife’s friends from the “Viets who give a shiet” group, who joined us in our home for dinner a few weeks ago, when several deep conversations surfaced, one of them being on masculinity.  I had opened up to them, revealing my battle with and recovery from addiction, a shameful part of my life that I had hidden from everyone for many years. Including myself.  But that part of my life was something I came to terms with three to four years ago, when I began confronting my demons, facing them head on. Instead of dodging them. Because you can never really quell your demons.  You cannot silence them through sheer force.  You can try and push them down, but, like a slinky, they’ll eventually spring back out.

    And this led me to thinking about my future children.

    When I listen to my parents—divorced since I was young, about the age of three or four—share their view on having kids, their words basically boil down to “It’s love you cannot describe … it’s unconditional.” They see how much I love the dogs, how I take them for walks every day (no matter the weather), how I feed them the ideal canine diet (all raw, baby), how I sprawl on the carpet and smother them with kisses—but still, they say “Imagine that feeling, but 100 times more.”

    The fact that I’m thinking about kids makes me chuckle because I never imagined having kids until recently, now that I’m in my late twenties (I tell everyone that I’m 30 now, to soften the blow for future Matt).  And when I think about kids, I deeply think about how I (along with my wife) am going to raise them.  If we have a son (and I hope we do) I think about my future conversations with him, how he’ll repeat the words that flow from my mouth and mirror my behavior.

    What message do I want to send to him?

    Well, I suppose a few things. First, I want to teach him that it’s okay to cry.  Really, it is.  I’ll encourage it.  I’ll actively fight the words that have been inculcated through society and media, words like “man up” or “be a man.” What do those words even mean?  At best, they hold no value; at worst, they’re damaging, teaching him that a masculine man swallows his emotions instead of understanding and, most importantly, honoring them.  I want him to be in touch with how he feels, allowing himself to just “feel” (that’s probably the biggest takeaway from therapy).  Second, I want him to feel comfortable in his own skin, never carrying an ounce of shame, which is different from guilt.  Guilt is feeling bad about something you’ve done; shame is feeling bad about who you are.

    You see, I was never comfortable in my own skin until the last few years, and that led me to adjust my external, physical appearance—like tattooing my entire arm, from shoulder down to the edge of my wrist—to mask an internal insecurity, hoping people would perceive me as some type of person that I’m not.

    But most importantly, I want to be there for my children, physically and emotionally.  I want to show them that I’m not only listening with my ears, but with my eyes.

    So, what message do you want to send to your children?

    What mask do you wear?

  • Friday night in

    My wife (Jess) and I were both dead tired from yesterday—friends had come over to our house and cooked a Vietnamese meal, and we fell asleep just before midnight, a little over two hours past our bedtime—so we decided to spend the Friday night indoors, eating leftover vegan soup and streaming a movie.

    Whenever we plop ourselves down in front of the TV, a “smart” LED television, we spend what feels like an eternity searching for the perfect movie that matches our mood, loading Netflix and scrolling up and down through the vast collection of their originals, switching to Amazon Prime, watching trailer after trailer after trailer.

    Eventually, we settled on Hidden Figures.

    Hidden Figures centers on three black women, all working for the prestigious NASA during the civil rights movement in the 1960s.  The movie’s protagonist, Katherine, is a math prodigy who was widowed and left to raise three children while juggling a full-time job as a “computer”.  Because of her mathematical genius, Katherine is pulled into Freedom 7, a project aiming to send an American astronaut into orbit, a response to Russia’s recent victory of sending the first man into space.

    But I’m not here to discuss the movie.

    I’m here to reflect on my feelings that immediately followed watching the film.  If I had to put a label on my emotions, I would say that I felt inspired, followed by disappointment.

    I was inspired by how three women accomplished such great feats: one woman petitioning to take night classes at a local, segregated high school and becoming the first African American engineer at NASA; another seeing her role as a computer becoming obsolete, teaching herself Fortran, and eventually leading the IBM team; another crunching numbers for the rocket’s landing coordinates.  How can you not be inspired?

    Until I started reflecting on my own, ordinary life.  A number of questions popped into my head: What have I done with my life so far?  What have I accomplished? How am I sitting here on the warm, leather couch after a “long” day of work, when these three underprivileged, hardworking women were out hustling?

    But then I return to my blessings.  I’m lucky. I have a beautiful, loving, nurturing wife who I adore.  I have two tail-wagging dogs that snuggle with us in bed, keeping us warm on those unexpectedly cold nights.

  • A brief introduction to cache organization

    As a software programmer, I always had a vague understanding of how an operating system fetches data from memory.  At an abstract level, I understood that a processor requests data from the memory controller, sending a message (with the memory address encoded) across the bus.

    But I learned recently that in addition to the system’s physical memory—the same memory I used to squeeze into the motherboard when installing RAM—there are multiple hardware caches, completely abstracted from the programmer.

    These caches sit between the processor and main memory and are called the L1, L2, and L3 caches.  Each cache differs in cost, in size, and in distance from the CPU. The lower the digit, the higher the cost, the smaller the size, and the closer it sits to the CPU.  For example, if we compare the L1 and L2 caches, L1 costs more, holds less data, and sits closer to the processor.

    When the processor wants to retrieve data from memory, it sends a request that first lands in the L1 cache.  If L1 has that block of memory cached, it immediately sends the data back to the processor, preventing the request from unnecessarily flowing towards the memory controller.  This pattern of checking the local cache and forwarding the request repeats until the request eventually reaches the memory controller and the data is fetched from main memory.
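
    To make that flow concrete, here is a minimal, purely illustrative C sketch (my own toy model, not how real hardware is implemented): each level is checked in turn, and the request only travels further down the hierarchy on a miss.

    // cache_lookup_sketch.c -- toy model of the check-and-forward pattern
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_LEVELS 3        // L1, L2, L3
    #define ADDR_SPACE 16       // a toy 16-byte address space

    static bool cached[NUM_LEVELS][ADDR_SPACE]; // is this address resident in level i?
    static uint8_t memory[ADDR_SPACE];          // "main memory" (data lives here, for simplicity)

    static uint8_t load_byte(uint8_t addr)
    {
        for (int level = 0; level < NUM_LEVELS; level++) {
            if (cached[level][addr]) {          // hit: stop here, don't travel further down the bus
                printf("hit in L%d\n", level + 1);
                return memory[addr];
            }
        }
        printf("miss in every cache; fetching from main memory\n");
        return memory[addr];
    }

    int main(void)
    {
        memory[3] = 42;
        cached[1][3] = true;                    // pretend address 3 is resident in L2
        printf("loaded %d\n", load_byte(3));    // prints: hit in L2 / loaded 42
        return 0;
    }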

    The further we allow the CPU’s request to travel down the bus, the more we penalize the CPU, forcing it to wait, like a car at a stop sign, for more cycles. For example, the CPU waits 4 cycles for the L1 cache, 12 cycles for L2, 36 cycles for L3, and—wait for it—62 cycles when accessing main memory.  Therefore, we strive to design systems that cache as much data as possible, as close to the CPU as possible, increasing overall system performance.

    We break down a cache into the following components:

    • Blocks
    • Lines
    • Sets
    • Tag
    [Image: Cache organized into sets, lines, and blocks]

    As you can see from the image above, we organize our cache into sets (S), lines (E), and blocks (B).  One block of data represents 8 bits (1 byte), and every block of data is addressed by a physical memory address. For example, the memory address 0x0000 may store 01010101 and 0x0001 may store another value, say 01110111.  We group these blocks together into a line, which stores sequential blocks.  A line may store two or four or eight or sixteen bytes—it all depends on how we design the system.  Finally, each line belongs to a set, a bucket that stores one or more lines.  Like the number of bytes a line stores, a set can store one or two or three or forty lines—again, it all depends on our design.

    Together, the number of sets (S), the number of lines per set (E), and the number of bytes per block (B) determine the cache’s size, calculated with the following formula: cache size = S x E x B.
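
    As a quick sanity check of the formula, here is a tiny C program using made-up example parameters (64 sets, 8 lines per set, 64-byte blocks), which works out to 32 KB:

    // cache_size.c -- sanity check of: cache size = S x E x B
    #include <stdio.h>

    int main(void)
    {
        unsigned int S = 64;    // number of sets
        unsigned int E = 8;     // lines per set
        unsigned int B = 64;    // bytes per block (line)

        unsigned int cache_size = S * E * B;
        printf("cache size = %u bytes (%u KB)\n", cache_size, cache_size / 1024);
        // prints: cache size = 32768 bytes (32 KB)
        return 0;
    }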

    In the next post, I’ll cover how a cache processes a memory address, determining whether it retrieves memory from cache or forwards the request to the next cache (or memory controller).

  • Defusing a Binary Bomb (phase 1)

    http://blog.itsmemattchung.com/2017/02/28/csapp-defusing-the-bomb-phase-1/

    I password protected the original post (email me for the password if you are interested in reading it).  When I posted the original link on reddit/r/compsci, multiple commenters suggested that I delete the article to prevent students from cheating (which was not my intention).  I then sent an e-mail to the professors (at CMU), and they kindly replied, asking me to remove the post:

    Matt,

    Thanks so much for your kind words. It’s great to hear that the book is helpful to you. While every student gets a slightly different bomb, the solution strategies for each phase are very similar. So it would be good if you could remove those posts.
    Thanks!
    Dave

  • How does the modulus operator work?

    As a programmer, I’ve written a line or two of code that includes the modulus operator (e.g. return x % 2).  But never have I paused to think: “How does the underlying system carry out this operation?” In this post, I limit “underneath the hood” to the lowest-level (human-readable) programming language: assembly.

    So, I’ll take a program (written in C) and dump it into assembly instructions. Then, I’ll explain each instruction in detail.

    My background in assembly programming

    Up until a few weeks ago, I had never studied assembly—I did flip through a page or two of an assembly book when a colleague mentioned, about five years ago, that his uncle programmed in assembly—and I certainly underestimated the role that assembly plays in my career.

    Sure—the days of writing pure assembly evaporated decades ago, but the treasure lies in understanding how programs written in higher-level languages (e.g. C, Perl, PHP, Python) ultimately boil down to assembly instructions.

    Modulus program

    So, I contrived a trivial modulus program written in C—let’s dissect it:

    // modulus.c
    
    int modulus(int x, int y){
        return x % y;
    }

    Converting C code into assembly

    Before a program can be executed by an operating system, we must first convert the program into machine code—we call this compiling.  Before we run our program (modulus.c) through the compiler, we need to discuss two arguments that we’ll pass to the compiler in order to alter its default behavior.  The first argument, -Og, tells the compiler to keep its optimizations to a minimum, so the generated assembly follows the structure of the original C code.  By default, the compiler (we’re using gcc) intelligently optimizes code—one way is by reducing the number of instructions.  The second argument, -S, instructs the compiler to stop just before it creates the machine code (unreadable by humans) and, instead, emit a file (modulus.s) containing the assembly instructions.

    # gcc -Og -S modulus.c
    

    The command above outputs a file, modulus.s, with the contents (unrelated lines removed):

    modulus:
    movl	%edi, %eax
    cltd
    idivl	%esi
    movl	%edx, %eax
    ret

    Let’s step through and explain each of the assembly instructions, one-by-one.

    mov

    When we want our CPU to perform any action, such as adding, subtracting, multiplying, or dividing numbers, we need to first move bytes of data (an immediate value, or data from memory) to a register.  We move data to registers with the mov command, which is capable of moving data from:

    • register to memory
    • memory to register
    • register to register
    • immediate value to memory
    • immediate value to register

    The assembly above first moves data from one register (%edi) to another register (%eax).  This is necessary since subsequent instructions, such as cltd, rely on data being present in the %eax register.

    cltd

    cltd stands for “convert long to double long.”  But before we dig into why we need this instruction, we must detour and briefly explain the next instruction in line, idivl.

    When we issue an idivl (signed divide long) instruction, the processor treats the 64-bit value stored across the register pair %edx:%eax as the dividend, divides it by the 32-bit operand, stores the quotient in one register (%eax), and stores the remainder in another (%edx).

    Therefore, if we expect a 32-bit quotient and a 32-bit remainder, then the dividend—which starts out as the 32-bit int x in %eax—must be doubled to 64 bits by sign-extending it across %edx:%eax. That is exactly what cltd does [1].

    idivl

    idivl divides the dividend (stored across %edx:%eax) by the operand (or divisor) that we pass to the instruction.  In our assembly example above, idivl divides the value stored in %edx:%eax by the value in %esi—x and y, respectively.

    movl

    By calling convention, a function returns whatever value is stored in the register %eax (or %rax, for 64-bit values).  Therefore, the final instruction moves the remainder, not the quotient, from %edx to %eax.
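
    To tie the walkthrough together, here is a rough C mirror of the instruction sequence, tracing modulus(7, 3) with toy numbers of my own choosing; the comments map each line back to the corresponding instruction.

    // trace_modulus.c -- an illustrative trace, not compiler output
    #include <stdio.h>

    int main(void)
    {
        int x = 7;                   // first argument, arrives in %edi
        int y = 3;                   // second argument, arrives in %esi

        int eax = x;                 // movl %edi, %eax  -> eax = 7
        long long dividend = eax;    // cltd: sign-extend %eax across %edx:%eax

        int quotient  = (int)(dividend / y);  // idivl %esi -> quotient (2) lands in %eax
        int remainder = (int)(dividend % y);  // idivl %esi -> remainder (1) lands in %edx

        eax = remainder;             // movl %edx, %eax -> the remainder becomes the return value
        printf("%d %% %d = %d (quotient: %d)\n", x, y, eax, quotient);
        return 0;
    }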

    Wrapping up

    I hope I was able to share a thing or two on how a higher level program ultimately breaks down into simple assembly instructions.

    [1] I was banging my head against the wall until I found an easy-to-understand explanation of why we must convert long to double long: http://www.programmingforums.org/post12771.html

  • Here’s some assembly instructions … now write the corresponding C code

    A wide grin formed on my face after successfully completing an exercise (from Computer Systems: A Programmer’s Perspective) that required me to write C code based on a sequence of six assembly instructions:

    void decode1(long *xp, long *yp, long *zp);

    /* xp stored in %rdi, yp in %rsi, zp in %rdx */

    decode1:
        movq (%rdi), %r8
        movq (%rsi), %rcx
        movq (%rdx), %rax
        movq %r8, (%rsi)
        movq %rcx, (%rdx)
        movq %rax, (%rdi)

    The exercise consists solely of mov instructions, which are capable of moving bytes (movq specifically moves a quad word) from:

    • register to memory
    • register to register
    • immediate value to register
    • immediate value to memory

    So, I pulled a pen and paper from my backpack and began tracing the flow of data between the multiple registers and memory locations.  I then took that chicken scratch, and wrote the corresponding C code.  Finally, I flipped to the end of the chapter to compare my code.  Bingo—the C code I wrote mapped exactly to the author’s answer key.

    I celebrated the tiny victory.
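
    For the curious, here is roughly what that C code looks like. This is my own reconstruction from the assembly above (read all three values first, then write them back in rotated order), not a copy of the book’s answer key.

    void decode1(long *xp, long *yp, long *zp)
    {
        long x = *xp;   // movq (%rdi), %r8
        long y = *yp;   // movq (%rsi), %rcx
        long z = *zp;   // movq (%rdx), %rax

        *yp = x;        // movq %r8, (%rsi)
        *zp = y;        // movq %rcx, (%rdx)
        *xp = z;        // movq %rax, (%rdi)
    }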

    The purpose of the exercise is twofold.  First, the exercise illustrates that higher-level programming languages—including C, Python, and Java—compile into a lower-level language: assembly.  Moreover, this compels programmers, I think, to pause while writing code and question the performance implications: does this line require two instructions, three, four? Second, the exercise demystifies C pointers, a construct that many novice programmers stumble over. But after completing this exercise, I find pointers intuitive—nothing more than a value that holds a memory address.

  • Let’s get lower than Python

    Like a huge swath of other millennials, I dibbled and dabbled in building websites—writing HTML, CSS, and JavaScript—during my youth, but these days, I primarily code (for a living) in my favorite programming language: Python.

    I once considered Python one of the lower-level programming languages (to a certain degree, it is), but as I dive deeper into studying computer science—reading Computer Systems: A Programmer’s Perspective at my own pace, and watching the professors’ lectures online, for free [1]—I find the language creates too big of a gap between me and the system, leaving me not fully understanding what’s really going on underneath the hood.  Therefore, it’s time to bite the bullet and dive a bit deeper into learning the next not-so-new language on my list: C.

    Why C?  One could argue that if you want to really understand the hardware, you should learn the language closest to the hardware: assembly (the assembler translates assembly into object code, which is ultimately executed by the machine).  Yes—assembly is the closest one can get to programming the system, but C strikes a balance.  C can easily be translated into assembly, while maintaining its utility (many systems at work still run on C).

    Now, I’m definitely not going to stop writing and learning Python.  I love Python. I learn something new—from discovering standard libraries to writing more idiomatic code—every day.  I doubt that will ever change; I’ll never reach a point where I say, “Yup, that’s it, I’ve learned everything about Python.”

    But, I am devoting a large chunk of my time (mostly outside of working hours) on learning C.

    So, my plan is this: finish “The C Programming Language” by Brian Kernighan and Dennis Ritchie, the de facto book for learning C.

    [1] https://scs.hosted.panopto.com/Panopto/Pages/Sessions/List.aspx#folderID=%22b96d90ae-9871-4fae-91e2-b1627b43e25e%22

  • Calculating page table entry size

    How many page table entries are required given a virtual address size and page size?

    I was confronted with this question when reading the chapter on virtual memory management (VMM) in Computer Systems: A Programmer’s Perspective, and it forced me to reread the chapter over and over. It wasn’t until I forced myself to close the book and step away from the problem for a day that I was able to fully grasp what was being asked:

    [Image: Page table entries problem]

    Let’s narrow the scope and focus on a single instance, where n is 16 and P is 4K; the question can then be rephrased into something that makes more sense (to me): “How many 4K pages can fit into a 16-bit virtual address space?”

    First, how many bits are required to address every byte within a single 4K page? Well, 2^12 is 4096, so 12 bits are sufficient. Therefore, we can rephrase the abstract question into one we can solve mathematically: “How many 12-bit pages fit into a 16-bit virtual address space?”

    So, the problem now boils down to simple division—2^16 / 2^12, which can be expressed as 2^(16-12) = 2^4, which is sixteen (16). That’s 16 page table entries required for a virtual address size of 16 bits combined with a page size of 4K.

    The same approach can be taken for all the remaining practice problems: convert the page size (e.g. 4K, 8K) into a power of two and divide it into the size of the virtual address space (e.g. 2^16, 2^32).
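
    Here is that same arithmetic as a small C program (my own sketch), so the other combinations from the practice problems can be plugged in:

    // pte_count.c -- entries = 2^n / page_size, where n is the virtual address width in bits
    #include <stdio.h>

    static unsigned long long page_table_entries(unsigned int n, unsigned long long page_size)
    {
        return (1ULL << n) / page_size;
    }

    int main(void)
    {
        printf("%llu\n", page_table_entries(16, 4096));  // 16-bit address, 4K pages -> 16
        printf("%llu\n", page_table_entries(32, 8192));  // 32-bit address, 8K pages -> 524288
        return 0;
    }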