

Letter to an incoming CS Undergrad

Dear Jiana,

I heard from your mother that you are enrolling in college as a Computer Science major.

First of all, I want to congratulate you on successfully getting into college. Though it might seem like "everyone goes to college these days", that does not diminish your achievement in the least. Comparisons matter, but by definition only relatively. The work you put in through 12 years of schooling, then college applications and everything else, were tasks given to you as a child, explicitly or implicitly. Maybe you liked them, maybe you didn't, but what matters is that you saw things through to completion. So again, congratulations.

Second, I want to welcome you to the field of computers. It's a friendly field; the hacker ethos means there is always someone willing to reach out and help -- as long as you put in the work first ;). It's also, very surprisingly, very accessible. Programmers like nothing more than to extol and trumpet their work; fortunately for programmers, they also invented the internet. You will soon hear and see and meet many, many bright, talented and industrious people in this space that you can learn from. Have fun making new friends!

Third, it's alright to take a while to become "good". Maybe you won't even want to be a good programmer. But if you do, it takes time. There's no instant cheat code. The only cheat codes I know are to study a lot, do side projects a lot, and meet and follow interesting people to see what they're doing (find "Hacker News" and make it your daily ritual to skim through the headlines). Admittedly, I was not very good at, or was very late in doing, any of those myself, but maybe you can make use of them. There's no cheat code to being "good", so work hard!

Fourth, I think the aspect that makes programmers fall in love with programming is the freedom. With software, you have the freedom to do almost anything. If you can think of it, it can be done. I've been doing this for 5 years now (including time in college), and I don't think I got it until this year, so don't fret if you don't get it right away. The freedom to do what I want is honestly intoxicating. I am only limited by my thoughts and transferring them to my fingers. I hope you will find that freedom as well.

Fifth, it's ok to switch direction. I was in chemical engineering for 2 years in college before I landed on computers. At first, I thought CS would be my minor, but that intro class got me hooked and I went all in. Maybe for you it will be the other way: you don't like computers at all, you hate looking at screens all day, your posture has gone bad and your eyes hurt, and you just don't enjoy it the way others seem to. That's fine. Don't make decisions you feel you will regret later. Do things because they make sense to you and your priorities. Be careful of the sunk-cost fallacy. People's advice is just that, advice. Remember that it's your life and your future. At 18 you legally became an adult, and with that comes freedom and responsibility to yourself. Look out for yourself!

Finally, the only concrete advice I'll give you is going to be in this paragraph. Get a Mac or install Linux on your machine, and know that Windows sucks. Use the command line. Put all your homework and notes and diaries and projects on GitHub or something similar, even if they're only private to you. Use the command line. Self-marketing doesn't have to be icky; think of it as "increasing the surface area for luck to land on", or, in other words, start writing a blog and share. Use the command line. Protect your eyes, and I suggest doing something physical at least once every two days. Use the command line. Slow is smooth, smooth is fast, and we must be as fast as possible because it's always better to be fast; so learn how to type faster, learn how to read faster, learn how to learn faster. Use the command line. Read Hacker News. Use the command line.

Good luck and hack on!

from your mother's colleague,

Viet Than

the database discovery

This is probably my most interesting story so far at this job. No lie, I really did discover a database in production that no one else knew existed.

It started when Kobi, AppCard's Operations Director, approached me one day and said, "Hey Viet, can you look into why one of our jbrains wasn't backed up?".

For context, jbrains are the on-prem devices AppCard deploys to our customers (the grocery stores). These brains sit in the grocery store's network and communicate with the various Point-of-Sale devices to administer coupons, the loyalty system, etc. The jbrain is highly configurable, as each grocer has different needs and integrations.

I knew the jbrain's "backups" are really just daily copies of these configuration files, stored on a server on AWS (we will leave aside the question of why not S3). With these files, a replacement jbrain can be "built" with the same configuration if there is a hardware failure or the like.

After confirming that the jbrain in question indeed had no backups, and that other jbrains were missing their backups too, the only suspects were a bug in the backup process or a bug on the backup server. Now, I know this backup server; the tech support guys and I use it every day to do our work, but it's a holy mess of scripts created by half a dozen sysadmins that I never got a knowledge transfer on, so we couldn't start our search there. How about the process? Did we know how the backup process worked? Of course we didn't. And just as obviously, the guys who actually built it were long gone and didn't leave behind any documentation on either the process or the server. The only clue I had was someone mentioning: "I think it's scheduled to run daily at 1 or 2AM or something".

Now that could mean anything, but to me, that sounded like a crontab. At the very least, I hoped the crontab existed on the backup server, and not some other server, 'cause oh boy do we have a lot of servers (as an aside, this monstrosity of complexity is being worked on, with no end in sight). I was able to find a way to output every possible cron job (per-user crontabs, system cron directories), and nestled in all those jobs was one labeled "daily jbrain backups". Aha!
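For anyone curious, here is a minimal sketch of that "output every possible cron job" idea in Python (the real hunt was shell one-liners; the paths, the need for root, and the script name in the demo are assumptions about a typical Linux server):

```python
import subprocess
from pathlib import Path

def active_jobs(crontab_text):
    """Keep only real job lines: drop blanks and comments."""
    return [
        line for line in crontab_text.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    ]

def scan_all_crontabs():
    """Collect jobs per user plus system-wide cron files (needs root)."""
    found = {}
    # Every user in /etc/passwd might own a crontab.
    for entry in Path("/etc/passwd").read_text().splitlines():
        user = entry.split(":")[0]
        proc = subprocess.run(["crontab", "-l", "-u", user],
                              capture_output=True, text=True)
        if proc.returncode == 0 and active_jobs(proc.stdout):
            found[user] = active_jobs(proc.stdout)
    # System-wide locations: /etc/crontab and drop-ins in /etc/cron.d.
    for f in [Path("/etc/crontab"), *Path("/etc/cron.d").glob("*")]:
        if f.is_file():
            found.setdefault("system", []).extend(active_jobs(f.read_text()))
    return found

# Demo on a small snippet rather than a live server:
sample = "# m h dom mon dow command\n0 1 * * * /opt/backup/jbrain_backup.pl\n"
print(active_jobs(sample))  # -> ['0 1 * * * /opt/backup/jbrain_backup.pl']
```

Dumping everything into one list like this is what let me grep for "backup" across all users and directories at once.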

But wait, that backup script is in Perl. I didn't know Perl, but I had the spirit all engineers share: the belief that we can figure anything out. It's actually quite an intuitive language. And all you really need to know to debug is how to print to stdout.

I quickly found that this backup Perl script relies on a text file with a list of stores to know what to back up. Grepping through that list, we could see it was certainly missing many, many stores. So what populates this file?
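The shape of that backup job, as I understood it, is roughly the following sketch (all paths, the store-list format, and the directory layout are hypothetical stand-ins for the real Perl script):

```python
import shutil
import tempfile
from pathlib import Path

def backup_from_store_list(store_list, config_root, backup_root):
    """Copy each listed store's config directory into the backup area."""
    stores = [s.strip() for s in store_list.read_text().splitlines() if s.strip()]
    for store in stores:
        src = config_root / store
        dst = backup_root / store
        if not src.is_dir():
            # The failure mode we were chasing: a store absent from the
            # list (or missing its configs) simply never gets backed up.
            print(f"skipping {store}: no configs found")
            continue
        shutil.copytree(src, dst, dirs_exist_ok=True)
    return stores

# Tiny self-contained demo in a temporary directory:
with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    (root / "configs" / "store_001").mkdir(parents=True)
    (root / "configs" / "store_001" / "pos.ini").write_text("[pos]\n")
    (root / "stores.txt").write_text("store_001\nstore_002\n")
    backup_from_store_list(root / "stores.txt", root / "configs", root / "backups")
    print((root / "backups" / "store_001" / "pos.ini").exists())  # -> True
```

The key point: the script is only as good as the list it is fed, which is why the real question became what populates that list.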

At this point I could have done a combination of find/grep, but thankfully I noticed that this text file was last modified at 11PM on the dot the previous day. Lol, crontab again it is. Scanning the crontab output from the previous section, and what do you know, another Perl script.

This time, I noticed something peculiar. The Perl script started calling /usr/bin/mysql with some variables. Chasing down those variables led to some env files, and at that point I realized it was calling a database that I didn't know about. This database wasn't in my training, it wasn't ever mentioned by the support engineers, it wasn't on the Google Sheet listing the databases maintained by the ex-database administrator, and it was not part of my knowledge transfer with the ex-sysadmin either.

I called Kobi and told him the situation, and then we simply shared a kind of chuckle reserved for situations of absurdity.

Back to work, I obviously started by logging into this MariaDB lost through time. There were only a few tables, nothing mind-blowing or anything. But with the Perl script in hand, I started tracing what it does with the database. And actually, once I figured out how to run the Perl script, the bug quickly became apparent: the script wasn't handling an error when it tried to insert rows into the database. For a moment, it was the happy developer debug loop of modify, run, read, until eureka!

Anyway, what the issue was isn't important (it's fixed by now! There was missing ancillary data because new jbrains had a recent upgrade), but the discovery of the database is. This database, until we can move on from it, is a critical part of the company's infrastructure. Its existence, even mostly unmanaged as it is now, changed how development for operations can move forward. We started documenting it. Though opportunities are few, future development did consider whether we could use that database. Once I figured out how to get myself superuser access, I even started adding new tables for my development needs.

Looking back, I think of this story as a fond discovery. The CTO was definitely pleased to hear about this find. And I think it's a lesson in how effective but forgotten scripts and software can quietly run for years until the day something breaks.

PS: We are starting to centralize the various Perl and bash scripts across servers and put them under version control. Not forgetting those too!

the data recovery

Many developers will have done this, and some probably do it as a daily routine, but a recent data recovery job at work felt like the latest expression of my career's progress so far.

The Problem

After being notified by some customers, AppCard discovered that a real-time SQS data queue provided by a third party hadn't been providing real-time data in a while. Though we were able to quickly get the third party to bring that system back online, we still had approximately 4 days of data missing and unprocessed.

{% include centerImage.html url="/assets/DataRecovery/not_my_problem.gif" desc="What I wanted us to say to them but they said this to us first" title="The 3rd-party didn't say this, but more like 'we don't want to deal with this'" alt="Jimmy Fallon on The Tonight Show saying 'This sounds more like a you problem'" %}

Based on business considerations, we decided that it would be best if we could recover the data without needing help from the third party (instead of telling them that they should be doing this, because after all it's their fault). When the integration lead hesitated to take on this responsibility due to allocation constraints, I volunteered to take on the challenge. There were two questions I had to answer before even committing (because free credits for customers are expensive but simple): 1. Is it possible to retrieve the data from the third party's available API? 2. How long would it take to implement this?

{% include centerImage.html url="/assets/DataRecovery/give_the_money.gif" desc="How I imagine any average customer hearing about missing data" title="The greed of man is insatiable" alt="Scene from the show Friends where Phoebe grabs Ross then threateningly says 'Give me your money, Punk'" %}

The Fix

Firing up a Jupyter notebook, I got to work. I was quickly able to confirm that, with the right secrets pulled from the right place and just by reading the documentation, the third party's API seemed able to provide the data we needed (we are leaving aside the question of why we rely on an SQS queue instead of this API ;) ). Additionally, after quickly skimming through our integration subsystem, I was able to identify a location in the flow where the right data could be injected with the right dummy setup.

{% include centerImage.html url="/assets/DataRecovery/in_theory_possible.gif" desc="I was 70% sure I could do it" title="The line between confidence and arrogance is thin" alt="Some dude on a red couch saying 'In theory it's possible'" %}

Gauging my own speed of development, considering that realistically I only grasped maybe 60-70% of how to use the API or the integration subsystem, and adding some buffer, I estimated 2 days for implementation and 1 day to run the recovery process. I presented my findings to the business and tech leads that afternoon and got the green light to go ahead.

{% include centerImage.html url="/assets/DataRecovery/you_got_this.gif" desc="I didn't include a few worrying discussions of possible side-effects" title="Bill Murray would make a great tech lead" alt="Bill Murray in a suit with left eyebrow raised while holding a wine glass on his left hand and pointing at the screen with his right hand at the viewer with caption 'You Got This'" %}

Our async infrastructure and integration are already built on the Python framework Celery, convenient grounds for this one-off development. The simple overview of the job is that it would pull data for 100 transactions at a time, process it, and repeat until it hit a transaction outside the 4-day gap. I made sure to provide sufficient optional parameterization in case I needed to restart the job if it failed or stopped unexpectedly. Since we can only deploy once a day, a struggling but still-running process beats having to wait for the following day to fix the code and start over. This also meant an almost excessive amount of logging, so as to have intimate visibility into how the recovery task was going and to provide the necessary parameters if the job needed restarting.
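A rough sketch of the task's shape (in production it is a Celery task; here it's plain Python with hypothetical stand-ins for the third-party API client and our processing hook, so the resume-by-logging logic stands alone):

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("recovery")

BATCH_SIZE = 100  # pull data for 100 transactions at a time

# --- stand-ins for the third-party API and our integration subsystem ---
FAKE_FEED = [{"id": i, "timestamp": i} for i in range(250)]
PROCESSED = []

def fetch_transactions(after_id, limit):
    """Pretend third-party API: transactions after a cursor, oldest first."""
    return [t for t in FAKE_FEED if t["id"] > after_id][:limit]

def process_transaction(txn):
    """Stand-in for injecting data back into the normal integration flow."""
    PROCESSED.append(txn["id"])

def recover_gap(start_cursor, gap_end_ts):
    """Pull batches until the first transaction outside the gap.

    Every value needed to resume is logged, so a stuck run can be
    re-triggered manually with the right parameters instead of waiting
    a day for the next deploy.
    """
    cursor = start_cursor
    while True:
        log.info("fetching batch, cursor=%s (restart parameter)", cursor)
        batch = fetch_transactions(cursor, BATCH_SIZE)
        if not batch:
            log.info("feed exhausted at cursor=%s", cursor)
            return cursor
        for txn in batch:
            if txn["timestamp"] > gap_end_ts:
                log.info("reached a transaction outside the gap, stopping")
                return cursor
            try:
                process_transaction(txn)
            except Exception:
                # Log and keep going: a struggling-but-running job beats
                # a dead one we cannot redeploy until tomorrow.
                log.exception("failed txn %s at cursor=%s", txn["id"], cursor)
            cursor = txn["id"]

print(recover_gap(start_cursor=-1, gap_end_ts=120))  # -> 120
```

When the job stalled in production, the last logged cursor was exactly the parameter needed to re-trigger it by hand.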

{% include centerImage.html url="/assets/DataRecovery/laying_train_tracks.gif" desc="Conceptual visual of my architecture" title="I'm Gromit" alt="The beagle Gromit from the series Wallace and Gromit riding a toy train and laying down the train tracks for that toy train as fast as he can so he won't crash" %}

Once I felt comfortable, I made sure to test my task in our pre-production environment. But admittedly, our pre-production data is very different from real production. There were immediate hiccups once this was merged into production: one of our assumptions turned out to be incorrect, and sometimes the async job didn't automatically repeat even though there was more data in the gap to query. Thankfully, because of the logging, I could manually re-trigger the jobs with the right parameters. This meant more human intervention but still allowed the job to finish.

{% include centerImage.html url="/assets/DataRecovery/phew.gif" desc="I didn't do this cause I was sitting next to the business, but I was this internally" title="A lot of internal self-praise" alt="Some guy wiping his brow" %}

The Conclusion

In the end, almost all our customers didn't even notice the data gap. Shoppers got their points and we didn't need to give anyone any extra credit. My teammates could focus on other tasks while I proved to myself that I can solve vague and unknown problems by myself. This mini-project was well-delivered, well-scheduled, and had real, immediate business impact on the bottom line. Coming home that day, I felt like I had earned my paycheck.

{% include centerImage.html url="/assets/DataRecovery/honest_work.jpg" desc="Professional pride feels good" title="Couldn't find the gif for this" alt="The meme with the farmer and caption 'It ain't much, but it's honest work'" %}

line goes up

Crypto and its problems

"Line Goes Up – The Problem With NFTs" - Folding Ideas

and

[M]arkets are distributed systems.

Even though there are, in fact, very strict regulators and regulations, I can still enter into a contract with you without ever telling anyone. I can buy something from you, in cash, and nobody needs to know. (Tax authorities merely want to know, and anyway, notifying them is asynchronous and lossy.) Prices are set through peer-to-peer negotiation and supply and demand, almost automatically, through what some call an "invisible hand." It's really neat.

As long as we're in the continuous control region.

As long as the regulators are doing their job.

Here's what everyone peddling the new trendy systems is so desperately trying to forget, that makes all of them absurdly expensive and destined to fail, even if the things we want from them are beautiful and desirable and well worth working on. Here is the very bad news:

Regulation is a centralized function.

The job of regulation is to stop distributed systems from going awry.

Because distributed systems always go awry

...

I find myself linking to this article way too much lately, but here it is again: The Tyranny of Structurelessness by Jo Freeman. You should read it. The summary is that in any system, if you don't have an explicit hierarchy, then you have an implicit one.

Despite my ongoing best efforts, I have never seen any exception to this rule.

Even the fanciest pantsed distributed databases, with all the Rafts and Paxoses and red/greens and active/passives and Byzantine generals and dining philosophers and CAP theorems, are subject to this. You can do a bunch of math to absolutely prove beyond a shadow of a doubt that your database is completely distributed and has no single points of failure. There are papers that do this. You can do it too. Go ahead. I'll wait.


Okay, great. Now skip paying your AWS bill for a few months.

Whoops, there's a hierarchy after all!

Tô Minh Sơn's comment on apenwarr:

"Men prefer to will nothingness than to not will"

and also pointed me to Chainalysis's ranking of crypto adoption, with Vietnam on top

niche tech in film

I just want to post about this sexy beast that is currently situated at the Mono No Aware film lab in Brooklyn, New York. Let me try to colorfully recount what Steve Cossman, Mono's director, told me:

This is 1 of 18 machines in the world. The hardware is handbuilt by one guy and the software is handbuilt by another. Its full cost is \$250,000, but they made one at \$30,000 for Mono. It's got 32TB of hard drives for now, as the guy will come next week to upgrade that. It's hooked up to a Windows PC that hosts the processing software and exports to the data tower, and we've got a Mac hooked up to that for ease of data transport. It scans 8 frames a second at 4K resolution. We drove it to the lab in the middle of a snow squall, and I have to thank a cinematography.com guru for helping set it up for us.

{% include centerImage.html url="/assets/niche_mono2.jpg" desc="what a sexy scanner" title="Xena Film Scanner" %}

{% include centerImage.html url="/assets/niche_mono1.jpg" desc="Xena control module" title="what are all those knobs?" %}

That's it, just niche tech that most will not get to see. Unless they come to Mono No Aware.

scaling with openvpn

You know your company is growing when your openvpn --max-clients limit suddenly needs to be raised beyond the default of 1024, or else the OpenVPN server starts dropping connections and everyone thinks it's a firewall issue.

It started around 2PM. Our resident SysAdmin-Extraordinaire Dave was sitting in the seat next to me when he suddenly said out loud, "I can't connect to any jbrains". Ray, who sits opposite me, reached for his keyboard, typed a few things, and confirmed, "huh, I can't connect to any either". Hearing that, the other support engineers started doing the same thing, verifying that they too couldn't connect to any jbrains.

At this point I should explain what the jbrains are. They're AppCard's brains in the field; we deploy one to each merchant that we work with, and it handles any business logic, watches and alters POS transactions, and sends data back to our remote servers. From our servers, we can ssh into any of the jbrains to do maintenance, debugging, log retrieval, etc. In short, they're the most important component in the AppCard hardware system, and now none of us could connect to them.

Almost instantly, the entire Ops team was roused into a flurry of activity. The Ops Manager didn't notice at first (he's very plugged in), not until the first of the Account Managers started pinging him on Slack and then physically walked over to ask what's going on. Dave had left only one message on the prod alerts channel, saying: "jbrains are down. looking into". Within 10 minutes, there was a Google Meet to which every engineer who had not yet left work was invited (in Israel it was dinner time). The CTO couldn't join, but he was able to guide debugging through Slack, with some very insightful questions like "are the iptables up?". Dave himself went into the jbrain provisioning room all by himself, for focus I guess, and did his furious typing there.

Are the jbrains down? No, Account Managers are reporting discounts are still being applied. Are the servers configured correctly? Yes, and no one has changed anything recently. Is OpenVPN up? Yes, we have multiple servers, none died, and failover processes would have been triggered. Can we connect to the jbrains we have in the office? No, well, yes, but not through our VPN servers. Is it AWS? No, we can get into the servers and we can connect to the jbrains through their network IPs. What do you see when you connect to a jbrain directly and look at its log? It says it cannot reach the VPN server. Can't reach or can't connect? Can't reach, "no route to host". Can you connect to the VPN server? Yes, let me attempt a restart of the servers. That reminds me of something, is iptables on? No, it's not on, wait, it's only disabled, not stopped, and the server restart brought iptables back up. Disabling now.

It was like a sunrise. Green numbers started popping up on all the dashboards, jbrain connections flooded back in, logs started sprinting, and the database suddenly saw a spike in CPU and memory utilization from the backlog of transactions needing processing. For a moment, things were alright. People could breathe and ask "what happened? why did it happen?". Then Dave suddenly pinged on Slack: "wait, nightmare not over, jbrain connections started dropping".

At this point I phased out. I knew Dave could fix it, and he did: the second time around, having to restart the servers again, Dave noticed that when the number of connections reached 1024, the VPN servers started dropping connections. It was the default --max-clients on OpenVPN.
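For reference, the eventual fix is a one-line change in the OpenVPN server configuration (2048 here is just an example value, not necessarily what we chose):

```
# server.conf -- raise the connection cap above OpenVPN's default of 1024
max-clients 2048
```

The same option exists as the --max-clients command-line flag; either way, it has to be set before you hit the wall, because nothing in the logs warns you that you are approaching it.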

Fun.

how do you database?

At my previous job, govtech/tax-tech, the database was just as important as the code. Now what do I mean by that? Mooney explained it best on this exact topic:

Given how much thought and effort goes into source code control and change management at many of these same companies, it is confusing and a little unsettling that so much less progress has been made on the database change management front. Many developers can give you a 15 minute explanation of their source code strategy, why they are doing certain things and referencing books and blog posts to support their approach, but when it comes to database changes it is usually just an ad-hoc system that has evolved over time and everyone is a little bit ashamed of it.

I believe the quote above is true. Admittedly, I'm only a 1+ YOE software engineer, but having jumped ship from a govtech consultancy to a startup, I find there is a lot to compare in how databases are treated and how this leads to a better developer experience.

What follows is a series of features I found missing at my current place of work.

1. Version Control

Schemas had version control. The system detected any changes made to the schema (in fact, the company never taught you to alter tables with raw SQL, because you made table structure changes through the system). Deleting or removing columns, adding or editing comments, adding or editing indexes (and probably many more): local changes were "synced" with the shared work server, which assigned a version number to your structure. Migrating from local to testing environments and then, ultimately, to prod was simply a matter of having the environment point to the right version.
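The mechanics can be sketched in a few lines of Python; SQLite stands in for the real database and the migrations are made-up examples, but the idea of "point the environment at a version number" is the same:

```python
import sqlite3

# Each schema change gets the next version number (hypothetical examples):
MIGRATIONS = {
    1: "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)",
    2: "ALTER TABLE customers ADD COLUMN email TEXT",
    3: "CREATE INDEX idx_customers_email ON customers (email)",
}

def current_version(conn):
    """Read the environment's schema version (0 if never migrated)."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    return row[0] or 0

def migrate_to(conn, target):
    """Apply every migration after the current version, up to target."""
    for version in range(current_version(conn) + 1, target + 1):
        conn.execute(MIGRATIONS[version])
        conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
    conn.commit()

# "Point the environment at version 3" and the system does the rest:
conn = sqlite3.connect(":memory:")
migrate_to(conn, 3)
print(current_version(conn))  # -> 3
```

Because every environment records which version it is at, promoting testing or prod never means hand-running ALTER statements; it means replaying the same numbered changes in the same order.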

nothing else to be done

After 25 years of a career, I have yet to see an organization where things are so perfect that:
- no refactoring is needed
- no additional documentation is useful; it's all there, shiny and beautiful, and it updates itself nightly
- logging/monitoring/diagnostic tools are perfect
- builds are so fast that you wonder if you pressed enter
- all necessary linters are configured and used
- everything has unit tests
- and integration tests
- and there's enough time for exploring alternative technologies for future development
- and enough time for contributing features/fixes upstream to the open source things you use
- and you cannot build tools to answer asks from customers even faster

So yes, you may not get official tickets assigned to you, but it doesn't mean there's nothing else to be done. Perceiving that need is the first step in moving from a junior to a more senior role; acting on that need is the second step.

Now, depending on the country you're in, social norms may make you unpopular among co-workers and managers alike if you move too much, so there's that.

Beautiful lessons by /u/mavvam