Lessons, Books & Resources that shaped my thinking in 2020

It is an understatement and a cliche to say that 2020 was a unique experience in our lives. Yet, I was fortunate to have access to some great books, newsletters, podcasts, etc. which taught me a lot during this time. In this post, I am listing some key learnings and a few resources that taught me the most in 2020.

Key Lessons I Learnt

On Health & Diet

  • Insulin resistance is the underlying cause of most heart-related health ailments (especially for South Asians like me).
  • A carbohydrate-heavy diet leads to constant insulin spikes. Over time this leads to two kinds of problems: insulin resistance, where muscle and other tissues stop responding to insulin and stop absorbing glucose from carbs, or low insulin, where the pancreas stops producing enough insulin. Both cause excess glucose to enter the bloodstream or the liver (where it is converted to fat), leading to multiple chronic illnesses. – The South Asian Health Solution by Ronesh Sinha.
  • Pay less attention to overall cholesterol; pay more attention to triglycerides. Make sure you maintain a triglyceride-to-HDL ratio < 3.0.
  • Do bodyweight exercises at least 1-2 times a week. Incorporate interval/HIIT training twice a week.
  • High stress levels, irregular sleep patterns, and low vitamin D can accelerate insulin resistance.
  • Shift carb intake to post-workout periods so muscles use the carbs for energy instead of shunting them into fat-cell storage. Working out before breakfast is effective for this reason.
  • Avoid processed foods, sugary drinks, and other man-made foods (our natural evolution did not prepare us to consume them). Moderate the consumption of starchy foods, grains, and legumes. Prefer low-glycemic carbs over high-glycemic ones.

On Money, Finances & Investing

  • Live below your means, not within them. In other words, try to limit lifestyle creep as income grows. The hardest financial skill is getting the goalpost to stop moving; inflating your lifestyle with your income is a never-ending hedonic treadmill.
  • Saving money can give you more control over how you spend your time in the future. Time is the most valuable resource; money is just a means to buy time.
  • Good investing isn’t necessarily about earning the highest returns through one-off hits. It is about compounding above-average returns over the long term. Be patient!
  • Sell down to a point where you can sleep better. Health above wealth.
  • Equanimity & humility: let go of the fear of missing out. Don’t act in haste. Have the humility to acknowledge blind spots and tail-risk events, and don’t let greed wipe out your wealth.

On Parenting

  • Kids need autonomy, a sense of competence, and care/relatedness.
  • Kids should be encouraged to listen to their own intuition, to be spontaneous, and to be creative in thought and action. This gives them autonomy and a willingness to try without fear of failure.
  • Discipline over punishment. Discipline means teaching them the consequences of actions with love; it also tells them you care. Threats and punishment breed a fear of authority and can hurt their self-confidence.
  • Help kids reason through failure (even on small tasks) and appreciate the effort. This develops a learning mentality and the confidence that they can learn anything.

On Leadership

On Decision Making

On Markets & Business Strategy

  • Future monopolies never look like current ones. The best way to compete is to pick a narrow market orthogonal to the existing monopoly and create immense value for users/customers. Google did not start out as a Yahoo clone; Facebook did not start out as a New York Times competitor. – The Greatest Game by Jeff Booth
  • Competing in a new market is advantageous to disrupters because it is almost always expensive and self-destructive for existing companies to compete there. – Intel’s Disruption by Steven Sinofsky.
  • In enterprise software, up-selling to an existing customer is easier than acquiring a new one. A collection of good-enough tools integrated together is a much more valuable proposition for an IT department than tools that work great individually. This dynamic helps Microsoft Teams compete well with Slack despite Slack’s superior technology. – Salesforce Acquires Slack by Stratechery
  • Companies/industries with a long time gap between committing to supply and realizing demand are more susceptible to natural disasters. Airlines must front-load the supply of planes and recover the cost from passengers in the future; hotels have to pay for real estate and inventory before recovering costs from travelers. These are the industries that suffered most during the pandemic.

Books

These are the books I learnt the most from in 2020.

This book is simple, short, and effective at explaining why and how to optimize for the long term. It introduces three main ideas – thinking through second-order effects, learning how to best motivate ourselves, and becoming experts at delaying gratification – as tools to help us play a long-term game.

This is the best book I have read on health and diet thus far. It taught me a lot about the impact of carbs vs. fat and how insulin plays a huge role in multiple health problems. The book explains how reducing carb intake, along with regular exercise, can help prevent common chronic issues like diabetes and cardiac arrest, and even reduce the risk of cancer.
It is also a great companion to the Thomas DeLauer and Dr. Eric Berg YouTube series on intermittent fasting.

This book teaches how to balance being nice with giving effective feedback. The author explains the concepts of Caring Personally and Challenging Directly to build trust and enable open communication. It has some great tools and concepts that I personally found useful in my job as an Engineering Manager.

Decision making is a complex subject. This book simplifies it into a few simple tools – avoiding resulting, avoiding hindsight bias, and using preferences, probabilities, and payoffs as opposed to pros/cons. I personally found these tools easy to use and practical. I have even used them in some of the big decisions I had to make in 2020.

Unlike most other investment books, this book teaches how we should think about money, expenses, lifestyle choices, retirement, the value of time, and compounding. An excerpt from the book – “The premise of this book is that doing well with money has a little to do with how smart you are and a lot to do with how you behave. And behavior is hard to teach, even to really smart people.”

Technology is by nature deflationary. Take the iPhone as an example: for a price of $600 it bundles a computer, a camera, a phone, a screen (TV), and more. But the monetary policies of all central banks are inflationary. In this book, the author explains the impact of these policies on society (a two-class system of haves and have-nots), politics, and the economy, and the danger they pose.

Podcasts

These are the podcasts that influenced my thinking most in 2020.

Invest Like the Best

This is an excellent resource on business building and investing, with great conversations with founders and investors walking through their journeys. I learned a lot from episodes on market dynamics, disruption, business moats, incentives in multi-sided marketplaces, etc.

A great podcast on technology and its impact on society, regulation, and frameworks for regulating tech.

The Investor’s Podcast

I have been a long-time listener of this show. It is the best podcast out there when it comes to focusing on fundamentals, understanding markets, and investing education. Especially in 2020, when market movements were crazy, this podcast was irreplaceable for me.

Blogs/Newsletters

Here are some newsletters that influenced my thinking most in 2020.

https://stratechery.com/

I have been a long-time reader of this blog. It is one of the best resources for understanding business, strategy, technology, and the internet’s impact on society. Apart from that, Ben also has great thoughts on how to regulate technology.

The Diff (Substack)

This newsletter covers two of my interests – finance and technology. Both the business landscape and market dynamics hit a huge inflection point in 2020 as distributed work and stay-at-home became prominent. Byrne does an excellent job of putting these changes into a larger perspective. It is a great companion read to Stratechery.

https://zeynep.substack.com/

2020 was the year of Covid. There was a lot of information out there, but I found Zeynep’s writing to be the single most authoritative source for understanding Covid, its developments, the public policy around it, vaccines, etc.

Memos from Howard Marks

Thoughts from a legendary investor; a bigger-picture understanding of market dynamics and behavior during this pandemic.

Bitcoin as an Asset

Note: I am long on Bitcoin. I am not a registered security analyst. Views in this post should NOT be taken as investment advice. Before you proceed further, please see the Disclaimer at the bottom of this post.

This is the second post of a two-part series on my understanding of Bitcoin as an asset. You can find the first part here.

In the previous post, I laid out my understanding of money, gold, and Bitcoin from an economic standpoint. In this post, I will try to list out my understanding of why Bitcoin is worthy of being an asset.

Price to What?

In traditional markets, assets (primarily stocks) are valued in several ways, such as Price-to-Earnings or Discounted Cash Flow. But all these measures assume that the underlying asset has earnings, and Bitcoin doesn’t have any. This is the reason I feel any method of valuing Bitcoin is somewhat speculative in nature. Speculation by itself is not necessarily wrong, but it means the measure has risks associated with it.

In the next sections, I try to understand the case for Bitcoin through the lens of individual investors’ actions and also how corporate strategy seems to be evolving around Bitcoin.

Case Through Lens of Individual Investor

Stock to Flow

Stock to flow (S2F) is a measure of the scarcity of a commodity.

Stock to flow = Stock / Flow

Stock = total amount of the commodity available in the world
Flow = new amount of the commodity produced every year

Historically, there is a relationship between the price of a commodity and its S2F value. For example, here are the S2F values for gold and silver (two of the most precious commodities).

Gold S2F = ~58
Silver S2F = ~33.3

Another way to understand S2F is by inverting its value: the inverse of S2F is supply growth (which can loosely be termed inflation). Notice that gold’s supply growth is ~2%. That is, if you hold 1 ton of gold in reserve, next year your reserve’s share of the total gold available will shrink by about 2%.
This also happens to be the reason central banks historically tried to keep inflation around 2%. Because currencies used to be pegged to gold, banks needed to keep inflation under ~2% to make sure currencies didn’t lose value with respect to gold.
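To make the inversion concrete, here is a small Python sketch that derives annual supply growth from the approximate S2F values above (all numbers are illustrative):

```python
# Illustrative stock-to-flow arithmetic (values approximate, from the text).
gold_s2f = 58.0      # stock / flow for gold
silver_s2f = 33.3    # stock / flow for silver

# Inverting S2F gives annual supply growth, which can loosely be read
# as the commodity's "inflation" rate.
print(f"Gold supply growth   ~ {1 / gold_s2f:.1%}")    # ~1.7%, roughly 2%
print(f"Silver supply growth ~ {1 / silver_s2f:.1%}")  # ~3.0%
```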

Since the price of a commodity tends to be related to its S2F, one popular technique for valuing Bitcoin is the S2F model popularized by the anonymous blogger Plan B. This method takes Bitcoin’s supply growth (which, unlike gold’s, is not constant) and models its price as a function of it.

Historically, Bitcoin’s price has followed this model. But history does not guarantee the future, and there are caveats in comparing Bitcoin to gold.

Gold has centuries of history, and as we discussed in the previous post, the current financial system evolved from the Gold Standard, so Bitcoin is not the first entrant. Gold is a physical material: some of its value comes from industrial usage (though not a huge percentage) and jewelry (virtue signaling); the rest is a hedge against inflation. How much of this value Bitcoin will capture is yet to be known.

A Digital Scarce Asset

Bitcoin has been around for over 10 years, and in that time it has proven to be secure, durable, scarce, and transferable. It is widely considered a scarce digital commodity with all the properties of sound money.

Even though other digital cryptocurrencies popped up during this time, Bitcoin has clearly distinguished itself from them. It is truly decentralized (trustless), scarce (fixed supply cadence), and secure. Technologies like the Lightning Network are also bringing Bitcoin mainstream, increasing its utility. Network effects are critical for cryptocurrencies, especially for security and the evolution of the ecosystem; because of Bitcoin’s huge head start, it is hard for new entrants to catch up.

Along with this, Bitcoin’s scarcity, achieved through its halving cycle, makes it a good inflation hedge. Over 88% of all bitcoin that will ever exist has already been mined. More and more of the existing bitcoin is moving off exchanges into private wallets (which usually belong to long-term holders). This means that as adoption increases, new demand will chase a smaller supply of bitcoin, which automatically increases its price. The last two halving cycles have borne this out. Here is a tweet that explains this phenomenon.

Inflationary Policies

In the post Inflation, QE – Lemonade Economy, Government & Central Bank, I detailed how current monetary policies have embraced inflation. Current US treasury yields are <2%. Rates have gotten so low that central banks are talking about negative yields. Most investment portfolios recommend bonds as a safe haven for money, but such low interest rates will force people to look for alternative investments as a hedge against inflation.

Apart from that, these inflationary measures have increased asset prices disproportionately. For a new graduate, even with a high-paying salary, owning a home in places like San Francisco or New York is increasingly hard. The only path to upward mobility is to invest in assets with a higher risk/reward. So we can expect more and more people to turn away from saving cash (at low/negative rates). I touched on this in a previous post about inflation.

It is hard to guess how much of that demand will look for bitcoin. But it is clear that whoever turns to Bitcoin has to compete for the limited supply available.

Case Through Lens of Corporate Strategy

Corporations holding treasury in BTC as a hedge

Just like individual investors, corporations have to park their treasury somewhere. In a negative-interest-rate environment, these corporations face shareholder pressure to put their cash into an inflation hedge.

We are already seeing early signs of this. MicroStrategy, an enterprise software company, has already moved about $400 million worth of its reserves into bitcoin. Square followed by allocating about 1.8% of its cash ($50M) into bitcoin as a long-term investment vehicle. Such moves can cause spikes in Bitcoin’s price and also bring Bitcoin further into the mainstream narrative.

Fintech Disrupters using BTC as Orthogonal Vector

Fintech is one of the most crowded spaces, full of giant corporations like banks and payment processors. In such a market, the best way for a new entrant to compete is to pick a very small part of the market that goes relatively unnoticed by these big corporations. If that small market happens to involve a completely new technology, the big companies might not initially be interested in competing there. Bitcoin can very well be that orthogonal niche for small fintech companies.

In fact, this has already happened. Cash App (from Square, which is not a small company) took this bet on Bitcoin recently. Cash App now lets its customers buy and send bitcoin. This makes huge sense for Cash App: it is a great way to compete against incumbents like PayPal while also reducing transaction costs by letting customers store bitcoin in Cash App and transact from there. PayPal immediately followed and started supporting bitcoin.

This creates a network effect for the Bitcoin ecosystem. More non-technical people have an on-ramp to Bitcoin without having to go through the effort of setting up a hardware wallet. It means more fintech companies may have to follow suit (especially ones in the peer-to-peer money market).

Institutional Adoption

As this adoption of bitcoin grows, Bitcoin will become a more viable investment vehicle, which might force institutions to start supporting it. For example, a relatively new company, Choice IRA, offers holding bitcoin in IRA (retirement) accounts. If Bitcoin’s price keeps going up, larger institutions will have no choice but to follow. We are already seeing some hedge funds (more) investing in Bitcoin. This further spins the flywheel of Bitcoin adoption.

Risks for Bitcoin

While the last few sections laid out the bull case for bitcoin (largely speculative), that doesn’t mean it is without risks. Here are a few I can think of.

Regulatory Risk

As Bitcoin’s march towards becoming a form of money continues, it is unlikely that governments will let their currencies be replaced. It is very likely that they will put more restrictions on exchanges and other on-ramps to Bitcoin. Whether this will trigger game-theory mechanics and push other countries to adopt Bitcoin competitively will be interesting to watch.

Protocol Risk

As we saw earlier, one factor in Bitcoin’s superiority over other cryptocurrencies is its security. If Bitcoin were ever hacked, or proven to be hackable, it could completely derail its narrative. Arguably such risks are more likely early in the lifecycle; each passing year makes an attack harder as the network grows. But with technology we should always be wary of security risks (quantum computing?).

Adoption

A lot of Bitcoin’s current value is derived from its potential as digital gold. But gold is a real physical commodity with a long history; it has proven resilient to inflation for centuries. If efforts like the Lightning Network don’t pan out, then digital gold will be the only narrative left for Bitcoin. Whether people will really adopt Bitcoin as digital gold is a hard question to answer right now.

See this post from Peter Schiff for a counterpoint to the digital gold narrative.

Gold is an inflation hedge because it’s also a commodity. When inflation reduces the purchasing power of fiat currencies, it takes more units of the inflated currency to buy a given commodity. Since #gold is also a commodity, it maintains its value relative to other commodities.

Since gold retains 100% of its properties over time, and is easy to store, it’s an ideal asset for consumers to hold during periods of high inflation. While more units of currency are needed to buy commodities, the same quantity of gold can still be exchanged for other commodities.

Unlike gold, #Bitcoin is not a commodity so it has no historic price relationship to any other commodity. As such it has no measurable purchasing power that can be stored for use as a medium of exchange. Its price exists in a vacuum. It’s only worth what the market will bear.

Originally tweeted by Peter Schiff (@PeterSchiff) on December 19, 2020.

Even if Bitcoin’s technology can eventually scale (through efforts like the Lightning Network), it is hard to imagine Bitcoin becoming default money without integrating into the existing financial system. Ultimately, societies are controlled and served by governments, and the financial system is one of the core pillars of a society, with its own rules and regulations. Some of these regulations (like Know Your Customer) go against the ethos and founding principles of Bitcoin.

If Bitcoin integrates into the financial system, how will its network react, and will it continue to be viewed as a hedge against currencies?

If it does not, will it run the risk of being a “hard to adopt” technology?

These are some interesting questions that will surface in the near future.

Here is an excellent write-up by Tyler Cowen on why crypto assets can be either useful hedges or useful forms of payment – but not both.

Conclusion

At this point, the best case for Bitcoin seems to be as a reserve asset (digital gold). It has strong security and scarcity along with being truly decentralized. There is a lot of momentum in the ecosystem (e.g., the Lightning Network) that could potentially push Bitcoin’s use beyond a store of value, but it is yet to be realized. Meanwhile, we are seeing adoption of Bitcoin by both institutional investors and corporations.

However, there is a lot of uncertainty around Bitcoin as an asset. My choice of word – “uncertainty” as opposed to “risk” – in the above sentence is intentional. Risk is measurable (you can attribute probabilities to outcomes), but uncertainty is not measurable, hence it is even more dangerous and could cause permanent loss of capital.

Disclaimer

I am long on Bitcoin. I might sell my Bitcoin at any time in the future. This post is NOT a recommendation to buy, sell, or hold Bitcoin. I wrote this post to organize my thoughts about Bitcoin and shared it so that you might find it useful. I am not a registered security analyst. Views in this post should NOT be taken as investment advice.

References

  1. Inflation-QE-Lemonade-Economy-Government-Central-bank
  2. The Bitcoin Standard – By Saifedean Ammous
  3. Macro Impact On Bitcoin :: Pantera Blockchain Letter, April 2020
  4. Bitcoin as Reserve Asset
  5. The Bullish Case for Bitcoin – Vijay Boyapati
  6. Podcast: BITCOIN & MICHAEL SAYLOR – A MASTERCLASS IN ECONOMIC CALCULATION
  7. Podcast: Once BITten – People-Know-Something-Is-Wrong
  8. The Greatest Game – Jeff Booth
  9. The Price of Tomorrow – Jeff Booth
  10. Youtube: Chamath Palihapitiya: Why Bitcoin Will Be ‘the Category Winner’

Money – Gold, Fiat, Bitcoin

Note: I am long on Bitcoin. I am not a registered security analyst. Views in this post should NOT be taken as investment advice. Before you proceed further, please see the Disclaimer at the bottom of this post.

This is the first post of a two-part series on my understanding of Bitcoin as an asset. You can find the second part here.

What is Money?

Money has to be one of the greatest human inventions. It unlocked our potential to cooperate, accelerating further innovation and leapfrogging humans ahead of all other species on this planet.

Human needs have evolved from food & shelter to the innumerable conveniences that are part of our lifestyle today. This became possible because we humans figured out how to exchange one valuable good for another (the barter system). Because of cooperation, different people can specialize in different skills and still have their needs met by exchanging the goods they produce for the ones they need. This has led to continuous innovation by humans as a species, taking us from our hunter-gatherer days to a species capable of exploring outer space.

But this system works only if any two people involved in a trade have a coincidence of wants. Let’s take the example of person A, who has a chicken farm, and person B, who has a cow. Person A can give B some (agreed-upon) number of eggs in return for some milk. This trade only works if B needs eggs. If B only needs (values) wheat, then A has to find a third person (C) who has wheat and values eggs, trade eggs for wheat with C, and then exchange that wheat for milk with B.

This model of trade does not scale as the number of involved parties increases; it becomes too complex to find a counterparty for your trade, because not everyone values the same goods. This brings us to the first property of a medium of exchange:

1. It must be valued/desired by all parties in a trade

Another important criterion for a trade to happen is a coincidence of wants at the same time. My hypothetical trade above only works if C needs eggs right now. If they need them a month later, the trade is worthless to them, because the eggs will have rotted by then. This brings us to the second required property of money or any medium of exchange:

2. Must retain value over time

It doesn’t stop there. It is quite possible that A thinks 2 eggs is too much for a pound of wheat, while C thinks 1 egg is too little for the trade. So A and C need a way to divide an egg. But you can’t really break a raw egg in two; even if you could cut it after boiling, what if the trade involves a chicken instead of an egg? You can’t cut a live chicken in two. This leads to the third property needed for a medium of exchange:

3. Must be divisible

It doesn’t stop there either. What if A and C are far away from each other? It must be possible for A and C to carry their goods to the agreed place of exchange, which is why a house is not feasible as a medium of exchange. This leads to the fourth property of a medium of exchange:

4. Must be portable across space

There are many other ways (and many properties) to define money, but for me these four are the main ones.

Gold as Money (and then store of value)

If we study human history from the early days of our evolution, we find a long list of experiments with different forms of money. There were attempts to use commodities like cattle, wheat, and even cocoa as money. But these experiments kept failing because these commodities lacked some (or all) of the properties above.

As human skills advanced, we got good at extracting metals from the ground. These metals proved to be a better choice for money than commodities: they are easier to carry over large distances (property #4), can be divided into smaller pieces once humans figured out how to weigh metal (property #3), and are durable over time (property #2). While many metals were experimented with, gold ultimately won because it was much rarer than most metals while also being more durable. This naturally made gold the most desirable metal (property #1).

Because of this perceived value, gold even became a virtue signal (ornaments), which further propelled its value. Through innovation, humans figured out how to mint raw gold into coins, with each coin representing a certain quantity of gold. Many empires and governments throughout history standardized on government-minted coins as the agreed “money”.

But over time, these empires misused their power as the sole authority to mint coins. In a quest to propel their economies, governments started minting more coins (with a reduced amount of gold per coin). While this increased money supply created a vibrant economy for a while, without proper control the inflation soon became unmanageable because of the empires’ greed. These empires became unstable due to loss of trust in their currency, leading to their eventual decline.

But the perceived value of gold never changed. It continued to be a rare material treasured by humans, and thus continued to be a store of value.

Paper Currency (Fiat Money)

Paper currency was adopted by multiple governments and empires as a better form of money. Paper money was durable over time (to an extent), easier to carry over long distances, and easy to print in different denominations (divisibility). However, it lacked the fundamental property of perceived value (it is neither rare nor as durable as metal). Early incarnations of paper money also had trouble with adoption, because people did not trust government-printed money.

Governments got around this value problem by pegging paper money to gold, meaning anyone could bring their money to the government and exchange it for gold. This increased the adoption of paper money. This arrangement is called the “Gold Standard”, and it continued for hundreds of years. Over time, confidence in paper money increased; people no longer really redeemed gold with their currency, making government-printed paper the de-facto money.

As is often the case in human history, power leads to abuse. In the early 1900s, countries like Germany started abusing this power by printing more currency than the amount of gold they held. While this circulation of money improved commerce and propped up the economy, it eventually led to inflation (much like in the gold era above) and an eventual crash, contributing to the Great Depression of 1929 (history may not repeat, but it rhymes).

In 1944, towards the end of World War II, the allied nations met to solve the problem of currency inflation. The meeting took place in Bretton Woods, New Hampshire, hence the name “Bretton Woods agreement”. Under the agreement, all central banks would maintain fixed exchange rates between their currencies and the USD (which was itself pegged to gold; the US held a large share of the world’s gold reserves at Fort Knox). This meant any inflation would essentially devalue a currency relative to the USD (and gold). A side effect of this agreement was that the USD became the de-facto global currency.

This eventually failed in 1971. The United States was seeing slowing economic growth and recession, and to prop up the economy it had to increase the money supply. This led to an eventual drop in the value of the USD with respect to gold, and people started redeeming their USD for gold. To stop this, President Nixon made the historic choice to unpeg the USD from gold, which effectively ended the Bretton Woods agreement (and the Gold Standard).

Without the peg to gold, governments were able to print more money into the system, leading to currency inflation, so gold’s perceived value continued to rise. To this day, gold is traded in most markets. See the chart below, which shows the increase in gold’s value relative to the USD over the last five decades.

Source: https://goldprice.org/gold-price-history.html

Bitcoin – Digital Currency and Store of Value

In the early 1990s, with the explosion of the internet, it soon became evident that commerce would move online. Many people saw the need for a digital currency to power commerce on the internet. But creating a digital currency was met with multiple challenges:

  • Who would authorize this digital currency? Physical money had a centralized trusted authority (central banks) to validate its authenticity. A digital currency is arguably global because of the unique reach of the internet, but there is no clear answer on which centralized entity could validate and control it.
  • How to avoid double spending? Without a central authority, how can we ensure that individuals are not double spending their money? Who maintains the ledger of each individual’s available balance?
  • How to secure the ledger? With traditional money, banks are responsible for securing your assets. Without a centralized entity holding your money, securing the ledger becomes a unique challenge.
  • How to create digital scarcity? Digital items are by nature easy to copy; look no further than music, movies, and all the piracy on the internet. This exposes any digital asset to massive reproduction and duplication, leading to inflation.

While these problems remained unanswered for a while, e-commerce continued to prosper. Banks started supporting wire transfers; Visa, Mastercard, etc. built out massive payment networks, acting as centralized authorities that validate individuals’ purchasing power (whether an individual has the money). This was all backed under the hood by the same physical currency. These payment networks and banks enabled micro-transactions between individuals while, behind the scenes, doing large settlements between banks (e.g., once a day).

But the answer to a true global digital currency was left unanswered until 2008, when the anonymous Satoshi Nakamoto published a seminal paper on a decentralized digital currency based purely on peer-to-peer technology, which he called Bitcoin.

Satoshi used a combination of game theory, cryptography, and peer-to-peer networks to solve the problems of creating digital scarcity, authorization, and a decentralized ledger. While the technical details are beyond the scope of this post, let us briefly look at how these problems are solved.

Authorization

Bitcoin leverages public-key cryptography to solve the authorization problem. Each individual has two digital keys – private and public. As the name indicates, the private key is a secret known only to the individual, while the public key is available for everyone to see. The unique property of this technology is that a message signed with the private key can be verified by anyone using only the public key. This helps ensure that money was spent by its rightful owner.
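As a toy illustration (not Bitcoin’s actual transaction format), here is a sketch using the Python `ecdsa` package and secp256k1, the same curve Bitcoin uses: a message signed with the private key verifies against the public key alone.

```python
# Toy signing/verification sketch (pip install ecdsa). This shows only the
# underlying idea, not Bitcoin's real transaction or script format.
from ecdsa import SigningKey, SECP256k1

private_key = SigningKey.generate(curve=SECP256k1)  # kept secret by the owner
public_key = private_key.get_verifying_key()        # shared with everyone

transaction = b"pay 1 BTC from Alice to Bob"        # illustrative payload
signature = private_key.sign(transaction)

# Anyone holding only the public key can confirm the rightful owner
# authorized this spend; verify() raises an error on a forged signature.
assert public_key.verify(signature, transaction)
```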

Decentralized Network to Maintain Ledger and avoid double spending

Bitcoin leverages a peer-to-peer network in which multiple volunteers maintain a copy of the ledger (balances and transactions). These volunteers are called miners (a term borrowed from gold mining). To verify that a person initiating a payment has the right balance, a majority of the volunteers need to agree on that balance.

Through this unique insight and its reliance on a decentralized peer-to-peer network, Bitcoin removes the need for a centralized trusted party. It is these centralized trusted parties that, repeatedly through the course of history, caused the downfall of various forms of money.

Incentives as a way to issue money and establish security

In Bitcoin, the security of the network is proportional to the number of miners. So, to incentivize more miners to participate, the protocol issues a certain amount of new bitcoin to the miner who validates a block of transactions. This is how new money enters the system, and at the same time miners are incentivized to participate. Miners also collect the transaction fees attached to the transactions they include (playing a role similar to the fee Visa or Mastercard charges today). The side effect of this incentive structure is increased security of the network: for anyone to attack the network (and create unauthorized transactions), they would need to control a majority (51%) of the network’s mining power. Bitcoin’s incentive structure makes this virtually impossible given the number of miners competing to validate transactions.

Difficulty Adjustment as a way to Scarcity

Any currency should be scarce for its value to hold or appreciate. Bitcoin has a fixed maximum supply of 21 million coins, and its algorithm has a fixed schedule on which new bitcoins are issued: roughly every 4 years, the reward given to miners for validating transactions is cut by 50%, as the sketch below illustrates.
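A quick sketch of why the cap works out to ~21 million: block rewards form a geometric series, starting at 50 BTC and halving every 210,000 blocks (roughly every four years).

```python
# Bitcoin issuance schedule: 50 BTC initial reward, halving every 210,000
# blocks. Total issuance = 50 * 210,000 * (1 + 1/2 + 1/4 + ...) ~ 21M BTC.
BLOCKS_PER_HALVING = 210_000
SATOSHI = 1e-8                 # smallest unit; smaller rewards round to zero

reward = 50.0
total_supply = 0.0
while reward >= SATOSHI:
    total_supply += reward * BLOCKS_PER_HALVING
    reward /= 2                # the "halving"

print(f"Total supply ~ {total_supply:,.0f} BTC")  # ~ 21,000,000
```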

This schedule is maintained through the difficulty adjustment algorithm. Every miner has to solve a cryptographic puzzle, along with validating transactions, to win the bounty. The puzzle is very difficult to solve and requires a lot of computational resources. If more miners attempt to solve the puzzle (which would increase the supply of bitcoin), the difficulty automatically goes up to keep the schedule on track: on average, only one puzzle is solved every 10 minutes across the entire network, maintaining the supply schedule.

Together, the above four properties of the Bitcoin protocol make it one of the soundest forms of money ever invented: money that is decentralized and trustless, money that is secured through its network effect, and money whose supply is controlled.

Over the last 11 years, these properties have helped bitcoin’s adoption grow, both in the developer community and in usage as digital cash, all while proving to be censorship-resistant, secure, and reliable. As I write this post, bitcoin has a market cap of ~$350 billion.

But it is not without limitations.

Current Limitations

Scale Issue

Bitcoin’s protocol dictates a steady creation of supply. As we saw earlier, this is achieved through the cryptographic puzzle that must be solved to validate transactions. Multiple transactions are grouped together into one block. Currently, each block takes ~10 minutes to create, and blocks have a size limit of 1 megabyte. This means Bitcoin’s transaction rate is at best 6-7 transactions per second, as the rough estimate below shows.
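A back-of-the-envelope version of that throughput estimate (the average transaction size is an assumption; real transactions vary widely):

```python
# Rough Bitcoin throughput estimate. Treat the result as an order-of-magnitude
# figure; the average transaction size below is an assumption.
BLOCK_SIZE_BYTES = 1_000_000       # 1 MB block size limit
AVG_TX_SIZE_BYTES = 250            # assumed average transaction size
BLOCK_INTERVAL_SECONDS = 600       # one block every ~10 minutes

txs_per_block = BLOCK_SIZE_BYTES / AVG_TX_SIZE_BYTES
tps = txs_per_block / BLOCK_INTERVAL_SECONDS
print(f"~ {tps:.1f} transactions per second")  # ~6.7 tps with these numbers
```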

This limits bitcoin’s usage as a basic medium of exchange. Today, Visa can process close to 4,000 transactions per second – multiple orders of magnitude more than what Bitcoin’s base protocol can ever achieve.

Today, the majority of Bitcoin transactions are carried out off-chain and settled on-chain on an infrequent schedule (e.g., once a day). For example, Coinbase (a cryptocurrency exchange platform) processes transactions within its own data centers and eventually settles them on the main Bitcoin network at a later time.

There are really interesting initiatives like the Lightning Network that hope to increase Bitcoin’s scale by several orders of magnitude. The fundamental insight is to move transactions off-chain into dedicated payment channels where people can transact bitcoin for a specific purpose, and settle the net result on the main Bitcoin network later (e.g., once a day). This is a very interesting space to watch.

Transaction Costs

Due to the current scale limitations, transactions have to compete with each other to be included in a block. Free-market dynamics give miners the power to charge higher transaction fees, which are much higher than the fees associated with traditional money.

These high costs make bitcoin prohibitively expensive for mainstream usage today.

Regulation

Bitcoin as an asset is still not regulated by the SEC and other commissions. Because of this, there isn’t yet mainstream adoption of Bitcoin by institutions like Fidelity, Vanguard, etc.

Steep On Ramp for Mainstream Adoption

Owning bitcoin is not straightforward today. To self-custody bitcoin, an individual needs to maintain a hardware wallet, which must be stored securely; not many people have these technical abilities. While exchanges like Coinbase, Robinhood, etc. have made it easier to create an account and buy bitcoin, there is still a lot left to be done to educate mainstream users on the concepts behind Bitcoin and why they should buy it.

All these factors have, for now, limited Bitcoin to digital cash or a store of value. Because of its scarcity and its potential for future usage, bitcoin is largely viewed as digital gold, and it has become a prominent reserve asset and hedge against inflation.

In the next post in this series, I will lay out my understanding of Bitcoin as an Asset.

Disclaimer

I am long on Bitcoin. I might sell my Bitcoin at any time in the future. This post is NOT a recommendation to buy, sell, or hold Bitcoin. I wrote this post to organize my thoughts about Bitcoin and shared it so that you might find it useful. I am not a registered security analyst. Views in this post should NOT be taken as investment advice.

References

  1. Inflation-QE-Lemonade-Economy-Government-Central-bank
  2. The Bitcoin Standard – By Saifedean Ammous
  3. How Currency Works – howstuffworks.com
  4. Macro Impact On Bitcoin :: Pantera Blockchain Letter, April 2020
  5. Bitcoin as Reserve Asset
  6. Blockchain Basics – Coursera
  7. The Bullish Case for Bitcoin – Vijay Boyapati
  8. Bitcoin Paper – Satoshi Nakamoto

Grow the scope or not?

Every manager at some point in their career will be forced to address the question – “Should my team take on more scope or should we continue doing what we do well?”.

The general sentiment around this question tends to oscillate between two extremes:

  1. Growing scope increases your team’s overall output.
  2. Do what you are doing well; there is still room to improve. Don’t fall into the trap of scope growth.

Unfortunately, for most managers faced with this situation, neither of these extremes is useful. In this post, I will dig into why these extremes rarely work and offer a reasonable framework for the question of scope growth.

Growing scope to increase output

The obvious problem with scope growth is the time commitment needed from your team. Time spent on additional scope takes away time from current commitments, time that could have been spent improving your on-call health. It also costs the team focus, which very likely impacts quality, burns engineers out, etc.

An optimist in you might say, “I will get budget for additional engineers, which should help with these problems.” Well, you are in for a surprise, because a team’s output doesn’t grow linearly with its size. A lot of factors get in the way. Examples include:

  1. Your team’s engineering process might not be mature enough to scale to new members seamlessly. This includes the quality of onboarding docs, the code review process, CI tools, the stability of the code base, and the maturity of coding & deployment best practices. Without these properly set up, you are very likely to make matters worse and slow everyone down.
  2. Have you accounted for the time spent hiring candidates? This cost is easy to ignore because it is distributed across multiple people: your role as manager in sourcing candidates, sell calls, etc.; 5-8 other engineers interviewing those candidates; time taken to write feedback; follow-up debrief meetings. Remember that most top engineering companies have an acceptance rate in the low single digits, so the number of candidates you need to screen for each engineer you hire is an order of magnitude higher. You need to ask yourself whether your team’s time is better spent elsewhere.
  3. Onboarding new engineers is a non-zero cost. New engineers are not going to be immediately productive. They need to be mentored, they have to familiarize themselves with the code base (which means their initial code reviews will demand a lot more time from existing engineers), their potential for bugs is higher, and they need to spend time understanding dependencies.
  4. Can your engineering systems scale to new use-cases?
  5. Is the new scope/opportunity complementary to your team’s current portfolio? If not, you suffer from lack of focus and an inability to leverage your existing infrastructure and the team’s institutional knowledge. Most likely there is another team in your company better suited to do this work.

As you can see, if we prematurely grow the scope and size of the team, we will likely fall short of expectations on the new scope, hurt the quality of current deliverables, and probably cause churn in the team. Now let’s look at the other end of the spectrum.

Do what you are doing well

The other end of the pendulum is to simply continue investing in what the team is already doing. After reading the previous section, it is easy to see why this solution is appealing. But it is not without its faults. Let us look at some of them.

Engineers get better with time and practice

Engineers are humans. With practice and time they get good at their job, and it is your job as a manager to make sure they do. So what they can handle and deliver in the same amount of time will likely grow.

Engineering Systems get Better

Any company worth its salt will invest in and improve its engineering systems over time. These include deployment systems, engineering frameworks (for logging, etc.), observability & monitoring, and code review tools. Some of these are process improvements within your own team, and some happen at the company level. But over time, you are hopefully driving towards more productivity in less time, so collectively your team should be able to achieve more in the future.

Engineers want to grow

If you are doing a good job of supporting your reports’ long-term goals, then some of your junior developers will soon become senior engineers, and some of today’s seniors will want to become leads with larger scope. It is your responsibility as a manager to ensure that you are paving the way for this.

Scope will not always grow organically within your product. If you are in a team/sub-org where it does, your job is easier; for the majority of teams, it will not. So you need to be on the lookout for new opportunities (ones that complement your existing charter). Without this, you will likely impede your engineers’ growth and possibly cause attrition. Remember that not every engineer will see this problem ahead of time, and even if they do, they might not be comfortable speaking up. It is too late to retain employees once they decide to leave; it is always better to be proactive.

You need to grow

Don’t forget your own growth. As a leader, you are expected to self-manage your growth to an extent. If you are not constantly looking out for new challenges, the trap of comfort will soon make you irrelevant and impede your career growth.

Framework for Scope Growth

We can now clearly see that neither extreme is really helpful. So how do we answer the question, “When should we consider scope growth?” While the answer is subjective for every team and company, I have found it useful to think about it along two different axes.

Individual level

As a manager, you are best positioned to know the growth trajectory of your reports and what it should look like 6 months from now. You need to be constantly thinking of the opportunities you can create for them. For junior engineers, these could be new features on existing initiatives (e.g., expanded roll-outs, or new country launches and their associated challenges). For senior engineers, this could be new initiatives within the existing charter of your team/org. Avoid taking on completely new charters at the individual level (exceptions include a very senior engineer or exploratory work).

Team level

At this level you are mostly thinking of a new charter or larger initiatives. There are multiple factors to consider here:

  1. Maturity of engineering systems: Is your team’s engineering system mature enough to handle a new charter? If not, it is better to close those gaps before taking on more scope.
  2. Current state of the team: How well is your team executing? In the book An Elegant Puzzle: Systems of Engineering Management, the author explains four different stages of a team. I highly recommend this framework; I personally found it super useful in determining when to cut or increase a team’s scope.
  3. Composition of the team: Always think about the composition of the team when considering a new charter or initiative. How will it disrupt the team’s current execution? Can you take on additional engineers? I personally recommend keeping a low ratio of new engineers to existing engineers at any given time.

By constantly thinking along these two axes, we set ourselves and our teams up for healthy, progressive scope growth, and we can hopefully take on new charters and initiatives.

I hope you find this useful. Either way, I am interested in knowing your thoughts and learning from your experiences. Please don’t hesitate to leave a comment below.

Avoiding the vanity trap of the 4 9’s SLO

Service level objectives (SLOs) are a standard way of defining the expectations of applications/systems (see SLA vs SLO vs SLI). One standard example of an SLO is the uptime/availability of an application/system. Simply put, it is the % of time the service or application responds with a valid response.

It is also common practice in large organizations for an SRE team to keep track of the SLOs of critical systems and gateway services, reporting them to leadership on a frequent cadence.

But like any tool, SLOs have a motivation and a purpose. Without careful consideration, employing these seemingly well-intended policies can turn them into vanity metrics and cause unintended consequences. In this post, I take one such situation I encountered in the past and provide a simple framework that can help avoid these traps.

4 9’s Availability

Simplified System Architecture of a Company

Consider the example above, a common company architecture. Usually a gateway service fronts all client/external calls and routes them to internal services. It is also common practice in some companies for a central SRE team to track company-level SLOs at this gateway (note: individual teams still measure their own services’ SLOs).

Today, with commoditized hardware and databases, the evolution of container systems, etc., it has become common to expect a high uptime of 4 9’s (jargon for 99.99% availability). The simple way this is usually measured is the % of success responses (2xx in the HTTP protocol) your application sends relative to the total number of requests. This has almost become a vanity metric for engineering organizations. Let us look at two classes of problems caused by defining these SLOs without proper thought.

Lazy Policy Effect

If all you measure is availability across all endpoints as a vanity metric, it is possible that one particular endpoint has a dominant share of traffic to your service. But does that traffic share reflect the importance of the endpoint?

For example, let us consider two endpoints:

  • A logging endpoint used to capture logs on the device and forward them to the backend for analytics purposes.
  • A sign-in endpoint used to authenticate a user’s session.

It is easy to see how the first endpoint can have more traffic and mask the availability of the second. As the SRE team, the numbers/SLOs always look good week-over-week as long as the first endpoint is stable, as the sketch below shows.
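A small Python sketch with made-up numbers makes the masking concrete: the sign-in endpoint is failing 5% of the time, yet the overall SLO still looks healthy.

```python
# Hypothetical traffic mix; all numbers are made up for illustration.
endpoints = {
    # endpoint: (total requests, successful 2xx responses)
    "/v1/logs":    (9_900_000, 9_899_010),  # high-volume background logging
    "/v1/sign-in": (100_000, 95_000),       # customer-facing, 5% failing!
}

total = sum(requests for requests, _ in endpoints.values())
ok = sum(successes for _, successes in endpoints.values())
print(f"Blended availability: {ok / total:.2%}")  # 99.94%, looks fine

for name, (requests, successes) in endpoints.items():
    print(f"{name}: {successes / requests:.2%}")  # sign-in is only at 95.00%
```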

I call this the lazy policy effect. It happens because, at the time of defining the SLOs, the authors of the policy likely never asked a few critical questions:

  1. What is the purpose of tracking this SLO? Most often the answer will be, “We need to ensure we are providing the best experience to customers.” Which leads to question #2.
  2. Is this metric truly achieving that purpose? At this point it is clear that not all endpoints directly contribute to customer experience. So you probably need a policy that segments endpoints by category (customer-facing vs. operational, etc.) and only measures SLOs for the relevant ones.

Bad Incentive Effect

Now let us see another consequence of these blanket policies.

Let us say one endpoint’s purpose is to periodically collect sensor logs from a mobile device. This is most likely a background process that does not interfere with the consumer experience on the device. In this case, it is easy to see how a certain level of failures is acceptable for this endpoint: the device can retry on failures, and we can even afford to miss some logs.

But this team unfortunately has to abide by the 4 9’s policy set by SRE. Otherwise they will contribute to a drop in the organization-level SLO and will be called into the next leadership review to provide an analysis. No matter how well intended and blameless these reviews are, most teams will try to avoid being called into them, and there are various “clever” ways to do that.

One of them is to add multiple retries between the gateway and the downstream service, or to increase the timeout for calls from the gateway to the downstream service. You get the idea.

These tricks will certainly reduce 5xxs (and improve the availability SLO). But they unnecessarily increase latencies or cause these logging APIs to take up more resources on the gateway host, which could increase latencies for other “customer” endpoints. A lot of the time these effects are hard to even notice.

Even though the organization defined these policies for a better customer experience, they actually degrade it, and this can go unnoticed because the availability SLO is always met.

Aspects to Consider

When defining such engineering policies in an organization or team, it is important to ask the following questions.

Purpose of the Specific SLO

  • What is the purpose of tracking this SLO?
  • Is this metric truly achieving that purpose, and in what situations will this metric not serve that purpose?

Second-Order Consequences

  • What does it mean for engineering teams to adhere to this policy/SLO?
  • What feedback mechanism from teams do we need to put in place (so that we can adapt these policies and not incentivize teams to build workarounds)? Every policy needs to be adaptable, especially policies that demand a large organizational cost to adhere to.

Trade-offs in Cache-aside Application Caching

In this blog post, I will go through some patterns for maintaining consistency between a cache and a database when using an application-level cache. Specifically, I will focus on the cache-aside (look-aside) pattern.

Application Cache: What and Why?

A cache is temporary storage that typically has a smaller size and provides faster access to data.

Application-level caching is a common pattern in modern microservices architectures. In this pattern, a cache is used alongside the underlying database, and frequently queried data is retrieved from the cache instead of the database. There are multiple advantages to maintaining a cache, some of which include:

  • Improving read latency: Since one of the common responsibilities of the underlying storage is durability, data is often stored on the file system, which can reduce read performance. Popular caches like Memcached and Redis store data in memory, enabling faster lookups.
  • Some databases also provide a cache for faster lookups of frequent queries; for example, MySQL provides a query cache. But there are still reasons not to use the underlying storage as a cache:
    • You don’t want to overload your underlying storage with hot-key reads. E.g., on a social media platform, a famous celebrity’s post might be accessed far more frequently and create excess load on one particular partition.
    • There may be more than one query pattern over the underlying data, and the single data model used by the underlying storage does not always fit your query needs.

Cache-aside/look-aside-cache

The most common architecture used at the application level is cache-aside (look-aside cache). In this architecture, the application or an external process is responsible for orchestrating updates between the cache and the database. This is different from database-integrated caches, which provide built-in read-through, write-through, and write-behind capabilities (which I don’t discuss in this blog).

The following diagram illustrates a simple cache-aside architecture.

FIGURE 1
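In code, the read path looks roughly like this: a minimal sketch using the redis-py client, where `db.query` is a hypothetical database helper (the TTL and key format are illustrative choices, not requirements).

```python
# Minimal cache-aside read path. `db.query` is a hypothetical helper.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def get_user(user_id: str, db) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:                   # cache hit: skip the database
        return json.loads(cached)
    row = db.query(user_id)                  # cache miss: read the database
    cache.set(key, json.dumps(row), ex=300)  # hydrate with a 5-minute TTL
    return row
```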

Consistency Models

There are different patterns you can use to hydrate and evict the cache. Each gives different trade-offs and guarantees in terms of consistency between the cache and the underlying storage. These trade-offs emerge primarily from behavior under failures/network partitions and concurrent access.

One way to understand these trade-offs is to think of the cache as an extension of our database and use the consistency models from the distributed systems literature. I will go over the two that are relevant to this blog:

  1. Strict Consistency
  2. Sequential Consistency

Sample System

Let us assume our application has 2 nodes, a cache, and an underlying database.

We have one item called counter in our database whose value is a number. Let the initial state of the counter be 0. Some symbols we will use going forward:

N1 : Application Node 1
N2 : Application Node 2

W(c)=1 : represents a write operation setting the counter to 1
R(c)=1 : indicates a read operation that returned the value 1

Strict Consistency

This is the strongest form of consistency. Under this model:

  1. All read/write operations follow wall-clock order.
  2. Every process (in our case, the cache and the storage) sees the same value for an entity (in our case, the counter) at any given time. In other words, no stale reads.

I am going to illustrate this with two examples.

FIGURE 2
  • At some time t0, N1 updated the counter to 1.
  • Immediately afterwards, a read returned the updated value from both the DB and the cache.
  • At a later time tX (X > 0), N1 updated the counter to 2.
  • Again, an immediate read returned the updated value from both the DB and the cache.

This system satisfied both our criteria: the cache captured the order of writes correctly, and there was no lag in reflecting the updates.

Scenario 2

FIGURE 3: Stale reads in cache caused failure of Strict Consistency.

In this example, you can see that immediately after the counter was updated to 2, the cache returned a stale result. This is because the write operation arrived at the cache a little late, so the results returned by the DB and the cache did not agree. Only at a later time did the read operation on the cache return the correct result. This violates the freshness condition, hence the system is not strictly consistent.

Scenario 3

FIGURE 4: Out of order writes caused failure of Strict Consistency.

In this example, although the cache and the DB agree on values at all times, both of them received the writes out of order. W(C)=1 was issued before W(C)=2, but by the time the writes reached the cache and the storage, the order got swapped because of network delays.

Note: Strict consistency requires wall-clock ordering, which is very difficult to achieve; it would require clock sync between the application, the database, and the cache.

Sequential Consistency

Under this model:

  1. All read/write operations are executed in some global order (not necessarily wall-clock order, but everyone must agree on the order).
  2. There are no latency/freshness requirements.

For our application-cache topic, the main relaxation is #2: the stale read from scenario 2 above is acceptable under sequential consistency.

FIGURE 5

If the values eventually converge and the order of operations is the same across both, then we can say sequential consistency is preserved.

Another failure scenario is when the cache and the DB do not agree on the same order of operations.

FIGURE 6

Here, the DB and the cache do not agree on the same ordering of events, which breaks condition #1 above.

This ordering matters if you are using the cache not just to store the last value of the counter, but also to show the history of updates to the user. In that case, even if a background process updates the latest value in the cache to be in sync with the DB, the history of changes will not reflect the same order as the DB.

Cache Update Patterns

Now that we have looked at some consistency models relevant to our topic, let us shift attention back to patterns for updating the cache in a cache-aside/look-aside architecture.

Update on Write

In this approach, the application updates the cache along with the database on every write operation.

FIGURE 7

So when a new write operation is issued to the application, it first writes to the DB and then writes to the cache.
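
In code, the write path is just two sequential writes (reusing the hypothetical cache/db stand-ins from the first sketch); note the gap between the two calls, which the failure modes below exploit.

    def update_counter(key, value):
        db.set(key, value)     # 1. write to the DB (the source of truth)
        # If the process crashes or the network fails right here,
        # the cache keeps serving the old value until the next write.
        cache.set(key, value)  # 2. then write the same value to the cache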

This seemingly simple approach has a few downsides under failures and concurrent writes.

Failure Modes

FIGURE 8

Consider the above scenario.

  1. At time t=0, both the DB and the cache have the value 0 for counter C.
  2. At time t=1, the application issues a successful update to the DB, setting counter C to 1.
  3. The application then tries to update the cache, but this write fails. As a result, the value in the cache remains stale.

This clearly breaks our sequential consistency model, as our cache no longer agrees with the DB. Worse yet, because we update the cache only on a write operation, the cache will continue to respond with the stale value until another write happens. Depending on your application, this may or may not be a problem.

For example, if the cache is used in a social media application to store the last response to a post, then it is okay if the application serves a slightly stale response for a while.

But on the contrary, if you are using this in an auction application, where the cache is used to check the latest bid and validate whether the next bid is higher, then you may inadvertently accept a bid you were supposed to reject.

One way you can mitigate this is by using Two-Phase Commit (2PC), with your application acting as the coordinator.

Even then it is not fully failure-proof, for reasons like the following (see the sketch after this list):

  1. You need support from both the DB and the cache to perform the Prepare and Commit steps.
  2. Even then, the cache can still fail during the Commit phase.
  3. And what if your application node dies after issuing Commit to the DB but before issuing it to the cache?
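
For illustration only, a 2PC coordinator inside the application might look roughly like the sketch below. The prepare/commit/abort methods are hypothetical; most off-the-shelf caches and databases do not expose them to application code, and as point #3 above notes, a crash between the two commit calls still leaves the stores diverged.

    def update_with_2pc(key, value):
        # Phase 1: both participants promise they can apply the write.
        if not (db.prepare(key, value) and cache.prepare(key, value)):
            db.abort(key)
            cache.abort(key)
            return False
        # Phase 2: commit on both sides. A crash between the next two
        # calls is exactly failure mode #3 from the list above.
        db.commit(key)
        cache.commit(key)
        return True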

Concurrent Write Issue

Apart from the failure modes we saw earlier, there is another set of issues this method encounters when two nodes perform writes concurrently. Figure 9 below illustrates this.

FIGURE 9

In this case, two nodes N1 and N2 are updating the value of counter C. But due to network delays, the write operations reach the cache and the DB in different orders, which causes the values in the cache and the DB to diverge.

There are a couple of ways you can solve this.

Using Locks

If the underlying DB and cache both support locking, then we can leverage it to solve the above issue. At the beginning of the write operation, the application node acquires a lock for counter C from both the DB and the cache, and releases the lock after the write succeeds.

Since the lock is acquired first by N1 (which receives the first write request in our example), it ensures that N2's write does not reach the cache first.
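
A sketch of the idea, assuming a hypothetical acquire_lock helper backed by a distributed lock (for example, one built on the cache itself or on a coordination service):

    def update_with_lock(key, value):
        lock = acquire_lock("lock:" + key)  # hypothetical: blocks until acquired
        try:
            db.set(key, value)              # write order is now protected
            cache.set(key, value)
        finally:
            lock.release()                  # release even if a write fails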

However, locks can cause performance bottlenecks and hence should be used carefully. Not all applications can tolerate that performance hit.

Using Compare and Set (CAS)

Some cache providers have CAS capabilities (e.g., the memcached CAS command). Before writing to the cache, the application reads the value for the key. In the response, along with the value, the cache also provides an incrementing token (which gets updated on every write).
The application then issues a CAS command with both the new value and the token. Under the hood, the cache accepts the value only if the token matches the current token for that key. If another write to the cache happened between the application's read and write, the token will have changed in the cache, and the write will be rejected.
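
A sketch of a CAS-based write loop is shown below. The gets/cas pair mirrors the memcached protocol (pymemcache, for instance, exposes methods with these names), but treat the exact client API and the retry policy as assumptions.

    def update_with_cas(client, key, new_value, max_retries=3):
        for _ in range(max_retries):
            value, token = client.gets(key)        # read value + version token
            # ...validate new_value against value here, e.g. reject a lower bid...
            if client.cas(key, new_value, token):  # accepted only if the token
                return True                        # is still current in the cache
            # Another writer got in between; loop to re-read and retry.
        return False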

In our example in Figure 9, when the cache gets the write from node N1, it recognizes that the value has already been updated since N1 performed its read, and hence rejects the stale write.

This is arguably less expensive than the locking solution above, since you don't need locks. But it is still a costly operation and should be used only if your application requires this level of consistency (e.g., the auction bid example above).

Evict on Write & Update on Read

In this approach, on every write operation, the application updates the DB and evicts the key from the cache. On the next read operation, when there is a cache miss, the application fetches the value from the DB and updates the cache, as sketched below.
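
With the same hypothetical cache/db stand-ins from the first sketch, the two paths look like this:

    def update_counter(key, value):
        db.set(key, value)   # 1. write to the DB
        cache.delete(key)    # 2. evict the key instead of updating it

    def get_counter(key):
        value = cache.get(key)
        if value is None:          # miss: evicted, expired, or never cached
            value = db.get(key)    # read the fresh value from the DB
            cache.set(key, value)  # re-hydrate the cache
        return value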

This approach definitely solves the concurrent write issue we saw in Figure 9. The reason is that both write operations to the cache are evictions, so it does not matter in which order they arrive; the end result is the same.

So does that mean our problems are solved? Not really, because this approach introduces another kind of concurrency issue, illustrated in Figure 10 below.

FIGURE 10

What happened here?

  1. N1 tries to read key C from the cache and encounters a cache miss.
  2. So N1 reads the value from the DB and issues an update for C to the cache.
  3. In the meanwhile, N2 updates C in the DB and invalidates the cache.
  4. Due to network latency, the cache update from step #2 arrives only now. The cache ends up holding the stale value C = 1, causing a divergence between the DB and the cache.

I personally do not see many benefits of this approach over the previous one. Arguably, you can delay the evict operation after a write so that all pending read-driven cache updates finish first. But this is not very deterministic, since you do not know how long network delays can be.

This leads us to the next pattern, “Update via Change Data Capture”, which handles both the concurrency and the fault-tolerance issues very well.

Update via Change Data Capture

Change data capture (CDC) is a mechanism where changes to the DB are logged into a message stream. A separate event-processing system consumes these messages and updates the cache accordingly. This concept is explained in depth here by Martin Kleppmann.

This is a somewhat involved setup that requires:

  1. A database that logs all updates, in sequence, to an event log (e.g., the MySQL binlog).
  2. An event-streaming platform like Apache Kafka, which provides fault tolerance and at-least-once delivery of messages while retaining message order.
  3. A stream-processing system (like Apache Flink) that can consume these messages in real time and provide exactly-once semantics.

Here is a diagram illustrating this architecture.

FIGURE 11
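
As a sketch, the consumer side of this pipeline can be as simple as the loop below. I am using kafka-python's KafkaConsumer for illustration; the topic name, the message format, and the cache client are all assumptions, and a real deployment would more likely use Flink or a similar processor as described above.

    import json
    from kafka import KafkaConsumer  # kafka-python client

    consumer = KafkaConsumer(
        "db-changelog",                        # hypothetical topic fed by the DB's log
        bootstrap_servers=["localhost:9092"],
        enable_auto_commit=False,              # commit only after the cache is updated
    )

    # `cache` is the cache client from the earlier sketches.
    for message in consumer:
        change = json.loads(message.value)         # assumed {"key":..., "value":...}
        cache.set(change["key"], change["value"])  # apply changes in log order
        consumer.commit()                          # at-least-once delivery semantics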

This approach has a few pros and cons, as discussed below.

Pros

  1. It solves the concurrency issues discussed above (Figures 9 and 10). This is because a single, separate system updates the cache sequentially, which means there are no concurrent writes to the cache; concurrent writes happen only at the DB. Once the database resolves the order of writes, the stream-processing system replays those events to the cache in the same order.
  2. It provides sequential consistency. Since we rely on systems like Apache Flink and Apache Kafka, which provide fault-tolerance guarantees, we will not miss an update to the cache.

Cons

  1. Maintenance and operational overhead: As the diagram shows, this requires a complicated setup, which brings operational and maintenance challenges. Unless these consistency guarantees are required for your system, the cost of setup and maintenance might not be justified.
  2. Delayed updates: Even though Apache Kafka and Apache Flink provide high throughput and low latency, updates to your cache may still be slightly delayed. In steady state, this will likely be on the order of sub-seconds to seconds. Depending on the application, that delay may or may not be acceptable, since it causes stale reads; consider these cases carefully.
  3. Failure in event processing: If the stream-processing system goes down and takes a while to come back up, the cache will be stuck with stale data for that duration. You can avoid these stale reads by force-evicting the cache so the application reads from the DB. Also, on restart, the event processor might take a while to catch up to the latest update, which can cause stale reads during the catch-up period. You can avoid these by having the stream processor start from the latest update (if history is not needed in the cache).

Summary

In this blog, we looked at different patterns (Update on Write; Evict on Write & Update on Read; Update via Change Data Capture) that can be used to update the cache in a cache-aside architecture, and at the behavior of each pattern under failures/network partitions and concurrent updates in a distributed application. It is clear from these trade-offs that there is no free lunch. As a developer, your choice must be informed by the consistency requirements of your end-user application and by the maintenance and operational overhead you can take on.

Please leave your thoughts and comments below. I look forward to learning from your experience.

References

In writing this blog, I drew on my own experience and on the following resources.

  1. Database caching using Redis – AWS
  2. Change Data Capture: The Magic Wand We Forgot – Martin Kleppmann
  3. Consistency Models – Columbia University
  4. Application-Level Caching with Transactional Consistency – Massachusetts Institute of Technology
  5. Distributed systems: for fun and profit – Mikito Takada
  6. Scaling Memcache at Facebook

Inflation, QE – Lemonade Economy, Government & Central Bank

One of the prominent themes on fintwit these days is inflation and QE (quantitative easing). In this blog, I am capturing my understanding of the topic in (hopefully) simpler terms.

Lemonade Economy

Before we get to inflation, let us try to understand economy through a simple thought experiment.

Let's say, one fine day, I get an idea to sell lemonade in the evenings at a public park near my home. I put in my own money to buy a stand, lemons, and the equipment needed to make lemonade. The business proves profitable because there are lots of kids who play in this park, and they pester their parents for lemonade. It won't be long before I decide to expand this business to two other parks in my neighborhood.

As good as this idea is, it has some limitations. One, I need more capital, and two, I cannot be in multiple parks at the same time. So I approach a bank, show them my business plan, and take a loan. With this money, I buy the equipment and supplies needed for the additional parks. But I still have not solved the second problem: how can I be in multiple places at the same time? For this, I start employing people to run the business in the other parks. My business takes off with very good cash flows.

Watching this success, a new entrepreneur decides to set up a competing business selling sugarcane juice in the same parks. They go through the same process of borrowing from the bank, employing people, and setting up juice stands. This soon becomes a small economy, powered primarily by spending.

  1. Parents (buyers) spend their excess cash on refreshments for their kids, passing money to the entrepreneurs.
  2. Entrepreneurs expand their businesses through leverage (a fancy word for borrowed money) and receive money from buyers.
  3. The bank lends money to entrepreneurs and collects interest payments.
  4. Employees receive money from their employers.
  5. Suppliers of equipment (lemons, sugarcane, stands) get more revenue for their businesses.

The important thing to notice here is that the total money in this economy has not changed. It has just moved from one entity to another: one entity's spending becomes another's income. If the total money in the economy is represented by a pie, only each entity's share of the pie has changed.

One of the differences between this hypothetical economy and the real world is the absence of two critical entities: the government and the central bank. Let us go through their roles next.

Government

In any society in today's world, the government is a fundamental pillar that keeps society intact. Its responsibilities are wide-ranging: a judicial system for the basic safety and well-being of society, infrastructure like roads and parks, retirement benefits, military expenses, and so on.

One side effect of these roles is that the government also creates employment and hence keeps the economic engine flowing through the deployment of money. The government gets this money primarily through taxes (on income, trade, and other activities). In a happy equilibrium, the government has enough income to fund its expenditures. But sometimes the government needs more money than it earns, say to fund a huge rail system whose construction needs capital upfront, or for a far less desirable reason: wars.

Just like the entrepreneurs in our example, the government has to resort to borrowing money. This borrowing can be done in multiple ways. Two popular ways are:

  1. Issuing bonds (a promise to pay back later)
  2. Using quantitative easing

Issuing Bonds

In this method, the government simply issues a contract that requests some amount of money with a promise to pay it back in some number of years at a certain interest rate. The government can then shop this around to lenders. In the United States, these are referred to as Treasury securities. Lenders could be anyone: an investor looking for a safe place to put money, or a foreign government that has excess income and wants to invest it. It is important for the government to use both kinds of lenders.

Always borrowing from lenders within the country is not feasible. Also, it does not bring new capital into the country; it just moves the existing money around. On the other hand, borrowing from other nations and foreign investors creates an additional influx of money. This can help power the economy, increase productivity, and generate more goods to potentially export, thus generating more money inflow.

If governments are not careful in their spending, this can soon lead to an irrecoverable spiral. For example, unexpected costs like wars, changes in tax policy, or a pandemic will lead to more borrowing. Over a period of time, the government's balance sheet becomes lopsided, and with it, its ability to repay accrued debt vanishes. This can leave too few lenders willing to fund the additional debt needed to run the government. At this point, governments resort to another mode of funding: quantitative easing.

Quantitative Easing

Before discussing this, we need to introduce another entity in the economic system: the central bank. The central bank (the Federal Reserve in the USA) is the sole authority that manages monetary policy in a country. Among other things, one major responsibility it fulfills is managing the production of money in the country. In other words, it prints the money.

One form of quantitative easing occurs when the government issues more bonds but, instead of selling them to investors, sells them to the central bank. Where does the central bank get this money? It just prints it. The transaction does not occur in such a straightforward way; it is done through intermediaries. But what matters is that, in the end, these bonds appear on the balance sheet of the central bank, and the government gets the money it needs. So the government gets to spend more money without extracting it from investors or foreign lenders. This in itself should cause a drop in the value of each individual unit of money.

For example, suppose that before printing new money there are 100 notes in circulation, and the total value those 100 notes represent is 100,000; then each note is worth 1,000. But if you suddenly have 110 notes in circulation while the total value of goods and services in the country hasn't changed, then each note is worth 100000/110 ≈ 909.09.

An astute reader may point out that this is not a problem if those additional 10 notes are distributed evenly to everyone. That is true, but in reality that does not happen. In practice, not all of the newly printed money gets sent to the people. Instead, some of it is used to cover the government's additional expenses (isn't that why it borrowed in the first place?), or to fund unemployment benefits during a recession, and so on. This means that some people have to take a hit in terms of the value lost on the cash they hold.

Apart from this form of QE, there are other occasions on which the central bank prints money. For example, central banks have in the past printed money not just to buy Treasuries but also to buy bonds issued by corporations. This happened during the 2009 financial recession and during the current Covid-19 driven recession.

In these cases, the newly minted money is not sent to the common person through the government; rather, it is sent into the financial markets. This causes asset (stock) prices to rise, which benefits people who have the luxury of investing in financial markets. These people see their net worth rise and have more money to spend than they would have had otherwise.

This leads us to our next topic – Inflation.

Inflation

All this new money flooding into the system means there are more dollars chasing the same amount of goods and services. In a free market, this causes prices to rise. This is referred to as inflation: the reduction in the purchasing power of each unit of money.

The government has a mechanism to track this rate of price increase: the Consumer Price Index (CPI). It takes a basket of goods that households typically purchase and tracks the price of this basket over time. This is generally what people mean when they say inflation is 2% or 3%.

But there are some flaws in judging the outcome of monetary policies purely through the lens of CPI. One is that it is managed by the government, and the items (or their proportions) considered in the basket often change.

The other is that the prices of goods are affected by forces other than money printing. For example, in his book The Price of Tomorrow, Jeff Booth explains how the deflationary force of technology pushes prices down. Technology and innovation help us produce goods and services more cheaply and faster. These savings are passed on to consumers, reducing prices (deflation). So the real downsides of money printing are hidden by these deflationary forces.

Also, CPI doesn't really capture other expenses incurred by households. Take college education, for example. This article by CNBC cites that the cost of college education increased by 25% over the last 10 years. Or this article by Forbes, which says college costs are growing at a faster pace than wages. You will find similar stories for health care. So while wages increase at a very marginal rate, these expenses are accelerating.

This puts massive pressure on the majority of the population. Education and financial markets are essential avenues of upward mobility, and over time it is increasingly difficult for the masses to access these opportunities, essentially leading to a two-class system. No wonder that, despite a historic bull run in the economy over the last 10 years, more and more people feel left out. The system is failing them.

This is very well articulated in the following Twitter thread by @PrestonPysh.

Conclusion

No matter where one lies on the political spectrum between capitalism and socialism, there seems to be enough evidence that current monetary policies, and QE in particular, have unintended effects. They are driving higher inequality, more polarization, the emergence of a two-class system, and an increasing loss of confidence in the fiat monetary system. It is an extremely complex subject, which I hope to continue learning about.

References

  1. The Price of Tomorrow – Jeff Booth
  2. Big Debt Crises – Ray Dalio
  3. How The Economic Machine Works by Ray Dalio
  4. Preston Pysh’s Explanation of Inflation
  5. How currency works – howstuffworks.com

Choosing the Right Managerial Style

What is your managerial style: leadership or management? Coaching or supporting?

I have deliberated on this question a lot during my career. While the definitions of each are well documented, conversations often tend to pit one against the other, which puts newer managers in the uncomfortable spot of picking one over the other, guessing which is better or worse, and making wrong choices (it certainly did for me).

Having seen these styles work well and not-so-well in my career (as a manager and a tech lead), I now believe that a good line manager needs to adapt and use both techniques. The success of a style depends entirely on context and people.

Management

When I say Management, I refer to the following operational style:

  • Overseeing the goals of a team
  • Being tactical, determining the strategy at every step
  • Being an operational thinker who plans the execution steps
  • Focusing on objectives
  • Minimizing risk in execution and seeking stability
  • Sometimes teaching by doing

Leadership

At a high level, the operating style for “Leadership” is:

  • Setting the vision and directing
  • Influencing and coaching people through reasoning (explaining “why” we are doing things)
  • Making people feel part of the vision and motivating them to be creative in execution while still staying on track
  • Being a strategic thinker
  • Optimizing for the long-term autonomy of the team (sometimes trading off immediate risks)

There are enough subtle differences between the two styles that it is often not clear which one is ideal. In this write-up, I try to capture a framework for choosing between them.

Learnings From Mistakes

I made this mistake early in my management career. There was a project that an engineer reporting to me was working on. It involved a lot of cross-organizational alignment, planning, and execution. I knew this was a steep step up for this engineer. By then, through the training I had received (or, let's say, the wrong lessons I took from it), I had an idealistic view of a manager leading through coaching rather than being very tactical and execution-focused.

Before too long, this approach backfired for the team and the engineer. Project execution was constantly falling off track despite the best efforts of the lead engineer. There was a lack of clarity for everyone involved about schedules, dependencies, and what needed to be done when. All this while, I was still coaching the engineer: guiding them through questions, helping them arrive at decisions, and figuring out the path of the project.

So what happened? What the engineer really needed was more hands-on support than just coaching: someone who could help operationalize the project, figure out milestones, and get alignment on deliverables and timelines across the team. Purely relying on coaching, expecting the engineer to ask the right questions and figure out a path forward, was setting them up for failure at that stage of their career.

It is through experiences like these that I now believe that to be a good engineering lead (at least as a line manager), one has to be able to operate in both styles depending on the context and the people involved. You need to be able to do any of the following:

  1. Directing – Setting the path, operationalizing, assigning clear deliverables
  2. Supporting – Helping with brainstorming, providing feedback proactively, teaching by doing if needed
  3. Coaching – Letting them make the decisions, providing high-level direction, asking probing questions, helping set decision frameworks
  4. Delegating – Trusting, and getting involved only when asked

Learnings From Mythology

I was recently forwarded a story about two leaders in Hindu mythology and their differing styles. It is very relevant to the topic of this blog, so I have modified it slightly here to draw parallels to our subject.

The Ramayana and the Mahabharata are two epics of Hindu mythology. The central story of both is the victory of good over evil.

In one, Ram (the protagonist) leads his army to defeat Ravana in Ravana's own land, while in the second, Krishna (the protagonist) oversees the Pandavas defeating the Kauravas in the battle at Kurukshetra.

In the Ramayana, Ram is the best warrior on his side. He leads his army from the front, strategizes, and directs different people to do the things that will meet the objectives. His people, while very skilled, are not capable of operational tactics on their own. Ram sets the direction and also tells people what to do during difficult times. Ultimately, they won the war, and the final outcome was achieved.

On the other hand, Krishna told Arjuna (a skilled warrior): “I won't fight the battle. I won't pick up any weapon; I will only be there on your chariot as a charioteer.”

And he did what he said. He never picked up a weapon, and he never fought. Still, the Pandavas won the war, and the final outcome was achieved.

What is the difference?

It was their managerial style, and also the type of people being led and the situation at hand.

Ram was leading an army of warriors who, though capable, were not strategists and were looking for direction. On the other hand, Krishna was leading Arjuna, one of the best archers of his time.

While Ram's role was to show the way and lead from the front, Krishna played the role of a coach, whose job was to clarify doubts and provide the general guidance Arjuna needed to go about his work.

Krishna couldn't teach Arjuna archery, but he could definitely help him see things from a very different perspective, whereas Ram had to use his superior skills and experience to guide his warriors across difficult terrain.

So they had to operate in two styles:

Ram: a skilled warrior who was tactical, gave precise roles and instructions (operationalizing the strategy), and motivated the army to fight with a specific cause in mind. He needed the trust of his warriors to be able to do this. Hint: Management.

Krishna: Arjuna was looking for a coach who could provide strategic clarity and explain the vision and why it was needed. Krishna did exactly that; he coached Arjuna, allowed him to take the lead, fight for the cause of the team, and use his skill and creativity to succeed. Hint: Leadership.

What type do you need to be?

Look at the combination of your team, project, and context to reflect on what type of role you need to play.

  • Are you someone who keeps answering questions and solving problems for people? Or someone who asks relevant questions so that people can find their own solutions?
  • Someone who tells and directs, is tactical, and operationalizes the plan? Or someone who coaches, sets a path, and lets people find their own way?
  • Do you have bright engineers who nevertheless fall through in executing larger projects? Or do you have an expert engineer who seeks clarity and direction?

The best outcomes are achieved when you put on the right hat for the context.