My Favorite Books - 2016 Edition

I am deeply jealous, albeit slightly confused, of people who read over fifty books a year. I try to read around fifteen, and even then, I have trouble remembering what I read a few months after the fact. This is normal, of course, as remembering information is a proxy for instituting that information in your daily thinking. If you don't think about it, you forget it. That's why the amalgam of morning headlines you read with your first cup of coffee is forgotten by the time you have your third cup. The information goes in one ear and out the other. What comes in easily goes out just as easily. 

But some things stay with you. Out of the few thousand articles I read this year, I can recall maybe three or four that had a lasting impact on my thinking. That makes you wonder: was it even worth reading all of those articles in the first place? For me, the answer is an unequivocal "probably not." As I've publicly stated before, going forward in 2017, I will decrease my time spent reading news and increase my time spent reading books. Time is limited, knowledge and information are not. Your goal as an intellectually curious person, I believe, is to maximize the gathering of knowledge and information while minimizing the time spent garnering it all. 

With that preamble out of the way, let us look at some of the books that are worth remembering, just as we did in 2015.

The True Believer

Most books are too long and cover too little. This book is the opposite. If you set out to cram as much wisdom as possible into every sentence, you would have a hard time matching The True Believer. The book is all about mass movements: how they form, what keeps them in power, and finally, why they eventually fail. What makes it especially interesting is the author, who was a self-educated drifter and longshoreman. You will not find any pseudo-intellectualism in his writing. Here are some quotes that resonated with me:

When a mass movement begins to attract people who are interested in their individual careers, it is a sign that it has passed its vigorous stage; that it is no longer engaged in molding a new world but in possessing and preserving the present. It ceases then to be a movement and becomes an enterprise. 

When people revolt in a totalitarian society, they rise not against the wickedness of the regime but its weakness.

[On what makes a good leader] What are the talents requisite for such a performance? Exceptional intelligence, noble character and originality seem neither indispensable nor perhaps desirable. The main requirements seem to be: audacity and joy in defiance; an iron will; a fanatical conviction that he is in possession of the one and only truth; faith in his destiny and luck; a capacity for passionate hatred; contempt for the present; a cunning estimate of human nature; a delight in symbols (spectacles and ceremonials); unbounded brazenness which finds expression in a disregard of consistency and fairness; a recognition that the innermost craving of a following is for communion and that there can never be too much of it; a capacity for winning and holding the utmost loyalty of a group of able lieutenants. 

The knowledge in this book can be applied almost anywhere - markets, technology, or your own leadership.

The Lessons of History 

History is a collection of stories that help you pattern match. In a world filled with social media echo chambers and filter bubbles, it is important to be able to take a step back from the news and take a deep look at what's really going on. History lets you do that because the outcomes of each event are known. The patterns (stories) are there to be absorbed, and the matching (making links to the present) you must do on your own. More often than not, pattern matching results in a successful decision, but there are times it can hurt (a deep background in history and patterns can make you jaded and hesitant to act, whereas naivety encourages participation, even if through sheer inexperience). The Lessons of History is a course on the patterns our society goes through, and I use it to think about today. 

So the first biological lesson of history is that life is competition. Competition is not only the life of trade, it is the trade of life - peaceful when food abounds, violent when the mouths outrun the food. Animals eat one another without qualm; civilized men consume one another by due process of law. Co-operation is real, and increases with social development, but mostly because it is a tool and form of competition; we co-operate in our group - our family, community, club, church, party, "race", or nation - in order to strengthen our group in its competition with other groups. 

Intellect is therefore a vital force in history, but it can also be a dissolvent and destructive power. Out of every hundred new ideas, ninety-nine or more will probably be inferior to the traditional responses which they propose to replace. No one man, however brilliant or well-informed, can come in one lifetime to such fullness of understanding as to safely judge and dismiss the customs or institutions of his society, for these are the wisdom of generations after centuries of experiment in the laboratory of history. 

So the conservative who resists change is as valuable as the radical who proposes it - perhaps as much more valuable as roots are more vital than grafts. It is good that new ideas should be heard, for the sake of the few that can be used; but it is also good that new ideas should be compelled to go through the mill of objection, opposition, and contumely; this is the trial heat which innovations must survive before being allowed to enter the human race. 

Before I tackle any business decision, I first try to find some sort of historical precedent for the outcome. If that outcome is in my favor, I proceed to think deeper about the problem. If the outcome is negative, I reevaluate whether the historical analogy is an apt one, and if it is, whether this is a decision worth pursuing. History doesn't repeat itself, but it often rhymes.

Fooled by Randomness 

We are often assigned books to read at an early age, with the ultimate goal of having these books teach us something about life. I believe this can actually be detrimental. For a book to be impactful, you must not only understand it from an academic perspective, but also be "ready" for it. Reading a book you are not ready for is detrimental, as you are less likely to pick it up again in the future if you did not appreciate it the first time around. When I first read Fooled by Randomness a few years ago, I didn't fully appreciate the lessons it taught. For this reason, I decided to re-read it this year, despite it not resonating with me years ago. And I'm very glad I did, for it has changed the way I approach certain situations in my life. 

My lesson from Soros is to start every meeting at my boutique by convincing everyone that we are a bunch of idiots who know nothing and are mistake-prone, but happen to be endowed with the rare privilege of knowing it.

People do not realize that the media is paid to get your attention. For a journalist, silence rarely surpasses any word.

Lucky fools do not bear the slightest suspicion that they may be lucky fools - by definition, they do not know that they belong to such a category.

The first lesson I took away from Fooled by Randomness is that the magnitude of an event is far more important than the frequency with which it occurs. This is simple to understand in the case of investing. In the first scenario, let's say you have $100 to invest, and you do so by investing $1 in 100 companies. Each of these investments returns 5x the initial capital invested. In the second scenario, you still have $100 to invest, but ninety-nine of your investments fail, and only one returns 1000x. In scenario one you make a total of $500 ($1 x 5 x 100). In scenario two you make $1,000 ($1 x 1000). Frequency is overrated; magnitude is underrated. The same idea applies to networking. For work, I often have to attend conferences and various meetups. In the past, I usually opted for short conversations with a large number of people. These conversations tended to be chit-chatty in nature, and very rarely led to any sort of lasting relationship. More recently, however, I have pivoted to speaking with only one or two people at a conference, but giving them much more time and attention. This has been incredibly beneficial, both for building professional relationships and for building friendships. Again, the magnitude of a conversation mattered much more than its frequency.
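The arithmetic above can be sketched in a few lines (a toy illustration with hypothetical return multiples, not a model of real portfolios):

```python
# Toy illustration of magnitude vs. frequency in investment returns.

# Scenario one: 100 investments of $1, each returning 5x.
scenario_one = sum(1 * 5 for _ in range(100))

# Scenario two: 100 investments of $1; 99 return nothing, one returns 1000x.
scenario_two = 99 * 0 + 1 * 1000

print(scenario_one, scenario_two)  # 500 1000
```

The second portfolio fails 99% of the time yet returns twice as much, which is the whole point: one large outcome dominates many small ones.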

The second lesson concerns how I view luck versus skill. Taking advice from successful people is a popular pastime. But is it possible to separate how much of their success is attributable to luck versus skill? This is a question whose answer I am still trying to determine. This much I have learned, however: do not take advice from people who have gotten rich off the outcome of one event (more likely than not, that advice is not reproducible and was the result of luck); take advice from those who have a consistent record of success (it is more likely than not that skill was involved rather than luck); be wary of advice from experts as soon as that advice enters a field they have not been successful in (skill does not often translate to other topics); and remember that luck can be increased by rolling the die more times (take chances, fail often, learn, and move on). 

And with that, let's mark an end to 2016 and look forward to a memorable 2017. I know I haven't been great at keeping this blog updated, but thanks, as always, for reading and staying subscribed. 

How to Interpret Information in an Infinite Knowledge World

If you know me in person or follow me on Twitter, you know that I'm a pretty voracious reader. Having been like this for quite a while now, I have developed an approach to interpreting information. This covers both news and books; temporary and permanent knowledge. 


Despite years of stagnation, Twitter remains by far my favorite social network. It gives you a glimpse into the head of another person; what they read and how they spend their time. A lot of hush-hush watercooler conversations are actually public on Twitter, if you follow the right people. That said, there is also an incredible amount of noise, most of which can be safely ignored (e.g. politics Twitter, where facts go to die). 

When I browse my timeline, I try to remind myself of an old idea someone told me a while back (unfortunately, I do not remember who that someone was). That idea is as follows. Imagine how much work goes into writing a book. Oftentimes, the author has years of requisite knowledge (it's common to write your first book at 50 - 70 years old), puts in countless hours of research, and has a publisher fact-check the results. Obviously not all books go through this rigorous process, but the best ones certainly do. Now think about the process of writing a tweet. Perhaps the person has years of knowledge, but it's unlikely she put hours into thinking about and fact-checking the tweet. That wouldn't be practical. Think about that next time you retweet something you like. How much thought do you think went into it, and is it actually factually correct? I don't know about you, but my most popular tweets have been those I've posted after midnight, usually around my 3rd or 4th glass of wine. Twitter is fun, but don't treat every tweet as gospel. 


So we've established it takes a few seconds to post a tweet. What about publishing a news article or a blog post? Not every news outlet can be like The New Yorker and give the author months or years of investigative journalism before he hits publish. Or take this post: I've thought about this topic for a while now, and even did some light research, but who fact-checked it except me? It would be hypocritical of me not to say proceed with caution, even with my own writing! 

Now, here is another trope I use, this time when I read news. A while back, I was reading a Wall Street Journal article about some accounting standards a company had misapplied. It just so happened that I was learning about that very same accounting standard in school, and my professor was a former partner at a big accounting firm - a subject matter expert. You might see where I'm going with this. The Wall Street Journal writer had the facts totally wrong! Not only did he apply the incorrect accounting standard, but he also misunderstood the standard he misapplied (if only two wrongs made a right?). The writer wasn't a bad guy; he was simply given a topic he hadn't much experience in. The WSJ is for the most part an excellent source of financial news, but even they make mistakes. Hiring a CPA to fact-check every news article is impractical, and besides, even a CPA doesn't know every new accounting standard.

The only reason I caught this mistake was that I just happened to be studying that exact topic under an expert in the field. Unless you were also an expert in this topic, you probably took the whole thing as fact. And who could blame you; why should you know better about advanced accounting standards? Think about that next time you're reading about a topic you are otherwise clueless about. Is it possible the author is writing beyond his subject matter expertise? Probably. 


There were 304,912 books published and republished in the United States alone in 2013. I will eat my shoe (Allen Edmonds uses good, tough leather, so it's a fair dare) if each of these books was actually any good. What is a good book, anyway? For purposes of this post, a good book is one that is factually correct not just today, but in the short to medium term future. On an infinite timescale, every knowledge book will be factually incorrect, because we will discover new things that we did not know at the time of writing. The goal of reading a book today, then, is for the information contained within it to be useful for the rest of your life (20 - 80 years). 

What further complicates things is that out of the millions of books published every year, few will be great, many will be good, and the majority will be a waste of your time. How, then, should you choose what to read when the constraining resource is time? In the past, I've used Google, GoodReads, and countless other book review websites to help me separate the good recent books from the bad. But what I've noticed is that some good books became bad books as time went on. Reviews slowly went from four and a half stars to four, and then even to three stars, in the span of a few years, as the 'facts' presented in the books turned out to actually be opinions.

Rather than trust reviews of modern day books, I've found another process that eliminates hype and filters for the best books: time. For the most part, I now read books that are still well-received at least ten years after they were published. What this tells me is the information contained in the book stood the test of time (Another fun exercise: take a look at your tweets from two months ago and cringe in absolute horror at all the things you got wrong). A book that was published fifty years ago and is still read today tells me it's a book with lasting content. If it's a business or investing book (which are notoriously trendy) that lasted that long, you can be sure it's got long-lasting nuggets of wisdom. The last thing you want to spend your time on is reading ephemeral books - that's the definition of a waste of time. 

Books (and art, music, and all other knowledge content) are derivative instruments of prior work, repackaged to the taste of modern times. I was watching a season of Dexter a few years ago, and I thought the finale was very well done, original, and downright chilling. A character that we were led to believe was dead actually turned out to be alive, and not only that, but also the true murderer in the case. A short time later I watched Psycho, a classic Alfred Hitchcock film from 1960, which essentially uses the same premise of assumed-dead-but-actually-isn't to even greater chilling effect. In short, Dexter copied Psycho, which I bet copied something else from a time before that. What's old is new again; originality wouldn't exist to a person who has seen all of history. 

Summing it all up

We live in a time when an almost infinite stream of information is thrown at you. That makes it really hard to know what to spend your time on. The internet has also made it incredibly simple and free to publish ideas, lowering the bar for quality considerably. The above themes help me cope with the abundance of information, and I hope they will help you as well. 

Thinking through Artificial Intelligence

I’m really not a fan of the term ‘artificial intelligence’, or AI for short. We tend to attach a negative connotation to the word artificial, implying that an artificial intelligence is unnatural, and possibly even evil. In fact, the term AI reminds me of another term — genetically modified organisms (GMOs) — which has also been the subject of vicious debates in recent years despite, well, science. I suppose AI could have a worse name, like maybe genetically modified intelligence, but we can leave that to be the villain of another sci-fi film.

As is often the case with new technology, there are camps of people who are incredibly paranoid about what such a technology can do to the stable world order. The canonical example often used is that of the 19th-century English textile workers who protested against the new technologies brought about by the Industrial Revolution — the Luddites. The term has now come to mean a person who is anti-technology, even though the reality of the Luddite argument was quite a bit different. What we have now are AI Luddites who are afraid of artificial intelligence due to the potential catastrophic events an evil AI could cause.

My first encounter with an evil AI, as I imagine was most people’s, was the film The Terminator (1984). The main antagonist of the film, Skynet, was pure artistic genius on the part of the writers. From Wikipedia:

Skynet is a fictional conscious, gestalt, artificial general intelligence (see also Superintelligence) system that features centrally in the Terminator franchise and serves as the franchise’s main antagonist.
Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realizing the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfill the mandates of its original coding.

If Skynet doesn’t scare you, I don’t know what will. But let’s get back to a less evil artificial intelligence.

AI has a long, storied history, which you can read about here. But I’ll be picking up on the topic from even earlier, a 1957 movie and a favorite of mine, Desk Set.

Worrying about artificial intelligence, circa 1957

The film is classified as a romcom according to IMDb (or is it IMDb’s AI deciding what to tag it?), but it’s really much more than that. Taking place in the reference department of a library, we are introduced to a group of women whose job it is to pick up the phone, research facts, and answer questions on a wide array of topics. If that sounds inefficient, that is because it is, leading the president of the library to hire a methods engineer and efficiency expert to replace the reference department with an AI computer. A romantic hour later, the AI is programmed, installed, and production ready. Unfortunately, the AI ends up having trouble answering customer calls, and is later ‘upgraded’ back to the women who used to work in the reference department in the first place.

With the beautiful bias of hindsight, we know that what actually killed the reference department was search engines like Google, not AI. The point of bringing this example up was to show that AI-Luddism is nothing new. What actually disrupts your job may not be what you think will disrupt your job. Outkast taught us that in the wonderfully deep lyrics of Ms. Jackson:

“You can plan a pretty picnic, but you can’t predict the weather”

Which leads me back to Skynet and evil AI. Why are so many people paranoid about a strong AI breaking out of the box and taking over? My gut reaction to an evil strong AI is to ask whether there has been any historical precedent for technology turning bad and hurting humans. Granted, there has never been AI tech as powerful as there is today, but nonetheless, the question stands. And besides, why does a strong AI have to be bad? It could turn out to be good just as easily as it could turn out evil. Innocent until proven guilty.

The next logical pattern to start pondering is as follows. Okay, so someone created an evil AI — what are the realities of such a situation? The human brain uses 20 watts to operate, which is extremely efficient and so far non-reproducible in non-humans. Meanwhile, the Google computer (AlphaGo) that beat Lee Sedol in a game of Go used approximately one megawatt. That is 50,000 times the energy consumption of a human brain, and we’re only talking about a board game (a complex game, but still a game without the external factors of a real environment). Thus, the question becomes slightly different — is there enough computing power in the world for an evil AI to achieve world dominance?
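The back-of-the-envelope math, using the rough figures above (20 watts for a brain, roughly one megawatt for the AlphaGo cluster; both are approximations, not precise measurements), works out as:

```python
# Rough energy comparison between a human brain and the AlphaGo cluster.
brain_watts = 20             # approximate power draw of a human brain
alphago_watts = 1_000_000    # approximately one megawatt

ratio = alphago_watts / brain_watts
print(ratio)  # 50000.0
```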

By the way, I want to remind you that we’re speaking in hypotheticals here. A self-learning, cognitive, strong AI does not exist yet. The debate thus far has been around preventive measures that usually begin with “what if”. As you can probably tell now, I’m not very worried about a Skynet-esque AI. My strong suspicion is that people picture an evil AI because of all the science fiction films and novels that they read as children. But fine, let us embrace the possibility — at least for a second — that an evil AI does come into existence. Should we be spinning our wheels designing failsafes into the AI system to prevent such an outcome?

Obviously yes, we should be thinking about such remote possibilities in all system designs. But allow me to make a brief philosophical excursion on why instituting failsafes won’t rescue us from an evil AI. An artificial intelligence that turns evil is a low probability event. In other words, it’s a black swan event. And by definition, black swan events cannot be predicted in advance. It follows that designing a failsafe into the system will not prevent the evil AI from escaping, given that the definition of a black swan event is an unpredictable event. How can you take preventive measures against an unpredictable event? You can’t really.

I’ll leave off with an Alan Kay quote I’ve always enjoyed:

It’s easier to invent the future than to predict it

Go invent an AI instead of predicting the unpredictable!

Thinking in Systems: The Business of Blockchain

As a teenager, I used to obsessively read SSD reviews on AnandTech back when the technology was still new, unknown, and prohibitively expensive. Having no formal electrical engineering background, I found the reviews hard to get through at first, but after a few years I kind of understood most of the concepts and the technology (since mostly forgotten). But the truth is, all of that nerdery wasn't needed to exploit the benefits of SSDs. Sure, read/write times matter, but should you also care about the quantum tunneling process NAND goes through, or the tunnel oxide degradation, or maybe the JEDEC endurance standards? I would hope not! A technology graduates to a product when its benefits can be understood by many. Do you really need to know how your car operates under the hood in order to be able to drive it? 

The same analogy can be extended to new technologies. You don't necessarily need to understand the stack on a technical level (your company has great engineers for that, remember?). As a non-technical person, you should be asking the right questions on how it will impact your business. One such new technology is the blockchain (aka distributed ledger). Assuming you're working with a technical team who can do the implementation, here are some other questions you should be thinking about internally:

  • What kind of impact would the elimination (or, more likely, reduction) of middlemen have on your organization's risk profile?
  • Is your organization ready to invest in blockchain tech, even if the solution is 5-10 years away? As a reminder, the organizations that won on the internet were the result of cumulative advantage. Money spent on failed R&D projects might provide unexpected results many years ahead (many scientific breakthroughs occurred through serendipity). Also, not everything can be measured on a spreadsheet, so do not forget the intangibles that get generated as a result of failed projects. 
  • What are the second and third order effects of updating your organization's plumbing? Everybody wants transactions to settle immediately, but what kind of consequences would that entail? 
  • Do you have the internal talent to undergo such an effort or would you have to hire external talent? Despite working at a consultancy firm, I think it's very important to have the internal talent to support the transition. Even if you hire outside help to implement the system, you should make sure you've got the internal talent to maintain it, especially since blockchain tech is relatively new as far as systems go. 
  • How will it impact your firm's competitive advantage? Apple's competitive advantage is creating beautiful hardware and seemingly less beautiful software, so upgrading its IT systems might not be a priority. Google, on the other hand, is a data company, which makes money through advertising and, increasingly, artificial intelligence. For a company that might be in the business of manufacturing autonomous cars in the next few decades, decentralized autonomous organizations might be an investment in its best interest. The same can be said about banks, which are being unbundled by startups. A bank is different from a startup because it has scale, while a startup has the innovation. Scale benefits from decreased costs, which a blockchain system can bring with it.
  • What will a successful implementation look like? Before you go around hiring blockchain specialists and consultants, you should have a semblance of a plan for the endgame. You might be wrong, but you should at least plan to be right (and course-correct as you go). If you're a bank, you might want a system that, at its core, is designed to limit regulatory risk. If you're a medical company, your goal might be to limit regulatory risk but also to design the system with privacy at the forefront. The point is every company is different, and the system you put in place will most likely end up being proprietary.
  • Talk with your stakeholders. Unless you are a private company that exists in a vacuum (and even then, you probably have stakeholders), your stakeholders will probably be affected if your organization's risk profile changes as a result of adopting new tech. If you are a wealth management company and manage people's money, you probably have LPs to consider. How would they feel about you spending money on adopting new tech XYZ, especially if it ends up reducing operational risk and saving them money? What about the auditors - would they increase or reduce audit fees? Can they audit your new system? Would regulators even support such a system rewrite? Think about your stakeholders, who in some businesses might even be willing to subsidize your technological investment, since it's in their best economic interest.
  • Forget the hype. Part of Steve Jobs's magic and Apple's resultant success was a very simple concept: under-promise and over-deliver. Too many products end up doing exactly the opposite, which doesn't make them a failure relative to competing products, but does make them a failure compared to what they set out to accomplish. This can be ruinous for a company/technology, because if the company loses hope, employees tend to leave for greener pastures and the company ends up melting away. That's why it's important to compare the technology on a relative scale rather than an idealistic one. How does it compare to existing solutions? Not how does it compare to what it set out to be. Only in very rare cases does an overhyped technology also over-deliver. 
  • Keep an open mind. As is often the case with new technology, everybody has an idea for how it should be implemented, and no idea is right or wrong. The Internet is an open standard, but the doors to it all have a different doorknob. Google Chrome, Internet Explorer, Firefox, Safari, Opera (I can go on) all have a certain vision for how you should be browsing. Sure, some of them are dominant, but they all steal implementation details from each other. No solution will be perfect, so always be skeptical of those that claim to be. Oh, and don't forget, good ideas often look like bad ideas initially. 

The technical problems are hard to solve, but many smart people are already figuring them out. The last thing you want to do is cram in a shiny new blockchain system just because it is a shiny new blockchain system. Ask the right questions, think about the opportunity costs, the repercussions, and finally, whether you can even pull off such a transition. 

Note: All thoughts are mine and do not reflect those of my employer.

Bridging Private and Public Markets

Your time being precious, I'll state the idea upfront: the reason IPOs are historically poor investments is that the valuation is already at a peak prior to the IPO. The subject of this post is why that is the case.

Before we reason this logic out together, let me be clear that this point holds only for VC-backed companies. There are plenty of great companies that never take VC money, IPO on their own terms, and do just fine, but that's not the subject of our post. And if you would allow me, a caveat: what I write below is from my personal experience watching private and public markets unfold. I did not have the time to compile a data-heavy analysis, but my gut tells me it would resemble something like what I came up with below. 

If you follow investing and finance circles closely, you know that the common wisdom is to avoid investing in IPOs at all costs (I assure you, the pun is unintended). Historically, returns of such newly public companies can be poor. We like to think of newly public companies as disruptors, and they very well are, but being a disruptor does not mean you will succeed. The reverse, however, is often true. Successful companies are disruptors, but disruptors are not necessarily successful companies. 

So let's say you are a disrupting company (a Black Swan) on the verge of going public, and you have raised a few hundred million from venture capital. At that very point in your company's history, valuation is at an all-time peak. If there were checkboxes to check on how to reach the highest valuation you can, you'd have checked them all. To better illustrate this point, I made a chart (see below):

As a seed or angel investor, your job is to invest from 2001 through maybe 2005, while the company is in its extremely early and risky stages. As a later-stage VC, you will be investing from 2006 until the liquidity event, which is when the company goes public in 2016. You will notice there is a premium attached to going public, which I dub the "IPO Premium" (it also goes by "IPO Pop"). There are many different explanations for why this premium often occurs, very few of which actually have anything to do with the fundamental health and prospects of the company. For the intellectually curious, the premium can be due to phenomena like investors and employees getting liquidity for their stock, and the growth momentum that occurs when a company goes public (I'm not saying it's rational, I'm just saying it exists). 

Going back to the original bolded premise of this post - why do VC-backed companies tend to perform so poorly post-IPO? You might expect a highly complex financial explanation, but the answers can be laid out with relative ease:

Venture capital valuations are usually not a function of the fundamental value of a company. Unlike, say, a blue-chip stock, startups do not have a reliable track record of, well, frankly anything. You cannot discount the cash flows of a startup, since startups are often unprofitable; you might as well run a DCF on your toddler as a predictor of their future success (please don't). Instead, venture valuations are a function of the marginal backing of the last investor. In plain terms, the valuation is dictated by whatever the most recent VC was willing to pay to invest in your company. Unlike public markets, where value is set by a distributed crowd of investors, valuations in private markets are often driven by very few investors. This is not a bad thing; it's simply the way these markets operate (mind you, public markets can be just as irrational).
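To make the DCF point concrete, here is a minimal sketch (every cash-flow figure and the discount rate are invented for illustration): the standard discounted-cash-flow formula produces a sensible figure for a business with steady positive cash flows, and a meaningless negative one for a startup that is still burning cash.

```python
# Toy discounted-cash-flow (DCF) calculation. All numbers are made up
# purely to illustrate why DCF breaks down for unprofitable startups.

def dcf_value(cash_flows, discount_rate):
    """Present value of a stream of yearly cash flows."""
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# A mature "blue chip": steady positive cash flows -> a meaningful value.
blue_chip = dcf_value([100, 105, 110, 115, 120], discount_rate=0.08)

# A pre-IPO startup burning cash: the same formula yields a negative
# "value", which tells you nothing about what a VC would actually pay.
startup = dcf_value([-50, -80, -120, -150, -200], discount_rate=0.08)

print(f"blue chip: {blue_chip:.1f}")  # positive present value
print(f"startup:   {startup:.1f}")    # negative - the model is useless here
```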

Another reason VC-backed companies often underperform ex post is the difference between how public investors and private investors (VCs) define success. A VC is looking for visionary founders who create products with the potential to become huge businesses. Once that potential is agreed upon by other VCs and people in the private market, valuations tend to increase. Meanwhile, a public investor judges a company on totally different metrics. Public investors care about the business model and everything that comes with it: revenues, expenses, and profits. The potential that private investors agreed upon should now be coming to fruition. If it doesn't, and soon, public investors start to get antsy and eventually sell the stock, dropping the valuation considerably. Again, there is nothing inherently wrong with that (and if there is, I urge you to design a better system). 

This last point is a bit harder to articulate, but that won't stop me from trying. If you look at the chart above once more, you will notice that the liquidity event for private investors occurs at the highest valuation the company has had up to that time. This makes sense - a company grows and is at its healthiest right at the point it goes public. But this private valuation - which remains private as the investment bankers attempt to take the company public - does not translate well into a public valuation, which can only be calculated after the stock has traded publicly for a period of time. This discrepancy occurs at the point where valuation turns from an art form into a science. I exaggerate slightly, as even public-company valuations are often guesstimates (read: art), but as a general rule, private valuations are gut-feeling based while public ones are more data-heavy (after all, there is finally data to analyze). The gap between what a company is actually worth - its intrinsic value, the amount public valuations tend to approach after a certain period of time - and what it is worth on the private market gives rise to this pricing irrationality.
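As a back-of-the-envelope sketch of that gap closing, consider a toy model (every number here is invented): the stock debuts at the last private valuation plus an IPO pop, then each quarter the market closes part of the remaining distance to intrinsic value.

```python
# Toy model of the post-IPO gap closing. The valuations, premium, and
# convergence rate are all assumptions for illustration, not data.

intrinsic_value = 10_000      # what the business is "actually" worth ($M)
last_private_round = 14_000   # valuation set by the last VC round ($M)
ipo_premium = 0.20            # first-day "IPO Pop"

price = last_private_round * (1 + ipo_premium)  # debut market cap
history = [price]
for quarter in range(8):
    # each quarter, the market closes 30% of the remaining gap
    price += 0.30 * (intrinsic_value - price)
    history.append(price)

print(f"debut: {history[0]:,.0f}  after 2 years: {history[-1]:,.0f}")
```

Under these assumptions the stock debuts well above intrinsic value and drifts down toward it - the "underperformance" is largely the starting price, not the business.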
You might say: but Larry, what about companies like Apple, Google, and Facebook, all of which took venture money and were outrageously successful in public markets? Survivorship bias is a very real threat here. While it's true that those companies (and many more) started as private darlings and became public darlings, what about the hundreds of companies that went out of business or were acquired at a discount? Don't forget about those.
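A quick sketch makes the survivorship-bias point tangible. Every probability and return multiple below is made up; the only point is that averaging over the survivors alone paints a far rosier picture than averaging over the whole cohort.

```python
import random

# Toy simulation of survivorship bias: each "company" either becomes
# a public darling or fizzles out. All numbers are invented.
random.seed(42)

returns = []
for _ in range(1000):
    if random.random() < 0.2:                      # ~20% succeed
        returns.append(random.uniform(2.0, 10.0))  # 2x-10x for investors
    else:                                          # ~80% fail or exit at a discount
        returns.append(random.uniform(0.0, 0.8))   # investors lose money

survivors = [r for r in returns if r > 1.0]        # the Apples and Googles

avg_all = sum(returns) / len(returns)
avg_survivors = sum(survivors) / len(survivors)

# Judging VC-backed IPOs only by the survivors badly overstates
# the typical outcome of the whole cohort.
print(f"all companies: {avg_all:.2f}x   survivors only: {avg_survivors:.2f}x")
```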

There is one rule in venture capital, and that is that there are no rules in venture capital. That said, there are general theories, which often hold. This post was an attempt to bridge the rules of private markets and those of public markets. Similar to the discrepancy between quantum mechanics and general relativity (which is a much more interesting debate than the one we're having), the same rules do not govern private and public markets. For that reason, it's important to understand both sets of rules and know when each applies.