Thinking through Artificial Intelligence

I’m really not a fan of the term ‘artificial intelligence’, or AI for short. We tend to attach a negative connotation to the word artificial, implying that an artificial intelligence is unnatural, and possibly even evil. In fact, the term AI reminds me of another term — genetically modified organisms (GMOs) — which has also been the subject of vicious debates in recent years despite, well, science. I suppose AI could have a worse name, like maybe genetically modified intelligence, but we can leave that to be the villain of another sci-fi film.

As is often the case with new technology, there are camps of people who are incredibly paranoid about what such a technology can do to the stable world order. The canonical example is that of the 19th-century English textile workers who protested against the new technologies brought about by the Industrial Revolution — the Luddites. The term has since come to mean a person who is anti-technology, even though the reality of the Luddite argument was quite a bit different. What we have now are AI Luddites who are afraid of artificial intelligence due to the potential catastrophic events an evil AI could cause.

My first encounter with an evil AI, as I imagine was most people’s, was the film The Terminator (1984). The main antagonist of the film, Skynet, was pure artistic genius on the part of the writers. From Wikipedia:

Skynet is a fictional conscious, gestalt, artificial general intelligence (see also Superintelligence) system that features centrally in the Terminator franchise and serves as the franchise’s main antagonist.
Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realizing the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfill the mandates of its original coding.

If Skynet doesn’t scare you, I don’t know what will. But let’s get back to a less evil artificial intelligence.

AI has a long, storied history, which you can read about here. But I’ll be picking up the topic even earlier, with a 1957 movie and a favorite of mine, Desk Set.

Worrying about artificial intelligence, circa 1957

The film is classified as a romcom according to IMDb (or is it IMDb’s AI deciding what to tag it?), but it’s really much more than that. Taking place in the reference department of a library, we are introduced to a group of women whose job is to pick up the phone, research facts, and answer questions on a wide array of topics. If that sounds inefficient, that’s because it is, leading the president of the library to hire a methods engineer and efficiency expert to replace the reference department with an AI computer. A romantic hour later, the AI is programmed, installed, and production-ready. Unfortunately, the AI ends up having trouble answering customer calls, and is later ‘upgraded’ back to the women who used to work in the reference department in the first place.

With the beautiful bias of hindsight, we know that what actually killed the reference department was search engines like Google, not AI. The point of bringing this example up is to show that AI-ludditry is nothing new. What actually disrupts your job may not be what you think will disrupt your job. Outkast taught us that in the wonderfully deep lyrics of Ms. Jackson:

“You can plan a pretty picnic, but you can’t predict the weather”

Which leads me back to Skynet and evil AI. Why are so many people so paranoid about a strong AI breaking out of the box and taking over? My gut reaction to an evil strong AI is to ask if there has been any historical precedent for technology turning bad to hurt humans. Granted, there has never been such powerful AI tech as there is today, but nonetheless, the question stands. And besides, why does a strong AI have to be bad? It could turn out to be good just as it could evil. Innocent until proven guilty.

The next logical question to ponder is as follows. Okay, so someone created an evil AI — what are the realities of such a situation? The human brain uses 20 watts to operate, which is extremely efficient and so far non-reproducible in machines. Meanwhile, the Google computer (AlphaGo) that beat Lee Sedol in a game of Go used approximately one megawatt. That is 50,000 times the energy consumption of a human brain, and we’re only talking about a board game (a complex game, but still a game without the external factors of a real environment). Thus, the question becomes slightly different — is there enough computing power in the world for an evil AI to achieve world dominance?
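The back-of-the-envelope ratio is easy to sanity-check. Here is a quick sketch in Python; the wattage figures are just the rough estimates cited above, not measured values:

```python
# Rough power comparison: human brain vs. AlphaGo (estimates from the text)
brain_watts = 20              # human brain runs on roughly 20 watts
alphago_watts = 1_000_000     # AlphaGo reportedly drew on the order of 1 megawatt

ratio = alphago_watts / brain_watts
print(f"AlphaGo used roughly {ratio:,.0f}x the power of a human brain")
```

Which prints a ratio of 50,000 — and that is for one board game, not a full real-world environment.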

By the way, I want to remind you that we’re speaking in hypotheticals here. A self-learning, cognitive, strong AI does not exist yet. The debate thus far has been around preventive measures that usually begin with “what if”. As you can probably tell now, I’m not very worried about a Skynet-esque AI. My strong suspicion is that people picture an evil AI because of all the science fiction films and novels that they read as children. But fine, let us embrace the possibility — at least for a second — that an evil AI does come into existence. Should we be spinning our wheels designing failsafes into the AI system to prevent such an outcome?

Obviously yes, we should be thinking about such remote possibilities in all system designs. But allow me to make a brief philosophical excursion on why instituting failsafes won’t rescue us from an evil AI. An artificial intelligence that turns evil is a low probability event. In other words, it’s a black swan event. And by definition, black swan events cannot be predicted in advance. It follows that designing a failsafe into the system will not prevent the evil AI from escaping, given that the definition of a black swan event is an unpredictable event. How can you take preventive measures against an unpredictable event? You can’t really.

I’ll leave off with an Alan Kay quote I’ve always enjoyed:

It’s easier to invent the future than to predict it

Go invent an AI instead of predicting the unpredictable!

Thinking in Systems: The Business of Blockchain

As a teenager, I used to obsessively read SSD reviews on AnandTech back when the technology was still new, unknown, and prohibitively expensive. Having no formal electrical engineering degree, I found the reviews hard to get through at first, but after a few years I more or less understood the concepts and the technology (since mostly forgotten). But the truth is, none of that nerdery was needed to exploit the benefits of SSDs. Sure, read/write times matter, but should you also care about the quantum tunneling process NAND goes through, or the tunnel oxide degradation, or maybe the JEDEC endurance standards? I would hope not! A technology graduates to a product when its benefits can be understood by many. Do you really need to know how your car operates under the hood in order to be able to drive it?

The same analogy can be extended to new technologies. You don't necessarily need to understand the stack on a technical level (your company has great engineers for that, remember?). As a non-technical person, you should be asking the right questions on how it will impact your business. One such new technology is the blockchain (aka distributed ledger). Assuming you're working with a technical team who can do the implementation, here are some other questions you should be thinking about internally:

  • What kind of impact would the elimination (or, more likely, reduction) of middlemen have on your organization's risk profile?
  • Is your organization ready to invest in blockchain tech, even if the solution is 5-10 years away? As a reminder, the organizations that won on the internet were the result of cumulative advantage. Money spent on failed R&D projects might provide unexpected results many years ahead (many scientific breakthroughs occurred through serendipity). Also, not everything can be measured on a spreadsheet, so do not forget the intangibles that get generated as a result of failed projects.
  • What are the second- and third-order effects of updating your organization's plumbing? Everybody wants transactions to settle immediately, but what kind of consequences would that entail?
  • Do you have the internal talent to undergo such an effort or would you have to hire external talent? Despite working at a consultancy firm, I think it's very important to have the internal talent to support the transition. Even if you hire outside help to implement the system, you should make sure you've got the internal talent to maintain it, especially since blockchain tech is relatively new as far as systems go. 
  • How will it impact your firm's competitive advantage? Apple's competitive advantage is creating beautiful hardware and seemingly less beautiful software, so upgrading its IT systems might not be a priority. Google, on the other hand, is a data company, which makes money through advertising, and increasingly, artificial intelligence. For a company that might be in the business of manufacturing autonomous cars in the next few decades, decentralized autonomous organizations might be an investment in its best interest. The same can be said about banks, which are being unbundled by startups. A bank is different from a startup because it has scale, while a startup has the innovation. Scale benefits from decreased costs, which a blockchain system can bring with it.
  • What will a successful implementation look like? Before you go around hiring blockchain specialists and consultants, you should have a semblance of a plan for the endgame. You might be wrong, but you should at least plan to be right (and course-correct as you go). If you're a bank, you might want a system that, at its core, is designed to limit regulatory risk. If you're a medical company, your goal might be to limit regulatory risk while also designing the system with privacy at the forefront. The point is every company is different, and the system you put in place will most likely end up being proprietary.
  • Talk with your stakeholders. Unless you are a private company that exists in a vacuum (and even then, you probably have stakeholders), your stakeholders will probably be affected if your organization's risk profile changes as a result of adopting new tech. If you are a wealth management company and manage people's money, you probably have LPs to consider. How would they feel about you spending money on adopting new tech XYZ, especially if it ends up reducing operational risk and saving them money? What about the auditors - would they increase or reduce audit fees? Can they audit your new system? Would regulators even support such a system rewrite? Think about your stakeholders, who in some businesses might even be willing to subsidize your technological investment, since it's in their best economic interest.
  • Forget the hype. Part of Steve Jobs's magic and Apple's resultant success was a very simple concept: under-promise and over-deliver. Too many products end up doing exactly the opposite, which doesn't make them a failure relative to competing products, but does make them a failure compared to what they set out to accomplish. This can be ruinous for a company/technology, because if the company loses hope, employees tend to leave for greener pastures and the company ends up melting away. That's why it's important to compare the technology on a relative scale rather than an idealistic one. How does it compare to existing solutions? Not how does it compare to what it set out to be. Only in very rare cases does a technology over-hype and over-deliver.
  • Keep an open mind. As is often the case with new technology, everybody has an idea for how it should be implemented, and no idea is right or wrong. The Internet is an open standard, but the doors to it all have a different doorknob. Google Chrome, Internet Explorer, Firefox, Safari, Opera (I can go on) all have a certain vision for how you should be browsing. Sure, some of them are dominant, but they all steal implementation details from each other. No solution will be perfect, so always be skeptical of those masquerading as perfect. Oh, and don't forget, good ideas often look like bad ideas initially.

The technical problems are hard to solve, but many smart people are already figuring them out. The last thing you want to do is cram in a shiny new blockchain system just because it is a shiny new blockchain system. Ask the right questions, think about the opportunity costs and the repercussions, and finally, consider whether you can even pull off such a transition.


Note: All thoughts are mine and do not reflect those of my employer.

Bridging Private and Public Markets

Since your time is precious, I'll state the idea upfront: the reason IPOs are historically poor investments is that the valuation is already at a peak prior to the IPO. The subject of this post is why that is the case.

Before we reason this logic out together, let me be clear that this point holds only for VC-backed companies. There are plenty of great companies that never take VC money, IPO on their own terms, and do just fine, but that's not the subject of our post. And if you would allow me, a caveat: what I write below is from my personal experience watching private and public markets unfold. I did not have the time to compile a data-heavy analysis, but my gut tells me it would resemble something like what I came up with below. 

If you follow investing and finance circles closely, you know that the common wisdom is to avoid investing in IPOs at all costs (I assure you, the pun is unintended). Historically, returns of such newly public companies can be poor. We like to think of newly public companies as disruptors, and they very well are, but being a disruptor does not mean you will succeed. The reverse, however, is often true. Successful companies are disruptors, but disruptors are not necessarily successful companies. 

So let's say you are a disrupting company (a Black Swan) on the verge of going public, and you raised a few hundred million from venture. At that very point in your company's history, valuation is at an all time peak. If there were checkboxes to check on how to reach the highest valuation you can, you'd have checked them all. To better illustrate this point, I made a chart (see below):

As a seed or angel investor, your job is to invest from 2001 to maybe 2005, while the company is in its extremely early and risky stages. As a later-stage VC, you will be investing from 2006 until the liquidity event, which is when the company goes public in 2016. You will notice there is a premium attached to going public, which I dub the "IPO Premium" (it also goes by "IPO Pop"). There are many different explanations for why this premium often occurs, very few of which actually have anything to do with the fundamental health and prospects of the company. For the intellectually curious, the premium can be due to phenomena like investors and employees getting liquidity for their stock, and the growth momentum that occurs when a company goes public (I'm not saying it's rational, I'm just saying it exists).

Going back to the original premise of this post - why is it that VC-backed companies tend to perform so poorly post-IPO? You might think the answer involves a highly complex financial explanation, but the answer(s) can be explained with relative ease:

Venture capital valuations are usually not a function of the fundamental value of a company. Unlike, say, a blue chip stock, startups do not have a reliable track record of, well, frankly anything. You cannot discount the cash flows of a startup since they are often unprofitable; you might as well run a DCF on your toddler as a predictor of their future success (please don't). Instead, venture valuations are a function of the marginal backing of the last investor. In plain terms, this means the valuation is dictated by the willingness of the last VC to invest in your company. Unlike public markets, which are distributed value carriers, valuations in private markets are often driven by very few investors. This is not a bad thing, it's simply the way in which markets operate (mind you, public markets can be just as irrational).

Another reason why VC-backed companies often underperform ex-post is the difference between how public investors and private investors (VCs) define success. A VC is looking for visionary founders who create products that have the potential to be huge businesses. Once that potential is agreed upon by other VCs and people in the private market, valuations tend to increase. Meanwhile, a public investor judges a company on totally different metrics. Public investors care about the business model and all of the things that come with it: revenues, expenses, and profits. The potential on which private investors agreed should now be coming to fruition. If it doesn't, and soon, public investors start to get antsy and eventually sell the stock, dropping the valuation considerably. Again, there is nothing inherently wrong with that (and if there is, I urge you to design a better system).

This last point is a bit harder to articulate, but that won't stop me from trying. If you take a look at the chart above once more, you will notice that the liquidity event for private investors occurs at the highest valuation the company has had up to that time. This makes sense - a company grows and is at its healthiest right at the point it goes public. But this private valuation, which remains private as the investment bankers attempt to take the company public, does not translate well into a public valuation, which can only be calculated after the stock is publicly traded for a period of time. This discrepancy occurs at the point at which valuation turns from an art form into a science. I exaggerate slightly, as even public company valuations are still often guesstimates (read: art), but as a general rule, private valuations are gut-feeling based while public ones are more data-heavy (after all, there is finally data to analyze). The gap between what a company is actually worth, called intrinsic value (the amount which public valuations tend to approach after a certain period of time), and what it is worth on the private market gives rise to this pricing irrationality.

Coda

You might say, but Larry, what about companies like Apple, Google, and Facebook, all of which took venture money and were outrageously successful in public markets? Survivorship bias is a very real threat here. While it's true that those companies (and many more) started as private darlings and became public darlings, what about the hundreds of companies that went out of business or were acquired for a discount? Don't forget about those.

There is one rule in venture capital and that is there are no rules in venture capital. That said, there are general theories, which often hold. This post was an attempt to bridge the rules of private markets and public markets. Similar to the discrepancy between quantum mechanics and general relativity (which is a much more interesting debate than the one we're having), the same rules do not govern private and public markets. For that reason, it's important to understand both sets of rules and know when each applies. 

What's The End Goal for Wealthfront and Betterment?

There are three ways to make money in the asset management space:

  1. Have a large amount of assets under management (AUM) and charge fairly modest fees (mutual funds)
  2. Have a relatively low amount of AUM and charge high fees (hedge funds, venture capital funds)
  3. Have a relatively low amount of AUM, charge low fees, and employ very few people (small mutual funds, hedge funds, and venture capital funds sometimes operate in this structure).

I've written plenty on the topic of robo-advisors before, but what I haven't done is run the math behind their revenue and profitability metrics. Working in the hedge fund space for the past few months taught me a lot about the intricacies of various fund structures. At the end of the day, the goal of every fund is very simple: make more money than other funds while factoring in for risk. 

Funds make that money by taking a percentage cut of the money you give them to invest on your behalf. Sometimes it gets more complicated than that, but at the end of the day, a percentage cut is all it is. And although Wealthfront and Betterment are not funds per se, they make money like a fund does. With that short background out of the way, let's take a look at a space that has captivated venture capital's capital for the past few years. Below is a scenario and sensitivity analysis of the various AUM and annual fee combinations a robo-advisor such as Wealthfront or Betterment may achieve.

Currently, the major robo-advisors charge a 0.25% cut of the AUM they manage on your behalf. Thus, Scenario 2, "Current fee structure," applies to them. In the case of Wealthfront and Betterment, both manage around $3 billion, which isn't in the chart above. But that's not an issue; we can crunch the numbers ourselves:

$3,000,000,000 (AUM) x 0.25% (annual fee) = $7,500,000

Thus, both companies make around $7.5 million in revenue every year. But let's not forget that robo-advisors will grow like a child seized by an adolescent growth spurt. And as for the fees, who knows where they will end up in a few years. What we need in this case is a sensitivity analysis, which will show us the various revenues these robo-advisors would make under a range of circumstances.
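Since the original table isn't reproduced here, a minimal sketch of such a sensitivity analysis in Python; the AUM and fee ranges below are my own illustrative picks, not the original chart's values:

```python
def annual_revenue(aum: float, fee: float) -> float:
    """Yearly revenue from charging an annual fee on assets under management."""
    return aum * fee

# Illustrative ranges (assumptions, not the original table's values)
aum_levels = [3e9, 8e9, 20e9, 50e9]       # dollars of AUM
fee_levels = [0.0015, 0.0025, 0.0050]     # 15, 25, and 50 basis points

print("AUM ($b) | " + " | ".join(f"{f:.2%}" for f in fee_levels))
for aum in aum_levels:
    row = " | ".join(f"${annual_revenue(aum, f) / 1e6:,.1f}M" for f in fee_levels)
    print(f"{aum / 1e9:>8.0f} | {row}")
```

The $3 billion, 25 bps cell reproduces the $7.5 million figure above; the rest of the grid shows how quickly revenue scales with AUM at such thin fees.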

Let's assume for a second that Wealthfront and Betterment have a monumental 2016 (happy new year), and both end up more than doubling their AUM to $8 billion while keeping the same fee structure. As the table above shows, that would bring in revenues of $20 million ($8b x 0.25%). That's not a bad business, right? If the robo-advisors were run by just two guys and their trusted algorithms, maybe it could work. But these startups are pretty large. According to the latest data from Mattermark, Wealthfront has 138 employees, while Betterment has 139. How much do you think it costs to support such a large headcount? Admittedly, this next calculation is very much a guesstimate, but it's nonetheless worth attempting to estimate the headcount costs.

Let me go through these calculations bit by bit so we don't lose track. We already agreed to say that both companies have roughly the same AUM, which is $3 billion. They both charge 25 bps to manage that money (note that I'm ignoring the money they manage for free up to $10k, which I suspect is a large amount of the younger customer base). That gives us yearly revenues of $7.5 million.

But what about costs? Wealthfront has 138 employees, and let's say the average employee salary is $85k. This salary figure is worth expanding on slightly. The average software developer salary is around $100k (often much more at hot startups), and I presume close to half, if not more, of the company is staffed by engineers. Research analysts, who pick which securities to invest in, are also highly compensated, mostly because of their experience (I doubt Wealthfront and Betterment are hiring many fresh college grads to do research). The marketing and administrative staff probably make closer to $50k, but there should be far fewer of them at a new startup. So $85k is probably a very conservative estimate of the average salary, but it's the best we can do with the data we have.
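Putting the guesstimate together in code (the 138-person headcount and $85k average salary are the assumed figures from above, so the output is only as good as those inputs):

```python
# Back-of-the-envelope: does 25 bps on $3 billion cover a 138-person payroll?
aum = 3_000_000_000      # ~$3 billion under management (assumed)
fee = 0.0025             # 25 basis points
employees = 138
avg_salary = 85_000      # guesstimated average salary, per the text

revenue = aum * fee                  # yearly fee revenue
payroll = employees * avg_salary     # yearly salary bill alone

print(f"Revenue:   ${revenue / 1e6:.2f}M per year")
print(f"Payroll:   ${payroll / 1e6:.2f}M per year")
print(f"Shortfall: ${(payroll - revenue) / 1e6:.2f}M before rent, servers, and marketing")
```

Salaries alone come to roughly $11.7 million against $7.5 million in revenue, a shortfall of over $4 million per year.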

With these assumptions, both Wealthfront and Betterment are operating at a loss, and this is without accounting for the office rent expenses, servers, marketing, legal, and a slew of other necessary overhead. So now you see what the problem is: neither company is following any of the cardinal asset management rules that we started this post with. Presumably, the plan is to grow AUM to gigantic amounts to offset these costs, but that's easier said than done. 

Robo-advisors are becoming a commodity: from startups to large banks, providing an algorithm dressed up as a nice app is the absolute minimum you need to have to compete in this space. Wealthfront and Betterment know this well, so they traditionally competed on offering the lowest fees (0.25%) and having the best designed apps. Unfortunately, those two things are not competitive moats. Charles Schwab is already undercutting both companies with no fees (0.00%), and despite what Betterment will tell you, it's a great deal. 

So what happens next? A few things:

  • The robo-advisors will consolidate into one or two large providers. As asset management cardinal rule #1 says, you can make money if you have high AUM and low fees (read: economies of scale). An alternative strategy to reach high AUM growth is to provide 401(k) plans for companies. Wealthfront and Betterment are already trying to do this, but it will be a hard battle to compete with companies like Fidelity, which don't give up very easily.
  • Big banks will acquire these robo-advisors for the engineering talent. Think about it. What do the banks have that the startups desire so much? Well, it's AUM and access to economies of scale. And what do the banks want from the startups? It's not the trivial AUM, nor is it the stock picks. It's the engineering talent. These robo-advisors are still valued at hefty premiums, so I don't see banks snapping them up anytime soon. But as soon as things go downhill and valuations plummet, you'll see lots of acqui-hires start to happen.
  • A business can either boom or bust (or remain in an indefinite steady state of tranquility and no growth, but that's no fun). Most of the little guys in this space (and the fintech space in general) will dissolve out of existence. Starting a business is tough, making it work is tougher, and making it work in a competitive space is a brobdingnagian hardship.
  • And finally, more fees. For all the agitprop Wealthfront and Betterment spread about traditional financial institutions, they too might have to raise rates or provide premium options. 25 basis points is not a business model; it's a temporary growth tactic. I wouldn't be surprised to see robo-advisors offer premium services that might someday involve person-to-person meetings. Companies like LearnVest and Personal Capital are already doing it, although their AUM grows more slowly as a result.

It might seem like I'm being a codger on Wealthfront and Betterment, but the opposite is actually true. I think it's phenomenal that startups are trying to overtake a slow, bureaucratic and slightly sanctimonious industry. But for that to happen, they need to be realistic. The last thing I want is for them to go out of business or be acquired to become the status quo. Emerson is known to have said "Money often costs too much"; perhaps he was talking about managing it. 

My Favorite Books - 2015 Edition

I've been writing this blog for over two years now, but I've been reading strings of characters that we call words, which form sentences, which make up the content of books, for much longer. Of the many blogs I read, my favorite sort of posts are those that recommend good books. Very rarely do I get my book recommendations from Amazon, Goodreads, or some sort of popularity-contest listicle. I much prefer reading a book recommendation from a writer I admire than from the crowd. With that said, here are five of my favorite books that I've read in 2015.

Reminiscences of a Stock Operator

Unless you follow investing circles, you've probably never heard of this book, which is a shame since it's a classic. The best books are timeless - they could have been written hundreds of years ago and still be applicable in modern times. Reminiscences is that book. Even if you don't plan to invest a nickel in your entire life, you should read this book just for the stories.

Reminiscences is about a flawed man, a stock trader, told through his life stories. What you learn are the behavioral aspects of finance, which no other book teaches so well. And the background of its main character, Jesse Livermore, is absolutely crazy. I won't spoil it, so read it here before taking on Reminiscences (which is a really short read, by the way).

Quotes: 

The game taught me the game.
Ignorance at twenty-two isn't a structural defect.
He'd say good morning as though he had discovered the morning's goodness after ten years of searching for it with a microscope and was making you a present of the discovery as well as of the sky, the sun, and the firm's bank roll.

One Up on Wall Street 

I read a lot of finance and business books, not because they're fascinating (although some are), but mostly to be able to refute people who quote some strategies as gospel. If there's anything you should know about business and finance, it is that both are extremely dynamic and social forces that constantly change through time. To be extremely succinct: no strategy will last forever. 

With that said, the reason One Up on Wall Street is one of my favorite investing books is that it treats investing as an art and a science rather than a formulaic affair. Most finance books begin and end with the same process, which usually involves fundamental research (income statement, balance sheet, cash flows) and complex statistical backtesting and trend analyses. While I'm sure Peter Lynch does some of that too, his core investing philosophy is simple: what he sees to be true is what he invests in. Here's Peter Lynch's philosophy as applied by me to Starbucks.

Every day I walk by multiple Starbucks locations and see them packed to the brim. I observe that customers order overpriced coffee (revenue), and often stay at the same location for hours (ambience, social impact). I then call up my friends and ask them what their favorite chain coffee shop is (Dunkin Donuts, Tim Hortons, Starbucks, other). The answer is unanimously Starbucks, although local hipster coffee shops are always preferred. Fortunately, hipster coffee shops don't compete for the same customers as chains, so Starbucks has that market dominated. 

Next, Lynch would recommend we meet with Starbucks management to see how they are in person. Are they frugal or ostentatious? Do they know what their competitors are up to? How do they plan to expand? 

Most financial analysts don't lay this kind of groundwork. They just look at the financial data and the news, wholly forgetting the intangibles. Peter Lynch, on the other hand, is all about intangibles. And it doesn't hurt that he's an amusing writer who isn't afraid to throw jabs. Highly recommended. 

Quote:

"Any idiot can run this business" is one characteristic of the perfect company, the kind of stock I dream about.

Creativity Inc.

I don't know about you, but I've always wondered why Pixar movies are so damn good compared to other animated films. Maybe it's because I was two years old when Toy Story came out and it had a structural impact on the fundamental formation of my psyche, or maybe Pixar just makes great movies. I don't know. 

Anyway, Creativity Inc. is great because it's so different from your usual business-shrouded self-help books. Most large companies eventually become stale, places where creativity enters a perpetual state of rest. What this book does so magnificently well is take all the MBA curriculum you've ever learned and toss it. As a surprise bonus for Apple fans, the book has neat stories about interactions Ed Catmull (Pixar's president) had with Steve Jobs.

Quotes:

The problem is, the phrase is dead wrong. Hindsight is not 20-20. Not even close. Our view of the past, in fact, is hardly clearer than our view of the future. While we know more about a past event than a future one, our understanding of the factors that shaped it is severely limited. Not only that, because we think we see what happened clearly - hindsight being 20-20 and all - we often aren't open to knowing more.
Management's job is not to prevent risk but to build the ability to recover.
Making the process better, easier, and cheaper is an important aspiration, something we continually work on - but it is not the goal. Making something great is the goal.

Setting the Table: The Transforming Power of Hospitality in Business 

Here's another book that's only tangentially related to my job, and yet one I learned a lot from. Danny Meyer is the proprietor of the burger chain Shake Shack, as well as many other restaurants that you've probably never heard of. I forget who gave me this book recommendation, but it's a good read even if you never plan to become a restaurateur.

Essentially, what Meyer preaches is focusing on the customer experience down to the most inconsequential detail. I found a lot of similarities between Meyer's approach and that of Apple and Amazon. The customer experience is key, even if you have to spend amounts that would make the finance department laugh you out of the boardroom.

Quote:

In every business, there are employees who are the first point of contact with the customers (attendants at airport gates, receptionists at doctors' offices, bank tellers, executive assistant). Those people can come across either as agents or as gatekeepers. An agent makes things happen for others. A gatekeeper sets up barriers to keep people out. We're looking for agents, and our staff members are responsible for monitoring their own performance. In that transaction, did I present myself as an agent or a gatekeeper? In the world of hospitality, there's rarely anything in between.

Sapiens: A Brief History of Humankind

Ok, let me be honest here. Initially, I didn't want to read Sapiens because everybody was reading it. I'm always skeptical of such books (and TV shows, movies, and everything else the crowd embraces) since they're often founded on hype rather than substance. But I succumbed, since I was such a fan of A Short History of Nearly Everything, whose premise is similar: Sapiens, too, is a short history of nearly everything.

And I'm happy I read it. Although this book references many facts I find questionable, the philosophical questions it raises are truly eye-opening. Facts aside, this book will make you think. That can't be said about most books.

Quote:

The modern economy grows thanks to our trust in the future and to the willingness of capitalists to reinvest their profits in production. Yet that does not suffice. Economic growth also requires energy and raw materials, and these are finite. When and if they run out, the entire system will collapse.

Some other books I've read in 2015 that you might want to put on your list if you're starving for more:

  • The Richest Man in Babylon: a select few short stories that teach you how to save.
  • The Talent Code: geniuses aren't born geniuses (sorry, Jimmy Neutron).
  • The Black Swan: The Impact of the Highly Improbable: outlier events matter more than those in the normal distribution, yet we tend to ignore them.
  • Thinking, Fast and Slow: slightly overrated book, but nonetheless thought provoking. It shows the pitfalls of human behavior and how you can't avoid them.
  • A Random Walk Down Wall Street: unless you're a professional trader, just invest in mutual funds or a diversified portfolio of individual stocks. Not the best investment book, but worth a read.