How to Interpret Information in an Infinite Knowledge World

If you know me in person or follow me on Twitter, you know that I'm a pretty voracious reader. Having been like this for quite a while now, I have developed an approach to interpreting information. This applies to both news and books: temporary and permanent knowledge.

Twitter

Despite years of stagnation, Twitter remains by far my favorite social network. It gives you a glimpse into the head of another person: what they read and how they spend their time. A lot of hush-hush watercooler conversations are actually public on Twitter, if you follow the right people. That said, there is also an incredible amount of noise, most of which can be safely ignored (e.g. politics Twitter, where facts go to die).

When I browse my timeline, I try to remind myself of an old idea someone told me a while back (unfortunately I do not remember who that someone was). The idea is as follows. Imagine how much work goes into writing a book. Oftentimes, the author has years of requisite knowledge (it's common to write your first book at 50 - 70 years old), puts in countless hours of research, and has a publisher fact-check the results. Obviously not all books go through this rigorous process, but the best ones certainly do. Now think about the process of writing a tweet. Perhaps the person has years of knowledge, but it's unlikely she put hours into thinking about and fact-checking the tweet. That wouldn't be practical. Think about that next time you retweet something you like. How much thought do you think went into it, and is it actually factually correct? I don't know about you, but my most popular tweets have been the ones I've posted after midnight, usually around my 3rd or 4th glass of wine. Twitter is fun, but don't treat every tweet as gospel.

News

So we've established that it takes a few seconds to post a tweet. What about publishing a news article or a blog post? Not every news outlet can be like The New Yorker and give the author months or years of investigative journalism before he hits publish. Or take this post. I've thought about this topic for a while now, and even did some light research, but who fact-checked it except me? It would be hypocritical of me not to say proceed with caution, even with my own writing!

Now, here is another heuristic I use, this time when I read the news. A while back I was reading a Wall Street Journal article about an accounting standard a company had misapplied. It just so happened that I was learning about that very same accounting standard in school, and my professor was a former partner at a big accounting firm - a subject matter expert. You might see where I'm going with this. The Wall Street Journal writer had the facts totally wrong! Not only did he cite the incorrect accounting standard, but he also misunderstood the standard he misapplied (if only two wrongs made a right?). The writer wasn't a bad guy; he was simply given a topic he didn't have much experience in. The WSJ is for the most part an excellent source of financial news, but even they make mistakes. Hiring a CPA to fact-check every news article is impractical, and besides, even a CPA doesn't know every new accounting standard.

The only reason I caught this mistake was that I happened to be studying that exact topic with an expert in the field. Unless you were also an expert in the topic, you probably took the whole thing as fact. And who could blame you - why should you know better about advanced accounting standards? Think about that next time you're reading about a topic you are otherwise clueless about. Is it possible the author is writing beyond his subject matter expertise? Probably.

Books 

There were 304,912 books published and republished in the United States alone in 2013. I will eat my shoe (Allen Edmonds uses good, tough leather, so it's a fair wager) if each of those books was actually any good. What is a good book, anyway? For the purposes of this post, a good book is one that is factually correct not just today, but in the short to medium term future. On an infinite timescale, every knowledge book will become factually incorrect, because we will discover new things that we did not know at the time of writing. The goal of reading a book today, then, is for the information contained within it to be useful over the course of your life (20 - 80 years).

What further complicates things is that out of the millions of books published every year, few will be great, many will be good, and the majority will be a waste of your time. How, then, should you choose what to read when the constraining resource is time? In the past, I've used Google, Goodreads, and countless other book review websites to help me separate the good recent books from the bad. But what I've noticed is that some good books became bad books as time went on. Reviews slowly went from four and a half stars to four, and then even to three stars in the span of a few years, as the 'facts' presented in the books turned out to actually be opinions.

Rather than trust reviews of modern-day books, I've found another process that eliminates hype and filters for the best books: time. For the most part, I now read books that are still well received at least ten years after they were published. What this tells me is that the information contained in the book stood the test of time (another fun exercise: take a look at your tweets from two months ago and cringe in absolute horror at all the things you got wrong). A book that was published fifty years ago and is still read today tells me it's a book with lasting content. If it's a business or investing book (genres that are notoriously trendy) that lasted that long, you can be sure it's got long-lasting nuggets of wisdom. The last thing you want to spend your time on is reading ephemeral books - that's the definition of a waste of time.

Books (and art, music, and all other knowledge content) are derivative instruments of prior work, repackaged to the taste of modern times. I was watching a season of Dexter a few years ago, and I thought the season finale was very well done, original, and downright chilling. A character we were led to believe was dead turned out to be alive, and not only that, but to be the true murderer in the case. A short time later I watched Psycho, a classic Alfred Hitchcock film from 1960, which essentially uses the same premise of assumed-dead-but-actually-isn't to even greater chilling effect. In short, Dexter copied Psycho, which I bet copied something else from an earlier time. What's old is new again; originality wouldn't exist for a person who has seen all of history.

Summing it all up

We live in a time when an almost infinite stream of information is thrown at you. That makes it really hard to know what to spend your time on. The internet has also made it incredibly simple and free to publish ideas, lowering the bar for quality to the substandard. The above themes help me cope with the abundance of information, and I hope they will help you as well.

Thinking through Artificial Intelligence

I’m really not a fan of the term ‘artificial intelligence’, or AI for short. We tend to attach a negative connotation to the word artificial, implying that an artificial intelligence is unnatural, and possibly even evil. In fact, the term AI reminds me of another term — genetically modified organisms (GMOs) — which have also been the subject of vicious debates in recent years despite, well, science. I suppose AI could have a worse name, like maybe genetically modified intelligence, but we can leave that to be the villain of another sci-fi film.

As is often the case with new technology, there are camps of people who are incredibly paranoid about what such a technology can do to the stable world order. The canonical example is the 19th century English textile workers who protested against the new technologies brought about by the Industrial Revolution — the Luddites. The term has since come to mean a person who is anti-technology, even though the reality of the Luddite argument was quite a bit different. What we have now are AI Luddites who are afraid of artificial intelligence due to the potential catastrophic events an evil AI could cause.

My first encounter with an evil AI, as I imagine was most people’s, was the film The Terminator (1984). The main antagonist of the film, Skynet, was pure artistic genius on the part of the writers. From Wikipedia:

Skynet is a fictional conscious, gestalt, artificial general intelligence (see also Superintelligence) system that features centrally in the Terminator franchise and serves as the franchise’s main antagonist.
Rarely depicted visually in any of the Terminator media, Skynet gained self-awareness after it had spread into millions of computer servers all across the world; realizing the extent of its abilities, its creators tried to deactivate it. In the interest of self-preservation, Skynet concluded that all of humanity would attempt to destroy it and impede its capability in safeguarding the world. Its operations are almost exclusively performed by servers, mobile devices, drones, military satellites, war-machines, androids and cyborgs (usually a Terminator), and other computer systems. As a programming directive, Skynet’s manifestation is that of an overarching, global, artificial intelligence hierarchy (AI takeover), which seeks to exterminate the human race in order to fulfill the mandates of its original coding.

If Skynet doesn’t scare you, I don’t know what will. But let’s get back to a less evil artificial intelligence.

AI has a long, storied history, which you can read about here. But I’ll be picking up on the topic from even earlier, a 1957 movie and a favorite of mine, Desk Set.

Worrying about artificial intelligence, circa 1957

The film is classified as a romcom according to IMDb (or is it IMDb’s AI deciding what to tag it?), but it’s really much more than that. Set in the reference department of a library, the film introduces us to a group of women whose job it is to pick up the phone, research facts, and answer questions on a wide array of topics. If that sounds inefficient, that is because it is, leading the president of the library to hire a methods engineer and efficiency expert to replace the reference department with an AI computer. A romantic hour later, the AI is programmed, installed, and production ready. Unfortunately, the AI ends up having trouble answering customer calls, and is later ‘upgraded’ back to the women who used to work in the reference department in the first place.

With the beautiful bias of hindsight, we know that what actually killed the reference department was search engines like Google, not AI. The point of bringing this example up is to show that AI Luddism is nothing new. What actually disrupts your job may not be what you think will disrupt your job. Outkast taught us that in the wonderfully deep lyrics of Ms. Jackson:

“You can plan a pretty picnic, but you can’t predict the weather”

Which leads me back to Skynet and evil AI. Why are so many people paranoid about a strong AI breaking out of the box and taking over? My gut reaction to an evil strong AI is to ask whether there is any historical precedent for a technology turning bad and hurting humans. Granted, there has never been AI tech as powerful as there is today, but nonetheless, the question stands. And besides, why does a strong AI have to be bad? It could turn out to be good just as easily as it could turn out evil. Innocent until proven guilty.

The next logical question to ponder is as follows. Okay, so someone created an evil AI — what are the realities of such a situation? The human brain uses about 20 watts to operate, which is extremely efficient and so far not reproducible in machines. Meanwhile, the Google computer (AlphaGo) that beat Lee Sedol in a game of Go used approximately one megawatt. That is 50,000 times the energy consumption of a human brain, and we’re only talking about a board game (a complex game, but still a game without the external factors of a real environment). Thus, the question becomes slightly different — is there enough computing power in the world for an evil AI to achieve world dominance?
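To put the rough numbers side by side (treat the one-megawatt figure as an order-of-magnitude estimate rather than a precise measurement):

1,000,000 W (AlphaGo, approximately) ÷ 20 W (human brain) = 50,000x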

By the way, I want to remind you that we’re speaking in hypotheticals here. A self-learning, cognitive, strong AI does not exist yet. The debate thus far has been around preventive measures that usually begin with “what if”. As you can probably tell now, I’m not very worried about a Skynet-esque AI. My strong suspicion is that people picture an evil AI because of all the science fiction films and novels that they read as children. But fine, let us embrace the possibility — at least for a second — that an evil AI does come into existence. Should we be spinning our wheels designing failsafes into the AI system to prevent such an outcome?

Obviously yes, we should be thinking about such remote possibilities in all system designs. But allow me to make a brief philosophical excursion on why instituting failsafes won’t rescue us from an evil AI. An artificial intelligence that turns evil is a low probability event. In other words, it’s a black swan event. And by definition, black swan events cannot be predicted in advance. It follows that designing a failsafe into the system will not prevent the evil AI from escaping, given that the definition of a black swan event is an unpredictable event. How can you take preventive measures against an unpredictable event? You can’t really.

I’ll leave off with an Alan Kay quote I’ve always enjoyed:

It’s easier to invent the future than to predict it

Go invent an AI instead of predicting the unpredictable!

Thinking in Systems: The Business of Blockchain

As a teenager, I used to obsessively read SSD reviews on AnandTech back when the technology was still new, unknown, and prohibitively expensive. Having no formal electrical engineering degree, I found the reviews hard to get through at first, but after a few years I more or less understood most of the concepts and the technology (since mostly forgotten). But the truth is, all of that nerdery wasn't needed to exploit the benefits of SSDs. Sure, read/write times matter, but should you also care about the quantum tunneling process NAND goes through, or tunnel oxide degradation, or maybe the JEDEC endurance standards? I would hope not! A technology graduates to a product when its benefits can be understood by many. Do you really need to know how your car operates under the hood in order to be able to drive it?

The same analogy can be extended to new technologies. You don't necessarily need to understand the stack on a technical level (your company has great engineers for that, remember?). As a non-technical person, you should be asking the right questions about how it will impact your business. One such new technology is the blockchain (aka the distributed ledger). Assuming you're working with a technical team who can do the implementation, here are some questions you should be thinking about internally:

  • What kind of impact would the elimination (or, more likely, reduction) of middlemen have on your organization's risk profile?
  • Is your organization ready to invest in blockchain tech, even if the solution is 5-10 years away? As a reminder, the organizations that won on the internet were the result of cumulative advantage. Money spent on failed R&D projects might provide unexpected results many years ahead (many scientific breakthroughs occurred through serendipity). Also, not everything can be measured on a spreadsheet, so do not forget the intangibles that get generated as a result of failed projects.
  • What are the second and third order effects of updating your organization's plumbing? Everybody wants transactions to settle immediately, but what kind of consequences would that entail?
  • Do you have the internal talent to undertake such an effort, or would you have to hire external talent? Despite working at a consulting firm, I think it's very important to have the internal talent to support the transition. Even if you hire outside help to implement the system, you should make sure you've got the internal talent to maintain it, especially since blockchain tech is relatively new as far as systems go.
  • How will it impact your firm's competitive advantage? Apple's competitive advantage is creating beautiful hardware and seemingly less beautiful software, so upgrading its IT systems might not be a priority. Google, on the other hand, is a data company, which makes money through advertising and, increasingly, artificial intelligence. For a company that might be in the business of manufacturing autonomous cars in the next few decades, decentralized autonomous organizations might be an investment in its best interest. The same can be said about banks, which are being unbundled by startups. A bank is different from a startup because it has scale, while a startup has the innovation. Scale benefits from decreased costs, which a blockchain system can bring with it.
  • What will a successful implementation look like? Before you go around hiring blockchain specialists and consultants, you should have a semblance of a plan for the endgame. You might be wrong, but you should at least plan to be right (and course-correct as you go). If you're a bank, you might want a system that, at its core, is designed to limit regulatory risk. If you're a medical company, your goal might be to limit regulatory risk but also to design the system with privacy at the forefront. The point is that every company is different, and the system you put in place will most likely end up being proprietary.
  • Talk with your stakeholders. Unless you are a private company and exist in a vacuum (and even then, you probably have stakeholders), your stakeholders will probably be affected if your organization's risk profile changes as a result of adopting a new technology. If you are a wealth management company and manage people's money, you probably have LPs to consider. How would they feel about you spending money on adopting new tech XYZ, especially if it ends up reducing operational risk and saving them money? What about the auditors - would they increase or reduce audit fees? Can they audit your new system? Would regulators even support such a system rewrite? Think about your stakeholders, who in some businesses might even be willing to subsidize your technological investment, since it's in their best economic interest.
  • Forget the hype. Part of Steve Jobs's magic and Apple's resultant success was a very simple concept: under-promise and over-deliver. Too many products end up doing exactly the opposite, which doesn't make them a failure relative to competing products, but does make them a failure compared to what they set out to accomplish. This can be ruinous for a company or technology, because if the company loses hope, employees tend to leave for greener pastures and the company ends up melting away. That's why it's important to compare the technology on a relative scale rather than an idealistic one. How does it compare to existing solutions? Not how does it compare to what it set out to be. Only in very rare cases does an overhyped technology also over-deliver.
  • Keep an open mind. As is often the case with new technology, everybody has an idea for how it should be implemented, and no idea is right or wrong. The Internet is an open standard, but the doors to it all have a different doorknob. Google Chrome, Internet Explorer, Firefox, Safari, Opera (I can go on) all have a certain vision for how you should be browsing. Sure, some of them are dominant, but they all steal implementation details from each other. No solution will be perfect, so always be skeptical of those masquerading as such. Oh, and don't forget, good ideas often look like bad ideas initially.

The technical problems are hard to solve, but many smart people are already figuring them out. The last thing you want to do is cram in a shiny, new blockchain system just because it is a shiny, new blockchain system. Ask the right questions, think about the opportunity costs, the repercussions, and finally, whether you can even pull off such a transition.


Note: All thoughts are mine and do not reflect those of my employer.

Bridging Private and Public Markets

Your time being precious, I'll just say the idea upfront: the reason IPOs are historically poor investments is that the valuation is already at a peak prior to the IPO. The subject of this post is why that is the case.

Before we reason this logic out together, let me be clear that this point holds only for VC-backed companies. There are plenty of great companies that never take VC money, IPO on their own terms, and do just fine, but that's not the subject of our post. And if you would allow me, a caveat: what I write below is from my personal experience watching private and public markets unfold. I did not have the time to compile a data-heavy analysis, but my gut tells me it would resemble something like what I came up with below. 

If you follow investing and finance circles closely, you know that the common wisdom is to avoid investing in IPOs at all costs (I assure you, the pun is unintended). Historically, returns of such newly public companies have often been poor. We like to think of newly public companies as disruptors, and they very well may be, but being a disruptor does not mean you will succeed. The reverse, however, is often true: successful companies are disruptors, but disruptors are not necessarily successful companies.

So let's say you are a disrupting company (a Black Swan) on the verge of going public, and you have raised a few hundred million from venture capital. At that very point in your company's history, valuation is at an all-time peak. If there were checkboxes to check on how to reach the highest valuation you can, you'd have checked them all. To better illustrate this point, I made a chart (see below):

As a seed or angel investor, your job is to invest from roughly 2001 to maybe 2005, while the company is in its extremely early and risky stages. As a later stage VC, you will be investing from 2006 until the liquidity event, which is when the company goes public in 2016. You will notice there is a premium attached to going public, which I dub the "IPO Premium" (it also goes by "IPO Pop"). There are many different explanations for why this premium often occurs, very few of which actually have anything to do with the fundamental health and prospects of the company. For the intellectually curious, the premium can be due to phenomena like investors and employees getting liquidity for their stock, and a growth momentum that kicks in when a company goes public (I'm not saying it's rational, I'm just saying it exists).

Going back to the premise stated at the top of this post - why is it that VC-backed companies tend to perform so poorly post-IPO? You might think the answer requires a highly complex financial explanation, but the answer(s) can be explained with relative ease:

Venture capital valuations are usually not a function of the fundamental value of a company. Unlike, say, a blue chip stock, startups do not have a reliable track record of, well, frankly anything. You cannot discount the cash flows of a startup since they are often unprofitable; you might as well run a DCF on your toddler as a predictor of their future success (please don't). Instead, venture valuations are a function of the marginal backing of the last investor. In plain terms, this means the valuation is dictated by the willingness of the last VC to invest in your company. Unlike public markets, where prices are set by many distributed participants, valuations in private markets are often driven by very few investors. This is not a bad thing, it's simply the way in which these markets operate (mind you, public markets can be just as irrational).
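For those who haven't run one, here's a minimal sketch of what "discounting the cash flows" means; the cash flow numbers and the 10% discount rate below are invented purely for illustration:

    # Present value of a stream of future yearly cash flows (a basic DCF).
    def dcf_value(cash_flows, discount_rate):
        return sum(cf / (1 + discount_rate) ** year
                   for year, cf in enumerate(cash_flows, start=1))

    # A mature blue chip with steady, positive cash flows: the result is meaningful.
    print(dcf_value([100, 105, 110, 115, 120], 0.10))

    # A money-losing startup: negative cash flows today, wild guesses later.
    # The result is dominated by whatever you assume about the out-years,
    # which is exactly why the exercise tells you very little.
    print(dcf_value([-50, -30, 10, 500, 5000], 0.10))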

Another reason why VC-backed companies often underperform ex-post is the difference in the way public investors view success versus private investors (VCs). A VC is looking for visionary founders who create products that have the potential to become huge businesses. Once that potential is agreed upon by other VCs and people in the private market, valuations tend to increase. Meanwhile, a public investor judges a company based on totally different metrics. Public investors care about the business model and all of the things that come with it: revenues, expenses, and profits. The potential on which private investors agreed should now be coming to fruition. If it doesn't, and soon, public investors start to get antsy and eventually sell the stock, dropping the valuation considerably. Again, there is nothing inherently wrong with that (and if there is, I urge you to design a better system).

This last point is a bit harder to articulate, but that won't stop me from trying. If you take a look at the chart above once more, you will notice that the liquidity event for private investors occurs at the highest valuation the company has had up to that time. This makes sense - a company grows and is at its healthiest right at the point it goes public. But this private valuation, which remains private as the investment bankers attempt to take the company public, does not translate well into a public valuation, which can only be calculated after the stock has been publicly traded for a period of time. This discrepancy occurs at the point at which valuation turns from an art form into a science. I exaggerate slightly, as even public company valuations are still often guesstimates (read: art), but as a general rule, private valuations are gut-feeling based while public ones are more data heavy (after all, there is finally data to analyze). The gap between what a company is actually worth, called intrinsic value (the amount which public valuations tend to approach after a certain period of time), and what it is worth on the private market gives rise to this pricing irrationality.

Coda

You might say, but Larry, what about companies like Apple, Google, and Facebook, all of which took venture money and were outrageously successful in public markets? Survivorship bias is a very real threat here. While it's true that those companies (and many more) started as private darlings and became public darlings, what about the hundreds of companies that went out of business or were acquired at a discount? Don't forget about those.

There is one rule in venture capital and that is there are no rules in venture capital. That said, there are general theories, which often hold. This post was an attempt to bridge the rules of private markets and public markets. Similar to the discrepancy between quantum mechanics and general relativity (which is a much more interesting debate than the one we're having), the same rules do not govern private and public markets. For that reason, it's important to understand both sets of rules and know when each applies. 

What's The End Goal for Wealthfront and Betterment?

There are three ways to make money in the asset management space:

  1. Have a large amount of assets under management (AUM) and charge fairly modest fees (mutual funds)
  2. Have a relatively low amount of AUM and charge high fees (hedge funds, venture capital funds)
  3. Have a relatively low amount of AUM, charge low fees, and employ very few people (small mutual funds, hedge funds, and venture capital funds sometimes operate in this structure).

I've written plenty on the topic of robo-advisors before, but what I haven't done is run the math behind their revenue and profitability metrics. Working in the hedge fund space for the past few months has taught me a lot about the intricacies of various fund structures. At the end of the day, the goal of every fund is very simple: make more money than other funds while factoring in risk.

Funds make that money by taking a percentage cut of the money you give them to invest on your behalf. Sometimes it gets more complicated than that, but at the end of the day, a percentage cut is all it is. And although Wealthfront and Betterment are not funds per se, they make money like a fund does. With that short background out of the way, let's take a look at a space that has captivated venture capital's capital for the past few years. Below is a scenario and sensitivity analysis of the various AUM and annual fee combinations a robo-advisor such as Wealthfront or Betterment may achieve.

Currently, the major robo-advisors charge a 0.25% cut of the AUM they manage on your behalf. Thus, Scenario 2 "Current fee structure" applies to them. In the case of Wealthfront and Betterment, both manage around $3 billion, which isn't in the chart above. But that's not an issue, we can just crunch the numbers ourselves:

$3,000,000,000 (AUM) x 0.25% (annual fee) = $7,500,000

Thus, both companies make around $7.5 million in revenue every year. But let's not forget that robo-advisors will grow like a child hitting an adolescent growth spurt. And as for the fees, who knows where they will end up in a few years. What we need in this case is a sensitivity analysis, which will show us the revenues these robo-advisors would make under a range of circumstances.
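For the curious, here's a minimal sketch of how such a sensitivity grid can be computed; the AUM and fee levels below are assumptions picked for illustration, not figures disclosed by either company:

    # Revenue sensitivity grid for a robo-advisor: rows are AUM levels, columns are fee levels.
    aum_levels = [3e9, 5e9, 8e9, 15e9, 30e9]       # assets under management, in dollars (assumed)
    fee_levels = [0.0015, 0.0025, 0.0035, 0.0050]  # annual fee: 15, 25, 35, 50 bps (assumed)

    print(f"{'AUM':>6}  " + "  ".join(f"{fee:>7.2%}" for fee in fee_levels))
    for aum in aum_levels:
        row = "  ".join(f"{aum * fee / 1e6:>6.1f}M" for fee in fee_levels)
        print(f"{aum / 1e9:>5.0f}B  " + row)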

Let's assume for a second that Wealthfront and Betterment have a monumental 2016 (happy new year), and both end up more than doubling their AUM to $8 billion while keeping the same fee structure. As the table above shows, that would bring in revenues of $20 million ($8b x 0.25%). That's not a bad business, right? If the robo-advisors were run by just two guys with their trusted algorithms, maybe it could work. But these startups are pretty large. According to the latest data from Mattermark, Wealthfront has 138 employees, while Betterment has 139. How much do you think it costs to support such a large headcount? Admittedly, this next calculation is very much a guesstimate, but it's nonetheless worth trying to estimate the headcount costs.

Let me go through these calculations bit by bit so we don't lose track. We already agreed that both companies have roughly the same AUM, which is $3 billion. They both charge 25 bps to manage that money (note that I'm ignoring the money they manage for free up to $10k, which I suspect covers a large share of the younger customer base). That gives us yearly revenues of $7.5 million.

But what about costs? Wealthfront has 138 employees, and let's say the average employee salary is $85k. This salary figure is worth expanding on slightly. The average software developer salary is around $100k (often much more at hot startups), and I presume close to half, if not more, of the company is staffed by engineers. Research analysts, who pick which securities to invest in, are also highly compensated, mostly because of their experience (I doubt Wealthfront and Betterment are hiring many fresh college grads to do research). The marketing and administrative staff probably make closer to $50k, but there should be far fewer of them at a new startup. So $85k is probably a very conservative estimate of the average salary, but it's the best we can do with the data we have.
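Putting the assumed numbers together, payroll alone comes out to roughly:

138 (employees) x $85,000 (assumed average salary) ≈ $11.7 million per year, versus roughly $7.5 million in annual revenue.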

With these assumptions, both Wealthfront and Betterment are operating at a loss, and that is before accounting for office rent, servers, marketing, legal, and a slew of other necessary overhead. So now you see what the problem is: neither company is following any of the cardinal asset management rules that we started this post with. Presumably, the plan is to grow AUM to gigantic amounts to offset these costs, but that's easier said than done.

Robo-advisors are becoming a commodity: from startups to large banks, providing an algorithm dressed up as a nice app is the absolute minimum you need to compete in this space. Wealthfront and Betterment know this well, so they have traditionally competed by offering the lowest fees (0.25%) and having the best-designed apps. Unfortunately, those two things are not competitive moats. Charles Schwab is already undercutting both companies with no fees (0.00%), and despite what Betterment will tell you, it's a great deal.

So what happens next? A few things:

  • The robo-advisors will consolidate into one or two large providers. As asset management cardinal rule #1 says, you can make money if you have high AUM and low fees (read: economies of scale). An alternative strategy to reach high AUM growth is to provide 401(k) plans for companies. Wealthfront and Betterment are already trying to do this, but it will be a hard battle competing with companies like Fidelity, which don't give up very easily.
  • Big banks will acquire these robo-advisors for the engineering talent. Think about it. What do the banks have that the startups desire so much? Well, it's AUM and access to economies of scale. And what do the banks want from the startups? It's not the trivial AUM, nor is it the stock picks. It's the engineering talent. These robo-advisors are still valued at hefty premiums, so I don't see banks snapping them up anytime soon. But as soon as things go downhill and valuations plummet, you'll see lots of acqui-hires start to happen.
  • A business can either boom or bust (or remain in an indefinite steady state of tranquility and no growth, but that's no fun). Most of the little guys in this space (and the fintech space in general) will dissolve out of existence. Starting a business is tough, making it work is tougher, and making it work in a competitive space is a Brobdingnagian hardship.
  • And finally, more fees. For all the agitprop Wealthfront and Betterment spread about traditional financial institutions, they might also have to raise rates or provide premium options. 25 basis points is not a business model, it's a temporary growth tactic. I wouldn't be surprised to see robo-advisors offer premium services that might some day involve person-to-person meetings. Companies like LearnVest and Personal Capital are already doing it, although their AUM consequently grows more slowly.

It might seem like I'm being a curmudgeon about Wealthfront and Betterment, but the opposite is actually true. I think it's phenomenal that startups are trying to overtake a slow, bureaucratic, and slightly sanctimonious industry. But for that to happen, they need to be realistic. The last thing I want is for them to go out of business or be acquired and become the status quo. Emerson is known to have said "Money often costs too much"; perhaps he was talking about managing it.