Amit Seru presented “Measuring Technological Innovation over the Long Run”, joint work with Bryan Kelly, Dimitris Papanikolaou, and Matt Taddy. They run a text analysis of patents and judge similarity by whether patents use many of the same words. They define an innovative patent as one that doesn’t use many of the same words as its predecessors, but does use many of the same words as its followers.
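A toy sketch of the idea, using simple Jaccard word overlap (my own simplification; the paper’s actual similarity measure is more sophisticated, and the patent texts here are made up):

```python
def word_overlap(a, b):
    """Jaccard similarity between two documents' word sets."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def novelty_score(patent, predecessors, successors):
    """High when a patent shares few words with its predecessors
    but many with its successors."""
    backward = sum(word_overlap(patent, p) for p in predecessors) / len(predecessors)
    forward = sum(word_overlap(patent, s) for s in successors) / len(successors)
    return forward / backward  # > 1: reads more like followers than predecessors

# Hypothetical toy texts:
old = ["a mechanical loom for weaving cloth"]
new = ["electric telegraph sending coded signals over wire"]
patent = "an electric telegraph for sending signals by code"
print(novelty_score(patent, old, new))  # well above 1: innovative by this measure
```

An innovative patent, in this scheme, is one whose vocabulary anticipates the future literature rather than echoing the past.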
(The Washington Post spiffed up many of the graphics.) The measure picks up what you might suspect and shows waves of innovation.
It also picks up a sensible industry breakdown. Who knew there was such an explosion of innovation in fishing, hunting, and trapping?
The paper and talk are really fun. You naturally want to explore which are the great patents. (All-time #1: Samuel F. B. Morse, for Morse code.) The big economic question is, can we see that innovations lead to productivity? That’s the question of the day, with an obvious innovation revolution under our noses, but stagnant productivity. One graph of many:
The paper is really about the construction of the index, and the authors advertise that even they have not begun to use the index. So it’s also filed in the “thesis topics” section here.
Steve Kaplan presented “What Do We Know About VC (Venture Capital) Performance? VC Persistence?” Steve says the paper will be out soon, so stay tuned to his webpage.
Steve uses the Burgiss database, which “Includes complete transactional history of 8,000+ private capital funds representing $6.0+ trillion in committed capital.” Hopefully that reduces some of the survivor bias, which is much worse in VC than in other fields. (Data vendors do not make money from academics. They make money selling information to people who want to research funds. Information on dead funds is not useful to them. Or, at least, they don’t think it’s useful!)
Like Amit’s great-patents stories, Steve’s talk had a lot of interesting facts about fund performance which I will skip. The most provocative table, however, was this one. Across rows, Steve ranks General Partners (GPs) by their performance in the second-previous fund. Rows 1 to 4 are quartiles, with the top row being the GPs who did best in the second-previous fund. Now, how did they do in the current fund?
The far-right column is the headline result. PME is the Kaplan-Schoar Public Market Equivalent. Basically, it is the ratio of VC returns to the return you could have made by putting the same funds in the S&P 500 during the same period. Over 1 means beating the S&P. Yes, it assumes beta = 1, and so forth, but it’s a good rough-and-ready adjustment for risk and timing, which other measures do not make.
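The calculation behind the Kaplan-Schoar PME can be sketched in a few lines (my own minimal version; the cash flows and index levels below are invented for illustration):

```python
def ks_pme(cashflows, market_index):
    """Kaplan-Schoar PME: value fund cash flows at realized market
    (e.g. S&P 500 total return) growth rates.
    cashflows[t] < 0 are contributions, > 0 are distributions
    (with any final NAV counted as a distribution).
    market_index[t] is the index level at each cash-flow date."""
    final = market_index[-1]
    # Compound each flow forward to the final date at market returns
    dist = sum(cf * final / market_index[t]
               for t, cf in enumerate(cashflows) if cf > 0)
    contrib = sum(-cf * final / market_index[t]
                  for t, cf in enumerate(cashflows) if cf < 0)
    return dist / contrib

# Toy example: invest 100, the market doubles, the fund returns 250
print(ks_pme([-100, 0, 250], [100, 150, 200]))  # 250 / 200 = 1.25
```

A PME of 1.25 means the fund delivered 1.25 times what the same contributions would have earned in the index over the same dates.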
So, if you invest in the GPs who had the best second-previous fund, you get 1.28 times the market, and if you invest in the worst ones, 0.77 times the market!
The question of whether there are persistently good managers has dogged finance for 40 years. Indeed, this is the strongest persistence in performance I’ve seen.
I have to whine, of course. That’s my job (especially when talking to Steve about alternative investments!). As you look across the first row of the matrix, you see how many managers achieve which quartile in their subsequent investments. Of 90 managers in the top past quartile, 25 ended up in the first quartile, 25 in the second, 24 in the third, and 16 in the bottom. Persistence? Well, I guess only 16 rather than 25 in the bottom quartile is a bit of persistence, a bit less likely to end up in the cellar. But that’s not as sexy as a 1.28 PME vs. 0.77!
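The arithmetic, spelled out, using the counts quoted above against the 25% coin-flip benchmark:

```python
# Quartile destinations of the 90 GPs whose second-previous fund
# was top quartile (counts from the table discussed above)
counts = {"Q1": 25, "Q2": 25, "Q3": 24, "Q4": 16}
total = sum(counts.values())  # 90
for q, n in counts.items():
    print(f"{q}: {n / total:.1%}  (coin flip: 25.0%)")
```

Only the bottom-quartile share (17.8% vs. 25%) deviates much from chance; the top three rows are nearly a uniform distribution.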
What’s going on? As we discussed it, I suspect the answer is that the top-left quadrant has a few big winners in it. Then the average PME is high, even though the number of GPs landing in each subsequent quartile is roughly even. So the fact may be: GPs who did well in the second-previous round had a greater chance of a huge score.
VC returns are amazingly non-normal. It really is a lottery ticket with a small chance of a huge payoff. Most usual statistics are not well suited to this reality.
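A toy simulation of the point (all numbers invented for illustration, not from the paper): give a group of GPs mostly coin-flip outcomes but a small chance of a jackpot, and the group’s average PME looks impressive while the typical fund does not.

```python
import random
random.seed(0)

def simulate(n, p_jackpot, jackpot=8.0):
    """Mostly coin-flip PMEs around 1.0, with a small chance of a huge win.
    Returns (mean PME, median PME) across n hypothetical funds."""
    pmes = [jackpot if random.random() < p_jackpot
            else random.uniform(0.5, 1.5)
            for _ in range(n)]
    return sum(pmes) / n, sorted(pmes)[n // 2]

mean_pme, median_pme = simulate(10_000, p_jackpot=0.05)
print(f"mean PME {mean_pme:.2f}, median PME {median_pme:.2f}")
```

The mean comes out well above 1 while the median stays near 1: a few lottery tickets carry the average, which is exactly why mean-based statistics flatter such a distribution.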
OK, it’s easy to whine about t-statistics, and about the fact that the most recent performance does not show this persistence. But it’s an interesting fact.
However, it’s also interesting to me to reflect on the debate between academics and the standard practitioner view. From an academic point of view, these point estimates are indeed dramatic persistence. I suspect that from the practitioner view, the same facts are a cold shower of coin-flipping random walk. That well-established GPs in the top quartile repeat that performance so seldom, even if they do it more often than pure chance suggests, is pretty shocking relative to standard industry views.