Social, Trends & Social Trends

Last week, a friend of mine told me he was thinking about beginning to invest in the stock market with a particular eye on Tesla Motors and wondered if I had any thoughts to share. I do. Not as much about $TSLA as about part-time stock pickers and the emerging resources available to them.

More and more people are getting involved in managing their investments. And, when they do, a simple low-cost, robo-advised, target-date fund built out of ETFs isn’t always the best answer. For investors who have a good grasp of their overall financial situation, investing in particular companies can be both fun and profitable – particularly if done with the right guides.

One guide I’d recommend, especially for those with less experience and only a few hours each week to spare, is Chris Camillo’s “Laughing at Wall Street.” Chris does a great job of describing how to use social data to generate and vet ideas, encouraging investors to use their personal experiences and observations but then filter and manage those possibilities in a disciplined manner before making the investment.

Another investor guru to turn to is Howard Lindzon, whose book “The Wallstrip Edge” is an easy-to-read introduction to the idea of “trend” or momentum investing. Howard articulates a more structured approach to identifying potential investments and then encourages investors to use social networks to learn as they go along. (He is such a believer in using a financial social network that he created one: StockTwits. We’re regular participants there as @wealthofinfo.)

Whether you use social networks to generate ideas for yourself or to trade ideas with others, it’s clear these sources will only get more numerous and be connected to even more aspects of our financial lives, as Jeff Jarvis makes clear in his thoughtful book “Public Parts.” Jarvis writes about the opportunity the internet creates for everyone to prove their expertise – and how those opportunities are growing all the time.

We’re at an interesting point in the evolution of social data. It started largely as data derived from communities built online (Facebook, Twitter). But as our social norms for data sharing shift and the lines between our online and offline existences get blurred (Jarvis covers this well), social data will begin to automatically encompass “real world” activities we never would have shared just a few years ago. Like driving. And eating. And trading stocks. Personally, I think soon we’ll find more and more experts who don’t even know they’re experts. It's one of the things we do at Array.

That’s the part of social data that has attracted us for years: when it isn’t just about online communities but is about society as a whole. That data is fun (and, yes, profitable) for us to work with, and more is coming all the time.

I have the good fortune to be on a panel at SXSW Interactive this year with Chris and Howard, as well as Leigh Drogen of Estimize (another online investment community with a big following). I look forward to talking about this with them – and many other interested people, from part-time stock pickers to full-time hedge fund managers – down in Austin. See you there.

Up with People (Revisited)

 

The Wall Street Journal recently outlined a trend of increased funding for startups that provide "automated online financial guidance." The piece envisioned a clash of machines vs. humans, with new technology poised to revolutionize the industry. But as is usually the case (see our initial "Up with People" post from a couple of years ago), these battle lines are a bit too simple.

In truth, robo advisors aren't all that "robo." Nor is the technology particularly new. And the biggest opportunity to revolutionize the industry resides with a different group.

If you check the original research used by the WSJ, you'll see it refers to "automated and semi-automated services" (emphasis added). You'll also see that a significant fraction of the venture capital being invested is in businesses that feature human interaction in their value proposition. Meanwhile, traditional advisors are making plans to offer online guidance. So expect to see a variety of business models along a continuum, not just two polar opposites.

At that point, the difference between companies will stem more from how the different features are bundled and branded, and less from the quality of the features. Why? Because the underlying tools are software packages (risk assessment, asset allocation, etc.) that aren't being changed so much as made transparent. Consider similar instances of online disruption to travel agents, tax preparers and stock brokers -- industries where an unnecessarily complex system forced people to pay a premium for access. When someone came along and improved the interface, opened up the system and had people do their own data entry, most of that cost went away.

In other words, robo advisor startups gain market share in the short run precisely because most people paying for financial guidance today are already using robo advisors -- they're simply also paying someone to do the data entry.

Robo advisors may have a near-term cost advantage, but cost-cutting is a hard game to win for a long period of time. Eventually meaningful company growth will require the creation of additional value for the customer. Whichever of today's startups are still standing will find themselves in conference rooms talking about deepening their customer relationships, quite likely adding humans back to the payroll and looking more like their initial adversaries.

Unfortunately, one significant source of value for customers is absent from the conversation: improved investment returns. People haven't been as focused on their rates of return in the past five years due to a steadily rising market and an industry focus on low-cost funds, but creating top-line value for customers will remain the great differentiator.

It's true that capturing alpha is hard (much harder than automating processes and cutting costs). But it isn't impossible. And in most places where it is being done, it involves people: not just at hedge funds (Bridgewater in particular comes to mind) but among everyday investors. Communities like Estimize and StockTwits are evolving a concept of collaborative finance in which tens of thousands of participants co-create forecasts and strategies that increase investment returns.

Interestingly, there is one group that could accelerate the expansion of collaborative finance and seize the lead in creating top-line value for customers, a group that has already scaled to millions of customers: the leading players of the previous investment revolution. Most of those surviving revolutionaries have become much less creative after their early success and have seen years of relatively flat performance (Schwab, Ameritrade, Fidelity and E*TRADE foremost among them). But I believe it is possible that one or more of them will soon head this direction and begin to help its customers help each other.

They don't lack examples to follow (thanks to eToro, Covestor and others) or a willing client base (due to changes in cultural norms). They don't lack the resources (the initial build and rollout would cost less than 1% of what they spend on advertising in a year). And if they happen to lack expertise in collaborative knowledge platforms, they can call us. The only question is whether they lack the motivation; if so, perhaps the new wave of robo advisors will provide that.

Mr. Buffett Won't be Attending

Millions of times each day, people are tweeting about investments.  In an earlier post, we shared a list of the stocks that get the most attention.  What about the ones that get the least?

There are a couple of ways to think about that, as our team details on a new "side project" blog dedicated to social data matters.  Micro-blogging platforms have evolved to such an extent that there is now a demonstrable correlation between trading and tweeting.  Which means we can watch for divergence between what happens with a stock in the marketplace and what happens on Twitter and StockTwits.  For example, are people transacting in a stock a lot more (or less) than they are tweeting about it?
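
Here's a rough sketch (in Python, with made-up tickers and numbers, not our actual data or methodology) of what such a divergence screen might look like: compare each stock's share of the tweet stream to its share of trading volume and flag the outliers.

```python
# Hypothetical sketch: flag stocks whose share of tweet volume diverges
# from their share of trading volume. Tickers and numbers are made up.
tweets = {"AAPL": 120_000, "TSLA": 90_000, "BRK.B": 1_500}   # tweet counts
volume = {"AAPL": 55e9, "TSLA": 40e9, "BRK.B": 9e9}          # dollar volume traded

total_tweets = sum(tweets.values())
total_volume = sum(volume.values())

for ticker in tweets:
    tweet_share = tweets[ticker] / total_tweets
    volume_share = volume[ticker] / total_volume
    ratio = tweet_share / volume_share   # < 1: traded more than it is talked about
    print(f"{ticker}: tweeted-to-traded ratio = {ratio:.2f}")
```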

The answer is yes.  And in 2014, one stock stood apart from all the rest for being tweeted about much less than it was traded.  In fact, its ratio was more than 13 TIMES worse than average. That stock?  Warren Buffett's Berkshire Hathaway.

Your investment style and time horizon will likely determine whether that is a good thing or a bad thing -- but it's a real thing.  So if you're interested in either social data or financial markets or (like us) both, it's worth reading the original research.

Standard and Pareto's 500

Earlier this week, the folks over at Benzinga asked us which stocks were tweeted about the most in 2014; the answer won't surprise you, but the shape of the overall distribution might.  In social data, as in so many other arenas, the Pareto principle of an approximate "80-20" dynamic holds true: in 2014, 66% of Twitter "cashtagged" traffic* related to the most popular 20% of S&P 500 stocks. 

$AAPL alone accounted for 11% of the tweets -- more than the least popular 200 stocks combined.  On a graph that shows the % of traffic associated with each stock, the head of the curve is so narrow and the tail is so long it can be hard to tell them apart from the axes.
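
For the curious, here's a minimal sketch of how the concentration number can be computed from a table of per-ticker tweet counts (the counts below are toy values, not our data set):

```python
# Toy sketch: how much of the cashtag traffic do the top 20% of tickers carry?
counts = {"AAPL": 110, "FB": 40, "AMZN": 30, "GOOG": 25, "MSFT": 20,
          "NFLX": 15, "GS": 12, "YHOO": 11, "BAC": 10, "GM": 9}   # toy values

ranked = sorted(counts.values(), reverse=True)
total = sum(ranked)
top_20pct = ranked[: max(1, len(ranked) // 5)]   # most popular fifth of tickers

print(f"Top 20% of tickers carry {sum(top_20pct) / total:.0%} of the tweets")
print(f"Most popular ticker alone carries {ranked[0] / total:.0%}")
```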

That Apple together with Facebook, Amazon, Google and Microsoft account for more than 20% of the tweets is pretty remarkable (rounding out the top 10 were Netflix, Goldman Sachs, Yahoo, Bank of America and General Motors -- Tesla and Herbalife jump in if you don't limit it to the S&P 500). Of course, there is plenty of data to be mined out of that long tail with more and more people joining the conversation -- we spend much of our time down there.  We'll tell you more about that in our next post.

* "Cashtag" is popular way to specify a particular financial instrument; it is generally understood, for example, that "$IBM" refers to IBM publicly traded stock, whereas "IBM" refers to the company itself.  The practice originated on StockTwits and is now common on Twitter.

Note on methods: For a variety of reasons, we restricted this analysis to tweets with only one cashtag, although most of the analysis does not vary much if you include tweets with multiple cashtags. We did exclude a few tickers that changed during the course of the year.  And we used Twitter as our data source.

Everything is Better with Bacon

Good and evil, art and science, nature and nurture – people seem to thrive on false dichotomies.  Of course the concepts in these pairs differ; it’s just that they rarely, if ever, occur in a pure state.  Nevertheless, we get a constant stream of oversimplified stories such as humans beating computers in the stock market or big data analysts abandoning causation in their pursuit of correlation.  Trust me: the humans who are “beating” computers are actually using computers themselves, just as sure as any big data analysts who set aside theories of causation won’t be big data analysts very long.

Both fundamental and technical traders (another false dichotomy, since anyone who isn’t at least a little bit of both is eventually doomed) rely heavily on computers; it would be virtually impossible to do otherwise these days, and that’s fine: computers are great at crunching numbers.  In the case of big data, the end result of all that crunching can be some seemingly relevant correlations.  But it’s as pointless to look at a correlation without a notion of causation as it would be to trade stocks without using computers.  Fortunately, one thing humans still do far better than machines is to create stories.

To be sure, sometimes our reflexive need to wrap data in a narrative does us a disservice: it leads us to see patterns that aren’t there as well as to overlook patterns that are (Kahneman, Mandelbrot and Taleb are particularly helpful on this point if you haven’t read them).  And, of course, it is this very tendency that gives rise to false dichotomies (since they make stories easier to construct and share).  But I think we’re at the peak of overcorrection when we try to get people out of the process.  Without the people we don’t have a story, without the story we don’t have a context, and without the context those data don’t have useful meaning.

This need for meaning isn’t an existential or spiritual matter; it’s a purely practical one.  It tells you which data are important and how those data need to be captured.  Our company recently diagnosed a problem for a client that had discovered a historical correlation in their data that was failing to translate into ongoing business results.  When we pressed them to tell us the story they thought they were “hearing” from the data and then tried to retell that story using the job sequence and data fields in their system, we identified two problems that were interacting – one of which was a simple choice about how they created their time code field.  It wasn’t “wrong” in any objective sense, but it was completely wrong in the context of the story.

We know from the centuries of progress tied to the scientific method (somewhat formalized and socialized by Roger Bacon in the 13th century but nascent long before even then) that having a tentative “story” in mind can help you test data; if you form your story only after seeing the data, then you need to get more data before you can conclude anything.  And the process is never really over.  People who really understand the value of working hypotheses know they are always working on new ones.  

It's Getting Crowded

Gather several geese together and you get a “gaggle.”  An assembly of rhinos is a “crash.” And a collection of giraffes makes for a “tower” (really).  So what do you call a bunch of self-directed bullish and bearish traders?  According to a Bloomberg piece last week on social investing, they’re likely to be “copycats.”

The article outlined the remarkable growth of networks such as eToro and ZuluTrade and detailed the spread of mirror-trading functionality (by which participants can copy each other’s trades).  According to one study referenced in the article, one in six brokers now offers clients the ability to duplicate other users’ transactions.  One in six.

The trend toward more participation and transparency has been growing for several years in the brokerage space, with a variety of launches and “pivots” leading not just to trade sharing and duplication strategies but also to features like information and analysis aggregation (e.g., Seeking Alpha, Motley Fool) or advisor curation (Covestor) or all of the above (StockTwits).

We love the trend.  But we’re on the verge of a “too much of a good thing” problem.  These companies combine for several million customers, and there are dozens of other efforts – including some that help traders duplicate more-established investors like Warren Buffett and Carl Icahn.  Even if only 1 in 100 participants is making money and sharing trades, potential advisors already outnumber investment instruments by an order of magnitude.

In short, we’ve traded one research problem for another, and picking a good investment manager isn’t THAT much easier than picking a good investment.  Look around and you’ll see the places that downplayed stock selection tools when they began to let their clients copy other traders are now bringing back the same kind of tools – to filter through advisors.  (Covestor even offers advisors that help you pick advisors.)  If you don’t change your mindset, solving a problem with crowdsourcing just leads to a problem of crowdsorting.

So what would a change in mindset look like in this space?  We suggest packs over picks.  That is, rather than putting all of your energy into identifying one or two great traders to copy, spend half of it sorting good traders from bad and then the other half following what more of the good investors are doing.  We can confirm after several years’ experience that this is much easier said than done, but it is still the right direction to head. 
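
To make “packs over picks” a little more concrete, here is a deliberately simplified sketch (hypothetical traders and thresholds; our actual process involves much more than a hit rate) of sorting the good from the bad and then following the pack:

```python
# Hypothetical sketch of "packs over picks": filter traders by track record,
# then act only when a broad majority of the surviving pack agrees.
traders = [
    {"name": "A", "hit_rate": 0.58, "signal": +1},   # +1 bullish, -1 bearish
    {"name": "B", "hit_rate": 0.44, "signal": +1},
    {"name": "C", "hit_rate": 0.61, "signal": -1},
    {"name": "D", "hit_rate": 0.55, "signal": +1},
]

pack = [t for t in traders if t["hit_rate"] > 0.5]        # sort good from bad
consensus = sum(t["signal"] for t in pack) / len(pack)    # follow the pack

if consensus > 0.5:
    print("Pack is bullish")
elif consensus < -0.5:
    print("Pack is bearish")
else:
    print("No clear pack signal")
```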

Most people have heard the notion that foxes know many little things while the hedgehog knows one big thing.  We don’t think anyone is all fox or all hedgehog (except for, you know, a fox and a hedgehog).  We believe we can generate returns beyond any single participant’s ability if we can just find the bit of hedgehog in most investors (and bloggers and analysts).  And do you know what you call it when you get a bunch of hedgehogs together?  An Array.

103% of Spreadsheets have Errors

The appearance of the words "spreadsheet" and "comedian" together in political headlines suggests something is changing about the way policy gets created.  We welcome an increasing role for data in the debate -- if it is used well.  Unfortunately, right now, that seems to be a big "if."

For more than a month, politicians, economists and other macroeconomic policy-talkers (which is not to say policy-writers) have been batting back and forth a three-year-old research paper so brief that even with page breaks, generous margins, seven charts and a couple of data tables it is barely two dozen pages long.  The research had previously served as a key reference for many deficit hawks until a grad student brought to light a series of errors in the work that indicated not only a seeming lack of rigor by the authors but also a systemic sloppiness which enabled the paper's questionable "insights" to influence both press and politicians.

Most of the Twitter-level attention has understandably been directed to the easiest to comprehend (and most snicker-worthy) of the paper's mistakes: an Excel formula which failed to reference all of the applicable cells.  While we don't know exactly what the original spreadsheet looked like, there is no question that the re-creation of the math makes the error seem obvious.  (You can see that graphic and read a summary of the critique, or read the original in its entirety.)  Soon we were treated to articles about the prevalence of spreadsheet errors (including one which cited a study suggesting that 88% of spreadsheets have errors -- although you have to wonder who checked the spreadsheet about the spreadsheets).  In an era when more data is being both captured and used by and on all of us, clearly even small spreadsheet errors are going to have big impacts.

As a variety of commentators have pointed out, several other errors and potential errors exist in the austerity study.  The authors used unconventional methods for weighting the data (and failed to explain why).  They selectively excluded some of the data (and failed to explain that, too).  They provided no compelling reason for the initial parameters they used for the study (and have subsequently gone back to work with a larger data set).  Perhaps they simply should have revisited the decision to publish altogether.  As Matthew O'Brien suggests, the boring reality is that the relationship between public debt and growth isn't clear, and trying to say anything dispositive about debt and growth more broadly is near-impossible because there simply isn't enough data.

In short, the Excel error may be the most widely-discussed weakness of the paper, but it is by far the least important in the long run (and, as the math turns out, the least impactful as well).  Sure, you need proper analytical execution.  But you need the right approach in front of that.  And the right parameters in front of that.  Most importantly, however, you need to ensure you're working on a problem that can be solved with data in the first place.  Yes, many policies can and should be guided by thoughtful data analysis.  But the most important skill for using any tool is an awareness of its proper uses.  And not all problems can be solved with spreadsheets -- even error-free ones.

Reporting on Hedge Funds is Terrible

While we aren’t generally in the business of highlighting misleading pieces on the web (talk about a job that would never end), we ARE in the business of using blog entries and tweets and many other kinds of data to generate actionable equity research.  That means in order to differentiate between good and bad investment opportunities, we have to be able to differentiate between good and bad blog entries. 

Last week, Matthew Yglesias (a business writer at Slate.com) wrote that “Investing in Hedge Funds is Terrible.”  That’s the actual headline – maybe he can blame that on an editor, but if an editor contributed a headline they really ought to have also contributed, you know, editing. 

As a launching point, he uses Josh Brown’s comments on the tremendous difficulty of identifying the best emerging managers and the near-impossibility of getting proven managers to let you into their fund.  Brown makes a fair and essential point: it’s hard to isolate signal from noise when looking at fund performance, and by the time you have it sorted out you will likely need tens or hundreds of millions of dollars to gain the attention of a winner.

Unfortunately, Yglesias goes on to say two things that don’t follow at all:

  • “So in fee-adjusted terms, you get a negative valuation.”  Um, no.  Whether you get a negative return when adjusting for fees depends on two things: the return and the fees.  Since both returns and fees vary by fund, this is impossible to say in general (see the quick fee-math sketch after this list).  He might want to suggest that the industry averages a negative valuation, but that’s about as fair and helpful as saying that Slate.com averages useless reporting.
  • The fact that “the guy deciding who to invest with isn’t even investing his own money only makes the field more open for ripoffs.”  That might be the case.  But this is an issue with the advisors, not with hedge funds.  If anything, such agency problems [http://en.wikipedia.org/wiki/Principal%E2%80%93agent_problem] seem like a reason to be sure “the guy deciding who to invest with” IS investing his own money as well.  Guess where that happens?  Hedge funds.
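
To make the fee point from the first bullet concrete, here is a simplified sketch of the arithmetic using a conventional “2 and 20” structure (no hurdle or high-water mark); the gross returns are made-up numbers, not industry figures.

```python
# Hypothetical fee math: whether the net return is negative depends on both
# the gross return and the fee structure, so no blanket claim holds.
def net_return(gross, mgmt_fee=0.02, perf_fee=0.20):
    """Net return after a simplified '2 and 20' fee structure."""
    after_mgmt = gross - mgmt_fee
    performance_cut = perf_fee * max(after_mgmt, 0.0)   # no performance fee on losses
    return after_mgmt - performance_cut

for gross in (0.12, 0.04, -0.03):
    print(f"gross {gross:+.0%} -> net {net_return(gross):+.2%}")
```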

We aren’t here to defend hedge funds – some are good and some are bad, and some of the good ones are our clients.  We’re here to enhance the opportunities available to all investors by “sharing the wealth of information,” and we believe few things are as big an obstacle to getting rich as poor thinking. 

Making Hay

If you read this blog (and, well, you do), you probably weren’t surprised  by Nassim Taleb’s recent claim that rapidly increasing amounts of data can give rise to rapidly increasing amounts of bad analysis.

Taleb’s observation leans more to the mathematical than the behavioral.  In brief: greater numbers of variables per observation combined with greater numbers of observations give rise to more false correlations.  As much as we love the graph showing the nonlinearity of spurious correlations (heck, we like just saying “the nonlinearity of spurious correlations”), the point is far from novel.
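
The math is easy to see for yourself: feed pure noise into a correlation matrix and watch the count of “interesting” pairs climb as you add variables, since the number of pairs grows roughly quadratically. A quick simulation (the 0.2 cutoff is arbitrary):

```python
# Demonstration: on pure noise, the count of "impressive" pairwise correlations
# grows rapidly with the number of variables, even though none are real.
import numpy as np

rng = np.random.default_rng(0)
n_obs = 200

for n_vars in (10, 50, 200):
    data = rng.standard_normal((n_obs, n_vars))
    corr = np.corrcoef(data, rowvar=False)
    upper = np.triu_indices(n_vars, k=1)          # distinct variable pairs
    spurious = int(np.sum(np.abs(corr[upper]) > 0.2))
    print(f"{n_vars:>3} variables -> {spurious} spurious correlations above 0.2")
```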

More interesting is the implication that this is a problem.  Why should an increase in spurious correlations be an issue for companies working with such data?  If, in Taleb’s analogy, “the problem is that the needle comes in an increasingly larger haystack,” why is the increase in the hay-to-needle ratio anything other than a quantitative challenge – precisely the kind of problem that should pose no concern at all for the kinds of server farms giving rise to the correlations in the first place?

The answer stems from a methodological shift that has paralleled the increases in processing power and data availability.  One way to frame the issue would be to say a lot of “analysts” are confusing the necessary with the sufficient.  Or we could suggest they are confusing correlation and causation the way many people do.  But the most direct way to put it is to note that a lot of software-enabled “analysts” are just plain lazy.

Morphing data into information requires hands-on work and detailed knowledge of the data source to avoid GIGO pitfalls.  Shaping information into insight involves generating hypotheses that reflect an understanding of the data’s context and dynamics.  Filtering insight into knowledge demands testing holdout samples to confirm the hypotheses.  And turning knowledge into wisdom means repeating these steps forever.
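
Here’s a minimal sketch of that third step -- confirming an exploratory correlation on a holdout sample before promoting it to knowledge (the “signal” below is simulated, not real market data):

```python
# Hypothetical sketch: confirm an exploratory correlation on a holdout sample
# before treating it as knowledge.
import numpy as np

rng = np.random.default_rng(1)
signal = rng.standard_normal(1000)
returns = 0.1 * signal + rng.standard_normal(1000)   # weak but real relationship

train, holdout = slice(0, 500), slice(500, 1000)
r_train = np.corrcoef(signal[train], returns[train])[0, 1]
r_holdout = np.corrcoef(signal[holdout], returns[holdout])[0, 1]

print(f"exploratory correlation: {r_train:.3f}")
print(f"holdout correlation:     {r_holdout:.3f}")
# Only act on the idea if the holdout correlation holds up in sign and rough size.
```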

Correlation is a fine place to start analysis.  And it’s an important place to conclude it.  But there are many steps on the journey other than getting a correlation from Excel (or SAS or R).  If the rise in spurious correlations is a problem for companies working with such data, they probably just aren’t working very hard.  

Forecasting Weather, Markets and Everything Else

Splitting a week between the sun on the west coast and storms on the east coast provided a good reminder of the most important part of making a good forecast: choosing the topic and the timeframe.  If you are forecasting weather a week in advance, you want to predict for a day rather than a specific hour, and it's easier to do in San Diego than in New York. 

I was fortunate to be out in Coronado last week with 200+ members of the StockTwits community for a discussion of investing principles and possibilities when this point was made in a variety of ways.  Scott Ingraham shared his perspective on the past, present and future for internet startups in the real estate space.  JC Parets charted technical insights into futures and forex markets.  Bill Gurtin’s fixed income talk revealed what an expert and true lifelong student really looks like.  Committing to particular markets and approaches (with at least a tendency toward certain timeframes) is a great foundation for successful forecasting – which, after all, is the core of successful trading.

Since the person sitting on the other side of a transaction effectively turns most trades into a forecasting contest, the other factor to consider is the relative strength and weakness of your competition.  Josh Brown gave a clear and succinct example of this notion when discussing how to compete with high-frequency traders: low-frequency trading.  Other speakers had additional ideas on the road less taken.  Estimize is doing “wisdom of crowds” work that is going to make sell-side estimates a thing of the past.  Lucky Sort and ChartIQ are doing amazing things to put otherwise-unwieldy data sets into the hands of everyday traders.  There was enough material to do an entire conference on “the advent of unstructured data mining for alpha capture” (which was the title of a talk by Dan Mirkin of Trade Ideas), but we could have done an entire conference on at least ten other topics that were touched on, too.

Those of us working at Array were naturally energized to see smart and ambitious people in the alternative data and wisdom of crowds space (all much more likely to be collaborators than competitors, I believe).  And we embrace the idea of choosing both timeframe and topic carefully; we have, for example, been supporting a very successful fund for a while that emphasizes mid-range turnover of small and mid-cap equities.  But mostly right now we’re just excited to be part of a large, diverse and growing community of people working on a variety of issues in this space – one forecast at a time.

Up with People

The pendulum is doing what pendulums do: swing back.  (Yeah, I was surprised too, but it turns out the plural is NOT “pendula,” unless it’s used as an adjective in Latin.  And no, I can’t imagine how to use it as an adjective, but then again I don’t read Latin.)  While the pendulum is for the moment pointing toward common sense instead of algorithms and experts, we should celebrate a couple of headlines.

In “Why Data Will Never Replace Thinking,” Justin Fox notes that Nate Silver and Samuel Arbesman – two of the sharper minds decoding data in the press these days – are keen to balance out the “big data” noise by reminding us that choosing what data to look at and how to look at it are hardly simple or objective matters.  Fox makes the point that human judgment remains an essential part of the observe-hypothesize-test cycle whether we realize it or not, so perhaps it’s best to realize it.  Of course, we spend a lot of time on this at Array (we even have hypotheses about hypotheses), and we embrace having good old-fashioned human beings at the heart of the process as an advantage – particularly in situations where finding the local optimum in a known space might no longer be sufficient to succeed.

An even better headline may be Yahoo! Finance’s “Individual Investors Are Beating Pros at Their Own Game,” stemming from an interview in which Josh Brown of TheReformedBroker.com observed a trend: “asset allocation being done by the individual, and even by the financial advisor, in a way that sidesteps the mythology of this stock picker or that stock picker.”  According to Brown, last month “investors put $18 billion into US stock ETFs. That’s a net number.  $12 billion of that went right into the SPY [the SPDR S&P 500 ETF].  They’re not interested in stock picking, they’re not interested in active management; however, they recognize they cannot let this market run away without them….  The public is reacting, but they’re not interested in having their assets shepherded by the quote-unquote ‘smart money.’”

While the article doesn’t fully merit the headline (which we are nevertheless printing out, enlarging and putting up in the office), Brown is pointing to something big: SPY benefits from a combination of anxiety about retirement and distrust of equity experts.  In an environment of fear and loathing, SPY wins simply by default.   The good news is things are headed in the right direction:  the index will beat most of the stock pickers people are running from.  The bad news (aside from the waste and bubbles that inevitably accompany bumping 500 stocks by 10 bps in a month just because they happen to be in an index) is that this new ETF religion will become yet another obstacle to helping people meet their goals. 

We know it isn’t as simple as humans vs. machines or traders vs. algorithms.  Investing is a case where having the “best of both” is not just a nice theory but can actually be a reality: equity selection based on a broad set of objective inputs, customized for the investor and performed at a low cost can generate index-beating returns.  We know because we’re doing it.  And no matter which direction that pendulum is pointing, it’s always time for better returns.

P.S.  If you reached the end of this post and wondered why I haven’t more directly referenced Up with People, it’s because I’m still recovering from the 1982 Super Bowl halftime “extravaganza.”  On the other hand, if the title of this post didn’t bring up any memories, enjoy the show.

Network Affect

We don’t see it on the calendar, but it sure seems like someone declared this National Network Week.*  One of the country’s most thought-provoking writers just released a book on the vast potential of peer networks.  In the finance space, two of the premier investor networks announced significant new features – the StockTwits Social Sentiment Indicator and eToro’s Social Index – to unleash more of that potential, while a newer network completed beta and yet another announced its arrival.  And one of our favorite VCs mused about the strategic strengths and weaknesses of a cross-network utility – essentially, attempting to construct a new network by leveraging existing networks.

Part of the increasing interest in networks is likely due to economic and technological “supply” factors we’ve addressed in this blog before, and part may be due to sociological “demand” factors: a hunger for experiences offering trust and transparency on an Internet that is better known for volatility and anonymity.  Regardless of the cause, it’s important amidst the popular interest to keep in mind that all networks are not created equal; rather, networks have different strengths and weaknesses that stem from their varied physical and logical topologies, and ensuring alignment between a network’s design and the problem you’re trying to solve will make your life a lot easier.

A good first question to ask yourself is whether the end relationship you are looking for is most likely 1:1, n:n or n:1 – what may be termed a connection, a clique or a council.  One-to-one connections will generally be facilitated by a more centralized network (think of a job search or dating site that does initial matching – or at least allows you to put a number of filters on results).  Many-to-one advisory councils are better enabled by a more distributed approach that keeps inputs flat and open within basic processing and participation guidelines (prediction markets, Wikipedia).  Many-to-many cliques (such as a Meetup group) are generally less-defined in this regard and (perhaps consequently) can be difficult to maintain very long (as there seems to be a narrow sweet spot of scale that rewards such ambiguity).
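
If it helps to see the taxonomy spelled out, here is the paragraph above restated as a simple lookup in code (nothing here goes beyond what was just said):

```python
# Restating the paragraph in code: map the desired relationship shape to the
# network topology that tends to serve it best.
TOPOLOGY_BY_SHAPE = {
    "1:1": ("connection", "centralized network with matching and filtering (job or dating sites)"),
    "n:1": ("council", "distributed, flat inputs within light guidelines (prediction markets, Wikipedia)"),
    "n:n": ("clique", "loosely defined; works within a narrow sweet spot of scale (Meetup groups)"),
}

def recommend(shape: str) -> str:
    name, design = TOPOLOGY_BY_SHAPE[shape]
    return f"{shape} -> {name}: {design}"

for shape in ("1:1", "n:1", "n:n"):
    print(recommend(shape))
```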

Of course, even bigger (and more interesting) than the challenge posed by a variety of network structures is the fact that network boundaries are manifold, fluid and often unknown – including to the members themselves.  Even from the narrow perspective of an investing platform, you might trade forex more adventurously when your account is up and equities more conservatively when your account is down (and might be getting better at commodities in the meantime).  That is why Array dynamically reconstitutes networks while solving different problems rather than locking them in.

* It isn't National Network Week, but it IS International Talk Like a Pirate Day.  Have fun!

Disclaimer Disclaimer

It is printed in every prospectus.  It appears in annual reports.  And it has become standard language for due diligence decks.  Yet it has the potential to be among the most misleading statements you will ever read: past performance is no guarantee of future results.

Okay, sure, it is technically true.  But if you really want to protect investors from bad assumptions about forecasted returns, we would propose splitting this into two separate statements – each of which might be more useful than the one we’re all accustomed to reading.

There are NO guarantees of future results.  The best case in any consumer sector is that you’ll be out a lot of time and energy in an effort to recapture your money from the company that made the guarantee, and finance is worse than most consumer sectors.  Remember that just a few years ago, millions of managers and advisors being paid billions of dollars lost hundreds of billions of dollars, and for the most part not a single thing about those arrangements has changed.

Past performance is the best indicator of future results.  Just because there are no guarantees doesn’t mean future results are random.  They aren’t.  From hedge fund managers to self-directed investors, past performance when viewed properly offers valuable indicators.  What we see is obviously something in-between: not a perfect guarantee but not random, either.  The bottom past performers skew toward the bottom in the future, and the top past performers skew toward the top. 

We examined the performance of a group of 20,000 self-directed equity traders in detail.  If you were inclined to see the markets as random, you might have observed that someone in the best group of traders has an 80% chance of failing to be in that group next time.  That would be true, but it would miss the point.  We would observe instead that a top past performer is likely to be above average, a bottom past performer is likely to be below average, and when viewed as groups, top past performers are very likely to beat bottom past performers.
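
Here is a sketch of the group-level view we mean (simulated data, not the actual 20,000-trader sample; the skill and noise levels are assumptions): bucket traders by past performance and compare the buckets’ average future returns rather than asking whether any individual repeats.

```python
# Simulated sketch: persistence shows up at the group level even when individual
# rank is noisy. Skill is modeled as a small, stable component; luck dominates.
import numpy as np

rng = np.random.default_rng(2)
n = 20_000
skill = rng.standard_normal(n) * 0.02                 # persistent component
past = skill + rng.standard_normal(n) * 0.10          # past-period returns
future = skill + rng.standard_normal(n) * 0.10        # future-period returns

quintile = np.argsort(np.argsort(past)) * 5 // n      # 0 = worst past, 4 = best past
for q in range(5):
    print(f"past quintile {q}: average future return {future[quintile == q].mean():+.3%}")

top = quintile == 4
repeats = np.mean(np.argsort(np.argsort(future))[top] * 5 // n == 4)
print(f"chance a top-quintile trader repeats: {repeats:.0%}")
```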

Of course, it isn’t just any measures of past and future performance we’re talking about.  You have to know what you’re looking for, how to sift other things out efficiently and how to shape the insights you’ve mined.  (This is a big part of what Array helps partners do.)  But the material is there to build something valuable.  There are no guarantees of future results.  But based on Array’s very recent “past performance,” we think we see some pretty interesting indicators of a bright future for self-directed investors.

Building a Learning Network

It feels like a paradox: the best way to learn* is to put together a high quality network of mentors and information sources, but it’s hard to assess that quality until you’re well up the learning curve.  Fortunately, there are some reliable workarounds that can get you started.  We suggest four simple steps to begin building a high-value learning network before you’ve gathered any expertise.

First, as the proverb goes, “know thyself”: what your temperament is, what your goals are and what you are both willing and able to commit (in terms of time and energy as well as money).  The bad news is that you can’t possibly know everything about a subject, but the good news is you don’t need to.  When it comes to investing, there are many people trading many different ways in different markets, and at least 80% of that information is not a fit for your style, goals and resources.  Maybe 90%.

Second, understand the basic dividing lines by which people organize themselves so you can find a compatible culture.  For high school students, it’s the jocks and the nerds and the cool kids.  For traders, it’s the techies and the swingers and the fundamentalists and so on, often with a secondary grouping according to asset class.  As you get comfortable you can shape your own identity, but initially choosing the culture most aligned with your strengths, interests and needs will reduce the number of exploratory dead ends and accelerate you down productive avenues.

Third, find a community that embraces your preferred culture and that offers a high degree of transparency.  If you’re looking on the Internet, transparency means seeking out communities where people are inclined to use their real identities, explain their thought processes and share real transaction data while avoiding anonymous “thumbs up or down” chat rooms and prediction market sites.  In general, when talk is cheap, listening is an expensive way to learn; when talk is more expensive, listening is among the best investments you can make. 

And fourth, wherever you can, choose a community with a larger number of members (think thousands or tens of thousands).  At a minimum this helps ensure a reasonable flow of information, since a very small percentage of community members regularly contribute to the network’s content.  A larger community also increases the chances of encountering informative disagreements within established norms.  Most importantly, though, a larger community can enable you to move beyond anecdote and theory toward statistically valid observations.  And that move is a game-changer which impacts not only how you learn from your network but what you learn – as well as how much.

That should get you started.  As you begin to learn, you should eventually participate in the network and, of course, start making real investments.  Then don’t forget to go back and revisit what you initially thought about yourself, the different cultures and the various communities.  It’s true you can assemble a good learning network without much knowledge of the space, but building a great one involves using what you learn to go back and revise some of your initial ideas.  A network that is learning makes for the best learning network.

* “The best way to learn” here is assessed in terms of ROI or efficiency; the most effective way to learn anything in the long run is, of course, by jumping in.

Back to School

Learning is a multi-layered undertaking.  When people are learning something, they are usually presented with opportunities to learn many other lessons at the same time from the same experience.  That’s what makes learning so hard, and it’s also what makes learning so valuable.  In business terms, the most important knowledge has numerous barriers to entry but scales well.

When we talk about this at Array, we distinguish among four different aspects of a learning experience: the particular item learned, the larger hypotheses to which that item relates, the manner in which we learned and the sources of the information.  If it’s a good day (and if we’ve had our second cup of coffee), we capture lessons learned in each of these areas for every experience.  We believe that self-directed investors too can benefit by considering these different elements.

Individual observations provide the foundation for insights, but to create real value we need a hypothesis that can initiate or guide action – so each observation needs to be examined against current beliefs and then vice-versa.  Does the new information give rise to a new hypothesis or perhaps strengthen an existing one to the point it merits testing?  Or does the information instead call into question things you thought you knew, raising questions about earlier sources or thought processes?

At the same time, track how people learn best: reading a textbook or listening to an interview or back testing a strategy – all of it can be helpful, even if none of it is as rich in learning as actual investing.  People who document what they learn and take note of how they learned it not only get better but get better faster.  Awareness of a preferred learning style may be the second greatest potential source for accelerating education.

The most leverage in learning, though, resides in selecting the right sources of information: in short, teachers and classmates.  Of course, success in the long-run requires all of the above – plenty of good data, well-tested hypotheses, high self-awareness and the best team – but trusting the wrong people at the outset means everything they say and do might point you in the wrong direction [link], whereas surrounding yourself with people you can trust means you’ll have the right resources to learn effectively.  We’ll share our take on that core challenge in the next post.

Social Securities

People often work to define terminology as part of working to define an industry.  Witness the emergent nomenclature for community-oriented investment tools: “trader networks,” “social investing,” “collaborative trading” and so on – each with a focus on improving an individual’s investment returns by making connections among a larger group of people, but each with its own area of emphasis. 

Sometimes the “larger group of people” being connected is merely that: a larger group or “crowd.”  And sometimes members of the group are connected to each other in a “network” (or even a “community,” if the ties are close enough and the interests are sufficiently aligned).  The group might serve to generate new ideas (“crowdsourced” ) or feedback (“crowdsorted”)  or both (as with Wikipedia, Threadless, etc.), and that feedback can be solicited explicitly (Amazon compiles it, Netflix filters it) or derived implicitly (such as when you buy certain items at a store). 

These ideas hold a great deal of promise for the investment industry due to the extraordinary increases both in the number and variety of people participating and in the transaction data that exists, increases that stem from a series of mutually-reinforcing developments during the past several decades:

  • Regulatory revision and industry innovation brought new products and lower prices that significantly expanded the prospect pool.
  • Employment and economic changes have largely replaced employer-directed pension plans with self-directed retirement investments.
  • One demographic development brought a wave of baby boomer retirement capital, and another is now bringing their “DIY”-minded children.
  • Shifting social norms turned what used to be private conversations about finance into public postings of purchase and investment decisions.
  • Technological advances brought us electronic trading, made it accessible via the internet, and increased the ability to capture, store and share information on each trade.

The earlier changes brought us a rapid rise in the number of transactions (US participation in the stock market during the second half of the last century rocketed from 5% to 50% of the population but has been roughly flat since then), and the internet era brought us a rise in the fraction of those transactions that are captured, stored and shared.  Only a small group of mid-market brokers bridged that transition.  The question is whether the community-based tools being shaped are an indicator of another transition ahead and, if so, what it will take for brokers to make it to the other side.

When it comes to online investing, the “five SAFES” – Schwab, Ameritrade, Fidelity, E*Trade and Scottrade – have the biggest crowds, but none has made a meaningful move in the community direction.  And so now a varied collection of potential competitors is working to crack that elite online circle.  TradeKing has the longest-established network, and eToro may well have the biggest (almost certainly the fastest growing) network.  If you want to talk community, no one can touch StockTwits (though the long-term effects of Twitter’s cashtag swipe are unclear).  Motley Fool may have the best crowdsorted equity research available to the broad public.  Zecco has elements of all of the above, so their merger with TradeKing should be worth watching.  As will what Marissa Mayer does with Yahoo! Finance or what Rich Fairbank does with ShareBuilder.

So where is Array in all of this?

Array puts the “wisdom” in the “wisdom of crowds.”  Or, perhaps more accurately, we find the wisdom in the crowd.  Brokers (and platforms, as we must acknowledge now given recent Twitter and Facebook moves toward finance) have the crowds; our system finds the wisdom by turning securities comment and transaction databases into sources of crowdsorted research that can be provided back to the customer base.   Array will be partnering with brokers to help investors assemble their networks, pick their stocks or construct their portfolios – or all three.  However the system is deployed, we’re confident Array will help investors build a ton of value.  And we’re pretty sure that helping investors build value is the best way to ensure our partners thrive through whatever transitions lie ahead.