As the nation continues to prepare itself for the transition to a new and unpredictable presidential administration, a significant amount of attention is suddenly being paid to “fake news” and to the role that it may or may not have played in Donald Trump’s surprising rise to power. Of course, “fake news” is not in any way a new phenomenon—the creators and distributors of questionable content may have changed, particularly in the social media era, but misleading (or downright incorrect) news stories are as old as newspapers themselves.

Nevertheless, purely fabricated stories do seem to be gaining traction in recent years, particularly as distrust in the established order (including the mainstream media) grows and spreads. Most recently, a baseless rumor-turned-story about a Washington, D.C. pizzeria that supposedly harbored a Hillary Clinton-led child sex ring inspired a North Carolina man to storm the shop with an assault rifle, believing that he could “do some good” and save kids from a life of sex slavery. The man was arrested and now faces three serious charges, all because he was unable to distinguish plausible news items (and news sources) from implausible ones.

For those of us who spend our careers trying to divine the market’s next move, this dynamic might seem eerily familiar. Perhaps no industry is more swamped with predictions, projections, and dire warnings of impending doom, regardless of the underlying health of the economy or financial markets. Trying to determine which pundits (or sources) are reliable and which are not can often be a fool’s errand, in no small part because nearly every prediction contains at least one small kernel of truth.

While we all attempt to sift through the morass of questionable headlines and ill-intentioned industry players, it’s a useful exercise to consider how we got here, where we should be focusing our attention, and what signals we should be looking for as we try to make sense of this ever more confusing world.

Macroeconomics and the limited predictive value of statistics

As we enter the rapidly maturing era of “big data,” macroeconomics is perhaps the area of study that has become most reliant upon aggregated data and dense statistical models, all of which attempt to glean meaningful insights about human behavior. Our analytical tools have never been more powerful, nor has our ability to collect and track massive amounts of data. And yet, the more we study and project, the less we seem to understand about the way the world actually works.

Indeed, examples of confusion and disagreement are practically everywhere. In an influential 2010 research paper, Harvard economists Carmen Reinhart and Kenneth Rogoff found that economic growth slowed significantly in economies where public debt exceeded 90% of GDP. Their findings added fuel to the smoldering fire of the nascent Tea Party movement, and a wave of fiscal austerity measures soon took hold in nations across the world. Unfortunately, Reinhart and Rogoff’s conclusions were soon challenged, as revelations about basic Excel spreadsheet errors cast doubt on the authors’ credibility (and motives).

Just a few years later, the French economist Thomas Piketty released an international best-selling book, “Capital in the Twenty-First Century,” which presented compelling evidence that global income (and wealth) inequality has risen dramatically in recent decades, to the point that it may now threaten future economic growth (and political stability). Like Reinhart and Rogoff before him, though, Piketty soon found himself the target of widespread criticism of both his methods and his conclusions.

In both cases, there was no ultimate resolution of which side was definitively “right” or “wrong”—both debates came down to a matter of how to interpret certain data sets, and which subsets of that data deserved to carry more (or less) weight. In particular, the treatment of certain “outliers” came into the spotlight, as it often does in cases of statistical analysis with limited data sets.
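
To see why a single outlier matters so much in a small sample, consider a toy sketch in Python. The figures below are invented for illustration only; they are not drawn from either study.

```python
import numpy as np

# Hypothetical growth rates (%) for ten economies; the last entry is an
# invented outlier, not a real data point.
growth = np.array([2.1, 1.8, 2.5, 3.0, 1.2, 2.7, 1.9, 2.2, 2.4, -7.5])

print(f"Mean growth, outlier included: {growth.mean():.2f}%")      # 1.23%
print(f"Mean growth, outlier dropped:  {growth[:-1].mean():.2f}%")  # 2.20%
```

Whether that last observation belongs in the sample at all is precisely the kind of judgment call that fueled both disputes.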

And in the absence of pure “truth,” just about anything can start to seem plausible—hence the rise of “fake news.” As President Obama himself has noted, in our current media environment, “everything is true and nothing is true”—“truth” can be thought of not as a black-or-white issue, but as a point on a spectrum from “mostly true” to “mostly false” (the Snopes-ification of the world, perhaps).

The lesson for the casual observer should ultimately be that statistical analysis, while useful, is far from a panacea. Statistical analysis is often at least as much an art as a science, and presenting statistical findings as “settled fact” is therefore often an irresponsible act. Statistics can illuminate, and they can be highly suggestive of underlying effects and/or connections, but they rarely represent definitive “proof” of anything.

In spite of—or perhaps because of—this underlying dynamic, every time a statistical result is presented as fact and subsequently challenged (or disproven), public faith in those who report and study these statistics erodes. We in the general public want our “experts” to provide us with definitive answers, and we also want them to be “right” an overwhelming majority of the time. If they are not, we discount their opinions, and we often start to question their abilities or their motives.

Sometimes, this discounting of expert analysis is richly deserved. “Statistical significance” is a problematic term, and its application is similarly fraught with errors. As data sets grow larger and more complex, spurious correlations become almost inevitable, and opportunities for underhanded methods like “p-hacking” (running many tests and reporting only those that happen to clear the significance threshold) also become more numerous.
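
A minimal sketch of the underlying multiple-comparisons problem, assuming only NumPy and SciPy are available: correlate a purely random “outcome” against 100 purely random “predictors,” and the conventional p < 0.05 threshold will flag several of them as “significant” by chance alone.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Pure noise: one "outcome" and 100 unrelated "predictors", 50 observations each.
n_obs, n_predictors = 50, 100
outcome = rng.standard_normal(n_obs)
predictors = rng.standard_normal((n_predictors, n_obs))

# Count the predictors whose correlation with the outcome clears p < 0.05.
# With 100 independent tests, roughly five false positives are expected.
false_positives = sum(
    1 for x in predictors if pearsonr(x, outcome)[1] < 0.05
)
print(f"'Significant' correlations found in pure noise: {false_positives}")
```

A p-hacker simply reports the handful of “hits” and quietly discards the rest.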

That discounting is precisely what leads to the embrace of “fake news”—if the so-called “experts” are so often wrong, even when their findings are reported by reputable publications, why should their words carry any more weight than the hearsay of our neighbors, friends, or internet strangers?

Implications for the future

Ensconced as we are in the era of “big data,” wherein seemingly every news article is accompanied by a chart, a massive infographic, or some other sort of statistical analysis, our ability to assess the quality (or veracity) of these statistical studies has never been more important. Simply put, just because we now have access to significantly more data analysis does not necessarily mean that we have access to significantly better analysis.

On the contrary, the fact that so many people are now capable of producing a pretty-looking chart with a few clicks in an Excel spreadsheet makes it more likely than ever that we will see meaningless statistical drivel.

Just as the barriers to publication have never been lower (and we are thus inundated with information and news sources of questionable reliability), the barriers to entry for statistical analysis have similarly fallen. Armies of amateur data analysts are now let loose upon the world, and we can no longer rely upon mainstream media sites to do our filtering for us (think Nate Silver is infallible? Think again).

Make no mistake: data analysis (and examining the data analysis of others) is not for the faint of heart. Good statistical analysis requires truly good data, and perfectly clean and rigorous data sets remain few and far between (again, “more” data is not necessarily “better” data—consider the plight of nutritional studies if you require further evidence). Given these noisy and often imperfect data sets, some level of simplification—or, at minimum, “slicing and dicing” of the data—is generally essential in order to draw any conclusions whatsoever.

Unfortunately, sometimes the steps that we take toward simplification can omit the very information that is the most essential to our ultimate understanding of the world around us. In short, our simplifications can sometimes obscure as much as they reveal.
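
One classic way this happens is Simpson’s paradox, where pooling subgroups reverses the very trend that holds within each of them. A small sketch, with invented numbers:

```python
import numpy as np

# Within each (hypothetical) group, y rises with x at the same rate...
x_a = np.array([1.0, 2.0, 3.0]);  y_a = x_a + 10.0   # group A
x_b = np.array([8.0, 9.0, 10.0]); y_b = x_b - 10.0   # group B

x_all = np.concatenate([x_a, x_b])
y_all = np.concatenate([y_a, y_b])

# np.polyfit(x, y, 1) returns [slope, intercept] of a least-squares line.
print("Group A slope:", np.polyfit(x_a, y_a, 1)[0])      #  1.0
print("Group B slope:", np.polyfit(x_b, y_b, 1)[0])      #  1.0
print("Pooled slope: ", np.polyfit(x_all, y_all, 1)[0])  # about -2.0
```

Collapse the groups (“simplify”), and the pooled trend line points in the opposite direction from every group it contains.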

To bring things back to markets, international economies (and financial markets) are incredibly complex systems with an immense number of interrelated variables in play. With as much data about financial markets as we now have, there are bound to be a wide variety of false signals, shaky correlations, and other dynamics that look meaningful but are not. Any time you hear anyone trying to give a simple answer to a complicated question, you should therefore be immediately suspicious.
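
To see how cheaply such false signals can be manufactured, consider two independent random walks standing in for two unrelated price series (a toy sketch, not a market model):

```python
import numpy as np

rng = np.random.default_rng(42)

# Two independent random walks: cumulative sums of unrelated noise.
walk_1 = rng.standard_normal(500).cumsum()
walk_2 = rng.standard_normal(500).cumsum()

# Trending series like these routinely show large correlations by
# construction, despite having zero true relationship.
corr = np.corrcoef(walk_1, walk_2)[0, 1]
print(f"Correlation between two unrelated random walks: {corr:.2f}")
```

Rerun this with different seeds and the correlation swings wildly, which is precisely the point: the “relationship” was never there to begin with.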

Going forward, we as consumers of mass media (and of social media) need to know how to filter the good sources of information from the bad, and we need to educate ourselves about statistical methods so that we ourselves can separate meaningful science from hopeless drivel.

It’s become an unfortunate truth that mainstream media is arguably less reliable now than ever before, simply because the volume of news and analysis has increased and filtering has become more difficult. We as citizens and consumers of information need to be careful not to respond by blindly embracing anything that appears to be “outside the mainstream”—if mainstream media is “bad,” that does not necessarily mean that non-traditional media sources are better. More often than not, they are even worse, or at least less reliable.

Numbers (and the narratives that they create) have an extraordinary hold on the human imagination, and they shape behaviors in ways that we still might not fully appreciate. We all therefore need to recognize the power of numbers and statistics, understand how to properly interpret them, and know when the “data” that we’re consuming is or is not useful. Statistics are powerful, but limited; knowing and understanding their limitations is more important now than ever before.