I will second Bev's comments. I read an article a few years ago that summed up the problem nicely. In short, it said that years ago, data were scarce, so there was really only one way to analyze them (e.g., two groups of unpaired sample data called for a two-sample t-test). You could botch a few things, but selecting the analysis was not typically one of them.
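To make that "one obvious analysis" case concrete, here is a minimal sketch in Python using SciPy. The data values are invented for illustration, and the Welch variant (equal_var=False) is used so the equal-variance assumption isn't baked in:

```python
# Two groups of unpaired measurements: the classic case where the
# choice of analysis is obvious (two-sample t-test).
# Data values are made up for illustration.
from scipy import stats

group_a = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3]
group_b = [12.9, 13.2, 12.7, 13.0, 13.4, 12.8]

# Welch's variant avoids assuming the two groups share a variance.
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```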
In the big data world, there are a multitude of ways to potentially analyze the data. While some are obviously wrong to the trained analyst, they are still used by the untrained analyst. Others may even seem correct to the trained analyst, but are incorrect, not because they are unsuited to the data, but because they answer a different question than the one asked (e.g., ANOVA tests whether any group means differ from each other, while ANOM tests whether any group mean differs from the grand mean).
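A hedged sketch of that "different question" point, with invented data: ANOVA produces one overall F-test across the groups, while an ANOM-style check compares each group mean to the grand mean against decision limits. Exact ANOM uses dedicated critical-value tables; since common libraries don't ship an ANOM routine, a Bonferroni-adjusted t critical value is used below as a conservative stand-in.

```python
# Contrast: ANOVA ("do any group means differ from each other?")
# vs. an ANOM-style check ("does any group mean differ from the
# grand mean?"). Data values are made up for illustration.
import numpy as np
from scipy import stats

groups = {
    "A": np.array([10.2, 10.5, 9.9, 10.1, 10.4]),
    "B": np.array([10.8, 11.0, 10.7, 11.2, 10.9]),
    "C": np.array([10.3, 10.6, 10.2, 10.5, 10.4]),
}

# ANOVA: a single F-test over all groups at once.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.3f}, p = {p_value:.4f}")

# ANOM-style check, assuming a balanced design (equal group sizes).
k = len(groups)
n = len(next(iter(groups.values())))
df = k * (n - 1)
grand_mean = np.concatenate(list(groups.values())).mean()

# Pooled within-group standard deviation (balanced design).
s = np.sqrt(sum(g.var(ddof=1) for g in groups.values()) / k)
# Standard error of (group mean - grand mean) in a balanced design.
se = s * np.sqrt((k - 1) / (k * n))
# Bonferroni-adjusted t value as a conservative stand-in for the
# exact ANOM critical value h(alpha, k, df).
h = stats.t.ppf(1 - 0.05 / (2 * k), df)

for name, g in groups.items():
    flag = "outside" if abs(g.mean() - grand_mean) > h * se else "within"
    print(f"group {name}: mean {g.mean():.2f} is {flag} the decision limits")
```

The two procedures can disagree: a group sitting right at the grand mean contributes little to an ANOM signal but can still drive a significant ANOVA if the other groups straddle it, which is exactly why picking between them means picking which question you are answering.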
In other words, there are a lot more ways to make a mistake with big data.