The great thing about Social Media is that everyone can easily publish their thoughts and opinions. And the worst thing about Social Media is that everyone can easily publish their thoughts and opinions. Anyone can now conduct “research” and publish their “findings” on the internet. Publishing is ubiquitous, available to the masses. Yahoo!

Open disclaimer: I’m a research nerd. Absolutely love it and always have. I consider myself to be a perpetual student of “the game”. In my case, “the game” is complex B2B technology sales. At the risk of dating myself, we used to refer to it as good old-fashioned enterprise selling: selling high-priced software solutions to a group of stakeholders or a buying committee in a large enterprise company.

What is the intersection of my love for research and learning with the fact that anyone can publish anything at any time from anywhere? Simply put, there is a new class of lecturers out there on the internet and Social Media. Millions, if not tens of millions, of them. And it’s growing every day. They are the self-proclaimed thought leaders. They heap unwarranted praise on each other and refer to themselves as “rock stars”.

They tag scores of their rock star friends in every post and invite their opinions. They hashtag the shit out of Silicon Valley’s obnoxious acronym and buzzword vernacular. But that’s not what annoys me the most; that’s just mildly irritating. What actually pisses me off is that they are factually wrong. The insights, advice and research findings they are sharing are actually subjective opinions. They are unproven hypotheses. And more often than not, they are simply wrong.

This takes me back to my college days at Northeastern University in Boston, Massachusetts, where I learned Statistics I and Statistics II under the great leadership of Professor Chakraborty. I’m not quite sure how he did it, but Professor Chakraborty made learning Statistics fun. He used to tell us that for any research study’s findings to be considered statistically significant, they had to pass specific mathematical criteria.

For those inclined to geek out on this, it’s called statistical hypothesis testing. It’s defined as follows: statistical hypothesis testing is used to determine whether the result observed in a data set is statistically significant. The test produces a p-value: the probability of seeing a result at least as extreme as the one observed if random chance alone (the null hypothesis) were at work. By convention, a p-value of 5% or lower is considered statistically significant.
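
To make that concrete, here is a minimal sketch in Python (all numbers invented for illustration, not from any real study) of a two-proportion hypothesis test: two groups of calls with different win rates, and a p-value telling us how likely a gap at least that large would be under chance alone.

```python
# A minimal, illustrative hypothesis test (all numbers are hypothetical).
# H0: the two groups convert at the same rate; the p-value is the probability
# of seeing a gap at least this large if H0 were true.
from math import sqrt
from scipy.stats import norm

wins_a, calls_a = 62, 400   # group A: 15.5% success rate (made up)
wins_b, calls_b = 45, 400   # group B: 11.25% success rate (made up)

p_a, p_b = wins_a / calls_a, wins_b / calls_b
p_pooled = (wins_a + wins_b) / (calls_a + calls_b)

# Standard error of the difference under the null hypothesis
se = sqrt(p_pooled * (1 - p_pooled) * (1 / calls_a + 1 / calls_b))
z = (p_a - p_b) / se

# Two-sided p-value
p_value = 2 * norm.sf(abs(z))
print(f"z = {z:.2f}, p-value = {p_value:.3f}")
print("statistically significant at 5%" if p_value < 0.05 else "not significant")
```

Run it and the seemingly large relative gap (15.5% vs. 11.25%) does not clear the 5% bar, which is exactly the kind of nuance a flawed lecturer never mentions.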

The sample size and target population used in the research directly determine whether your findings are statistically significant or merely an untested hypothesis. That was a long-winded way of getting to the crux of what I mean when I refer to a “flawed lecturer”. A flawed lecturer does not understand the math behind statistics, or simply ignores it, and publishes their hypotheses as facts.

And whenever there is an underlying error in your math, any answer that comes out of that formula is inherently flawed. My admonition to everyone is to challenge the underlying math in any research findings published on the web or Social Media. Question the author and validate that they used proper math in determining the sample size and target population used to test their hypothesis.
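
If you want a concrete way to challenge that math, the textbook back-of-the-envelope check for a survey estimating a proportion is n = z² · p(1−p) / e². Here is a quick sketch (my own illustration, assuming a simple random sample, which most Social Media “studies” are not):

```python
# Rough sample-size check for a survey estimating a proportion
# (illustrative only; assumes a simple random sample of the target population).
from math import ceil
from scipy.stats import norm

def required_sample_size(margin_of_error, confidence=0.95, p=0.5):
    """n = z^2 * p * (1 - p) / e^2, using p = 0.5 as the worst case."""
    z = norm.ppf(1 - (1 - confidence) / 2)   # e.g. 1.96 for 95% confidence
    return ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

print(required_sample_size(0.05))   # ~385 respondents for +/- 5 points
print(required_sample_size(0.03))   # ~1068 respondents for +/- 3 points
```

If an author can’t tell you how their sample size stacks up against a check like this, or who their target population actually was, you’re reading an opinion, not a finding.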

I worked for a Marketing Research firm in Boston for a couple of years while in college. We ran extensive focus groups and conducted large research studies for big corporations that sold to consumers. What I learned is that it is quite easy to reverse engineer a survey result or outcome that your customer wants to see. In fact, it is quite common. When you see a TV commercial saying that 4 out of 5 dentists recommend a particular brand of toothpaste, do you ask yourself how many dentists were surveyed? Do you ask yourself whether the dentists surveyed were given free samples by that toothpaste manufacturer to give out to their patients? I could go on all day here, but you probably get the point by now.
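
To put some rough numbers on how easy that reverse engineering is, here is a toy calculation (my assumptions, not any real survey’s): suppose only 60% of all dentists would actually recommend the brand, and each survey polls just five of them.

```python
# How easy is it to manufacture "4 out of 5 dentists recommend..."?
# Hypothetical assumption: only 60% of all dentists would recommend the brand.
from scipy.stats import binom

p_true = 0.60      # assumed true share of dentists who would recommend
panel = 5          # dentists per tiny survey

# P(4 or more of the 5 recommend) in a single small survey
p_hit = binom.sf(3, panel, p_true)            # sf(3) = P(X >= 4)
print(f"one panel of 5: {p_hit:.1%}")          # ~33.7%

# Run ten independent panels and publish only the best one
p_at_least_one = 1 - (1 - p_hit) ** 10
print(f"best of 10 panels: {p_at_least_one:.1%}")   # ~98.4%
```

A tiny sample plus the freedom to rerun the survey until you like the answer gets you the headline almost every time.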

I don’t believe in “trolling” or publicly shaming anyone, either in person or by hiding like a coward behind anonymous web trolling. Allow me to share a generic example of flawed lecturers. There are a number of sales technology tool companies out there that cite “data findings” from their platform as factual. One such blog post claimed that “sales discovery” can be counter-productive in certain situations. It went on to say that, based on hundreds of thousands of sales calls analyzed, there was an optimal number of discovery questions that resulted in a successful sales call outcome. There was additional color commentary that stated, as fact, that reps on unsuccessful sales calls asked twice as many discovery questions as reps on successful calls did.

Where is the proverbial fly in the ointment here? No one is challenging the sample size or the sample population. Due to the scale of the data (hundreds of thousands of calls), people simply accept the findings as facts. Now let me pick apart this flawed hypothesis. The fact is that their tool is almost exclusively used by very inexperienced sales reps (junior inside sales reps). In fact, many of their users are brand new to sales (Sales Development Reps (SDRs) or Business Development Reps (BDRs)).

People who are new to sales, or very inexperienced at it, have not had much sales training, by definition. Inexperienced sales reps commonly struggle to formulate and ask good discovery questions. As a result, they also tend to overcompensate by asking too many discovery questions. Or they simply ask bad discovery questions that lead to unsuccessful sales call outcomes. It’s easy to turn off a customer by asking what they consider to be a stupid question.

Customers become offended when they realize through your discovery questioning that you didn’t do your research prior to the sales call. Skipping that research leads to asking too many discovery questions, which in turn reveals to the customer that the rep didn’t even bother to look at their web site to learn what they do, and the call ends up as an unsuccessful outcome.

Are you sensing a trend here? All of their data findings presented as facts can be attributed to other inherent factors in their sample population. The hypothesis should be about asking the right sales discovery questions, not the right number of them. What correlation does the research and preparation a sales rep does before the call have with the number of discovery questions they ask and the corresponding outcome? What correlation does the rep’s experience and sales training have with the number of discovery questions they ask and the outcome of the call?
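
Here is a small simulation (parameters entirely invented, not their data) showing how a confounder like rep experience can manufacture the “fewer questions wins” pattern even when the number of questions has no causal effect at all:

```python
# Simulation: rep experience confounds the "question count vs. outcome" finding.
# Invented assumptions: success depends only on experience, inexperienced reps
# ask roughly twice as many questions, and question count itself does nothing.
import numpy as np

rng = np.random.default_rng(42)
n_calls = 200_000

experienced = rng.random(n_calls) < 0.2                  # platform skews junior
questions = rng.poisson(np.where(experienced, 6, 12))    # juniors ask ~2x more
win_prob = np.where(experienced, 0.35, 0.10)             # outcome driven by experience
won = rng.random(n_calls) < win_prob

# Naive, pooled analysis: "successful calls ask fewer questions"
print(f"avg questions on won calls:  {questions[won].mean():.1f}")
print(f"avg questions on lost calls: {questions[~won].mean():.1f}")

# Stratified by the confounder, the gap disappears
for label, mask in [("experienced", experienced), ("junior", ~experienced)]:
    print(f"{label}: won={questions[mask & won].mean():.1f} "
          f"lost={questions[mask & ~won].mean():.1f}")
```

The pooled averages make it look like winning calls ask far fewer questions, yet within each experience level there is no meaningful gap. Scale alone (hundreds of thousands of calls) does nothing to fix that.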

Remember, given how new to sales this company’s users typically are, a good portion of these folks will fail and were never meant to be in sales in the first place. The root causes that should be looked at are:

  • Why did we hire these folks who fail in the first place?
  • What can we do from a sales training and sales coaching perspective to prevent them from failing?
  • How can we teach newer sales reps to ask smarter and more effective sales discovery questions?
  • How can we help newer sales reps learn the importance of doing their research and preparation prior to a sales call, so they don’t struggle, ask the wrong discovery questions, ask too many of them, or offend the customer?
  • How can we help newer sales reps learn the importance of deeply understanding the different stakeholder types we sell to and how to tailor their discovery questions to be relevant to each type?

In closing, just because a flawed lecturer cites hundreds of thousands or millions of data points in their research findings, it does not mean it’s the right data set or that they have drawn factual conclusions from the data.