By Sam Ruchlewicz—We recently worked with a large, investor-backed company in the sporting goods and entertainment space that was seeking to expand its customer base in Maryland. At our initial meeting the company’s leadership team was adamant that their single-largest issue was customer acquisition; they simply were not getting enough “new” customers walking through the door. Further, they were convinced (through anecdotal information) that their target market consisted of individuals within a ~60-mile radius of their site.
Great data, but unused
Relative to what I usually see, this company had done an excellent job of collecting customer data—but they had never really looked at it, worked with it, or verified their assumptions using it. This is not uncommon in organizations of all sizes, but it does cause some serious problems if you’re making substantial investments (a marketing campaign, product/service offerings, promotions, etc.) based on anecdotal evidence.
One of the first things we asked for was access to the company’s customer data and web analytics, which turned out to be a treasure trove of insights. We added those data streams to some public data resources (so we could see things like populations, HHI, etc.), then conducted a series of analyses designed to test the client’s assumptions, segment their customer base, and ultimately inform our overall marketing strategy.
Wasted ad budget
The first thing that became clear was that the company didn’t have a customer acquisition problem—they were averaging hundreds of new customers per month. What they had was a serious “retention” problem (though I hate that word; it sounds so cold and calculating. Shouldn’t your customers want to come back and make another purchase?): for every 8 customers they acquired, 7 would never make another purchase. The churn rate was incredibly high, which was causing all sorts of problems, especially once we calculated that the cost of acquiring a new customer was substantially higher than the expected profit from a single transaction. This led to a very strange Pareto situation, where about 5% of the customers were responsible for about 50% of the company’s profit.
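You can surface both problems—the repeat rate and the profit concentration—with a few lines over a transaction log. The layout and numbers below are hypothetical placeholders, not the client’s data:

```python
from collections import Counter

# Hypothetical transaction log: (customer_id, profit) pairs.
transactions = [
    ("c1", 120), ("c1", 95), ("c1", 140),   # a repeat customer
    ("c2", 30), ("c3", 25), ("c4", 40),     # one-and-done customers
    ("c5", 200), ("c5", 180),
]

# Repeat rate: share of customers with more than one purchase.
purchases = Counter(cid for cid, _ in transactions)
repeat_rate = sum(1 for n in purchases.values() if n > 1) / len(purchases)

# Profit concentration: share of total profit from the top 20% of customers.
profit = Counter()
for cid, p in transactions:
    profit[cid] += p
ranked = sorted(profit.values(), reverse=True)
top = ranked[: max(1, len(ranked) // 5)]
concentration = sum(top) / sum(ranked)
print(f"repeat rate: {repeat_rate:.0%}, top-20% profit share: {concentration:.0%}")
```

Run over a real ledger, the same two numbers tell you immediately whether you have an acquisition problem or a retention problem.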
Further complicating matters was the aforementioned belief that the company was successfully drawing customers from an approximately 60-mile radius (that’s 11,000+ square miles). What the data showed was that:
- Those customers were a relatively small segment of the total customer population;
- Those customers were the most expensive to acquire; and
- They were the most likely to be seeking out a specific product and thus, unlikely to return.
The best analogy for these customers is car buyers who travel out-of-state or a significant distance to purchase a specific vehicle for a specific price; they know what they want and they are only coming to you because you have it (usually at a “loss-leader” price point). As you might expect, they are extremely unlikely to return for future parts, service, or purchases.
Our client was in this same situation, and was spending significant resources advertising to these individuals. I mentioned the area of that circle for a reason: let’s assume that advertising dollars were (relatively) evenly distributed across those 11,000+ square miles (which they were), and let’s further assume that the majority of customers coming from 40+ miles away (point-to-point, not road mileage) were of the type described above. That means you’re spending over 55% of your advertising budget on a relatively small segment of your total customer base, all in the vain hope of getting these people to return. It’s a massive waste of resources.
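That 55% figure falls straight out of the geometry: in a uniformly covered 60-mile circle, the ring beyond 40 miles holds 1 − (40/60)² ≈ 56% of the total area, so evenly spread ad dollars follow the same split. A quick sanity check:

```python
import math

inner, outer = 40.0, 60.0  # miles, point-to-point
total_area = math.pi * outer ** 2               # roughly 11,310 square miles
outer_ring = total_area - math.pi * inner ** 2  # area beyond 40 miles
share = outer_ring / total_area                 # fraction of evenly spread ad spend
print(f"{total_area:,.0f} sq mi total; {share:.1%} of spend lands beyond {inner:.0f} mi")
```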
Potential customers ignored
What compounded this problem was that our client was actually losing customers in their backyard (a 5-20 mile radius). A much smaller overall market, but with a customer population that was significantly more likely to return to the client if prompted. In short, these nearby prospects were the people we needed to be communicating with, but the resources to do so were being squandered on the faraway, one-time-only customers.
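One way to make the nearby-versus-faraway tradeoff concrete is a simple expected-lifetime-profit model that treats repeat purchases as a geometric series. All of the numbers below are hypothetical placeholders for illustration, not the client’s figures:

```python
def expected_lifetime_profit(margin, repeat_prob, acquisition_cost):
    """Expected profit per acquired customer, modeling repeat purchases as a
    geometric series: margin * (1 + p + p^2 + ...) - acquisition cost."""
    return margin / (1 - repeat_prob) - acquisition_cost

# Hypothetical inputs: nearby customers are cheaper to reach and far more
# likely to return; distant deal-hunters are expensive and rarely come back.
nearby = expected_lifetime_profit(margin=40, repeat_prob=0.5, acquisition_cost=25)
distant = expected_lifetime_profit(margin=40, repeat_prob=0.05, acquisition_cost=60)
print(f"nearby: ${nearby:.2f}, distant: ${distant:.2f}")
```

Under these assumed inputs the nearby customer is solidly profitable while the distant one loses money on an expected basis—which is exactly the pattern the client’s real data showed.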
The second major issue was a lack of segmentation among the customer base. Everyone, from the major purchaser to the one-time customer, was receiving the same message, with the same objectives, and the same benchmarks for success. We used a modified k-means clustering algorithm (a variant of PAM, partitioning around medoids) to identify discrete sub-groups in the overall population based on their behaviors and purchase tendencies. We then described each cluster and created custom goals and benchmarks for each, so we were solving for something relevant and achievable relative to the cluster, instead of to the entire customer population.
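The client’s exact variant isn’t reproduced here, but a minimal PAM-style k-medoids loop looks roughly like the sketch below (toy features, plain NumPy; a real engagement would use a maintained implementation and actual behavioral data):

```python
import numpy as np

def pam(X, k, n_iter=100, seed=0):
    """Minimal partitioning-around-medoids (PAM) sketch: assign each point to
    its nearest medoid, then move each medoid to the member point minimizing
    that cluster's total distance; repeat until the medoids stop moving."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new = medoids.copy()
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if len(members):  # keep the old medoid if a cluster empties out
                new[c] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new, medoids):
            break
        medoids = new
    return medoids, np.argmin(D[:, medoids], axis=1)

# Toy behavioral features: [purchases per year, average spend].
X = np.array([[1, 30], [2, 35], [1, 25], [12, 200], [10, 180], [11, 190]], float)
medoids, labels = pam(X, k=2)
```

Because medoids are actual customers (not synthetic averages), each cluster comes with a concrete, inspectable “most typical member,” which makes describing the sub-groups much easier.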
Think of it this way: Suppose I were to give a standard statistics exam to two groups, one a group of MIT PhDs in statistics and the other a representative sample of the U.S. population. Now suppose I judged them both on the same A-to-F scale. I’m willing to bet a massive sum of money that the PhDs would score higher. But doing that hides some fundamental insights. The PhDs should score higher. In fact, the trained statisticians should probably all have perfect (or near-perfect) scores, while the U.S. population segment (if we’re grading on a normal distribution) should score in the “C” range.
In both that example and our client’s case, looking at the objective measure is largely worthless. What we should be doing is looking at how the group performed relative to how they should perform. So, for instance, we might judge the MIT statisticians against similar groups from UPenn, Harvard, Stanford, and Caltech.
That’s exactly what the custom goals for the sub-groups were designed to do. They would level the playing field and provide us with some real, actionable insights on what customer segments were performing better (or worse) than expected, based on their characteristics and behaviors.
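In practice, “relative to how they should perform” can be as simple as scoring each customer against their own segment’s distribution rather than the whole population. The segment names and spend figures below are illustrative only:

```python
from statistics import mean, stdev

# Hypothetical quarterly spend observed within each segment.
segments = {
    "frequent_buyers": [410, 385, 450, 395, 430],
    "one_time_buyers": [35, 40, 25, 30, 45],
}

def segment_z(segment, value):
    """Score a customer's result against their own segment's benchmark
    (z-score), not against the entire customer population."""
    vals = segments[segment]
    return (value - mean(vals)) / stdev(vals)

# A $60 quarter looks weak against the whole customer base, but it is
# exceptional for a one-time buyer and alarming for a frequent buyer.
print(f"{segment_z('one_time_buyers', 60):+.2f}")  # well above segment norm
print(f"{segment_z('frequent_buyers', 60):+.2f}")  # far below segment norm
```

The same raw number produces opposite signals depending on the segment, which is the whole point of segment-level goals.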
All of this provided an actionable foundation upon which to begin our marketing efforts. We designed multiple marketing campaigns for each segment, and used multivariate testing to determine which one(s) were most successful at achieving our desired outcomes. We examined customer experiences to determine what we needed to do throughout the journey to maximize the expected lifetime value of each customer. And, ultimately, we drove substantial increases in revenue (100.1%) and profitability (129.5%). ∞