Putting Your Package to the Test
By Thomas E. Newmaster
Does this sound familiar? A product concept bounces around your company for months. Eventually, the organization decides to develop it, investing millions in machinery, sourcing, logistics, etc. But then, when it’s time to design the package, your firm is suddenly “past deadline” and “over budget”. A package is quickly developed, sometimes in a matter of days, and sent into the marketplace in the hope that consumers will notice it, relate to it and, eventually, buy it.
With so much invested in new products, and with new product failure rates hovering between 40 and 90 percent (depending on who you ask), it’s hard to imagine why a company would forgo packaging research.
But many of them do. New products and line extensions are frequently launched without significant advertising, promotion or point-of-sale support—often leaving packaging as the single sales driver for a brand.
Even in the most disciplined marketing organizations, many packages enter the marketplace based on little more than a brand manager’s best guess.
Package design research itself poses challenges. Few methods accurately recreate the consumer’s decision-making process. Many research projects are also prohibitively slow, requiring weeks or even months to complete. And then, of course, there are the costs.
With so much at stake, it’s essential to carefully review your options. Here is an overview of some of the more popular types of packaging research.
Focus groups and mall intercepts
Focus groups or one-on-one interviews (often called mall intercepts) are great for many research objectives, but the process is much too removed from the shopping experience to provide reliable package design results. Think about it … does your customer shop with a large group of people and then spend an hour or more discussing how the package affects them? In the real world, packaging is experienced in short bursts: people come upon it, look at it for a few seconds and either pick it up or move on. Focus group settings are too artificial. And monadic testing (showing each person only one design) is difficult in this setting because it requires at least triple recruiting, which, in turn, significantly increases costs and timelines.
In-store research
You can get closer to the actual consumer shopping experience with in-store research. Several reputable firms can help you test your package in a real, or nearly real, store setting. But the costs and timelines can be prohibitive. A typical brand owner might budget $15,000 and plan for a four-to-eight-week turnaround, but most vendors (probably 99 percent of them) can’t deliver results under such constraints. Instead, project fees often exceed $50,000.
Eye-tracking
Eye-tracking is a scientifically proven method of measuring eye movements and changes in pupil diameter to gauge a person’s reaction to visual stimuli. A qualified firm can give you valuable information on how each element of your packaging affects consumers. It’s an involved process requiring highly trained administrators, and, like many forms of research, it demands a longer timeline and may be costly.
Omnibus surveys
If you’re short on both time and cash, omnibus surveys are a decent choice. Many research firms send weekly, or even daily, online surveys to consumers, and for a fee (sometimes as little as $3,000) you can add a few questions about your package. The caveat is that the format is limited (one question with minimal graphics) and respondents can be influenced by other products in the survey. The results are also raw, leaving you without solid data on which to base your package design decisions.
Each of these testing options holds value, one way or another. But, more and more, package design firms are developing proprietary research methods to appeal to clients who often express frustrations with existing research tools.
Packaging-specific online research
Last December, our own firm developed an online test-marketing program called Design Check, which was created to accurately measure first impressions and quickly collect quantifiable consumer input—all in a format that could be absorbed into the average product manager’s budget.
With Design Check, respondents view a package for 10 seconds—the average time a consumer takes to make a purchasing decision—and then answer questions based on their memory of that design. Design Check measures message hierarchy and shelf impact and tests up to three package designs against marketing objectives. All this happens in about five business days and for about $5,000.
We recently worked with a major food manufacturer that had conducted extensive pre-launch testing of its product. The company was confident in consumers’ desire for the product’s features, so, logically, it emphasized them on the package. But our Design Check research showed that, while these new features were clearly communicated on the package, they overshadowed the “appetite appeal” of the product. As a result, the product presentation and descriptions were changed to make the product look and sound more delicious.
Decisions about package design are too important to leave to “gut instinct”. Be sure that your research suits your needs and delivers data that you can use to pave the way for a successful brand launch and make critical packaging decisions… before your product hits the shelf. BP
The author, Thomas E. Newmaster, is a 15-year veteran of the package design world and co-owner of William Fox Munroe in Shillington, PA. Tom led the development of Design Check and WFM’s recent expansion of strategic and research services. You can reach him at firstname.lastname@example.org.
STEP ONE: TEST YOURSELF
To make sure you’ll be conducting solid research with actionable results, ask yourself the following:
a. Are all your ducks in a row?
Clients often want to test product messages at the same time they test package design. It’s important to remember, though, that packaging’s role is to attractively communicate the features, benefits and value of a product. If marketers aren’t confident in their product attributes, they aren’t ready to do the packaging, let alone package testing.
b. Have you put your tests to the test?
Make sure every research question gives you specific, meaningful results: answers that you can act on. And be honest with yourself and your organization. If you get back results that you didn’t expect, or that you disagree with, will you (or your CEO) be willing to change things? Which leads to the next question …
c. Will you be able to defend your research?
Play devil’s advocate. Look at your research critically and start shooting holes in your plan. How solid is your methodology? Does your research accurately recreate, at least in part, the way consumers interact with your package at the point of sale? Are you asking objective, specific questions that challenge the status quo? Or are you just stacking the deck to confirm established opinions? If your research doesn’t stand up to your own attacks, it won’t stand up to anyone else’s.