Using Descriptive Statistics In Marketing

Resources: The Market Research Toolbox, Ch. 11, The Market Research Toolbox, Ch. 14, and the 15-Minute Oil Change Data Set (see attached files). The textbook material is on the first page of the attached Word document; use it as a reference. NOTE: use in-text citations for the references used, with a minimum of two peer-reviewed references.

Scenario: An oil change company is looking for ways to increase customer flow and revenue for the business. The company's leaders have hired you as the company's market research consultant. Using the raw data, determine two descriptive statistics regarding the oil change company that can be used to attract customers.

Write a 525- to 700-word paper in which you (present each question below as a header in the paper, formatted according to APA guidelines):

Explain the descriptive statistics determined and their direct use in attracting customers.
Analyze additional information needed to develop a better marketing strategy.
Assess how you would collect this additional information.

Format your paper consistent with APA guidelines.
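The two descriptive statistics can be computed directly from the raw data. As a minimal sketch, assuming the data set contains per-customer wait times and ticket amounts (the actual columns in the attached .xlsx may differ, and the values below are hypothetical placeholders):

```python
# Sketch: two descriptive statistics for the oil change scenario.
# The measures (wait time, amount spent) and all values are assumptions;
# check them against the actual 15-Minute Oil Change Data Set.
import statistics

# Hypothetical raw data: per-customer wait time (minutes) and spend ($).
wait_minutes = [12, 15, 14, 18, 11, 16, 13, 15, 17, 14]
amount_spent = [39.99, 49.99, 39.99, 59.99, 39.99, 49.99,
                39.99, 39.99, 49.99, 59.99]

mean_wait = statistics.mean(wait_minutes)       # average service time
median_spend = statistics.median(amount_spent)  # typical ticket size

print(f"Mean wait: {mean_wait:.1f} min")        # 14.5 min on this sample
print(f"Median spend: ${median_spend:.2f}")
```

A low mean wait time and a modest median ticket are exactly the kind of figures that can be quoted in advertising ("average service in under 15 minutes"), which is the link between the statistics and customer attraction that the paper must explain.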
chapter_11_and_chapter_14_.docx

mkt441_r5_15minute_oil_change_data_set_week4.xlsx



Textbook: The Market Research Toolbox
Edward F. McQuarrie
4th Edition
Chapter 11
Experimentation
Experiments can be conducted in the field or in some kind of laboratory, that is, in an artificial
situation constructed by the researcher. The essence of any experiment is the attempt to arrange
conditions in such a way that one can infer causality from the outcomes observed. In practice,
this means creating conditions or treatments that differ in one precise respect and then measuring
some outcome of interest across the different conditions or treatments. The goal is to manipulate
conditions such that differences in that outcome (how many people buy, how many people
choose) can then be attributed unambiguously to the difference between the treatment conditions.
In a word, the experiment is designed to determine whether the treatment difference caused the
observed outcomes to differ. More properly, we should say that with a well-designed experiment,
we can be confident that the treatment difference caused the outcomes to differ. (The role of
probability in hypothesis testing will be discussed in Chapter 14.)
Experimentation should be considered whenever you want to compare a small number of
alternatives in order to select the best. Common examples would include: (1) selecting the best
advertisement from among a pool of several, (2) selecting the optimal price point, (3) selecting
the best from among several product designs (the latter case is often referred to as a “concept
test”), and (4) selecting the best website design. To conduct an experiment in any of these cases,
you would arrange for equivalent groups of customers to be exposed to the ads, prices, or designs
being tested. The ideal way to do this would be by randomly assigning people to the various
conditions. When random assignment is not possible, some kind of matching strategy can be
employed. For instance, two sets of cities can provide the test sites, with the cities making up
each set selected to be as similar as possible in terms of size of population, age and ethnicity of
residents, per capita income, and so on. It has to be emphasized that an experiment is only as
good as its degree of control; if the two groups being compared are not really equivalent, or if the
treatments differ in several respects, some of them unintended (perhaps due to problems of
execution or implementation), then it will no longer be possible to say whether the key treatment
difference caused the difference in outcomes or whether one of those other miscellaneous
differences was in fact the cause. Internal validity is the label given to this kind of issue—how
confident can we be that the specified difference in treatments really did cause the observed
difference in outcomes?
Because experiments are among the less familiar forms of market research, and because many of
the details of implementing an experiment are carried out by specialists, it seems more useful to
give extended examples rather than walk you through the procedural details, as has been done in
other chapters. The examples address four typical applications for experimentation: selecting
among advertisements, price points, product designs, or website designs. Note, however, that
there is another entirely different approach to experimentation which I will refer to as conjoint
analysis. Although conjoint is in fact an application of the experimental method, the differences
that separate conjoint studies from the examples reviewed in this chapter are so extensive as to
justify their treatment in a separate chapter.
Example 1: Crafting Direct Marketing Appeals
This is one type of experiment that virtually any business that uses direct mail appeals, however
large or small the firm, can conduct. (The logic of this example applies equally well to e-mail
marketing, banner ads, search key words, and any other form of direct marketing.) All you need
is a supply of potential customers that numbers in the hundreds or more. First, recognize that any
direct marketing appeal is made up of several components, for each of which you can imagine
various alternatives: what you say on the outside of the envelope (or the subject line in the e-mail), what kind of headline opens the letter (or e-mail), details of the discount or other
incentive, and so forth. The specifics vary by context; for promotional e-mail offers, you can
vary the subject line, the extent to which images are used, which words are in large type, and so
forth; for pop-up ads, you can vary the amount of rich media versus still images, size and layout,
and the like. To keep things simple, let’s imagine that you are torn between using one of two
headlines in your next direct marketing effort:
1. “For a limited time, you can steal this CCD chip.”
2. “Now get the CCD chip rated #1 in reliability.”
These represent, respectively, a low-price come-on versus a claim of superior performance. The
remainder of each version of the letter will be identical. Let’s further assume that the purpose of
the campaign is to promote an inventory clearance sale prior to a model changeover.
To conduct an experiment to determine which of these appeals is going to produce a greater
customer response, you might do the following. First, select two samples of, say, 200 or more
customers from the mailing lists you intend to use, using formulas similar to those discussed
in Chapter 13, the sampling chapter. A statistician can help you compute the exact sample size
you need (larger samples allow you to detect even small differences in the relative effectiveness
of the two appeals, but larger samples also cost more). Next, you would use a probability
sampling technique to draw names for the two samples; for instance, selecting every tenth name
from the mailing list you intend to use for the campaign, with the first name selected assigned to
treatment 1, the second to treatment 2, the third to treatment 1, and so forth. Note how this
procedure is more likely to produce equivalent groups than, say, assigning everyone whose last
name begins with A through L to treatment 1 and everyone whose last name begins with M
through Z to treatment 2. It’s easy to see how differences in the ethnic backgrounds of A to L
versus M to Z patronyms might interfere with the comparison of treatments by introducing
extraneous differences that have nothing to do with the effectiveness or lack thereof of the two
headlines under study.
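The every-tenth-name procedure with alternating assignment can be sketched as follows. The mailing list here is simulated; only the selection and assignment logic reflects the text:

```python
# Sketch of the assignment procedure described above: draw every tenth
# name from the mailing list, then alternate names between the two
# treatments. The list itself is a simulated placeholder.
mailing_list = [f"customer_{i:04d}" for i in range(4000)]

sampled = mailing_list[::10]      # every tenth name -> 400 names
treatment_1 = sampled[0::2]       # 1st, 3rd, 5th, ... drawn name
treatment_2 = sampled[1::2]       # 2nd, 4th, 6th, ... drawn name

print(len(treatment_1), len(treatment_2))   # 200 names in each group
```

Because the alternation is indifferent to names, it avoids the extraneous A-to-L versus M-to-Z differences the text warns about.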
Next, create and print two alternative versions of the mailing you intend to send out. Make sure
that everything about the two mailings is identical except for the different lead-in: same
envelope, mailed the same day from the same post office, and so forth. Be sure to provide a code
so you can determine the treatment group to which each responding customer had been assigned.
This might be a different extension number if response is by telephone, a code number if
response is by postcard, different URL if referring to a website, and so forth. Most important, be
sure that staff who will process these replies understand that an experiment is under way and that
these codes must be carefully tracked.
After some reasonable interval, tally the responses to the two versions. Perhaps 18 of 200
customers responded to the superior performance appeal, whereas only 5 of 200 customers
responded to the low-price appeal. A statistical test can then determine whether this difference,
given the sample size, is big enough to be trustworthy (see Chapter 14). Next, implement the best
of the two treatments on a large scale for the campaign itself, secure in the knowledge that you
are promoting your sale using the most effective headline from among those considered.
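The statistical test mentioned above can be sketched with a two-proportion z-test on the 18/200 versus 5/200 tallies, using only the standard library (a real analysis might instead use a packaged routine such as a chi-square test):

```python
# Sketch of the statistical check described above: is 18/200 vs 5/200
# a big enough difference to be trustworthy? Two-proportion z-test.
import math

def two_prop_z(x1, n1, x2, n2):
    """Z statistic for the difference between two sample proportions."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1/n1 + 1/n2))    # pooled standard error
    return (p1 - p2) / se

z = two_prop_z(18, 200, 5, 200)
print(f"z = {z:.2f}")   # about 2.8, above 1.96, so significant at 5%
```

Since z exceeds the 1.96 cutoff for a 5 percent significance level, the superior-performance headline's advantage is unlikely to be a sampling fluke.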
Commentary on Direct Marketing Example
The example just given represents a field experiment: Real customers, acting in the course of
normal business and unaware that they were part of an experiment, had the opportunity to give or
withhold a real response—to buy or not to buy, visit or not visit a website, and so forth. Note the
role of statistical analysis in determining sample size and in assessing whether differences in
response were large enough to be meaningful. Note finally the assumption that the world does
not change between the time when the experiment was conducted and the time when the actual
direct mail campaign is implemented. This assumption is necessary if we are to infer that the
treatment that worked best in the experiment will also be the treatment that works best in the
campaign. If, in the meantime, a key competitor has made some noteworthy announcement, then
the world has changed and your experiment may or may not be predictive of the world today.
In our example, the experiment, assuming it was successfully conducted, that is, all extraneous
differences were controlled for, establishes that the “Rated #1 in reliability” headline was more
effective than the “Steal this chip” headline. Does the experiment then show that quality appeals
are generally more effective than low-price appeals in this market? No, the experiment only
establishes that this particular headline did better than this other particular headline. Only if you
did several such experiments, using carefully structured sets of “low-price” and “quality”
headlines, and getting similar results each time, might you tentatively infer that low-price
appeals in general are less effective for customers in this product market. This one experiment
alone cannot establish that generality. You should also recognize that the experiment in no way
establishes that the “Rated #1 in reliability” headline is the best possible headline to use; it only
shows that this headline is better than the one it was tested against. The point here is that
experimentation, as a confirmatory technique, logically comes late in the decision process and
should be preceded by an earlier, more generative stage in which possible direct mail appeals are
identified and explored so that the appeals finally submitted to an experimental test are known to
all be credible and viable. Otherwise, you may be expending a great deal of effort merely to
identify the lesser of two evils without ever obtaining a really good headline.
The other advantage offered by many experiments, especially field experiments, is that in
addition to answering the question “Which one is best?,” they also answer the question “How
much will we achieve (with the best)?” In the direct mail example, the high-quality appeal was
responded to by 18 out of 200, giving a projected response rate of 9 percent. This number, which
will have a confidence interval around it, can be taken as a predictor of what the response rate in
the market will be. If corporate planning has a hurdle rate of 12 percent for proposed direct mail
campaigns, then the direct mail experiment has both selected the best headline and also indicated
that it may not be worth doing a campaign using even the best of the headlines under
consideration, as it falls below the hurdle.
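The confidence interval mentioned above can be sketched with a normal-approximation interval around the observed 18/200 response rate, compared against the 12 percent hurdle:

```python
# Sketch of the projection described above: 95% confidence interval
# around the observed 18/200 = 9% response rate, versus a 12% hurdle.
import math

x, n = 18, 200
p = x / n                              # 0.09 point estimate
se = math.sqrt(p * (1 - p) / n)        # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se  # normal-approximation 95% CI

print(f"95% CI: {lo:.1%} to {hi:.1%}")        # roughly 5% to 13%
print("Lower bound clears 12% hurdle?", lo > 0.12)
```

The point estimate of 9 percent falls below the hurdle, and even the interval's upper edge barely reaches it, which is why the experiment argues against running the campaign.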
Much more elaborate field experiments than the direct mail example can be conducted with
magazine and even television advertisements. All that is necessary is the delivery of different
advertising treatments to equivalent groups and a means of measuring outcomes. Thus, split-cable and “single-source” data became available in the 1990s (for consumer packaged goods). In
split cable, a cable TV system in a geographically isolated market has been wired so that half the
viewers can receive one advertisement while a different advertisement is shown to the other half.
Single-source data add to this a panel of several thousand consumers in that market. These
people bring a special card when they go shopping for groceries. It is handed to the cashier so
that the optical scanner at the checkout records under their name every product that they buy.
Because you know which consumers received which version of the advertisement, you can
determine empirically which version of the ad was more successful at stimulating purchase. See
Lodish et al. (1995) for more on split-cable experiments.
One way to grasp the power of experimentation is to consider what alternative kinds of market
research might have been conducted in this case. For instance, suppose you had done a few focus
groups. Perhaps you had a larger agenda of understanding the buying process for CCD chips and
decided to include a discussion of alternative advertising appeals with a focus on the two
headlines being debated. Certainly, at some point in each focus group discussion, you could take
a vote between the two headlines. However, from the earlier discussion in the qualitative
sampling chapter, it should be apparent that a focus group is a decisively inferior approach to
selecting the best appeal among two or three alternatives. The sample is too small to give any
precision. The focus groups will almost certainly give some insight into the kinds of responses to
each appeal that may exist, but that is not your concern at this point. That kind of focus group
discussion might have been useful earlier if your goal was to generate a variety of possible
appeals, but at this point, you simply want to learn which of two specified appeals is best.
You could, alternatively, have tried to examine the attractiveness of these appeals using some
kind of survey. Presumably, in one section of the survey, you would list these two headlines and
ask respondents to rate each one. Perhaps you would anchor the rating scale with phrases such as
“high probability I would respond to this offer” and “zero probability I would respond.” The
problem with this approach is different from that in the case of focus groups—after all, the
survey may obtain a sample that is just as large and projectable as the sample used in the
experiment. The difficulty here lies with interpreting customer ratings obtained via a survey as a
prediction of whether the mass of customers would buy or not buy in response to an in-the-market implementation of these offers. The problem here is one of external validity: First, the
headline is not given in the context of the total offer, as it occurs within an artificial context
(completing a survey rather than going through one’s mail). Second, there is no reason to believe
that respondents have any good insight into the factors that determine whether they respond to
specific mail offers. (You say you never respond to junk mail? Huh, me neither! Funny, I wonder
why there is so much of it out there . . .)
Remember, surveys are a tool for description. When you want prediction—which offer will work
best—you seek out an experiment. If it is a field experiment, then the behavior of the sample in
the experiment is virtually identical, except for time of occurrence, to the behavior you desire to
predict among the mass of customers in the marketplace. Although prediction remains
irreducibly fallible, the odds of predictive success are much higher in the case of a field
experiment than if a survey, or worse, a focus group were to be used for purposes of predicting
some specific subsequent behavior.
Example 2: Selecting the Optimal Price
Pricing is a topic that is virtually impossible to research in a customer visit or other interview. If
asked, “How much would you be willing to pay for this?” you should expect the rational
customer to lie and give a low-ball answer! Similarly, the absurdity of asking a customer,
“Would you prefer to pay $5,000 or $6,000 for this system?” should be readily apparent, whether
the context is an interview or a survey. Experimentation offers one solution to this dilemma;
conjoint analysis offers another, as described subsequently.
The key to conducting a price experiment is to create different treatment conditions
whose only difference is a difference in price. Marketers of consumer packaged goods are often
able to conduct field experiments to achieve this goal. Thus, a new snack product might be
introduced in three sets of two cities, and only in those cities. The three sets are selected to be as
equivalent as possible, and the product is introduced at three different prices, say, $2.59, $2.89,
and $3.19. All other aspects of the marketing effort (advertisements, coupons, sales calls to
distributors) are held constant across the three conditions, and sales are then tracked over time.
While you would, of course, expect more snack food to be sold at the lower $2.59 price, the issue
is how much more. If your cost of goods is $1.99, so that you earn a gross profit of 60 cents per
package at the $2.59 price, then the low-price $2.59 package must sell at twice the level of the
high-price $3.19 package (where you earn $1.20 per package) in order to yield the same total
amount of profit. If the experiment shows that the $2.59 package has sales volume only 50
percent higher than the $3.19 package, then you may be better off with the higher price. Note
how in this example, the precision of estimate supplied by experimentation is part of its
attraction.
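The breakeven arithmetic above can be worked through explicitly. Using the figures from the text ($1.99 cost of goods, prices of $2.59 and $3.19):

```python
# Sketch of the profit arithmetic above: how much more volume must the
# $2.59 price sell to match total profit at $3.19, given $1.99 cost?
cost = 1.99
margin_low = 2.59 - cost     # $0.60 gross profit per package
margin_high = 3.19 - cost    # $1.20 gross profit per package

# Volume ratio at which total profit is equal under both prices:
breakeven_ratio = margin_high / margin_low
print(f"Low price must sell {breakeven_ratio:.1f}x the high-price volume")

# The text's scenario: the low price lifts volume only 50 percent (1.5x):
observed_ratio = 1.5
print("Higher price yields more profit:", observed_ratio < breakeven_ratio)
```

Because 1.5x falls short of the 2x breakeven ratio, the higher price wins, which is the precision-of-estimate point the text makes.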
Business-to-business and technology marketers often are not able to conduct a field experiment
as just described. Their market may be national or global, or product introductions may be so
closely followed by a trade press that regional isolation cannot be obtained. Moreover, because
products may be very expensive and hence dependent on personal selling, it may not be
possible to set up equivalent treatment conditions. (Who would believe that the 10 salespeople
given the product to sell at $59,000 are going to behave in a manner essentially equivalent to the
10 other salespeople given it to sell at $69,000 and the 10 others given it to sell at $79,000?)
Plus, product life cycles may be so compressed that an in-the-market test is simply not feasible.
As a result, laboratory experiments, in which the situation is to some extent artificial, have to be
constructed in order to run price experiments in the typical business-to-business or technology
situation. Here is an example of how you might proceed.
First, write an experimental booklet (or, if you prefer, construct a web survey) in which each
page (screen) gives a brief description of a competitive product. The booklet or website should
describe all the products that might be considered as alternatives to your product, with one page
in the booklet describing your own product. The descriptions should indicate key
features, including price, in a neutral, factual way. The goal is to provide the kind of information
that a real customer making a real purchase decision would gather and use.
Next, select a response measure. For instance, respondents might indicate their degree of buying
interest for each alternative, or how they would allocate a fixed sum of money toward purchases
among these products. Various measures can be used in this connection; the important thing is
that the measure provide some analogue of a real buying decision. This is why you have to
provide a good description of each product to make responses on the measure of buying interest
as meaningful as possible. Note that an advantage of working with outside vendors on this kind
of study is that they will have resolved these issues of what to measure long ago and will have a
context and history for interpreting the results.
Now you create different versions of the booklet or web survey by varying the price. In one
example, a manufacturer of handheld test meters wished to investigate possible prices of $89,
$109, and $139, requiring three different versions of the booklet. Next, recruit a sample of
potential customers to participate in the experiment. This sample must be some kind of
probability sample drawn from the population of potential customers. Otherwise the responses
are useless for determining the best price. Moreover, members of the sample must be randomly
assigned to the treatment groups. If you use a list of m …
