Q: 4
A travel company wants to know whether it gets additional conversions by relying only on its direct
response strategies, as opposed to combining each strategy with branding campaigns. The company
continuously tracks each strategy's performance, but measures them separately, and each strategy's
measurement has its own KPI. These are the latest results:
• Branding campaigns:
• A benchmark of 35 Brand Lift tests, $1.70 USD per additional ad recaller
• An average of 125 conversions per campaign
• Direct response campaigns:
• A benchmark of 20 Conversion Lift tests, $2.50 USD per incremental conversion
• An average of 370 conversions per campaign
What should the company do to test whether it gets more incremental conversions from relying only on
direct response strategies?
Options
Discussion
Makes sense to pick C: running both strategies at the same time and comparing conversion numbers gives a clear picture, and that's usually how you'd directly measure incremental conversions in practice. The official guides also point toward testing concurrently to capture real-world impact. Thoughts? Has anyone used a different method?
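For anyone who wants to see what "comparing incremental conversions" looks like as an actual calculation, here's a rough Python sketch. The `incremental_conversions` helper and all of the reach and holdout numbers are made up for illustration (only the 370 echoes the question's stated average), so treat it as a back-of-the-envelope model rather than Meta's actual lift methodology.

```python
# Minimal sketch of comparing incremental conversions across two concurrent
# test cells, each with its own holdout. Every number below is a hypothetical
# placeholder, not a result from the question.

def incremental_conversions(test_conversions: int,
                            control_conversions: int,
                            test_reach: int,
                            control_reach: int) -> float:
    """Conversions in the test group minus the conversions that group would
    have produced at the control (holdout) group's conversion rate."""
    baseline = test_reach * (control_conversions / control_reach)
    return test_conversions - baseline

# Cell exposed to direct response only (hypothetical holdout results)
dr_only = incremental_conversions(
    test_conversions=370, control_conversions=300,
    test_reach=100_000, control_reach=100_000,
)

# Cell exposed to direct response + branding (hypothetical holdout results)
dr_plus_branding = incremental_conversions(
    test_conversions=425, control_conversions=300,
    test_reach=100_000, control_reach=100_000,
)

print(f"Incremental conversions, direct response only: {dr_only:.0f}")
print(f"Incremental conversions, DR + branding:        {dr_plus_branding:.0f}")
```

The point of the sketch is that each strategy mix needs its own test/control split measured over the same period; comparing raw conversion counts across separately run campaigns (the 125 vs 370 in the question) doesn't tell you anything about incrementality.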
Option D is tempting since multi-cell designs are more robust for isolating incrementality. But the question seems to want a practical approach instead of complex testing, so C probably fits best. D feels like an over-engineered trap here, but open to pushback.
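If D does mean a multi-cell design, the readout might be summarized roughly like this. The `Cell` class, the spend, and the incremental figures are all hypothetical; only the $2.50 benchmark is taken from the question.

```python
# Sketch of a multi-cell readout: one cell per strategy mix, each with its own
# holdout, compared on cost per incremental conversion. Spend and incremental
# figures are hypothetical.

from dataclasses import dataclass

@dataclass
class Cell:
    name: str
    spend_usd: float
    incremental_conversions: float

    @property
    def cost_per_incremental_conversion(self) -> float:
        return self.spend_usd / self.incremental_conversions

cells = [
    Cell("Direct response only", spend_usd=10_000, incremental_conversions=3_800),
    Cell("Direct response + branding", spend_usd=15_000, incremental_conversions=6_500),
]

BENCHMARK_USD = 2.50  # per incremental conversion, as stated in the question
for cell in cells:
    cpic = cell.cost_per_incremental_conversion
    verdict = "beats" if cpic < BENCHMARK_USD else "misses"
    print(f"{cell.name}: ${cpic:.2f} per incremental conversion "
          f"({verdict} the ${BENCHMARK_USD:.2f} benchmark)")
```

Whether the exam wants that level of rigor or just the simpler concurrent comparison in C is exactly the judgment call being debated above.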
C imo. But if "incremental" means lift vs controlling for branding, would multi-cell (D) be better here?