How to Run a Revenue Comparison Test

This article explains the key differences in attribution reporting, the exact steps to run a valid comparison, and how to interpret your results with confidence.

Introduction to the Revenue Comparison Test

When reviewing the accuracy of your marketing data, comparing revenue reported by Wicked Reports to revenue reported by ad platforms, like Facebook Ads Manager, can help you confirm that your tracking is working as expected. That said, because different attribution platforms measure revenue in different ways, you should expect some differences. A healthy benchmark is that Wicked Reports revenue should typically match 80% or more of what your ad platform reports.

Knowing why these differences happen and following the right process makes your comparison meaningful and helps you catch tracking gaps early.

Key Differences in Attribution Reporting

While it’s healthy to compare Wicked Reports to other ad platforms, it’s important to understand that no two attribution platforms will ever report the exact same numbers. Here's why:

  • Multi-Channel vs. Single-Channel Attribution: Wicked Reports tracks conversions across all your marketing channels, while platforms like Facebook only focus on their own. Wicked ensures that credit is assigned to the most impactful touchpoints in your entire funnel, rather than isolating conversions to just one platform. This gives you a more comprehensive view of your true marketing performance, avoiding the tunnel vision of single-channel reporting.
  • Short-Term vs. Long-Term Attribution Windows: Ad platforms like Facebook typically use a short attribution window, often looking forward 7 days after a click (or sometimes just a view), which limits their understanding of long-term customer behavior. Wicked Reports, however, enables you to use a much longer time window (90+ days) from the point of conversion. This allows Wicked to give credit to marketing efforts that influenced a customer over a longer period, capturing a fuller picture of your sales cycle and marketing impact.

Steps for a Valid Revenue Comparison Test

Before you run your revenue comparison, check your data at the channel level. If your Wicked Reports revenue is 80% or higher at this level, that’s a strong sign your tracking setup is healthy. If you see a significant gap, drill down to individual campaigns to run the comparison there.

Follow these steps to ensure your comparison is valid and your results are meaningful.

  1. Use the Correct Report and Settings
    Open the FunnelVision Report and ensure you are using the default attribution settings.

    If comparing against Facebook, use the Revenue Comparison View for Facebook.
  2. Use a Valid Date Range
    Always use the exact same date range in Wicked Reports and your other ad platform. For most accounts, a 7-day date range works well (as long as your tracking has been set up and running for at least seven days).
    1. If you have fewer than 70 sales, extend your range to 14 days or longer to ensure your sample size is meaningful.
    2. If your typical sales cycle is longer than 14 days, match your range to that cycle. Avoid comparing partial weeks or days, which can distort results.
  3. Calculate Your Revenue Comparison Rate
    Use this formula: Wicked Revenue ÷ Ad Platform Revenue
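As a quick sketch, the formula in step 3 can be expressed in a few lines of Python (the function name and figures here are illustrative only, not part of any Wicked Reports tool):

```python
def revenue_comparison_rate(wicked_revenue: float, ad_platform_revenue: float) -> float:
    """Return Wicked Revenue ÷ Ad Platform Revenue as a percentage."""
    if ad_platform_revenue <= 0:
        raise ValueError("Ad platform revenue must be greater than zero")
    return wicked_revenue / ad_platform_revenue * 100

# Example: Wicked Reports shows $9,200 over the same 7-day range
# in which Facebook Ads Manager reports $10,000.
rate = revenue_comparison_rate(9200, 10000)
print(f"Comparison rate: {rate:.0f}%")  # Comparison rate: 92%
```

A result of 92% is at or above the 80% benchmark, so no further action would be needed in this hypothetical case.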

Analyzing The Results

In most cases, Wicked Reports will show more revenue than your ad platform, especially for businesses with longer sales cycles or subscriptions. If your number is slightly lower, it’s often due to differences in attribution models. However, a significant gap can signal a tracking issue, so it’s worth double-checking that your tracking setup is working correctly.

Aim for a comparison rate of 80% or higher for revenue:

  • If you see a rate at or above 80%, this is normal and expected.

  • If you see a rate below 80%, check that you followed all the steps above. If your comparison is still lower than expected, run the Tracking Validation Checklist to check for tracking setup issues.

If your Tracking Validation Checklist finds no issues but your comparison rate is lower than expected and you think something may still be off, reach out to our Support Team to confirm if any technical factors might be involved.

Finally, if you have low sales volume (fewer than ~300 sales/month) or a longer sales cycle, your expected comparison rate may naturally fall below 80%. In this case, calculate your realistic average based on a valid date range. For example, if your normal is around 60%, then 60% becomes your new working benchmark.
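One simple way to establish that working benchmark is to average the comparison rates from several past valid date ranges. This sketch assumes you have already calculated those rates as described above (the rates shown are hypothetical):

```python
def working_benchmark(historical_rates: list[float]) -> float:
    """Average past comparison rates (each from a valid date range)
    to set a realistic benchmark for a low-volume account."""
    if not historical_rates:
        raise ValueError("Need at least one historical comparison rate")
    return sum(historical_rates) / len(historical_rates)

# Hypothetical rates from three earlier valid 14-day comparisons:
benchmark = working_benchmark([58.0, 62.0, 60.0])
print(f"Working benchmark: {benchmark:.0f}%")  # Working benchmark: 60%
```

With a 60% working benchmark, future comparison rates near 60% would count as normal for that account rather than as a tracking problem.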

Once you’ve completed this comparison rate test, you’ll know exactly where you stand and whether or not any next steps are needed. This knowledge will ultimately enable you to make confident, data-backed decisions that you can rely on.