Test, Learn and Optimise with Adobe Campaign

TAP CXM
18 Feb 2021


As the old adage goes, “half the money spent on marketing is wasted; the trouble is to know which half”. This statement dates back to a pre-digital age, and you might think that advances in measurement and attribution have since addressed the problem. That is far from the case.

The fact that not all emails are opened and clicked through is proof that not all communications are effective, although we could also argue that some communications build awareness and help conversion over time. The underlying issue is that every customer is different, that most marketers have a deep catalogue of products to promote, and that the content (images and text) available to represent those products is virtually unlimited.

This is an optimisation challenge to say the least. The good news is that Adobe Campaign provides a range of automation and measurement capabilities which, combined, can be used to start tackling the issue with a test and learn approach. Not only is it an opportunity to reduce waste and brand fatigue, but it is also an effective way to improve the customer experience.

Let’s take a look at some of the techniques that can be implemented in Adobe Campaign, from a simple A/B test to more advanced machine learning.

Adobe Campaign and A/B testing

A/B testing is the most basic way of testing the response to different versions of the same message. In Adobe Campaign it is automated using workflows.

This form of testing can be implemented in various ways. The idea is to send different variants of a message (usually two, but potentially more) to a sample of the audience to determine which one generates the best response. Once the variant with the best response has been identified, it is sent to the remainder of the recipient list.
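
The workflow canvas handles this visually in Adobe Campaign, but the underlying mechanics are easy to sketch. Below is a minimal illustration in Python of the split-and-decide logic; the recipient list, the 20% test fraction and the results structure are hypothetical placeholders, not Adobe Campaign’s actual data model:

```python
import random

def split_for_ab_test(recipients, test_fraction=0.2):
    """Split off a test group, divide it evenly between variants A and B,
    and return the remainder, which will later receive the winning variant."""
    random.shuffle(recipients)  # randomise to avoid selection bias
    test_size = int(len(recipients) * test_fraction)
    test_group, remainder = recipients[:test_size], recipients[test_size:]
    half = len(test_group) // 2
    return {"A": test_group[:half], "B": test_group[half:]}, remainder

def pick_winner(results):
    """results: {'A': {'sent': ..., 'clicks': ...}, 'B': {...}} (illustrative shape)."""
    rates = {variant: r["clicks"] / r["sent"] for variant, r in results.items()}
    return max(rates, key=rates.get)

groups, remainder = split_for_ab_test(list(range(10_000)))
winner = pick_winner({"A": {"sent": 1_000, "clicks": 42},
                      "B": {"sent": 1_000, "clicks": 55}})
print(winner)  # 'B' -- the remainder of the list would now receive variant B
```

In practice the split, the wait period and the final send are all activities on the workflow canvas; the sketch only shows the decision logic they automate.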

The first challenge is to define what the “best response” is. It could simply be the highest level of engagement, in terms of opens and click-throughs to a website, but it could also be the highest number of transactions, the greatest margin or even the lowest unsubscribe rate. In other words, before diving into the technical solution, thinking about the true and most productive business objective is a useful exercise.

A/B testing can be applied to any channel supported by an Adobe Campaign instance, bearing in mind that each channel carries its own set of limitations. For example, there is no open tracking on SMS or push notifications. Email is both the most frequently used and the most flexible channel, so it is the one we will explore next.

Email A/B testing principles

The obvious place to start testing is the subject line, as it has the greatest impact on open rates, the first measurable response on the email channel. Nonetheless, there are other factors impacting open rates, all of which can be tested as well.

Main elements impacting open rates:

  • Subject Line
  • Send Time
  • From Address and name
  • Preheader text

As part of its Sensei toolset, Adobe has released predictive capabilities that use AI to optimise the subject line of an email, a topic that is also the sole focus of third-party point solutions such as Phrasee.

Moving on to click-through rate, the content of the message could be tested in various ways. Most commonly, marketing teams will create two copies of an email and benchmark them against one another.

Content variations to keep in mind:

  • Hero Image (main image at the top)
  • Product pictures (which of the many representations of a single product to select)
  • Text itself (think about tone of voice, length, font, style, etc…)
  • Call to action (how to create urgency with the “Buy Now”, “Click Here”, etc…)
  • Message length
  • The use of personalisation
  • Visuals (colours, pictures, GIFs, etc…)
  • Etc…

It is also worth noting here that the duplication of content creation is often cited as the main reason for not doing test and learn. As much as there is a cost to content creation, I would argue that many aspects can be tested with limited changes to the content, and that the potential uplift justifies the investment.

The importance of measurement

As we have just seen, what is being tested determines what can be measured, in the sense that the body content of an email cannot physically impact the open rate. But there is a cascading effect: the open rate caps the overall click-through rate, because someone who doesn’t open an email cannot click on it. The metric being measured can be taken at different points of the conversion funnel, and this is not without complications.
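
Some made-up numbers make the cascade concrete; the volumes below are purely illustrative:

```python
# Hypothetical send, purely to illustrate the cascading funnel effect.
sent = 10_000
opens = 2_000   # 20% open rate: driven by subject line, sender name, send time
clicks = 300    # clicks can only come from the 2,000 recipients who opened

open_rate = opens / sent               # 0.20
click_to_open_rate = clicks / opens    # 0.15 -- what the body content drives
click_through_rate = clicks / sent     # 0.03 -- the product of the two above

assert abs(click_through_rate - open_rate * click_to_open_rate) < 1e-9
```

A poor subject line therefore drags down the click-through rate of even the best content, which is why the click-to-open rate is often a fairer measure of the content itself.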

Approach #1: focusing only on open and click-through rates

The issue with focusing on open and click-through rates is that they give only a very partial view of the overall customer journey. Here is a slightly extreme example: if you advertise a free iPad, you will get a huge number of click-throughs! Nonetheless, the initial engagement will soon turn into even more frustration if that offer is only valid for the first five customers (as such marketing tricks often are). The metrics measured will look fantastic; the revenue and customer experience generated will be terrible.

Approach #2: focusing only on transactions

Focusing only on the end game is not a solution either, because as soon as the customer is on the website, there is a huge number of factors outside the control of marketing that impact conversion. The outbound communications could be spot-on in terms of creative and content, but the website experience could prevent transactions from being completed (lack of stock, confusing navigation, technical issues, prohibitive delivery fees, etc…).

Even when focusing on transactions, the objective could be to sell stock, to improve margin, to increase basket value, to increase the number of new customers, etc…

Technical implications

As illustrated above, the decision could draw on a wide range of metrics. The first implication is that some of those metrics are technically easier to access than others. Open and click-through data is generated by Adobe Campaign itself and is therefore easily accessible.

If, on the other hand, we want to optimise for transaction-related metrics, those need to be repatriated from the eCommerce system into the campaign management solution before the winning decision can be made. If looking at margin, a further layer of product data may be required that is not available in the eCommerce system either. This data integration requirement is not necessarily complicated, but in this context it needs to happen at a speed that is compatible with the timing of the decision-making.
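
As a sketch of what that repatriation looks like, the snippet below joins hypothetical delivery logs with hypothetical eCommerce orders using pandas; the table and column names are illustrative, not Adobe Campaign’s schema:

```python
import pandas as pd

# Hypothetical extracts: delivery logs from the campaign tool and orders
# from the eCommerce system. Column names are invented for illustration.
deliveries = pd.DataFrame({
    "recipient_id": [1, 2, 3, 4],
    "variant":      ["A", "A", "B", "B"],
})
orders = pd.DataFrame({
    "recipient_id":  [2, 3, 4],
    "revenue":       [50.0, 80.0, 30.0],
    "cost_of_goods": [20.0, 60.0, 10.0],  # may live in yet another product system
})

# Repatriate transactions against the send, then compare margin per variant.
joined = deliveries.merge(orders, on="recipient_id", how="left").fillna(0)
joined["margin"] = joined["revenue"] - joined["cost_of_goods"]
print(joined.groupby("variant")["margin"].mean())
```

The join itself is trivial; the operational challenge is having the order feed land quickly enough for the winner to be picked while the campaign is still in flight.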

Mathematical Significance

Another technical aspect that is often completely overlooked is the mathematical validity of the decision. How big the testing group is, how it is selected, and how long the test is allowed to run are all important factors when the aim is to make a mathematically sound decision.

Too often, I see the test being sent to a neat 20% of the target. Sometimes that results in a volume that is far too small, which means any measured difference will be lost in the noise. In other cases, 20% is more than needed, although the impact of that situation is limited to a missed opportunity (a subset of the test target could have received the winning message instead).

There are a number of best practices that allow an approximation to be made and remove the need for overly complex calculations, but I would argue that an understanding of the underlying concepts is critical to avoiding a situation where bad decisions go unnoticed.
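
To make the idea concrete, here is a minimal significance check, a standard two-proportion z-test implemented with nothing but the Python standard library; the click counts are invented for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(clicks_a, sent_a, clicks_b, sent_b):
    """Two-sided z-test for the difference between two click rates.

    Returns the p-value; a common (but arbitrary) threshold is p < 0.05.
    """
    p_a, p_b = clicks_a / sent_a, clicks_b / sent_b
    pooled = (clicks_a + clicks_b) / (sent_a + sent_b)
    se = (pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# 4.2% vs 5.5% on 1,000 recipients each gives p ~ 0.18:
# the apparent "winner" could easily be noise.
print(two_proportion_z_test(42, 1_000, 55, 1_000))
```

With ten times the volume, the same rates would be clearly significant, which is exactly why the size of the test group matters far more than a neat percentage.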

Best or “less bad”

This final point is probably the most challenging one. Let’s assume the result of a test is that version B has a higher click-through rate; a typical A/B test would see message B sent to everyone else. Yet version A also got positive responses, and version B was not unanimous either. There is a strong argument that the question shouldn’t be “A or B?”, but “which customers prefer A and which prefer B?”. By deciding to send B to everyone, we pick the message that is less bad overall, but not necessarily the distribution that would have produced the best result. This is where machine learning can bring another level of optimisation, matching customers and content more precisely.
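
As a flavour of what that could look like, here is a toy epsilon-greedy sketch that learns a per-segment preference between two variants. This is a generic bandit illustration, not Adobe Sensei’s actual algorithm, and the segments, epsilon value and reward signal are all hypothetical:

```python
import random
from collections import defaultdict

class SegmentBandit:
    """Epsilon-greedy sketch: learn which variant each segment prefers."""

    def __init__(self, variants=("A", "B"), epsilon=0.1):
        self.variants, self.epsilon = variants, epsilon
        self.sent = defaultdict(lambda: defaultdict(int))
        self.clicks = defaultdict(lambda: defaultdict(int))

    def choose(self, segment):
        if random.random() < self.epsilon:   # keep exploring occasionally
            return random.choice(self.variants)
        rates = {v: self.clicks[segment][v] / max(1, self.sent[segment][v])
                 for v in self.variants}
        return max(rates, key=rates.get)     # otherwise exploit the best so far

    def record(self, segment, variant, clicked):
        self.sent[segment][variant] += 1
        self.clicks[segment][variant] += int(clicked)

bandit = SegmentBandit()
variant = bandit.choose("young_urban")            # hypothetical segment name
bandit.record("young_urban", variant, clicked=True)
```

Rather than declaring a single global winner, this kind of approach converges on sending A to the segments that respond to A and B to those that respond to B, while still exploring enough to notice if preferences shift.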

Conclusion

It might feel like there is no solution to this conundrum, but it is merely the reflection of the overall complexity that is inherent to marketing automation. A/B testing two emails is a good way to get started but it needs to be part of a bigger test and learn strategy that looks at optimisation more holistically.

This is the reason why TAP CXM provides the technical skills as well as the marketing and data analysis expertise necessary for the implementation of such strategies.
