What You’ll Learn

It’s now conventional wisdom that creative is the biggest variable for long-term success on paid social, and Meta in particular. As automation simplified targeting and optimization, creative strategy–as opposed to “media buying” prowess–became the primary lever for advertisers to differentiate. And the best responded by placing high-tempo design and sophisticated testing frameworks at the center of their channel strategy.

Perhaps the only more prevalent conventional wisdom about Meta is that iOS 14.5 has re-written the rulebook for effective channel and campaign management. App and web advertisers have spent the last 18 months learning how to optimize through new and baroque attribution frameworks and within a dramatically thinner data environment. And the best have responded by aggressively adapting their media strategy to the constraints of privacy-first measurement.

In spite of the above, there has been little reckoning with how privacy developments have impacted creative strategy specifically–until now! While iOS 14.5 and AppTrackingTransparency have not dethroned creative from its perch atop the paid social food chain, they have rendered the pre-ATT playbook obsolete. And advertisers willing to adapt have a unique opportunity to create competitive advantage.

We’ve spent the last 18 months rebuilding our approach to paid social creative and refining it over $100M in ad spend for app and DTC brands across every major consumer vertical. This is the first of a four-part series in which we will share our hard-won learnings and lay out a new strategic framework covering each piece of the creative value chain–design, production, testing, iteration–and how to integrate them into a system that drives reliable success in a post-ATT environment. Here’s what you’ll learn specifically:

1. How creative works in Meta’s auction and optimization systems and what’s different now

2. How to design and implement an ATT-aligned creative testing framework fit to your scale and product funnel

3. How to analyze creative performance and synthesize learnings to drive design and production

4. How to approach design and production in a post-ATT environment and how to organize creative and growth teams

This article will answer the first question and provide a deep dive account of how creative works on Meta, how sophisticated advertisers have traditionally approached creative strategy, and what changed with ATT.

How Creative “Works” on Meta 

Creative’s role on Meta is closely tied to the evolution of Facebook’s machine-learning infrastructure. As the Facebook pixel and SDK became a staple across every major consumer website and mobile app, Meta leveraged this firehose of data to build industry-leading models of customer behavior and advertising outcomes. And as Meta progressively baked these models into its ad products, the winning channel strategy became less about granular segmentation and targeting and more about structuring campaigns to maximize algorithmic efficiency and letting Meta bring the full weight of its capabilities to bear. These developments elevated creative strategy in two ways.

The first was by a process of elimination. As Meta’s automation capabilities matured, 80% of a successful channel strategy could be encapsulated in a handful of best practices: consolidate your account into as few segments as your product and demographics allow, target broadly defined audiences, and optimize towards well-defined first-party conversion events. All of this in the service of giving Meta a broad enough canvas for its machine learning to do the legwork of modeling and targeting your potential customers.

In this world, creative is the biggest “manual” variable remaining. If you and your competitors are using the same core media strategy, converting customers more effectively with the hardest-working creative is the best move on the chessboard. As such, advertisers responded by placing creative at the heart of their accounts. Whatever degrees of freedom a consolidated campaign strategy created were consumed by granular creative segmentation and testing schemes. The most successful advertisers orchestrated these efforts in a way that took advantage of the second key creative trend on Meta: creative’s increasingly central role in the ad auction and Meta’s broader machine-learning infrastructure.

Meta’s treatment of creative clusters around the two ends of the advertising funnel: how it rewards (or penalizes) creative based on on-platform engagement, and how it targets and optimizes creative through attributed off-platform conversion data. A brief explanation:

A large part of how Meta preserves user engagement in the face of increasing ad load is by baking the UX impact of ads into their actual performance. Using “upper funnel” engagement metrics–watch time, click-through rate, like and comment behavior–as a proxy for a positive user experience, Meta rewards ads that maximize feed engagement in the auction itself. Assets that are accretive to engagement receive a premium on their auction bid for an available impression; conversely, ads with poor engagement are systematically deprioritized. In the game of inches that is a competitive advertising auction, this multiplier can become a critical lever for opening up the actual reach of an ad set and winning higher-value inventory. In this sense, creative optimization is targeting and reach optimization in disguise.
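To make that mechanic concrete, here is a minimal sketch of the ranking logic. Meta has publicly described auction ranking as a combination of bid, estimated action rates, and ad quality; the formula and numbers below are a simplified illustration of that idea, not Meta’s actual implementation.

```python
# Simplified illustration of how on-platform engagement can swing auction
# outcomes. Loosely mirrors Meta's publicly described "total value"
# (bid x estimated action rate + user value); the numbers are invented.

def total_value(bid: float, est_action_rate: float, engagement_quality: float) -> float:
    """Rank an ad for an impression: paid value plus a user-value term."""
    paid_value = bid * est_action_rate  # willingness to pay, discounted by likelihood of action
    user_value = engagement_quality     # proxy built from CTR, watch time, hide/report rates, etc.
    return paid_value + user_value

# Two ads with identical bids and conversion estimates, different engagement:
strong_creative = total_value(bid=40.0, est_action_rate=0.02, engagement_quality=0.5)
weak_creative = total_value(bid=40.0, est_action_rate=0.02, engagement_quality=0.1)

print(strong_creative, weak_creative)  # ~1.3 vs ~0.9: the engaging ad wins more impressions at the same bid
```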

This is doubly true when considering the role that “down funnel” conversion data from advertiser-owned properties plays in creative performance. Just as Meta builds targeting efficiency at the campaign and ad set level by accumulating conversion data to feed its optimization algorithms, creative assets have a similar–although thinner–feedback loop. The more end conversions Meta can attach to an asset, the more granular a profile it can build of likely converters and the more precisely it can bid against them in an auction.

Creative Strategy 1.0

Building a winning creative strategy on Meta starts with understanding how creative serves its two masters at each end of the funnel and then designing a framework that both harnesses those features and manages their tradeoffs appropriately. In particular, there are two properties of creative performance on Meta–directly tied to each of the trends detailed above–that advertisers must contend with.

The first is that an ad account’s creative performance exhibits a power-law dynamic: typically, a very small percentage of assets drives the majority of channel investment and outcomes. This “winner-take-all” effect can be attributed to the positive feedback loop mentioned above: successful assets generate more data, and Meta uses that data to make them even more successful. As such, it became commonplace to see individual assets persist at the top for many months, even as their engagement and click-through performance saturated.

The flip side of this coin is that the majority of assets tested on Facebook fail to get any traction at all. We can attribute this in part to Meta’s bias towards “data rich” assets with a long history of performance, and in part to the auction’s bias towards assets with strong on-platform engagement. The latter leads to a frustratingly quick “kill trigger” for new creative. When a new asset is launched, Meta makes a decision on its viability within just a few thousand impressions. This leads to two frequent observations: that Meta does not give a fair shake to assets with strong conversion but mediocre top-of-funnel performance, and that which assets win or lose in Meta’s natural testing environment can exhibit a high degree of randomness.
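Some quick back-of-envelope math shows why that early verdict can’t be based on conversions. The click-through and conversion rates below are assumed for illustration; the point is the order of magnitude.

```python
# Back-of-envelope: why an early verdict can't rest on down-funnel data.
# CTR and conversion rate are hypothetical, chosen only to show the scale.
impressions = 3_000           # roughly the window in which a new asset gets judged
ctr = 0.01                    # assumed 1% click-through rate
click_to_conversion = 0.02    # assumed 2% of clicks convert

clicks = impressions * ctr                            # ~30 clicks
expected_conversions = clicks * click_to_conversion   # ~0.6 conversions

print(clicks, expected_conversions)
# With less than one expected conversion, the early read is necessarily driven
# by top-of-funnel engagement rather than by actual conversion performance.
```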

How do you build a creative strategy that handles these dynamics successfully? Prior to ATT, you could boil the answer down to three core principles:

1. Volume-Driven Production

With Meta, it’s typically best practice to swim with the tide and not against it. Correspondingly, the winning creative strategy has been to embrace the power-law dynamic described above instead of trying to control for it. What does that look like? Optimizing for power-law outcomes is a volume game. Finding breakout creative hits typically means testing huge numbers of assets, often concurrently, and trying to maximize your “hit rate” by following directional trends in the data. If you can test a hundred unique assets a month with one out of twenty getting traction and one out of the hundred being a true winner, you are in a good spot.
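To put rough numbers on those hit rates (which are illustrative, not benchmarks), the expected monthly math looks like this, treating each asset as an independent draw:

```python
# Expected monthly outcomes under the illustrative hit rates above,
# treating each asset as an independent draw (a simplifying assumption).
assets_tested = 100
traction_rate = 1 / 20    # asset gets meaningful delivery
winner_rate = 1 / 100     # asset becomes a true breakout

expected_traction = assets_tested * traction_rate   # ~5 assets per month
expected_winners = assets_tested * winner_rate      # ~1 asset per month

# Probability of ending the month with at least one true winner:
p_at_least_one_winner = 1 - (1 - winner_rate) ** assets_tested  # ~0.63

print(expected_traction, expected_winners, round(p_at_least_one_winner, 2))
```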

2. Dedicated Creative Testing 

Most advertisers utilizing the above strategy execute it through a “spray and pray” approach: throw creative assets into your existing campaigns as they are finished and hope something sticks every once in a while. While simple, this approach often leads to failed outcomes for assets that could otherwise be successful. This is a result of both of the “biases” described above: Meta will often give new assets frustratingly little room to run, and doubly so when they are forced to compete with legacy creative backed by large amounts of historical data.

As such, testing is one of the rare instances where the best outcome comes from using smart media buying to counteract Meta’s algorithms instead of embracing them. By using dedicated test campaigns as a sandbox environment for piloting new creative, advertisers can control for Meta’s natural bias towards top-of-funnel engagement and evaluate outcomes holistically across the funnel. This approach also controls for Meta’s “incumbency bias” towards seasoned creatives with large amounts of data by removing the need for new assets to compete with them for that data.

Getting a testing framework right is a tricky endeavor. How many assets to test, how much spend to allocate, what KPIs to select for evaluation, how to define winners and losers, and how to scale winning assets are all variables that need to be calibrated carefully to your specific product funnel and level of scale. In the next article of this series, we’ll do a deep dive on how to do this successfully in a post-ATT environment.

3. Modular Design 

Finally, the requirement of testing dozens to hundreds of unique assets per month sets clear boundaries on what kinds of creative are possible from a design standpoint. Not even the most well-resourced creative team can execute that many unique concepts each month. What works instead? Taking a smaller number of concepts and structuring them with swappable components that can be used to create dozens of sub-variants.
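As a concrete (and entirely hypothetical) example of what swappable components look like in practice, a single concept crossed with a few interchangeable elements fans out into dozens of test-ready variants:

```python
from itertools import product

# Hypothetical component pools for a single creative concept.
hooks = ["problem-first", "social-proof", "founder-story"]
ctas = ["Shop Now", "Get 20% Off", "Try It Free"]
palettes = ["brand-blue", "high-contrast", "muted"]
formats = ["9x16 video", "1x1 static"]

variants = [
    {"hook": h, "cta": c, "palette": p, "format": f}
    for h, c, p, f in product(hooks, ctas, palettes, formats)
]

print(len(variants))  # 3 * 3 * 3 * 2 = 54 sub-variants from one concept
```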

This modular approach happens to align well with structured, large-scale creative testing. The combination led advertisers to focus, often with success, on the “micro”: color palettes, calls to action, text ordering, and other minor iterations within a set conceptual direction. And all three of these principles together carried obvious implications for what kind of creative team and partners to hire, how to set up effective cross-functional processes and partnerships, and how to plan and resource creative output more generally.

Unfortunately, as we’ll demonstrate over the course of this series, ATT and iOS 14.5 have transformed most of these principles and corresponding organizational dynamics into liabilities.

What changed with iOS 14.5 and AppTrackingTransparency?

What made the above strategy effective was a “signal rich” environment in which advertiser outcomes could be measured at a granular level from account to campaign to individual ad. With a deterministic understanding of which users converted from which ads, Meta could bring the full weight of its machine-learning capabilities to bear on anchoring creative to audiences likely to convert. And advertisers could maximize that impact through large-scale iteration and extensive testing.

For mobile apps, the lynchpin enabling this infrastructure was the device identifier, an alphanumeric string uniquely identifying a mobile device that, prior to ATT, was readily accessible by platforms and advertisers via API. When Meta knows which device IDs were served which impressions and the advertiser knows which device IDs performed which actions in their app, both parties have a unit of account for credibly attributing outcomes to advertising spend. And just as importantly, this real-time stream of device IDs mapped to individual user profiles is a major piece of what enabled Meta to build the most effective ad targeting and optimization model in the industry.
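Before ATT, the attribution join itself was trivial. Here is a minimal sketch of device-ID-based attribution with fabricated identifiers and event logs, just to show what that “unit of account” makes possible:

```python
# Minimal sketch of deterministic, device-ID-based attribution (pre-ATT).
# All identifiers and logs below are fabricated for illustration.

impressions = {
    "IDFA-1111": {"campaign": "prospecting", "ad": "video_a"},
    "IDFA-2222": {"campaign": "prospecting", "ad": "static_b"},
}

app_events = [
    {"device_id": "IDFA-1111", "event": "purchase", "value": 39.99},
    {"device_id": "IDFA-3333", "event": "purchase", "value": 19.99},  # no matching impression
]

# Join conversions back to the exact ad that was served to that device.
attributed = [
    {**event, **impressions[event["device_id"]]}
    for event in app_events
    if event["device_id"] in impressions
]

print(attributed)
# [{'device_id': 'IDFA-1111', 'event': 'purchase', 'value': 39.99,
#   'campaign': 'prospecting', 'ad': 'video_a'}]
```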

With iOS 14.5, Apple made the device ID accessible exclusively through its AppTrackingTransparency library. In order to receive the ID, the app owner must make an explicit permission request to the end user through a strictly worded prompt. The net impact of this has been predictably devastating for the ID’s prior use cases, which were built on the premise of universal and reliable access. With only a fraction of users opting in to share, the device ID now plays a decidedly backseat role in advertising technology outside of Android.

What’s left in its place? iOS attribution for users not opted-in to ATT now takes place through Apple’s own proprietary system called SKAdNetwork. SKAdNetwork creates “privacy” by eliminating the bridge between the activity on an individual user’s device and the activity happening on the advertising network, unless that user specifically allows that connection to be made. With SKAd, Apple has absorbed the “unit of account” role formerly held by the device identifier and transformed how behavioral data is shared between advertisers and advertising platforms on iOS.

To understand how this has impacted creative specifically, it’s worth diving into the particulars of what performance data SKAdNetwork shares and how. When SKAdNetwork tells Meta or another ad platform that a conversion event occurred, it includes just a handful of data points: the ad platform it attributed the conversion to, a 6-bit advertiser-defined conversion value, whether the user clicked or viewed the ad, and which campaign ID was associated with that ad. Critically, it provides no information uniquely identifying the user in question, nor any information about the ad set or individual ad within the campaign responsible for the conversion.
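To make the contrast with the device-ID world concrete, here is roughly what the useful payload of a postback looks like from the platform’s perspective. This is a simplified sketch; the field names are illustrative rather than Apple’s exact schema.

```python
# Simplified sketch of what an ad network learns per attributed conversion
# under SKAdNetwork. Field names are illustrative, not Apple's exact schema.
skad_postback = {
    "ad_network": "meta",         # which platform won the attribution
    "campaign_id": 42,            # one of ~100 possible campaign IDs
    "conversion_value": 37,       # 6-bit value (0-63) defined by the advertiser
    "attribution_type": "click",  # click-through vs. view-through
}

# Conspicuously absent: any user or device identifier, and anything below
# the campaign level (no ad set ID, no ad/creative ID).
assert 0 <= skad_postback["conversion_value"] <= 63
assert "device_id" not in skad_postback and "ad_id" not in skad_postback
```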


How does Meta measure creative performance with these constraints? With SKAdNetwork, every advertising platform is limited to one hundred total campaign IDs for any particular advertiser. In the summer of 2020, Meta announced that accounts advertising to users with iOS 14 or above were to be limited to just nine total concurrent campaigns. Why the discrepancy? Meta is reserving the remaining 91 IDs for itself to attach to various sub-campaign parameters that it finds most relevant for understanding conversion behavior.

How Meta uses these IDs in practice is not publicly available information. But let’s assume it allocates its reserved IDs evenly across the advertiser’s nine iOS 14+ campaigns. That leaves each campaign with roughly 10 additional IDs that Meta can use to receive conversion data from SKAdNetwork on dimensions beyond the parent campaign. There’s a lot of potential ground to cover here: ad set, audience, location, device information, demographics like age and gender, and, of course, creative. Even in a relatively sparse campaign setup with just one to two ad sets and a handful of creatives, Meta faces strict limits and tradeoffs on what it can choose to receive verified conversion data about.
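The arithmetic behind that squeeze, using the numbers above (and a purely hypothetical campaign shape, since how Meta actually spends its reserved IDs is not public):

```python
# The campaign ID budget, using the numbers discussed above. How Meta actually
# allocates its reserved IDs is not public; the campaign shape is hypothetical.
total_campaign_ids = 100
advertiser_campaigns = 9
reserved_for_meta = total_campaign_ids - advertiser_campaigns   # 91
ids_per_campaign = reserved_for_meta // advertiser_campaigns    # ~10

# Even a "sparse" campaign outgrows that budget quickly:
ad_sets = 2
creatives = 6
combinations_to_track = ad_sets * creatives   # 12 > 10, so something gets dropped

print(ids_per_campaign, combinations_to_track)
```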

Nevertheless, when you look at individual ad performance for an iOS 14.5+ campaign in the Meta UI, you still see counts of attributed conversions. These numbers are not, as they were before ATT, representations of data Meta received from another party. Instead, they are modeled predictions. Meta receives a count of attributed conversions for the overall campaign from SKAdNetwork and then allocates those conversions to ad sets and individual ads based on a mixture of its reserved campaign IDs and a black box of on-platform data it uses to triangulate the likely source of a conversion.
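No one outside Meta knows what that model looks like, but the mechanics of allocating a campaign-level total are easy to illustrate. The toy sketch below simply splits verified campaign conversions across ads in proportion to an on-platform signal; it is a stand-in for the idea, not Meta’s actual method.

```python
# Toy illustration of campaign-to-ad conversion allocation. This is NOT
# Meta's model; it only shows what "modeled" ad-level numbers mean when
# just a campaign-level total is verified by SKAdNetwork.
campaign_conversions = 120   # the only figure verified via SKAdNetwork postbacks

# Hypothetical on-platform signal per ad (e.g., clicks) used to split the total.
on_platform_signal = {"video_a": 600, "static_b": 300, "carousel_c": 100}
total_signal = sum(on_platform_signal.values())

modeled_ad_conversions = {
    ad: round(campaign_conversions * signal / total_signal)
    for ad, signal in on_platform_signal.items()
}

print(modeled_ad_conversions)  # {'video_a': 72, 'static_b': 36, 'carousel_c': 12}
```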

For web advertisers, the story is not much better. While those advertising on web can avoid SKAdNetwork, the constraints of AppTrackingTransparency nevertheless complicate measurement for the majority of web advertising investment. Why? Most ad impressions for web advertisers still take place in the Facebook or Instagram mobile app, and, for most brands, iOS remains the most important mobile platform by a substantial margin. If Meta serves an ad impression in its iOS app, it cannot utilize user-level conversion data unless that user has specifically opted in, even if that data is passed from a web pixel or server-side tag. As such, for a large swath of conversions, Meta is restricted to a similarly probabilistic (and weaker) approach to measurement.

The net impact is that creative performance data is now dramatically thinner and also different in kind–both for the advertiser and for Meta. This in turn has led to broad functional changes in the signals Meta uses to measure creative performance and in how that measurement drives advertiser auction outcomes, budget allocation, targeting, and ultimately efficiency. For iOS and web advertisers, the shift towards probabilistic conversion attribution and a greater emphasis on on-platform engagement metrics has created a new metagame for effective creative that is often at odds with the one that came before it.

What Works Now?

To start, advertisers need to realign their creative testing strategy with what works in a thinner attribution environment. This is not merely about testing fewer assets, but also about how to deploy and evaluate them in a way that yields both winning creative and tractable insights that can inform design.

The buck does not stop at implementation and analysis, however. A successful creative strategy hinges on an integrated approach that aligns design, production, testing, analysis, and iteration. Before ATT, advertisers could get away with outsourcing most design decisions to an algorithm: carve a concept up into dozens of small iterations and let Meta tell you what works and what doesn’t. In an environment where that is no longer effective, the winning design playbook centers on making a smaller number of bigger, more targeted bets. Developing conviction on what bets to make sits in the advertiser’s hands, and it has to be won through a more holistic approach based on thoughtful market and customer research in addition to quantitative rigor.

And finally, a new design approach entails a new approach to production. An emphasis on conceptual diversity and net-new production requires a different mix of skills and processes than one based on large-scale iteration. Advertisers must be prepared to hire or partner for a wider mix of executional mediums and for relatively more strength in art and creative direction versus editing. In many ways, this makes creative a more expensive endeavor. And solving for that efficiently means a fresh approach to planning and cross-functional process.

This series will cover each of these topics in depth and provide the reader with frameworks, examples, and recommendations for how to apply our learnings to your team and product. In the next article, we will dive into creative testing specifically and walk through how to build and apply an ATT-aligned framework that fits your specific product funnel, goals, and media scale. Subscribe below to get it straight to your inbox once it’s here!