
Attention metrics

Tackling the attention deficit, one impression at a time

With up to 10,000 ads flooding our screens every day, the fight to gain real human attention has never been fiercer.

But as this figure rises – while our attention spans remain the same – the chance of any one ad being seen inevitably diminishes. This results in what we call the attention deficit – something the ad industry is striving to overcome.

But how do we begin to measure true attention, let alone improve it?

Combining viewability with attention

For years, the industry has focused on standardising metrics around viewability, i.e., how much of an ad is visible on screen and for how long. And this is still important. After all, you can’t engage with an ad if you can’t see it.

But the industry is beginning to recognise that ‘viewable’ does not equal ‘viewed’. In fact, as consumers, we ignore almost 35% of all programmatic display ads. And so it’s become clear that viewability metrics alone are not enough – they are ‘hygiene’ metrics rather than predictors of quality. And that’s why we need to bolster this campaign insight with attention data.

Steps to leveraging attention

As Rob Hall, CEO at Playground xyz, explains, there are generally four steps to leveraging attention.

First, you need solid research. This should draw on your own campaign data, but also on learnings from other brands that have tested the impact of different ad formats, content types, devices, channels or targeting strategies on attention metrics.

Second, once you’ve gathered this, you can use it to enhance your channel planning, by creating and scheduling campaigns that have the best chance of performing. 

Third comes the task of actually measuring attention, by tracking key attention metrics such as hover rate and touch rate on a mobile device, plus more traditional performance metrics such as CTR, bounce rate and conversion rate.

And finally, to have the best chance of engaging your audience in the long term, you need to be able to optimise in real time according to what’s working and what’s not. This kind of dynamic creative optimisation (DCO), focused on attention, will be a key driver of campaign performance over the coming years. 

Using data across TV, desktop, tablet and mobile, we can measure a whole range of attention metrics. But perhaps the most prevalent is Attention Time.

The new metric in town

Attention Time is important in any campaign because it measures how long a user is physically eyeballing the ad. Once you’ve measured this, you can then look at a host of other user activities, such as clicks, cursor position, touch rate, scroll rate and depth, audio on/off, volume etc. 
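
To make this more concrete, here’s a minimal sketch in Python of how Attention Time might be aggregated from a stream of gaze-on/gaze-off events. The event format and field names are our own illustration, not any particular vendor’s API.

    from dataclasses import dataclass

    @dataclass
    class GazeEvent:
        """Hypothetical panel datapoint: the moment a user's eyes land on, or leave, an ad."""
        timestamp: float  # seconds since the impression was served
        on_ad: bool       # True when the gaze moves onto the ad, False when it leaves

    def attention_time(events):
        """Sum the intervals during which the user's eyes were on the ad."""
        total, gaze_start = 0.0, None
        for event in sorted(events, key=lambda e: e.timestamp):
            if event.on_ad and gaze_start is None:
                gaze_start = event.timestamp
            elif not event.on_ad and gaze_start is not None:
                total += event.timestamp - gaze_start
                gaze_start = None
        return total

    # Example: two separate glances at the ad totalling 2.7 seconds.
    events = [GazeEvent(0.0, True), GazeEvent(1.5, False),
              GazeEvent(4.0, True), GazeEvent(5.2, False)]
    print(attention_time(events))  # 2.7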

Measuring engagement with eye-tracking data

As Lumen Research’s MD Mike Follett suggests, eye-tracking data, used at scale across different channels, shows that “when users do look at an ad, it tends to perform really well”.

There are a number of attention technology partners on the market – including Lumen, Playground xyz, Amplified Intelligence and Adelaide – that can help marketers determine true engagement through two main types of eye-tracking measurement: proxy-based and gaze-based.

When testing creative or context, gaze-based (or eye-gaze) metrics are generally thought to be a more accurate method than proxy-based metrics. This is because they demonstrate that a user has actually engaged with an ad. This data comes from vast opt-in panels of consumers who allow eye-tracking cameras to follow the path of their eyes across the screen as they browse the open web.

By combining this with the other user activities above, you can get more granular with each impression, and optimise the ad creative or format more effectively.

Now is the time to tackle the deficit

When looking for an attention partner, be sure to ask them:

  • how they define attention (e.g. do they include all user actions?)
  • what kind of eye-tracking they use – gaze-based or proxy-based data, or (preferably) both
  • and how long they will apply optimisation to ensure the most accurate measurement

As the range of attention metrics grows, so too does the need for industry standardisation of these metrics. But for now, there’s no time to waste when it comes to experimenting and A/B testing with different creatives and formats. This way, you can get ahead of the curve before attention-first campaign strategies become mainstream, and make every impression count. 

If you need help sharing your thoughts on attention with the world, we’re here to help. Chat to us today to learn more.

This article was originally posted on Digilah (November 2023)

NFT

What is an NFT and why does it matter?

Everywhere you look, there’s an NFT for this or an NFT for that. It’s a buzzword that has been making waves not just in the tech space but also in the art scene over the past couple of years.

What is an NFT?

Born out of Terra Nullius, a Redditor’s interactive Ethereum creation in 2015, NFTs are non-fungible tokens, which simply means that an asset can’t be replaced or copied – it’s completely unique. Take the example of the Mona Lisa.

Sure, you could buy a poster or print out a picture of the painting, but it could never replace Da Vinci’s original masterpiece. 

NFTs work in the same way. They’re essentially bits of data created on the blockchain that enable users to verify and authenticate ownership of digital goods. These goods could be in the form of digital artwork, music, or even GIFs. The point is, they’re unique and unlike anything else on the market.

Though they’ve been around for a while, NFTs only really started to gain momentum in late 2020. According to the Wall Street Journal, the market cap of NFTs soared from just under $50m in 2018 to $338m by 2020 – a pretty large jump. A jump that has excited and sparked interest in the digital token.

While some have marvelled over NFTs – seeing them as clever investments or cool pieces of art – others are a little sceptical – after all, why would you buy something that you can just take a snapshot of?

Why do they even matter?

After seeing the explosion of the Nyan Cat NFT (selling for just over $690k) we thought we’d do a little digging. If a rainbow-trailing cat made of Pop-Tarts can gain that much traction, then NFTs must be doing something right.

Here’s what we found:

NFTs have transformed the art market 

When you think of art-collecting, you think of auctions and rich men in suits. You don’t necessarily think of university students or Gen Z, but NFTs have opened up the art market to a much wider crowd.

What once was shrouded in mystery has now become super-accessible to all. Everyone from your 21-year-old sister to your 30-year-old bus driver can deal in crypto-art. The only difference is that this ‘art’ is digital. 

By removing physical barriers, NFTs have not only made it easier for people to buy art – they’ve also given artists and creators more opportunities to profit. Those who display their digital work on platforms such as Instagram and Pinterest now have the chance to get their creations into the limelight. Even those who don’t have a large following have a chance to break into the industry and profit from their designs. 

Once an artist has sold an NFT, they can continue to receive a percentage of each future resale (depending on the platform and smart contract) – something which doesn’t typically happen in the traditional art market.
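
As a rough illustration of how those ongoing royalties add up (the 10% rate and the resale prices below are assumptions – the real rate is set by the marketplace or smart contract):

    # Hypothetical resale history for a single NFT, with a 10% creator royalty
    # written into the smart contract (the actual rate varies by marketplace).
    resales = [1_000, 2_500, 8_000]  # resale prices in USD
    ROYALTY_RATE = 0.10

    artist_royalties = sum(price * ROYALTY_RATE for price in resales)
    print(f"Artist earns ${artist_royalties:,.0f} from resales")  # $1,150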

NFTs could be a precursor to Web3 

A lot of the driving force behind blockchain has been the push to decentralise data and give creators greater power and ownership. In essence, blockchain is already decentralised – no information or data is stored in one single location. It’s all spread out across different networks and servers.

With the rising popularity of cryptocurrency and digitised tokens like NFTs, some believe that it’s only a matter of time before a completely decentralised internet is created (Web3).

If this were to happen, NFTs would have been at the forefront of a major historical shift. And it would be major. If Web3 did happen, governments and major corporations might find themselves struggling to maintain the power they have today. All of our data would be ours alone – not available for third-party organisations like Facebook or Instagram to profit (or profiteer) from.

There’s no saying this will actually happen (it would take huge buy-in from millions of individuals across the world). However, the fact that NFTs, alongside other digitised systems, are even able to inspire such a big movement, is impressive.

NFTs make ownership transparent

Before NFTs were a thing, digital art always had a slight problem with ownership and protection. Because a digital file can be copied endlessly, it could easily be shared across the web – all without any reference to the original owner. Traditional art, on the other hand, is a little easier to verify. Take the example of Van Gogh’s ‘Starry Night’. It’s non-fungible, meaning even if you did photocopy it, it would be pretty clear that it wasn’t the original.

NFTs give digital artists more security and make ownership super-transparent. An NFT itself can’t simply be copied or reused – and that’s all thanks to its makeup. Since every NFT is recorded on a blockchain ledger, the record of who owns it is essentially incorruptible and can’t be quietly rewritten by rogue internet users.
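
To see why a blockchain ledger is so hard to rewrite, here’s a toy sketch: each block’s hash covers the previous block’s hash, so tampering with an old ownership record breaks every block that follows it. This illustrates the principle only – it is not how any specific chain is implemented.

    import hashlib
    import json

    def block_hash(record, prev_hash):
        """Hash an ownership record together with the previous block's hash."""
        payload = json.dumps(record, sort_keys=True) + prev_hash
        return hashlib.sha256(payload.encode()).hexdigest()

    # A toy ledger of ownership transfers for one token.
    ledger, prev = [], "genesis"
    for record in [{"token": 42, "owner": "alice"},
                   {"token": 42, "owner": "bob"}]:
        prev = block_hash(record, prev)
        ledger.append((record, prev))

    # Tampering with the first record produces a different hash, so the second
    # block's stored hash no longer matches - the forgery is detectable.
    tampered = block_hash({"token": 42, "owner": "mallory"}, "genesis")
    print(tampered == ledger[0][1])  # False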

The nature of blockchain itself means that collectors and buyers can see exactly what a piece has sold for – a level of transparency that’s not often seen in the physical art market.

The future of NFTs

There’s no knowing for certain the extent to which NFTs will play a role in digital marketing or our day-to-day lives, but it’s clear they’re already popular in certain real-world – or should that be virtual? – applications today.

Metaverse

Are we ready to enter the metaverse?

At Facebook’s Connect 2021 conference, Mark Zuckerberg made some ostensibly major proclamations.

First, a corporate restructuring: Facebook will join Instagram and WhatsApp under a new parent company called ‘Meta Platforms’, just as Google did when it became Alphabet back in 2015. Second, Zuckerberg spent most of his arguably cringey keynote outlining his vision of the next generation of the web, the ‘metaverse’.

What is the metaverse?

The term metaverse is a portmanteau of ‘meta’ (beyond) and ‘universe’. Today, the concept represents an embodied internet comprising a network of shared computer-generated environments, where avatars interact in a 3D virtual space. But the term was originally coined in the popular 1992 cyberpunk novel Snow Crash by Neal Stephenson, and has since expanded to include XR (extended reality).

Zuckerberg sees the metaverse as the natural successor to the mobile web, where you’re immersed in the digital experience, instead of merely viewing it on a video screen. Since 2D screens can’t fully convey body language and facial expressions, the idea is to accurately recreate the feeling of being physically present, to make virtual interactions with others seem more natural.

While romping through cyberspace à la Ready Player One may sound intriguing, how exactly will it become part of our lives? Here are just five use cases which could become daily activities in the metaverse.

Gaming

Video games have a tendency to drive advances in computing tech, so it’s no surprise that gaming is how most of us will first enter the metaverse. There are now hundreds of VR games using headsets (e.g. Valve Index, HTC Vive Pro 2 and PlayStation VR), as well as countless AR games using smartphone camera systems. While gaming may drive most of the fundamental technology advances, it can’t provide anything near a working metaverse. VR experiences almost always take place in closed environments, rather than connected spaces, and AR games are bound to small 2D screens, instead of immersing the player’s senses. For his part, Zuckerberg predicts that gaming in the metaverse will require continual evolution, with live service games offering regular updates and downloadable content.

Work

With some element of remote working here to stay, the focus now is on hybrid working. However, this makes the workplace more complex. In-person meetings – which have evolved to include some or all of the team joining via videoconferencing – could evolve further. For instance, holographic avatars could provide a virtual presence in the workplace without the commute, while enabling you to “share” physical space and interact with others.

Employees can use their own individual space to minimise distractions, and can also join shared spaces with other team members for activities such as presentations. Facebook launched Horizon Workrooms for its own Oculus Quest 2 in August 2021, and is set to introduce room customisations and new developer frameworks, which will enable this virtual space to integrate with existing internet services and 2D websites.

Ecommerce

Many retail brands have already seen success using AR tools. As far back as 2017, IKEA launched their Place app, allowing customers to render furniture inside their home and assess how it looks on their smartphone. Trying on clothes virtually is also on the horizon, with future shopping potentially involving putting on smartglasses rather than driving to a store. Zuckerberg predicts that digital purchases (items which only exist in virtual space) will become more valuable as the metaverse takes off, with transferability and authentication technologies, such as NFTs, becoming more important.

Education

While learning certainly involves the study of information, many fields naturally require extensive practical (in-person) training. However, the metaverse has the potential to assimilate some of this physical activity. For example, trainee mechanics could use a combined AR schematic and service manual to repair a vehicle. Zuckerberg points to Osso VR, which simulates surgery, and the Alchemy Immersive course which allows marine biology students to virtually swim through the Great Barrier Reef (whilst listening to the dulcet tones of David Attenborough). Meta is also assigning $150 million to train educational content developers and establish a professional curriculum, with a certification process for those building content using Spark AR (Facebook’s AR toolkit).

Fitness

With most of us already sold on the idea of exercising at home, a round of convincing virtual boxing, sword fighting, or dancing could be a welcome enhancement. Facebook is releasing a fitness accessory pack for Quest 2, including controller grips and a wipe-clean facial interface.

Facebook’s sleight of (haptic) hand?

Facebook currently faces criticism amid leaked documents (dubbed the “Facebook Papers”) and testimony from former product manager Frances Haugen, who claimed the company prioritises its profits over the wellbeing of its users. And these are just two events in a string of governmental and privacy investigations. Sceptics might say the timing of Zuckerberg’s Meta announcements is a way of distracting the public from PR issues (and perhaps redirecting those pesky federal fines), rather than heralding an imminent technological shift.

Of course, XR environments have failed before – think Second Life and Google Glass. But were they simply too early? Zuckerberg himself admits the metaverse is a way off. But as building-block technologies improve, and are adopted by younger generations, perhaps it’s only a matter of time before the metaverse is upon us.

In part two, we’ll look at the technologies which need to be built before the metaverse can materialise.

Welcome to the data party

Where’s the data party right now?

The difference between zero-, first-, second-, and third-party data

Data is incredibly valuable to marketers, agencies, and publishers, since it is used to power advertising models and generate revenue. And the amount of data we produce is accelerating. The global volume of data reached 64.2 zettabytes in 2020, and this is predicted to double by 2025.

However, privacy regulations, such as GDPR and CCPA, have made data an increasingly difficult resource to tap. In response, the online advertising industry has developed different classifications of data to help with the handling process. So, what do the terms zero-, first-, second-, and third-party data mean?

Four-party state

The ad tech industry uses four different classifications of data, depending on the degree of separation from the source of information.

Zero-party data

This relatively new term describes data which is willingly shared directly with the recipient, e.g., a customer who bought a sneaker submitting a survey to the brand. Arguably, this is a type of first-party data, the distinction being that it’s intentionally handed over rather than automatically collected.

Zero-party data also refers to edge computing ad solutions, where data is processed on-device rather than transmitted elsewhere online. This protects user privacy by minimising the data collected, while still allowing behavioural targeting and personalised ads, thereby circumventing legal restrictions on the amount of data allowed to be sent to the bidstream.
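
A rough sketch of that edge-computing idea, with made-up topics and thresholds: the raw behavioural data stays on the device, and only a single coarse interest segment is ever exposed to the bidstream.

    from collections import Counter

    # Hypothetical on-device page-view history - in an edge set-up this raw
    # data never leaves the phone.
    page_topics = ["running", "running", "recipes", "running", "travel"]

    def on_device_segment(topics, min_views=3):
        """Return one coarse interest segment, or None if no topic dominates."""
        topic, count = Counter(topics).most_common(1)[0]
        return topic if count >= min_views else None

    # Only this single label ("running") would be exposed to the bidstream,
    # rather than the full browsing history.
    print(on_device_segment(page_topics))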

First-party data

Data which is gathered directly, for instance from customers, visitors, or social media followers. This includes information from email interactions and behaviours on a website or app. First-party data differs from second- or third-party data in that there is no intermediary between the supplier and the recipient of the information.

First-party data includes first-party cookies and advertising IDs which identify individual visitors, such as GAID, IDFA, and UID. Identity solutions use these IDs to track user behaviour while they interact with a website or app, in order to serve behavioural and contextual ads. However, targeted ads relying on first-party data alone can be sub-optimal, since their reach is limited to information from a single platform or site.
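
As a loose illustration (the visitor IDs and events below are invented), first-party identity boils down to a site issuing its own ID – typically stored in a first-party cookie – and tying on-site behaviour to it:

    import uuid

    # Toy first-party store: the site's own visitor IDs mapped to on-site behaviour.
    profiles = {}

    def track(visitor_id, event):
        """Assign a first-party ID on the first visit and record the event against it."""
        if visitor_id is None or visitor_id not in profiles:
            visitor_id = str(uuid.uuid4())  # value the site would set in a first-party cookie
            profiles[visitor_id] = []
        profiles[visitor_id].append(event)
        return visitor_id

    vid = track(None, "viewed:trainers")   # first visit - new ID issued
    track(vid, "added_to_cart:trainers")   # later visit - same ID reused
    print(profiles[vid])                   # both events tied to one first-party ID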

Second-party data

This refers to data collected from a partner who has in turn gathered it directly from visitors or customers. Although the data is supplied by another agent, the distinction made from its close third-party relative is that second-party data is sourced from a trusted partner, with whom there is a direct agreement on how the information is to be exchanged and used. The transaction takes place in a closed environment, such as a data cooperative, or a data “clean room” which provides the utmost security and privacy. Second-party data is usually used to periodically enrich first-party data to gain insights.

As with zero-party, there is some contention over the term ‘second-party data’, and the disagreement makes its definition a little woolly. In fact, when the Winterberry Group asked industry experts for the definition, responses ranged from “someone else’s first-party data”, to “there is no such thing”. After pinning down some specifics, the researchers proposed that any commercialisation or licensing of second-party data transforms its state into third-party. But this is by no means a hard-and-fast rule.

Third-party data

This pertains to any data received indirectly, via an intermediary agent who is not the original source. In ad tech, this data is usually gathered, aggregated, and packaged to be sold to companies to build advertising strategies.

Third-party data has historically been the most sought after in personalised advertising, allowing one-to-one retargeting of an online user. This can be achieved via cross-tracking cookies, where a user’s profile follows them online and is appended with data from each site they visit. Knowing so much about a user enables hyper-targeted ads to be served to them. However, third-party data now sees the most legal scrutiny due to the privacy implications of tracking users so closely across different sites, hence the phasing-out of third-party cookies.
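
To contrast with the first-party sketch above (again, the site names and tracking ID are made up), cross-site tracking amounts to one shared identifier being appended with data from many unrelated sites:

    # Toy third-party profile store, keyed by a cross-site tracking ID.
    third_party_profiles = {}

    def append_from_site(tracking_id, site, interest):
        """Each site the user visits adds to the same cross-site profile."""
        profile = third_party_profiles.setdefault(tracking_id, {})
        profile.setdefault(site, []).append(interest)

    append_from_site("abc-123", "news-site.example", "politics")
    append_from_site("abc-123", "shoe-shop.example", "trainers")
    append_from_site("abc-123", "travel-blog.example", "city breaks")

    # One identifier now links behaviour across unrelated sites - exactly what
    # privacy regulation and the phase-out of third-party cookies target.
    print(third_party_profiles["abc-123"])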

So… where’s the party?

If all this has left you confused about the classification of data, you’re not alone. The rise of zero-party data, and the explosion of second-party data, are a result of initiatives by the industry to avoid falling foul of privacy regulations while retaining access to sources of data which make advertising models effective. In the age of big data, just make sure you know the legal requirements when handling it, so you can avoid having your party busted.


Deepfake technology

Is deepfake technology a threat to society?

You’d be excused for being sucked into the recent hype as one of the most famous actors in the world joined TikTok. The 53-year-old is seen practising his golf swing, falling over in a store, telling anecdotes and performing a magic trick with a coin. Already, the account has 11 million combined views and a following of 383,000.

Only… it isn’t Tom Cruise. These videos are in fact highly polished deepfakes – the latest technology causing a storm across the world.

What are deepfakes and how do they work?

Deepfakes are highly realistic videos or audio recordings that look and sound like the real thing. They are constructed using a machine learning technique called generative adversarial networks (GANs), in which two deep neural networks – loosely modelled on the way the human brain works – are trained in tandem. Both are trained on the same data, but each with a different task: one generates fake content, while the other tries to tell it apart from the real thing.

Input real video and audio data of a specific person and the software recognises patterns in speech and movement. Introduce a new element, such as someone else’s face and voice, and a deepfake is born. As with most AI applications, the amount of data available determines the sophistication of the end product. This explains why Tom Cruise – one of the most photographed celebrities – has become a prime deepfake target.
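
For the technically curious, here is a heavily simplified sketch of that adversarial training loop using PyTorch. A real deepfake pipeline also needs face detection, alignment, autoencoders and vast amounts of footage – this toy example only shows how the two networks compete.

    import torch
    import torch.nn as nn

    # Toy GAN: the generator maps random noise to a "frame" vector, while the
    # discriminator scores whether a frame looks real. Real deepfake systems
    # work on aligned face images and are far larger - this only shows the loop.
    FRAME_DIM, NOISE_DIM = 64, 16
    G = nn.Sequential(nn.Linear(NOISE_DIM, 128), nn.ReLU(), nn.Linear(128, FRAME_DIM))
    D = nn.Sequential(nn.Linear(FRAME_DIM, 128), nn.ReLU(), nn.Linear(128, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    real_frames = torch.randn(256, FRAME_DIM)  # stand-in for real footage

    for step in range(100):
        real = real_frames[torch.randint(0, 256, (32,))]
        fake = G(torch.randn(32, NOISE_DIM))

        # Discriminator's task: label real frames 1 and generated frames 0.
        d_loss = (loss_fn(D(real), torch.ones(32, 1))
                  + loss_fn(D(fake.detach()), torch.zeros(32, 1)))
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator's task: fool the discriminator into labelling fakes as real.
        g_loss = loss_fn(D(fake), torch.ones(32, 1))
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()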

Potential dangers

So far, deepfakes have mostly been created or used by amateurs on social media platforms. However, their potential for malicious use is of real concern. Experts suggest deepfakes pose an imminent threat to democracy. In an era of fake news and clickbait, widely circulated deepfakes – such as those showing highly authoritative figures making believable yet false claims – are detrimental to reputation and public trust. Deepfakes have the power to skew our perception of reality to such an extent that we can plausibly deny what is genuinely real.

Deepfake software applications such as DeepfakesWeb and Faceswap are increasingly accessible to the public, with no experience needed to get started. In the wrong hands, deepfakes could be used to enter fabricated footage as evidence in court proceedings, and they pose a personal security risk to data currently protected by face and voice recognition. As deepfakes mimic the very signals these security barriers rely on, they leave the door open for increased malware and cyber attacks.

There is a strong likelihood that criminals will use deepfakes in the future, for instance in phishing attacks or, in extreme cases, to blackmail individuals for ransom. With the technology already being used for basic ruses such as imitating the voice of a family member or friend asking for a money transfer, deepfakes are undoubtedly opening up smoother routes of operation for cybercriminals – at an alarming rate.

Net positive for humanity?

Using state-of-the-art technology, deepfakes hold considerable potential for everyone, regardless of who we are or how we communicate. In a 2019 campaign for Malaria No More UK, deepfake technology simulated David Beckham delivering an anti-malaria message in nine different languages. Here, the technology showed its capacity for positive global impact, while taking influencer marketing to the next level.

Moving away from media, deepfakes are also on track to deliver revolutionary benefits within the healthcare sector by aiding the development of new disease treatments. Researchers have already used the technique to generate ‘fake’ MRI brain scans, and algorithms trained on these synthetic images are just as accurate at detecting brain tumours as those trained on real medical images – all without exposing real patient data.

The potential benefits of deepfakes to society mark exciting tech prospects, equipping us with the ability to make an impact at scale and speed. However, as the tech becomes more widely available, so too does the opportunity for misuse. We need to be questioning its morality and safety within our society now, before it’s too late.