Archives for May 11, 2018

Online Ad Targeting Does Work—As Long As It's Not Creepy

If you click the icon in the right-hand corner of any advertisement on Facebook, the social network will tell you why it was targeted to you. But what would happen if those buried targeting tactics were transparently displayed, right next to the ad itself? That’s the question at the heart of new research from Harvard Business School published in the Journal of Consumer Research. It turns out advertising transparency can be good for a platform—but it depends on how creepy marketer methods are.

The study has wide-reaching implications for advertising giants like Facebook and Google, which increasingly find themselves under pressure to disclose more about their targeting practices. The researchers found, for example, that consumers are reluctant to engage with ads that they know have been served based on their activity on third-party websites, a tactic Facebook and Google routinely use. That also suggests tech giants have a financial incentive to ensure users aren’t aware, at least up front, of how some ads are served.

Don’t Talk Behind My Back

For their study, Tami Kim, Kate Barasz and Leslie K. John conducted a number of online advertising experiments to understand the effect transparency has on user behavior. They found that if sites tell you they’re using unsavory tactics—like tracking you across the web—you’re far less likely to engage with their ads. The same goes for other invasive methods, like inferring something about your life when you haven’t explicitly provided that information. A famous example of this is from 2012, when Target began sending a woman baby-focused marketing mailers, inadvertently divulging to her father that she was pregnant.

“I think it will be interesting to see how firms respond in this age of increasing transparency,” says John, a professor at Harvard Business School and one of the authors of the paper. “Third-party data sharing obviously plays a big part in behaviorally targeted advertising. And behaviorally targeted advertising has been shown to be very effective—in that it increases sales. But our research shows that when we become aware of third-party sharing—and also of firms making inferences about us—we feel intruded upon and as a result ad effectiveness can decline.”

The researchers didn’t find, however, that users react poorly to all forms of ad transparency. If companies readily disclose that they employ targeting methods perceived to be acceptable, like recommending products based on items you’ve clicked in the past, people will make purchases all the same. And the study suggests that if people already trust the platform where those ads are displayed, they might even be more likely to click and buy.

The researchers say their findings mimic social truths in the real world. Tracking users across websites is viewed as an inappropriate flow of information, like talking behind a friend’s back. Similarly, making inferences is often seen as unacceptable, even if you’re drawing a conclusion the other person would freely disclose. For example, you might tell a friend that you’re trying to lose weight, but find it inappropriate for him to ask if you want to shed some pounds. The same sort of rules apply to the online world, according to the study.

“And this brings to the topic that excites me the most—norms in the digital space are still evolving and less well understood,” says Kim, the lead author of the study and a marketing professor at the University of Virginia’s business school. “For marketers to build relationships with consumers effectively, it’s critical for firms to understand what these norms are and avoid practices that violate these norms.”

Where’d That Ad Come From?

In one experiment, the researchers recruited 449 people from Amazon’s Mechanical Turk platform to look at ads for a fictional bookstore. They were randomly shown one of two ad-transparency messages, one saying they were targeted based on products they’d clicked on in the past, and one saying they were targeted based on their activity on other websites. The study found that ads appended with the second message—revealing that users had been tracked across the web—were 24 percent less effective. (For the lab studies, “effectiveness” was based on how the subjects felt about the ads.)

In another experiment, the researchers looked at whether ads are less effective when companies disclose they’re making inferences about their users. In this scenario, 348 participants were shown an ad for an art gallery, along with a message saying either they were seeing the ad based on “your information that you stated about you,” or “based on your information that we inferred about you.” In this study, ads were 17 percent less effective when it was revealed that they were targeted based on things a website concluded about you on its own, rather than facts you actively provided.

The researchers found that their control ads, which didn’t have any transparency messages, performed just as well as those with “acceptable” ad-transparency disclosures—implying that being up-front about targeting might not impact a company’s bottom line, as long as it’s not being creepy. The problem is that companies do sometimes use unsettling tactics; the Intercept discovered earlier this month, for example, that Facebook has developed a service designed to serve ads based on how it predicts consumers will behave in the future.

In yet another experiment, the academics asked 462 participants to log into their Facebook accounts and look at the first ad they saw. They were then instructed to copy and paste Facebook’s “Why am I seeing this ad” message, as well as the name of the company that purchased the ad. Responses included standard targeting methods, like “my age I stated on my profile,” as well as invasive, distressing tactics like “my sexual orientation that Facebook inferred based on my Facebook usage.”


The researchers coded these responses, and gave them each a “transparency score.” The higher the score, the more acceptable the ad-targeting practice. The subjects were then asked how interested they were in the ad, including whether they would purchase something from the company’s website. The results show participants who were served ads using acceptable practices were more likely to engage than those who were served ads based on practices perceived to be unacceptable.

Then, the researchers tested whether users who distrusted Facebook were less likely to engage with an ad; they found that to be true, and the reverse as well. People who trust Facebook more are more likely to engage with advertisements, though the ads still have to be targeted in accepted ways. In other words, Facebook has a financial incentive beyond public relations to ensure users trust it. When they don’t, people engage with advertisements less.


“What I think will be interesting moving forward is what users define for themselves as transparency. That definition is rapidly changing, and how platforms define it may not align with how users want or need it defined to feel like they understand,” says Susan Wenograd, a digital advertising consultant with a Facebook focus. “No one thought much of quizzes and apps being tied to Facebook before, but of course they do now since the testimony regarding Cambridge Analytica. It’s a fine line to be transparent without scaring users.”

When Transparency Works For Everyone

In some situations, according to the study, being honest about targeting practices can even lead to more clicks and purchases. In another experiment, the researchers worked with two loyalty point-redemption programs, which previous research has shown consumers trust highly. When they showed people messages next to ads saying things like “recommended based on your clicks on our site,” they were more likely to click and make purchases than if no message was present.

That suggests being honest can actually improve a company’s bottom line, as long as it isn’t tracking and targeting users in an invasive way. As the researchers wrote, “even the most personalized, perfectly targeted advertisement will flop if the consumer is more focused on the (un)acceptability of how the targeting was done in the first place.”

Congress, Privacy Groups Question Amazon's Echo Dot for Kids

Lawmakers, child development experts, and privacy advocates are expressing concerns about two new Amazon products targeting children, questioning whether they prod kids to be too dependent on technology and potentially jeopardize their privacy.

In a letter to Amazon CEO Jeff Bezos on Friday, two members of the bipartisan Congressional Privacy Caucus raised concerns about Amazon’s smart speaker Echo Dot Kids and a companion service called FreeTime Unlimited that lets kids access a children’s version of Alexa, Amazon’s voice-controlled digital assistant.

“While these types of artificial intelligence and voice recognition technology offer potentially new educational and entertainment opportunities, Americans’ privacy, particularly children’s privacy, must be paramount,” wrote Senator Ed Markey (D-Massachusetts) and Representative Joe Barton (R-Texas), both cofounders of the privacy caucus.

The letter includes a dozen questions, including requests for details about how audio of children’s interactions is recorded and saved, parental control over deleting recordings, a list of third parties with access to the data, whether data will be used for marketing purposes, and whether Amazon intends to maintain a profile on kids who use these products.

Echo Dot Kids is the latest in a wave of products from dominant tech players targeting children, including Facebook’s communications app Messenger Kids and Google’s YouTube Kids, both of which have been criticized by child health experts concerned about privacy and developmental issues.

Like Amazon, toy manufacturers are also interested in developing smart speakers that would live in a child’s room. In September, Mattel pulled Aristotle, a smart speaker and digital assistant aimed at children, after a similar letter from Markey and Barton, as well as a petition that garnered more than 15,000 signatures.

One of the organizers of the petition, the nonprofit group Campaign for a Commercial Free Childhood, is now spearheading a similar effort against Amazon. In a press release Friday, timed to the letter from Congress, a group of child development and privacy advocates urged parents not to purchase Echo Dot Kids because the device and companion voice service pose a threat to children’s privacy and well-being.

“Amazon wants kids to be dependent on its data-gathering device from the moment they wake up until they go to bed at night,” said the group’s executive director Josh Golin. “The Echo Dot Kids is another unnecessary ‘must-have’ gadget, and it’s also potentially harmful. AI devices raise a host of privacy concerns and interfere with the face-to-face interactions and self-driven play that children need to thrive.”

FreeTime on Alexa includes content targeted at children, like kids’ books and Alexa skills from Disney, Nickelodeon, and National Geographic. It also features parental controls, such as song filtering, bedtime limits, disabled voice purchasing, and positive reinforcement for using the word “please.”

Despite such controls, the child health experts warning against Echo Dot Kids wrote, “Ultimately, though, the device is designed to make kids dependent on Alexa for information and entertainment. Amazon even encourages kids to tell the device ‘Alexa, I’m bored,’ to which Alexa will respond with branded games and content.”

In Amazon’s April press release announcing Echo Dot Kids, the company quoted one representative from a nonprofit group focused on children that supported the product, Stephen Balkam, founder and CEO of the Family Online Safety Institute. Balkam referenced a report from his institute, which found that the majority of parents were comfortable with their child using a smart speaker. Although it was not noted in the press release, Amazon is a member of FOSI and has an executive on the board.

In its review of the product, BuzzFeed wrote, “Unless your parents purge it, your Alexa will hold on to every bit of data you have ever given it, all the way back to the first things you shouted at it as a 2-year-old.”

Amazon did not immediately respond to questions from WIRED.

Far From Sesame Street

  • Facebook funded most of the experts who vetted its Messenger Kids, an app for children as young as 6.

  • Child-health advocates asked Facebook to discontinue Messenger Kids, claiming it will undermine childhood development.

  • In a complaint to the FTC, child health and privacy groups allege that YouTube is violating a law that protects children online.