How to Develop the Right Idea at the Right Time

Written by John Jantsch. Read more at Duct Tape Marketing.

Marketing Podcast with Allen Gannett
Podcast Transcript

Allen Gannett

My guest for this week’s episode of the Duct Tape Marketing Podcast is Allen Gannett. He is the CEO and co-founder of TrackMaven, a marketing insights platform. He and I discuss his new book, The Creative Curve: How to Develop the Right Idea at the Right Time.

Gannett’s mission in life is to make people realize and live up to their potential. He believes “creativity” is accessible to all; most people just don’t have the right tools.

He has been on the “30 Under 30” lists for both Inc. and Forbes, and is a contributor writing on the intersection of technology and human nature. Previously, he was a co-founder and General Partner of Acceleprise Ventures, the leading SaaS startup accelerator. He was also once a runner-up on Wheel of Fortune.

Questions I ask Allen Gannett:

  • What is the science behind creativity?
  • What are the four laws of the Creative Curve?
  • How has the research you’ve done impacted your own business?

What you’ll learn if you give a listen:

  • Why creativity is misunderstood
  • What you need in order to develop a skill
  • How Gannett’s research influenced his views on hiring

Key takeaways from the episode and more about Allen Gannett:

Like this show? Click on over and give us a review on iTunes, please!

Duct Tape Marketing

Transcript of How to Develop the Right Idea at the Right Time



John Jantsch: Hello and welcome to another episode of the Duct Tape Marketing podcast. This is John Jantsch, and my guest today is Allen Gannett. He is the CEO and founder of TrackMaven, a marketing insights platform. He’s also the author of a book we’re going to talk about today called The Creative Curve: How to Develop the Right Idea at the Right Time. Allen, thanks for joining me.

Allen Gannett: Thanks for having me, man.

John Jantsch: A big premise of the book is to kind of debunk the creativity myth that you sit around and get this inspiration from a muse at some point in your life and that, in fact, there’s a science behind it. You want to tell me kind of your … it’s really the big idea of the book, I suppose, so you want to unpack that for us?

Allen Gannett: Creativity is one of those things that we talk about a lot in our culture. It’s on the cover of all these magazines. It’s this big topic in boardrooms. In Western culture, we have this notion of creativity as this magical, mystical thing that strikes a few certain people each generation, and there’s the Elon Musk and Steve Jobs of the world and the Mozarts and the JK Rowlings, but for the rest of us normies, we’re just sort of left out in the cold.

Allen Gannett: The thing that always bothered me is I’d always been someone who’d been a big reader of autobiographies and some of the literature around creativity. I run a marketing analytics company, so I spend a lot of time with marketers, and I didn’t realize the extent to which this had been internalized by people. I thought people sort of knew that was the story but knew that, of course, that’s not actually how it works. I realized that, no, no, this is really how people believe creativity works, and so the book sort of came out of this frustration I had that I saw all these very smart people limiting their potential.

Allen Gannett: The book is split into two halves. The first half of the book I interviewed all of the living academics who study creativity, and I break down the myths around how creativity works using science and some of the real histories. I tell some of the real stories behind things like Paul McCartney’s creation of the song Yesterday, which has been over-hyped and over-sold for decades, and Mozart, which there was a whole bunch of, literally, things like forged letters and forged articles about Mozart that have become part of our common myths around Mozart.

Allen Gannett: In the second half of the book, I interviewed about 25 living creative geniuses. These are everyone from billionaires like David Rubenstein, Ted Sarandos, the chief content officer at Netflix, Nina Jacobson, the former president of Walt Disney Motion Pictures. She’s the producer of The Hunger Games. I interviewed even folks like Casey Neistat from YouTube and … really eclectic set of creative geniuses with the goal of saying, okay, if the science shows us that you can actually learn to become more creative, well then how have people actually done that? How have they accomplished that? The book is meant to both be a sort of myth-busting book but also actually be a practical guide to actually leveraging this yourself.

John Jantsch: I think there’s actually a lot of misunderstanding or misuse of the word creativity anyway.

Allen Gannett: Oh, totally.

John Jantsch: I do think that a lot of people that I run into, “Oh, I’m not creative,” which means, “I can’t paint like Picasso,” or something when, in fact, in my business, I’m not … If you set me down and say, “Make something,” I’m not a maker, but I could … I’ve built my entire career around taking other ideas and seeing how they fit together better, and I think that’s a creative science.

Allen Gannett: Oh, and totally, and this is one of the things that people … We have sort of a book cover mentality of creativity, I like to call it, where I wrote a book, there’s one name on the cover, but there’s so many people involved who are creative who make that happen. I mean there’s agents, editors, marketers, copy editors, proofreaders, research assistants, feedback readers, right? Every creative endeavor you see actually has a lot of different people involved, but we sort of have this book cover phenomenon, or I sometimes call it the front man phenomenon. In a band, we talk about the lead singer all the time even though there’s five people in the band. With creativity, we sort of talk about Steve Jobs and Elon Musk as if they’re these sort of Tony Stark-esque characters, and we forget the fact that Steve Jobs had Steve Wozniak. Elon Musk literally has the world’s best rocket scientists working for him.

Allen Gannett: The idea that these people are rolling these boulders up a hill by themselves is just not true, and so I think we’re surprisingly susceptible to these sort of PR person propagated narratives around creativity, because I also think, John, we kind of like it. We kind of like the idea that there’s something out there for all of us that’s going to be easy. When we talk about our passion, I think we’re slightly actually talking about, well, waiting for something to be easy, but nothing in life is easy.

Allen Gannett: You look at Mozart, and we talk about him as if he popped out of the womb playing the piano, but the reality is, when he was three years old, his dad, who was basically a helicopter dad, was like, “You need to become a great musician.” Under the conditional love of his father, he started taking lessons with literally the best music teachers in all of Europe, and he practiced three hours a day, seven days a week, his entire childhood. This is not the story of it being easy for Mozart. This is the story of him doing the really hard part when he was young. I think we like this idea that, for some people, it’s easier, for some things it’s easy, because it kind of gives us an excuse.

John Jantsch: Well, and I also think that the narrative that is simple is a really useful device too because people can then share it, and they don’t have to … What you just went through, nobody wants to tell that story.

Allen Gannett: Of course, 100%. Everyone wants to believe it’s just straightforward.

John Jantsch: Yeah. I think you go as far as saying that just about anybody with the right motivation and the right process could practice and develop a skill, so let’s … Since I mentioned Picasso, could I paint if I had the right motivation?

Allen Gannett: Yes.

John Jantsch: I mean, right now, I will tell you I can’t.

Allen Gannett: Yes.

John Jantsch: I don’t think I could paint anything that anybody would see commercially interesting, but-

Allen Gannett: Totally.

John Jantsch: Right.

Allen Gannett: There’s two different parts of creativity. There’s the technical skill, and then there’s creating the right idea at the right time. On the technical skill side, we actually have now decades of research on talent development. What’s amazing, this is something I didn’t … I didn’t expect it to be this much of a consensus when I started writing the book, but the people, the researchers who spend their time studying talent development have come to the conclusion that, at best, natural-born talent is very rare and woefully overblown, but more likely than not, the idea of natural-born talent actually doesn’t really exist.

Allen Gannett: It’s really that these people typically start very young. They have access to a lot of resources or maybe they were working on another skill, like the daughter who always played baseball in the backyard with her dad and then, by the time she was 12 and she went to her first-ever track practice, she was such a fast runner, and they’re like how did she learn this? It’s like, well, she was playing baseball in the backyard for seven years.

Allen Gannett: In the book, I actually profile the story … It’s actually one of the few stories we have of someone tracking their skill development over a long period of time. It’s the story of Jonathan Hardesty, who’s this painter who, at the age of 22, having never painted before, decided that he wanted to become a professional painter, and he proceeded to … For whatever reason, he was active on an online forum, and he created this forum thread which said that, “Every day, I’m going to post a picture of my painting. I’m going to paint every single day,” and for the next 13 years he did this, 13 years.

Allen Gannett: It’s a really amazing story being able to see he was such a terrible painter when he started. I got permission from him to use one of his first-ever sketches in the book and one of his sketches from much later, and it’s shocking. What he did is he followed, actually, all of the best practices that we have from research on talent and skill development on becoming a great painter, and now he teaches all these courses and classes on becoming a fine art painter and all this stuff, and his paintings sell for five figures, and so he’s a really great rare example of someone starting when they’re old. I think it’s hard because, when you’re older, you’re busy. You don’t have that much time, and there’s not a father or mother figure sort of bearing down on you, forcing you to get through the hard part.

John Jantsch: Well, and I do want to get to your four laws of the creative curve because I think that’s … obviously, that’s a big part of the book, but I think it’s also … I think people need to hear that process, but I want to start with something before that. One of the things that I have observed in my own life and in watching a lot of other people is that motivation has a tremendous amount to do with this.

John Jantsch: I’ll give you an example. I taught myself how to play the guitar when I was in junior high, and it wasn’t because I ever envisioned becoming a famous rock star. I saw it as a great … It turns out junior high girls love guitar players. That was a huge motivation for me to just take this thing on and do it myself. As silly as that example is, I think that that is probably the key to unlocking the whole thing. Isn’t it?

Allen Gannett: I mean this is one of the things that people sort of don’t realize. I think the reason why we see so many young people who seem to be very creative, it’s because their parents forced them. Right?

John Jantsch: Right, right.

Allen Gannett: That’s powerful [inaudible 00:09:37]. It’s Freudian. It’s developmental, whatever sort of psychological perspective you want to put on it, but over and over again we see that the idea of a stage parent is actually … plays a huge role in a lot of these young, creative lives. It’s a lot easier to be world-class by the time you’re 30 if you started when you were 3 than if you started when you were 25.

John Jantsch: Right, right, right. Yeah, I had to beg my parents to buy a used guitar, by the way. All right, so let’s talk about, then, the four laws because I do think that a lot of … there are definitely a lot of people, this is kind of ironic, a lot of people that are more left brain, and they need a process to be creative. I mean it makes total sense. You should pick up the bird, the book, I’m sorry, The Creative Curve.

Allen Gannett: And the bird.

John Jantsch: And the bird, to get really in-depth in this, but I’d like Allen to introduce his four laws.

Allen Gannett: Yeah. Basically, when we talk about creativity, there’s two types of creativity. There’s lower-case C creativity, and there’s upper-case C creativity. This is how academics differentiate them. Lower-case C creativity is just like creating something new. Upper-case C creativity is what most of us actually want to do, which is creating something that’s both new and valuable. Value is a subjective assessment, right? Creating something that society deems to be valuable, well, people have to see it. They have to experience it. They have to deem it valuable, so there’s a bit of a circular phenomenon that happens.

Allen Gannett: The back half of the book deals with this sort of upper-case C creativity. How do you actually get this? How do you actually develop the right idea at the right time? It turns out that we actually have a lot of really good science about what drives human preference. I explain it in a lot more detail in the book, but the short version is that we like ideas that are a blend of the familiar and the novel. They’re not too unfamiliar to be scary, because we’re biologically wired to fear the unfamiliar because we worry it might kill us, like if we went to a cave as a caveman that we’d never been in before versus a cave we’ve been in many times, but then we also … turns out we like things that are novel because they represent potential sources of reward. You can think about when we were hunter-gatherers why this was important.

Allen Gannett: These two seemingly contradictory ideas, our fear of the unfamiliar and our pursuit of the novelty, lead to this really elegant relationship where we like ideas that are a blend of the familiar and the novel. The first Star Wars, for example, was a Western in space. Right now, every city has a bunch of these sushi burrito places popping up. They’re just giant sushi rolls. They’re familiar but they’re novel. You see that this is a huge driver of human behavior, and so the four laws really explain how do you nail this timing?

Allen Gannett: The first law that I talk about is consumption. We talk about how creatives are always doing. They’re very active. There’s that annoying social media meme you might have seen, which is like, “90% of people consume, 9% engage, 1% create. #HUSTLE.” It’s not only stupid, but it’s also wrong because it actually turns out that, since familiarity is such an important part of the creative process, consumption, so you know what’s already out there, is actually a huge part of it, and so I talk about why and how.

Allen Gannett: Ted Sarandos, the chief content officer of Netflix, told me this wonderful story about how he started his career as a video store clerk who watched every single movie in the store. JK Rowling, when she was a kid, would close her bedroom door and just read book after book after book after book. The second-

John Jantsch: Right. I think the piece that maybe people are tripping up on is what I just heard you describe. It was intentional consumption.

Allen Gannett: Exactly, so it’s actually … What’s really interesting-

John Jantsch: It’s not just like, “Oh, I’m going to go on Facebook and see all the blah, blah, blah.” There’s intent in what you’re doing.

Allen Gannett: Yes, and it’s not just how much they consume, but it’s … exactly. It’s how they consume, and that goes into the second law, which is imitation. How these great creatives actually consume is in this way that’s very interactive. The best way you could summarize it is they’re imitating it.

Allen Gannett: I tell the story in the book about Ben Franklin and how we think of him as this great writer but, at the age of 18, he viewed himself as a terrible writer, probably because his dad told him so, again, this parent thing. He decided that he was going to start imitating some of the structures of articles he loved in a magazine called The Spectator. What you see is this sort of Mad Libification by these creative geniuses of other creative works where, instead of just reading a novel, they’ll outline, well, how is it structured? What’s the story arc?

Allen Gannett: Kurt Vonnegut, for his master’s thesis, literally created these charts showing the different story arcs of great novels, and this was one of the foundational things for him as a storyteller. You see that it’s not just that these great creatives consume a lot, and they do, but they also do it in a way which is much more interactive than we typically do and much more focused on imitation. That’s this-

John Jantsch: Yeah. Actually, a process that I’ve used for years in writing my books … I wrote a book called The Referral Engine, and so I’m looking for ideas on building community, and referrals, and different word-of-mouth things. I’ll read books that are unrelated to business, on math, on architecture. It’s amazing. When you go into it with that filter, I’m looking for ideas that I could apply to community building and referrals, and it’s amazing how the book is a whole different book in that [crosstalk 00:15:09]-

Allen Gannett: Oh, 100%. I mean I obviously … If you ever want to feel a lot of pressure, write a book on creating hits.

John Jantsch: Yeah, right.

Allen Gannett: It’s a lot of pressure, or write a book on creativity, and it has all this meta stuff to it. I mean, for me, it was like one of the things I, as a first-time author, was struggling with was the best way to go to switch between chapters. It’s just something I didn’t have a natural knack for, and so I went … ended up, as I was writing the book, using a lot of the methods in the book, and so going and seeing some of the different ways that other people did it. That helped give me the framework for realizing, okay, what are the different was I can do it? What do I like? What do I not like? How can I repurpose this in a way that fits my voice and my style versus, if I just kept sitting there looking at it and hoping an idea would hit me, I’d still be here, right, thinking how to end my chapters.

John Jantsch: All right, so I think we’re up to number three, creative [crosstalk 00:15:57]-

Allen Gannett: Okay, number three. Yeah, so number three I talk about in the book is that we think of these creative geniuses as these solo actors, Steve Jobs, Elon Musk, Oprah, but reality is, since there’s this social construct element to creativity, since it’s about what is valuable, you actually have to have a lot of different people involved, and I describe the different roles that you have to have in your creative communities, and there’s four that I talk about in the book.

Allen Gannett: Then the fourth and final law is all about data-driven iterations. I think we have this notion of the novelist who goes into the woods and writes their book in a writing cabin and, only once they write the end, period, do they come out. The reality is that, since these … The creatives who are the best at it realize that there’s this whole social construct element, that the relationship with their audience is so important that they are actually very focused on, early and often, getting feedback and then using that to iterate over and over again.

Allen Gannett: I talk about, in the book, everything from the movie industry to romance writers to … One of my favorite stories is I spent a day with the flavor team at Ben & Jerry’s who creates new flavors. That process, which is a culinary process, is shockingly data-driven. They literally do surveys and all this fascinating stuff. It’s not super expensive, what they’re doing; they use a lot of email surveys, but it is data-driven.

Allen Gannett: I think that’s one of the big mistakes that aspiring creators have is that, oftentimes, aspiring creators are creating for themselves, and they’re not creating for their audience. The best creators are creating for their audience. Since they know that, they are much more likely to actually listen to their audience.

John Jantsch: Well, and it’s interesting. Over the last decade, I think that the adoption of blogging, wherever that is today, 10 years ago, I think some … there were a heck of a lot of authors that were iterating every day-

Allen Gannett: Completely.

John Jantsch: … because they were writing content that eventually made it into a book. I know I’ve done that numerous times, and I’ve seen a lot of other people that their blogs kind of blew up into books because of comments, and feedback, and the ability to say, “Oh, that resonated. I should go deeper there.” I think there are plenty of examples of a lot of books that became big hits started out as daily blogs.

Allen Gannett: Oh, 100%, and you see this, and they become … I mean Gary Vaynerchuk’s done a great job of this, right, just sort of getting community feedback, Tim Ferriss, obviously. You see this a lot of times. You’ll see these guys, they’ll … Even journalists will write an article for The New Yorker. It does really well. It goes viral. Then they’ll sell the book, and then they’ll sort of work through that.

Allen Gannett: The reality is that the best creative processes are messy, and gross, and involve lots of shades of gray, and all this stuff. I think we have this romantic notion. JK Rowling’s a great example. I mean the story about JK Rowling is she was on a train. She had the idea for Harry Potter. She started writing it on a napkin. First of all, she didn’t have a napkin. She didn’t have a pen. She was on a train. She had the idea for the character Harry Potter and some of his sidekicks, but then it took her five years to write the first book, five years. In one interview, she actually showed the interviewer the box of all 15 different versions of Chapter One she had written because she couldn’t figure out how she wanted to start the book, 15 different versions. This is not the story of her waking up one day with a multi-billion-dollar idea.

John Jantsch: No. Yeah, and then the process of selling that book was just as messy.

Allen Gannett: Yeah, totally. I interviewed, for the book, her first agent and her first publisher. I mean, that book, there was thought behind how to roll it out to the market. They were very mindful of how to do it.

John Jantsch: Yeah. Well, and the rest is history, of course, but you’re right. I mean I do think that we have a tendency in our culture, the social media, YouTube culture, to really kind of hold those ideas out there and think of the billions of other successes that we’ve never heard of that probably went through the same process. I mean they were successful in a different way at a different level, but we obviously all look at all of the stories that hit the one or two kind of social media viral hits.

Allen Gannett: Totally.

John Jantsch: Tell me a little bit about how this research that you’ve done has shaped or evolved your own business TrackMaven.

Allen Gannett: Oh, I mean it’s super interesting. One, it’s affected how I coach people. I think I always had confidence that people were generally underselling themselves when it came to their own talents and development, but writing this book, which took me even further on the side that natural-born talent doesn’t really exist, has made me, I think, a much more practical but also much more aggressive coach to my team where I think I really push people hard to get rid of those things they’ve put on themselves. I mean there’s these famous studies that were done in the ’90s where 86% of kindergartners tested at creative genius levels of creative potential, but I think it was like 16% of high school seniors, something in the teens.

Allen Gannett: Yeah, and it’s like … and you totally see this. There’s this entire social set of constructs we’ve put in ourselves, the social conditioning where we believe that we were meant to be X, and we can’t be Y, and it’s so, so, so, so, so much not real. It’s just in our heads. It’s what we’ve been told. It’s the result of middle-class parents telling kids to get their safe job, to be professional, whatever it is. I think it’s really dangerous, and so, for me as a manager and as a leader, I think I have become much more aggressive at trying to coach people out of that.

John Jantsch: Yeah. I think that times have changed a bit, but a lot of high school kids, the creatives were the nerds. You know?

Allen Gannett: Yeah.

John Jantsch: Of course, now they’re running the world, but I think that actually … Somebody who was really … peer pressure stopped them from pursuing kind of an interest because of that. I think that’s the real shame-

Allen Gannett: Exactly.

John Jantsch: … in not kind of bringing this out as, hey, this is the cool kids or whatever we want to call it now, so it’s interesting, as I heard you talk about that, I wonder what the implications are just for hiring in general.

Allen Gannett: I think I tend to very much focus hiring around potential. I tend not to be … and this is obviously as a young CEO. I think, also, you just tend to be a little more experience skeptical because you also see the downsides of experience around people having their own cognitive biases around previous experience and, “This worked before, so I’m going to do that again.” I tend to think I’m much more potential-oriented. The result is we have a lot of managers who are sort of battlefield promotions, so to speak, where they’ve grown up in the organization, and I think that makes them … They know a lot of the context. They’re more loyal, all that sort of stuff. I think that’s probably the biggest change for me as a leader is just really, yeah, being willing to take more risks on who I hire.

John Jantsch: Yeah. I mean I think we need creativity out of every position, so I guess if you make that a part of the process where you’re going to, as you said, coach and teach a process of creativity or at least to bring out the creativity in everybody, then there isn’t any reason to necessarily just say, “Oh, you have a creative background.”

Allen Gannett: Exactly.

John Jantsch: Allen, tell people where they can get the book and find out more about TrackMaven and everything else you’re up to.

Allen Gannett: You can check out the book at and anywhere books are sold. Check out and for more on me.

John Jantsch: All right. Thanks, Allen. Hopefully, we’ll run into you out there in the world someday.

Allen Gannett: Bye.


WTF is dark pattern design?

If you’re a UX designer you won’t need this article to tell you about dark pattern design. But perhaps you chose to tap here out of a desire to reaffirm what you already know — to feel good about your professional expertise.

Or was it that your conscience pricked you? Go on, you can be honest… or, well, can you?

A third possibility: Perhaps an app you were using presented this article in a way that persuaded you to tap on it rather than on some other piece of digital content. And it’s those sorts of little imperceptible nudges — what to notice, where to tap/click — that we’re talking about when we talk about dark pattern design.

But not just that. The darkness comes into play because UX design choices are being selected to be intentionally deceptive. To nudge the user to give up more than they realize. Or to agree to things they probably wouldn’t if they genuinely understood the decisions they were being pushed to make.

To put it plainly, dark pattern design is deception and dishonesty by design… Still sitting comfortably?

The technique, as it’s deployed online today, often feeds off and exploits the fact that content-overloaded consumers skim-read stuff they’re presented with, especially if it looks dull and they’re in the midst of trying to do something else — like sign up to a service, complete a purchase, get to something they actually want to look at, or find out what their friends have sent them.

Manipulative timing is a key element of dark pattern design. In other words, when you see a notification can determine how you respond to it, or whether you even notice it. Interruptions generally pile on the cognitive overload, and deceptive design deploys them to make it harder for a web user to be fully in control of their faculties during a key moment of decision.

Dark patterns used to obtain consent to collect users’ personal data often combine unwelcome interruption with a built-in escape route, offering an easy way to get rid of the dull-looking menu getting in the way of what you’re actually trying to do.

Brightly colored ‘agree and continue’ buttons are a recurring feature of this flavor of dark pattern design. These eye-catching signposts appear near universally across consent flows — to encourage users not to read or contemplate a service’s terms and conditions, and therefore not to understand what they’re agreeing to.

It’s ‘consent’ by the spotlit backdoor.

This works because humans are lazy in the face of boring and/or complex looking stuff. And because too much information easily overwhelms. Most people will take the path of least resistance. Especially if it’s being reassuringly plated up for them in handy, push-button form.

At the same time, dark pattern design will ensure the opt-out, if there is one, is near invisible: greyscale text on a grey background is the usual choice.
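The “greyscale text on a grey background” trick can actually be quantified. The WCAG accessibility guidelines define a contrast ratio between a text color and its background; normal body text is supposed to hit at least 4.5:1, and the washed-out opt-outs described here typically land far below that. Here is a minimal sketch of that calculation (the hex colors are hypothetical examples, not taken from any real consent screen):

```python
def _channel(c8: int) -> float:
    """Linearize one 8-bit sRGB channel per the WCAG 2.x formula."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(hex_color: str) -> float:
    """Relative luminance of a '#rrggbb' color, 0.0 (black) to 1.0 (white)."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) for i in (0, 2, 4))
    return 0.2126 * _channel(r) + 0.7152 * _channel(g) + 0.0722 * _channel(b)

def contrast(fg: str, bg: str) -> float:
    """WCAG contrast ratio, ranging from 1:1 (identical) to 21:1 (black on white)."""
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible contrast, 21:1.
print(round(contrast("#000000", "#ffffff"), 2))  # 21.0
# Mid-grey text on a light grey background, the classic buried opt-out,
# falls well below the 4.5:1 minimum for readable body text.
print(round(contrast("#999999", "#eeeeee"), 2))
```

A designer running this check on their own consent flow would see immediately whether the opt-out is being typographically buried.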

Some deceptive designs even include a call to action displayed on the colorful button they do want you to press — with text that says something like ‘Okay, looks great!’ — to further push a decision.

Likewise, the less visible opt out option might use a negative suggestion to imply you’re going to miss out on something or are risking bad stuff happening by clicking here.

The horrible truth is that deceptive designs can be awfully easy to paint.

Where T&Cs are concerned, it really is shooting fish in a barrel, because humans hate being bored or confused and there are countless ways to make decisions look off-puttingly boring or complex: presenting reams of impenetrable legalese in tiny greyscale lettering so no one will bother reading it, combined with defaults set to opt in when people click ‘ok’; deploying intentionally confusing phrasing and/or confusing button/toggle design that makes it impossible for the user to be sure what’s on and what’s off (and thus what’s an opt-out and what’s an opt-in), or even whether opting out might actually mean opting into something you really don’t want…

Friction is another key tool of this dark art: for example, designs that require lots more clicks/taps and interactions if you want to opt out, such as toggles for every single data-share transaction, potentially running to hundreds of individual controls a user has to tap on, versus just a few taps or even a single button to agree to everything. The weighting is intentionally all one way. And it’s not in the consumer’s favor.

Deceptive designs can also make it appear that opting out is not even possible. Such as default opting users in to sharing their data and, if they try to find a way to opt out, requiring they locate a hard-to-spot alternative click — and then also requiring they scroll to the bottom of lengthy T&Cs to unearth a buried toggle where they can in fact opt out.

Facebook used that technique to carry out a major data heist by linking WhatsApp users’ accounts with Facebook accounts in 2016. Despite prior claims that such a privacy u-turn could never happen. The vast majority of WhatsApp users likely never realized they could say no — let alone understood the privacy implications of consenting to their accounts being linked.

Ecommerce sites also sometimes present an optional (priced) add-on in a way that makes it appear to be an obligatory part of the transaction — such as using a brightly colored ‘continue’ button during a flight checkout that also silently bundles in an optional extra like insurance, instead of plainly asking people whether they want to buy it.

Or using pre-selected checkboxes to sneak low-cost items or a small charity donation into a basket while a user is busy going through the checkout flow — meaning many customers won’t notice until after the purchase has been made.
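To make the mechanics concrete, here is a minimal sketch — all names and prices are hypothetical — of why the pre-ticked add-on works so well for the seller: the default does the selling, and doing nothing costs the shopper money.

```python
# Illustrative sketch (hypothetical names and prices): how a pre-ticked
# add-on quietly inflates a basket total unless the shopper actively
# unticks the box.

def basket_total(items, addons, preselected=True):
    """Sum the basket; pre-selected add-ons count unless deselected."""
    total = sum(items)
    for price, deselected in addons:
        # The dark pattern: the box starts ticked, so inaction means paying.
        if preselected and not deselected:
            total += price
    return total

flight = [120.00]
insurance = [(14.99, False)]  # the shopper never touched the checkbox

print(basket_total(flight, insurance))                     # 134.99
print(basket_total(flight, insurance, preselected=False))  # 120.0
```

The honest design is simply the `preselected=False` path: the extra is only charged if the customer affirmatively ticks it.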

Airlines have also been caught using deceptive design to upsell pricier options, such as by obscuring cheaper flights and/or masking prices so it’s harder to figure out what the most cost effective choice actually is.

Dark patterns to thwart attempts to unsubscribe are horribly, horribly common in email marketing. Such as an unsubscribe UX that requires you to click a ridiculous number of times and keep reaffirming that yes, you really do want out.

Often these additional screens are deceptively designed to resemble the ‘unsubscribe successful’ screens that people expect to see when they’ve pulled the marketing hooks out. But if you look very closely, at the typically very tiny lettering, you’ll see they’re actually still asking if you want to unsubscribe. The trick is to get you not to unsubscribe by making you think you already have.

Another oft-used deceptive design aims to manipulate online consent flows by presenting a few selectively biased examples, giving the illusion of helpful context around a decision. In fact this is a turbocharged attempt to manipulate the user by presenting a self-servingly skewed view that is in no way a full and balanced picture of the consequences of consenting.

At best it’s disingenuous. More plainly it’s deceptive and dishonest.

Take one example of such selective framing, used during a Facebook consent flow designed to encourage European users to switch on its face recognition technology: clicking ‘continue’ led the user to the decision screen — but only after they had been shown a biased interstitial touting the technology’s supposed benefits.

Facebook is also using emotional manipulation here, in the wording of its selective examples, by playing on people’s fears (claiming its tech will “help protect you from a stranger”) and playing on people’s sense of goodwill (claiming your consent will be helpful to people with visual impairment) — to try to squeeze agreement by making people feel fear or guilt.

You wouldn’t like this kind of emotionally manipulative behavior if a human was doing it to you. But Facebook frequently tries to manipulate its users’ feelings to get them to behave how it wants.

For instance, to push users to post more content — such as by generating an artificial slideshow of “memories” from your profile and a friend’s profile, and then suggesting you share this unasked-for content on your timeline (pushing you to do so because, well, what’s your friend going to think if you choose not to share it?). Of course this serves its business interests, because more content posted to Facebook generates more engagement and thus more ad views.

Or — in a last ditch attempt to prevent a person from deleting their account — Facebook has been known to use the names and photos of their Facebook friends to claim such and such a person will “miss you” if you leave the service. So it’s suddenly conflating leaving Facebook with abandoning your friends.

Distraction is another deceptive design technique deployed to sneak more from the user than they realize — for example, cutesy-looking cartoons served up to make you feel warm and fluffy about a brand, such as when it’s periodically asking you to review your privacy settings.

Again, Facebook uses this technique. The cartoony look and feel around its privacy review process is designed to make you feel reassured about giving the company more of your data.

You could even argue that Google’s entire brand is a dark pattern design: Childishly colored and sounding, it suggests something safe and fun. Playful even. The feelings it generates — and thus the work it’s doing — bear no relation to the business the company is actually in: Surveillance and people tracking to persuade you to buy things.

Another example of dark pattern design: Notifications that pop up just as you’re contemplating purchasing a flight or hotel room, say, or looking at a pair of shoes — which urge you to “hurry!” as there’s only X number of seats or pairs left.

This plays on people’s FOMO, trying to rush a transaction by making a potential customer feel like they don’t have time to think about it or do more research — and thus thwart the more rational and informed decision they might otherwise have made.

The kicker is there’s no way to know if there really were just two seats left at that price. Much like the ghost cars Uber was caught displaying in its app — which it claimed were for illustrative purposes, rather than being exactly accurate depictions of cars available to hail — web users are left having to trust that what they’re being told is genuinely true.

But why should you trust companies that are intentionally trying to mislead you?

Dark patterns point to an ethical vacuum

The phrase ‘dark pattern design’ is pretty antique in Internet terms, though you’ll likely have heard it bandied around quite a bit of late. Wikipedia credits UX designer Harry Brignull with the coinage, back in 2010, when he registered a website to chronicle and call out the practice as unethical.

“Dark patterns tend to perform very well in A/B and multivariate tests simply because a design that tricks users into doing something is likely to achieve more conversions than one that allows users to make an informed decision,” wrote Brignull in 2011 — highlighting exactly why web designers were skewing towards being so tricksy: Superficially it works. The anger and mistrust come later.
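Brignull’s point is easy to see with toy numbers. In a raw conversion-rate comparison — the figures below are invented — the deceptive variant “wins”, because the metric being optimized never captures the anger and mistrust that come later.

```python
# Toy A/B comparison (made-up numbers): judged on conversions alone,
# a deceptive consent design looks like the clear winner.

def conversion_rate(conversions, visitors):
    """Fraction of visitors who completed the tracked action."""
    return conversions / visitors

honest = conversion_rate(48, 1000)      # variant A: plainly asked opt-in
deceptive = conversion_rate(230, 1000)  # variant B: pre-ticked, buried opt-out

print(f"honest: {honest:.1%}, deceptive: {deceptive:.1%}")
# The test dashboard crowns variant B — while measuring nothing about
# whether users understood, or will later resent, what they agreed to.
```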

Close to a decade later, Brignull’s website is still valiantly calling out deceptive design — perhaps its ‘hall of shame’ should be renamed the hall of eternal shame. (And yes, before you point it out, you can indeed find brands owned by TechCrunch’s parent entity Oath among those being called out for dark pattern design… It’s fair to say that dark pattern consent flows are shamefully widespread among media entities, many of which aim to monetize free content with data-thirsty ad targeting.)

Of course the underlying concept of deceptive design has roots that run right through human history. See, for example, the original Trojan horse. (A sort of ‘reverse’ dark pattern design — given the Greeks built an intentionally eye-catching spectacle to pique the Trojans’ curiosity, getting them to lower their guard and take it into the walled city, allowing the fatal trap to be sprung.)

Basically, the more tools that humans have built, the more possibilities they’ve found for pulling the wool over other people’s eyes. The Internet just kind of supercharges the practice and amplifies the associated ethical concerns because deception can be carried out remotely and at vast, vast scale. Here the people lying to you don’t even have to risk a twinge of personal guilt because they don’t have to look into your eyes while they’re doing it.

Nowadays falling foul of dark pattern design most often means you’ll have unwittingly agreed to your personal data being harvested and shared with a very large number of data brokers who profit from background trading people’s information — without making it clear they’re doing so nor what exactly they’re doing to turn your data into their gold. So, yes, you are paying for free consumer services with your privacy.

Another aspect of dark pattern design has been bent towards encouraging Internet users to form addictive habits attached to apps and services. Often these kind of addiction forming dark patterns are less visually obvious on a screen — unless you start counting the number of notifications you’re being plied with, or the emotional blackmail triggers you’re feeling to send a message for a ‘friendversary’, or not miss your turn in a ‘streak game’.

This is the Nir Eyal ‘hooked’ school of product design. Which has actually run into a bit of a backlash of late, with big tech now competing — at least superficially — to offer so-called ‘digital well-being’ tools to let users unhook. Yet these are tools the platforms are still very much in control of. So there’s no chance you’re going to be encouraged to abandon their service altogether.

Dark pattern design can also cost you money directly — for example, if you get tricked into signing up for or continuing a subscription you didn’t really want. Such blatantly egregious subscription deceptions are harder to get away with, though, because consumers soon notice they’re getting stung for $50 a month they never intended to spend.

That’s not to say ecommerce is clean of deceptive crimes now. The dark patterns have generally just got a bit more subtle. Pushing you to transact faster than you might otherwise, say, or upselling stuff you don’t really need.

Although consumers will usually realize they’ve been sold something they didn’t want or need eventually. Which is why deceptive design isn’t a sustainable business strategy, even setting aside ethical concerns.

In short, it’s short term thinking at the expense of reputation and brand loyalty. Especially as consumers now have plenty of online platforms where they can vent and denounce brands that have tricked them. So trick your customers at your peril.

That said, it takes longer for people to realize their privacy is being sold down the river — if they realize at all. Which is why dark pattern design has become such a core enabling tool for the vast, non-consumer-facing ad tech and data brokering industry that’s grown fat by quietly sucking on people’s data.

Think of it as a bloated vampire octopus wrapped invisibly around the consumer web, using its myriad tentacles and suckers to continuously manipulate decisions and close down user agency in order to keep data flowing — with all the A/B testing techniques and gamification tools it needs to win.

“It’s become substantially worse,” agrees Brignull, discussing the practice he began critically chronicling almost a decade ago. “Tech companies are constantly in the international news for unethical behavior. This wasn’t the case 5-6 years ago. Their use of dark patterns is the tip of the iceberg. Unethical UI is a tiny thing compared to unethical business strategy.”

“UX design can be described as the way a business chooses to behave towards its customers,” he adds, saying that deceptive web design is therefore merely symptomatic of a deeper Internet malaise.

He argues the underlying issue is really about “ethical behavior in US society in general”.

The deceitful obfuscation of commercial intention certainly runs all the way through the data brokering and ad tech industries that sit behind much of the ‘free’ consumer Internet. Here consumers have plainly been kept in the dark so they cannot see and object to how their personal information is being handed around, sliced and diced, and used to try to manipulate them.

From an ad tech perspective, the concern is that manipulation doesn’t work when it’s obvious. And the goal of targeted advertising is to manipulate people’s decisions based on intelligence about them gleaned via clandestine surveillance of their online activity (so inferring who they are via their data). This might be a purchase decision. Equally it might be a vote.

The stakes have been raised considerably now that data mining and behavioral profiling are being used at scale to try to influence democratic processes.

So it’s not surprising that Facebook is so coy about explaining why a certain user on its platform is seeing a specific advert. Because if the huge surveillance operation underpinning the algorithmic decision to serve a particular ad was made clear, the person seeing it might feel manipulated. And then they would probably be less inclined to look favorably upon the brand they were being urged to buy. Or the political opinion they were being pushed to form. And Facebook’s ad tech business stands to suffer.

The dark pattern design that’s trying to nudge you into handing over your personal information is, as Brignull says, just the tip of a vast and shadowy industry that trades on deception and manipulation by design — because it relies on the lie that people don’t care about their privacy.

But people clearly do care about privacy. Just look at the lengths to which ad tech entities go to obfuscate and deceive consumers about how their data is being collected and used. If people don’t mind companies spying on them, why not just tell them plainly it’s happening?

And if people were really cool about sharing their personal and private information with anyone, and totally fine about being tracked everywhere they go and having a record kept of all the people they know and have relationships with, why would the ad tech industry need to spy on them in the first place? They could just ask up front for all your passwords.

The deception enabled by dark pattern design doesn’t just erode privacy, putting web users under pervasive, clandestine surveillance with all its chilling effects; it also risks enabling damaging discrimination at scale. Non-transparent decisions, made off the back of inferences gleaned from data taken without people’s consent, can mean that — for example — only certain types of people are shown certain types of offers and prices, while others are not.

Facebook was forced to make changes to its ad platform after it was shown that one of its ad-targeting categories, ‘ethnic affinity’ — aka Facebook users whose online activity indicates an interest in “content relating to particular ethnic communities” — could be used to run housing and employment ads that discriminate against protected groups.

More recently, the major political ad scandals — Kremlin-backed disinformation campaigns targeting the US and other countries via Facebook’s platform, and the massive user data heist in which the controversial political consultancy Cambridge Analytica deployed quiz apps to improperly suck out people’s data and build psychographic profiles for political ad targeting — have shone a spotlight on the risks that flow from platforms that operate by systematically keeping their users in the dark.

As a result of these scandals, Facebook has started offering a level of disclosure around who is paying for and running some of the ads on its platform. But plenty of aspects of its platform and operations remain shrouded. Even those components that are being opened up a bit are still obscured from view of the majority of users — thanks to the company’s continued use of dark patterns to manipulate people into acceptance without actual understanding.

And yet while dark pattern design has been the slickly successful oil in the engine of the ad tech industry for years, allowing it to get away with so much consent-less background data processing, some of the sector’s shadier practices are gradually being illuminated and shut down — including as a consequence of shoddy security: with so many companies involved in trading and mining people’s data, there are simply more opportunities for data to leak.

Laws around privacy are also being tightened. And changes to EU data protection rules are a key reason why dark pattern design has bubbled back up into online conversations lately. The practice is under far greater legal threat now as GDPR tightens the rules around consent.

This week a study by the Norwegian Consumer Council criticized Facebook and Google for systematically deploying design choices that nudge people towards making decisions which negatively affect their own privacy — such as data sharing defaults, and friction injected into the process of opting out so that fewer people will.

Another manipulative design decision flagged by the report is especially illustrative of the deceptive levels to which companies will stoop to get users to do what they want — with the watchdog pointing out how Facebook paints fake red dots onto its UI in the midst of consent decision flows in order to encourage the user to think they have a message or a notification. Thereby rushing people to agree without reading any small print.

Fair and ethical design is design that requires people to opt in affirmatively to any actions that benefit the commercial service at the expense of the user’s interests. Yet all too often it’s the other way around: Web users have to go through sweating toil and effort to try to safeguard their information or avoid being stung for something they don’t want.

You might think the types of personal data that Facebook harvests are trivial — and so wonder what’s the big deal if the company is using deceptive design to obtain people’s consent? But the purposes to which people’s information can be put are not at all trivial — as the Cambridge Analytica scandal illustrates.

One of Facebook’s recent data grabs in Europe also underlines how it’s using dark patterns on its platform to attempt to normalize increasingly privacy hostile technologies.

Earlier this year it began asking Europeans for consent to processing their selfies for facial recognition purposes — a highly controversial technology that regulatory intervention in the region had previously blocked. Yet now, as a consequence of Facebook’s confidence in crafting manipulative consent flows, it’s essentially figured out a way to circumvent EU citizens’ fundamental rights — by socially engineering Europeans to override their own best interests.

Nor is this type of manipulation exclusively meted out to certain, more tightly regulated geographies; Facebook is treating all its users like this. European users just received its latest set of dark pattern designs first, ahead of a global rollout, thanks to the bloc’s new data protection regulation coming into force on May 25.

CEO Mark Zuckerberg even went so far as to gloat about the success of this deceptive modus operandi on stage at a European conference in May — claiming the “vast majority” of users were “willingly” opting in to targeted advertising via its new consent flow.

In truth the consent flow is manipulative, and Facebook does not even offer an absolute opt out of targeted advertising on its platform. The ‘choice’ it gives users is to agree to its targeted advertising or to delete their account and leave the service entirely. Which isn’t really a choice when balanced against the power of Facebook’s platform and the network effect it exploits to keep people using its service.

‘Forced consent’ is an early target for privacy campaign groups, with GDPR opening the door, in certain EU member states, to collective enforcement of individuals’ data rights.

Of course if you read Facebook or Google’s PR around privacy they claim to care immensely — saying they give people all the controls they need to manage and control access to their information. But controls with dishonest instructions on how to use them aren’t really controls at all. And opt outs that don’t exist smell rather more like a lock in. 

Platforms certainly remain firmly in the driving seat because — until a court tells them otherwise — they control not just the buttons and levers but the positions, sizes, colors, and ultimately the presence or otherwise of the buttons and levers.

And because these big tech ad giants have grown so dominant as services, they can wield huge power over their users — even tracking non-users across large swathes of the rest of the Internet, and giving them even fewer controls than the people who are de facto locked in (even if, technically speaking, service users might be able to delete an account or abandon a staple of the consumer web).

Big tech platforms can also leverage their size to analyze user behavior at vast scale and A/B test the dark pattern designs that trick people the best. So the notion that users have been willingly agreeing en masse to give up their privacy remains the big lie squatting atop the consumer Internet.

People are merely choosing the choice that’s being pre-selected for them.

That’s where things stand as is. But the future is looking increasingly murky for dark pattern design.

Change is in the air.

What’s changed is there are attempts to legally challenge digital disingenuousness, especially around privacy and consent. This after multiple scandals have highlighted some very shady practices being enabled by consent-less data-mining — making both the risks and the erosion of users’ rights clear.

Europe’s GDPR has tightened requirements around consent — and is creating the possibility of redress via penalties worth the enforcement. It has already caused some data-dealing businesses to pull the plug entirely or exit Europe.

New laws with teeth make legal challenges viable, which was simply not the case before. Though major industry-wide change will take time, as it will require waiting for judges and courts to rule.

“It’s a very good thing,” says Brignull of GDPR. Though he’s not yet ready to call it the death blow that deceptive design really needs, cautioning: “We’ll have to wait to see whether the bite is as strong as the bark.”

In the meanwhile, every data protection scandal ramps up public awareness about how privacy is being manhandled and abused, and the risks that flow from that — both to individuals (e.g. identity fraud) and to societies as a whole (be it election interference or more broadly attempts to foment harmful social division).

So while dark pattern design is essentially ubiquitous across the consumer web of today, the deceptive practices it has been used to shield and enable are on borrowed time. The direction of travel — and the direction of innovation — is pro-privacy, pro-user control and therefore anti-deceptive-design. Even if the most embedded practitioners are far too vested to abandon their dark arts without a fight.

What, then, does the future look like? What is ‘light pattern design’? The way forward — at least where privacy and consent are concerned — must be user centric. This means genuinely asking for permission — using honesty to win trust by enabling rather than disabling user agency.

Designs must champion usability and clarity, presenting a genuine, good faith choice. Which means no privacy-hostile defaults: So opt ins, not opt outs, and consent that is freely given because it’s based on genuine information not self-serving deception, and because it can also always be revoked at will.
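That principle — off by default, affirmative to grant, revocable at will — can be sketched as a tiny consent model. Nothing here is a real API; it is just an illustration of privacy-friendly defaults.

```python
# Minimal sketch (hypothetical model) of "light pattern" consent handling:
# nothing is granted implicitly, granting is explicit, revoking is always
# possible.

class ConsentRecord:
    def __init__(self):
        self.purposes = {}  # purpose -> bool; empty means nothing granted

    def grant(self, purpose):
        self.purposes[purpose] = True   # an affirmative opt-in

    def revoke(self, purpose):
        self.purposes[purpose] = False  # withdrawal at will

    def allowed(self, purpose):
        # Absence of a record means no consent — never a hidden default.
        return self.purposes.get(purpose, False)

c = ConsentRecord()
print(c.allowed("ad_targeting"))  # False — the privacy-friendly default
c.grant("ad_targeting")
print(c.allowed("ad_targeting"))  # True — only after explicit action
c.revoke("ad_targeting")
print(c.allowed("ad_targeting"))  # False — revocation always honored
```

The dark pattern equivalent inverts the default in `allowed` — which is exactly the design choice regulators are now scrutinizing.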

Design must also be empathetic. It must understand and be sensitive to diversity — offering clear options without being intentionally overwhelming. The goal is to close the perception gap between what’s being offered and what the customer thinks they’re getting.

Those who want to see a shift towards light patterns and plain dealing also point out that online transactions honestly achieved will be happier and healthier for all concerned — because they will reflect what people actually want. So rather than grabbing short term gains deceptively, companies will be laying the groundwork for brand loyalty and organic and sustainable growth.

The alternative to the light pattern path is also clear: Rising mistrust, rising anger, more scandals, and — ultimately — consumers abandoning brands and services that creep them out and make them feel used. Because no one likes feeling exploited. And even if people don’t delete an account entirely they will likely modify how they interact, sharing less, being less trusting, less engaged, seeking out alternatives that they do feel good about using.

Also inevitable if the mass deception continues: More regulation. If businesses don’t behave ethically on their own, laws will be drawn up to force change.

Because sure, you can trick people for a while. But it’s not a sustainable strategy. Just look at the political pressure now being piled on Zuckerberg by US and EU lawmakers. Deception is the long game that almost always fails in the end.

The way forward must be a new ethical deal for consumer web services — moving away from business models that monetize free access via deceptive data grabs.

This means trusting your users to put their faith in you because your business provides an innovative and honest service that people care about.

It also means rearchitecting systems to bake in privacy by design. Blockchain-based micro-payments may offer one way of opening up usage-based revenue streams that can offer an alternative or supplement to ads.

Where ad tech is concerned, there are also some interesting projects being worked on — such as the blockchain-based Brave browser which is aiming to build an ad targeting system that does local, on-device targeting (only needing to know the user’s language and a broad-brush regional location), rather than the current, cloud-based ad exchange model that’s built atop mass surveillance.

Technologists are often proud of their engineering ingenuity. But if all goes to plan, they’ll have lots more opportunities to crow about what they’ve built in future — because they won’t be too embarrassed to talk about it.

Social – TechCrunch

Mailing to New Customers and Managing Deliverability Risk

List size is an important metric for many marketers. It dictates the number of inboxes they have access to and can drive internal conversations around budgets, initiatives, and available resources. As a result, the same question is often repeated to our deliverability operations team:

How do we grow our list and mail to new users?

Today, I want to focus on the second half of that question: How do we mail to new users? It is important to understand that mailing to new email addresses comes with a unique set of challenges and pitfalls separate from those associated with general mailings. These are addresses that have never previously been included in your marketing campaigns and are inherently risky as a result. In short, brands should not forget that new users are strangers. Applying scrutiny to these addresses before considering them potential customers will do tremendous good toward protecting sender reputation.

Stranger Danger

Any new address can cause real harm to a mailing list as a potential spam trap, invalid contact, or unengaged user. To avoid reputation ramifications, the first thing a marketer should do is consider the motivation a particular user had for signing up for emails.

All acquisition channels come with their own unique drawbacks:

  • Customers who sign up in-store may not realize they are providing contact information for more than a simple receipt.
  • Shoppers seeking to collect discounts or sign-up incentives may not be interested in mailing content long term.
  • Users who complete a form may simply have been trying to get past the paywall or pop-up ad blocking their view.

All are susceptible to improperly set user expectations, and the likelihood that users have supplied false or inaccurate data is high. As such, no marketer should simply release a new address into the full scope of their email ecosystem.

Put Your Users to Work

Especially in the wake of new global privacy regulations like GDPR, implementing the correct procedures surrounding consent is critical for mailers. A confirmed opt-in allows the user to do a portion of this work for you: it requires further action from the user to confirm that they do wish to receive messages from your brand.

After signing up, a welcome email is triggered prompting this confirmation. From there, the path is clear: those who take action to complete the confirmation can be funneled into regularly scheduled campaigns; those who do not should not be.
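As a rough illustration of that flow — the helper names and confirmation URL below are invented, and a real system would persist state and expire tokens — a confirmed (double) opt-in only promotes an address to the active list once the user acts on the confirmation email:

```python
# Sketch (hypothetical helpers) of a confirmed opt-in: a new address
# joins the active mailing list only after the user clicks the
# confirmation link from the welcome email.

import secrets

pending = {}       # token -> email, awaiting confirmation
confirmed = set()  # addresses safe to include in regular campaigns

def sign_up(email):
    """Record the address as pending and build the confirmation link."""
    token = secrets.token_urlsafe(16)
    pending[token] = email
    # In a real system, this link is sent in the triggered welcome email.
    return f"https://example.com/confirm?token={token}"

def confirm(token):
    """Promote the address to the active list; returns None if unknown."""
    email = pending.pop(token, None)
    if email:
        confirmed.add(email)  # only now does it enter scheduled campaigns
    return email

link = sign_up("new.user@example.com")
token = link.split("token=")[1]
confirm(token)
print("new.user@example.com" in confirmed)  # True
```

Addresses that never confirm simply stay in `pending` and are never mailed again — which is the whole point of the safeguard.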

Shortcuts Aren’t Worth the Risk

Inevitably, there will be senders who do not have the patience for organic list growth and development. From this vantage point, list purchasing and appending can sound very appealing.

But let’s be quite clear about this:

  • Email addresses added to mailing lists should *never* be purchased.
  • Email addresses that are acquired for mailing should *never* be from appended lists.

These strategies not only go against Oracle recommendations and myriad privacy regulations, but they are also guaranteed to negatively impact your sender reputation in the eyes of ISPs. Spam traps and invalid addresses will enter your mailing stream via these methods, and spam complaints, hard bounce rates, trap hits, and unengaged users will all increase as you attempt to contact them. Spam folder placement directly correlates with these negative metrics, and an inevitable blacklisting will further destroy your inboxing rates and overall standing in the eyes of ISPs.

Once lost, mailing reputation requires weeks of pristine sending to correct. Ask yourself: Is it worth it? Instead, stick to best practices, use a confirmed opt-in for your users, and slowly release your new subscribers into your larger mailing campaigns. Your performance will be stronger as a result.

Learn how to achieve email deliverability that really delivers. Download Email Deliverability: Guide for Modern Marketers.

Email Deliverability Guide

Oracle Blogs | Oracle Marketing Cloud

The Agency Partner Directory: How Agencies & Clients Work Better Together

In my experience working with and for agencies, what rises to the surface during times of challenge is as varied as the areas of expertise in that agency: staying up on current trends and technology, keeping a full pipeline of new business or building bigger client retainers, finding and keeping talent on-staff… the list goes on.

But one thing that comes up that may surprise those not familiar with our industry (or who have spent most of their time on the client-side)—a huge focus for agencies is finding fit.

Agencies & Clients Need to Find Fit

Whether that’s in terms of finding fit in budget size or project scope—or it’s seeking out internal stakeholders and client teams that can match temperament and work style—most client relationships are not much different from what makes any relationship successful.

What’s less surprising is that clients want the same thing.

As project owner, day-to-day contact for agency workers, or member of an internal team dependent on an agency’s output — all roles I’ve filled — fit is what makes or breaks project or overall campaign success.

Clients want fit. Agencies want fit. There’s balance in the need.

We know that a mismatch in the relationship between an agency and a client, regardless of who is at fault, means an end to the work.

So what can Sprout do to connect the two?

A Two-Sided Equation With an X Factor

When we look at how we can help our varied set of customers level up their social strategies and marketing campaigns, we see that agencies and businesses alike — from the startup to the enterprise — are each looking for something the other can provide.

We’ve got the supply of agencies.

And our customers, from the startup to the enterprise, have the demand.

So how do they find each other? And how do they prioritize fit and getting to know each other before all else?

We hope to help answer these questions with the Agency Partner Directory: a resource for agencies to tell their story and be discoverable by potential clients.

The Agency Partner Directory is a powerful tool for businesses of all sizes who want to:

  • Level up their social strategy.
  • Work with a partner on day-to-day management of social campaigns, using the full power of the Sprout Social platform.
  • Match with agencies who provide not only social media services and expertise, but also a wide variety of marketing services, like content marketing, branding, SEO, paid social, and other digital marketing services.

Why Work With a Sprout Partner?

We know that clients are looking for the right fit and a level of trust. All of our Sprout-certified partners are not only ready and willing to help them build better campaigns, but also have access to a host of resources and the full functionality of the Sprout platform to keep their social strategy on the cutting edge.

So whether existing Sprout customers want to work with a trusted resource for their strategies outside of social, or a business wants to use Sprout but needs an agency with whom they can partner on strategy or day-to-day management, the Agency Partner Directory helps you find or get matched with a solution.

Check out our certified agency partners today.

And if you’re an agency interested in becoming a certified agency partner to be found by prospective clients, you can find out more information about the Agency Partner Program and reach out to us today.

This post The Agency Partner Directory: How Agencies & Clients Work Better Together originally appeared on Sprout Social.


Facebook gives US lawmakers the names of 52 firms it gave deep data access to

In a major Friday night data dump, Facebook handed Congress a ~750-page document with responses to the 2,000 or so questions it received from US lawmakers sitting on two committees in the Senate and House back in April.

The document (which condensed into a tellingly apt essence, “people data… Facebook information”, when we ran it through Word It Out's word cloud tool) would probably come in handy if you needed to put a small child to sleep, given how distressingly often Facebook repeats itself.

TextMechanic's tool spotted 3,434 lines of duplicate text in its answers, including Facebook's current favorite line to throw at politicians, where it boldly states: “Facebook is generally not opposed to regulation but wants to ensure it is the right regulation”, followed by the company offering to work with regulators like Congress “to craft the right regulations”. Riiiiight.
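TextMechanic's exact tallying method isn't documented here, but a duplicate-line count of this kind is easy to reproduce. A minimal sketch in Python, assuming one plausible definition (every occurrence of a non-blank line that appears more than once counts toward the total); the `duplicate_line_count` helper is illustrative, not the tool's actual logic:

```python
from collections import Counter

def duplicate_line_count(text: str) -> int:
    """Tally every occurrence of any non-blank line that appears more than once."""
    counts = Counter(
        line.strip() for line in text.splitlines() if line.strip()
    )
    # A line occurring n > 1 times contributes all n occurrences to the total.
    return sum(n for n in counts.values() if n > 1)

sample = "\n".join(["a", "b", "a", "c", "b", "a"])
print(duplicate_line_count(sample))  # "a" x3 + "b" x2 -> prints 5
```

Other reasonable definitions (counting only the repeats beyond the first, or ignoring whitespace differences entirely) would give different totals, which is worth keeping in mind when quoting a single number like 3,434.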

While much of what Facebook's policy staffers have inked here is an intentional nightcap made of misdirection and equivocation (with lashings of snoozy repetition), one nugget of new intel that jumps out is a long list of partners Facebook gave special data access to, via API agreements it calls “integration partnerships”.

Some names on the list have previously been reported by the New York Times. And as the newspaper pointed out last month, the problem for scandal-hit Facebook is these data-sharing arrangements appear to undermine some of its claims about how it respects privacy because users were not explicitly involved in consenting to the data sharing.

Below is the full list of 52 companies Facebook has now provided to US lawmakers, though it admits the list might not actually be comprehensive, writing: “It is possible we have not been able to identify some integrations, particularly those made during the early days of our company when our records were not centralized. It is also possible that early records may have been deleted from our system”.

The listed companies are by no means just device makers: they also include mobile carriers, software makers, security firms, and even the chip designer Qualcomm. So it's an illustrative glimpse of just how much work Facebook did to embed itself into services across the mobile web, predicated upon being able to provide so many third-party businesses with user data.

Company names below that are appended with * denote partnerships that Facebook says it is “still in the process of ending” (it notes three exceptions: Tobii, Apple and Amazon, which it says will continue beyond October 2018), while ** denotes data partnerships that will continue but without access to friends’ data.

1. Accedo
2. Acer
3. Airtel
4. Alcatel/TCL
5. Alibaba**
6. Amazon*
7. Apple*
8. AT&T
9. Blackberry
10. Dell
11. DNP
12. Docomo
13. Garmin
14. Gemalto*
15. HP/Palm
16. HTC
17. Huawei
18. INQ
19. Kodak
20. LG
21. MediaTek/MStar
22. Microsoft
23. Miyowa/Hape Esia
24. Motorola/Lenovo
25. Mozilla**
26. Myriad*
27. Nexian
28. Nokia*
29. Nuance
30. O2
31. Opentech ENG
32. Opera Software**
33. OPPO
34. Orange
35. Pantech
36. PocketNet
37. Qualcomm
38. Samsung*
39. Sony
40. Sprint
41. T-Mobile
42. TIM
43. Tobii*
44. U2topia*
45. Verisign
46. Verizon
47. Virgin Mobile
48. Vodafone*
49. Warner Bros
50. Western Digital
51. Yahoo*
52. Zing Mobile*

NB: Number 46 on the list — Verizon — is the parent company of TechCrunch’s parent, Oath. 

Last month the New York Times revealed that Facebook had given device makers deep access to data on Facebook users and their friends, via device-integrated APIs.

The number and scope of the partnerships raised fresh privacy concerns about how Facebook (man)handles user data, casting doubt on its repeat claims to have “locked down the platform” in 2014/15, when it changed some of its APIs to prevent other developers doing a ‘Kogan’ and sucking out masses of data via its Friends API.

After the Cambridge Analytica story (re)surfaced in March, Facebook's crisis PR response to the snowballing privacy scandal was to claim it had battened down access to user data back in 2015, when it shuttered the friends' data API.

But the scope of its own data-sharing arrangements with other companies shows it was in fact continuing to quietly pass over people's data (including friend data) to a large number of partners of its choosing, without obtaining users' express consent.

This is especially pertinent because of a 2011 consent decree that Facebook signed with the Federal Trade Commission, agreeing it would avoid misrepresenting the privacy or security of user data, to settle charges that it had deceived its customers by “telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public”.

Yet, multiple years later, Facebook had inked data-sharing API integrations with ~50 companies that afforded ongoing access to Facebook users' data, and apparently only started to wind down some of these partnerships this April, right after Cambridge Analytica blew up into a major global scandal.

Facebook says in the document that 38 of the 52 partnerships have now been discontinued (though it does not specify exactly when they were ended), adding that an additional seven will be shut down by the end of July, and another one will be closed by the end of October.

“Three partnerships will continue: (1) Tobii, an accessibility app that enables people with ALS to access Facebook; (2) Amazon; and (3) Apple, with whom we have agreements that extend beyond October 2018,” it adds, omitting to say what exactly Amazon does with Facebook data. (Perhaps an integration with its Fire line of mobile devices.)

“We also will continue partnerships with Mozilla, Alibaba and Opera — which enable people to receive notifications about Facebook in their web browsers — but their integrations will not have access to friends’ data,” it adds, so presumably the three companies were previously getting access to friend data.

Facebook claims its integration partnerships “differed significantly” from third-party app developers’ use of its published APIs to build apps for consumers on its developer platform — because its staff were approving the applications its partners could build. 

It further says partners “were not permitted to use data received through Facebook APIs for independent purposes unrelated to the approved integration without user consent”, specifying that staff in its partnerships and engineering teams managed the arrangements, including by reviewing and approving how licensed APIs were integrated into the partner's products.

“By contrast, our Developer Operations (“Dev Ops”) team oversees third-party developers, which determine for themselves how they will build their apps — subject to Facebook’s general Platform Policies and Dev Ops approval for apps seeking permission to use most published APIs,” it writes, essentially admitting it was running a two-tier system related to user data access, with third party developers on its platform not being subject to the same kind of in-house management and reviews as its chosen integration partners. 

Aleksandr Kogan, the Cambridge University academic who made the quiz app which harvested Facebook users’ data in 2014 so that he could sell the information to Cambridge Analytica, has argued Facebook did not have a valid developer policy because it was not actively enforcing its T&Cs.

And certainly the company is admitting it made fewer checks on what developers were doing with user data vs companies it selectively gave access to.

In further responses to US lawmakers (who had asked Facebook to explain what “integrated with” means vis-à-vis its 2016 data policy, where it stated: “When you use third-party apps, websites or other services that use, or are integrated with, our Services, they may receive information about what you post or share”), Facebook also makes a point of writing that integration partnerships were “typically… defined by specially-negotiated agreements that provided limited rights to use APIs to create specific integrations approved by Facebook, not independent purposes determined by the partner”.

The word “typically” is a notable choice there, suggesting some of these partnerships were rather more bounded than others. Though Facebook does not go into further detail.

We asked the company for more information (such as whether it will be listing the purposes for each of these integration partnerships, including the types of user and friends data each partner received, and the dates and durations of each arrangement) but a spokesman said it has nothing more to add at the moment.

In the document, Facebook lists four uses of people's information as being among the purposes its integration partners had for the data:

  • Some partners built versions of its app for their device, OS or product that “replicated essential Facebook features that we built directly on the Facebook website and in our mobile apps”.
  • Some built social networking ‘hubs’, which aggregated messages from multiple social services.
  • Some built syncing integrations that let people sync their Facebook data with their device, in order to integrate Facebook features on it (such as uploading pictures to Facebook, downloading their Facebook pictures to their phones, or pulling their Facebook contacts into their address book).
  • Some developed USSD services to provide Facebook notifications and content via text message, such as for feature phone users without mobile Internet access.

So we can but speculate what other Facebook-approved integrations were built as a result of the partnerships.

Also notably, Facebook does not specify exactly when the integration partnerships began, writing instead that they:

“[B]egan before iOS and Android had become the predominant ways people around the world accessed the internet on their mobile phones. People went online using a wide variety of text-only phones, feature phones, and early smartphones with varying capabilities. In that environment, the demand for internet services like Facebook, Twitter, and YouTube outpaced our industry’s ability to build versions of our services that worked on every phone and operating system. As a solution, internet companies often engaged device manufacturers and other partners to build ways for people to access their experiences on a range of devices and products.”

Which sounds like a fairly plausible explanation for why some of the data-sharing arrangements began. What’s less clear is why many were apparently continuing until just a few weeks ago. 

Facebook faces another regulatory risk related to its user data-sharing arrangements because it’s a signatory of the EU-US Privacy Shield, using the data transfer mechanism to authorize exporting hundreds of millions of EU users’ information to the US for processing.

However, legal pressure has been mounting on this mechanism for some time. And just last month an EU parliament committee called for it to be suspended, voicing specific concerns about the Facebook Cambridge Analytica scandal and saying companies that fail to safeguard EU citizens' data should be removed from Privacy Shield.

Facebook remains a signatory of Privacy Shield for now but the company can be removed by US oversight bodies if it is deemed not to have fulfilled its obligations to safeguard EU users’ data.

And in March the FTC confirmed it had opened a fresh investigation into Facebook's privacy practices, following revelations that data on tens of millions of Facebook users had been passed to third parties without most people's knowledge or consent.

If the FTC finds Facebook violated the consent decree because it mishandled people's data, there would be huge pressure for Facebook to be removed from Privacy Shield, which would mean the company would have to scramble to put alternative legal mechanisms in place to transfer EU users' data, or potentially risk major fines, given the EU's new GDPR data protection regime.

Facebook's current use of one alternative data transfer method, Standard Contractual Clauses, is also already under separate legal challenge.

Extra data-sucking time for all sorts of apps

In the document, Facebook also lists the 61 developers (below) it granted a data-access extension after ending the friends data API in May 2015, saying they were given a “one-time extension of less than six months beyond May 2015 to come into compliance”, with one exception: Serotek, an accessibility app, which was given an eight-month extension, to January 2016.

Among the developers getting extra time to suck on Facebook friend data were dating apps, chat apps, games, music streaming apps, data analytics apps, news aggregator apps to name a few…

1. ABCSocial, ABC Television Network
2. Actiance
3. Adium
4. Anschutz Entertainment Group
5. AOL
6. Arktan / Janrain
7. Audi
8. biNu
9. Cerulean Studios
10. Coffee Meets Bagel
11. DataSift
12. Dingtone
13. Double Down Interactive
14. Endomondo
15. Flowics, Zauber Labs
16. Garena
17. Global Relay Communications
18. Hearsay Systems
19. Hinge
20. HiQ International AB
21. Hootsuite
22. Krush Technologies
23. LiveFyre / Adobe Systems
25. MiggoChat
26. Monterosa Productions Limited
27. AS
28. NIKE
29. Nimbuzz
30. NISSAN MOTOR CO / Airbiquity Inc.
31. Oracle
32. Panasonic
33. Playtika
34. Postano, TigerLogic Corporation
35. Raidcall
36. RealNetworks, Inc.
37. RegED / Stoneriver RegED
38. Reliance/Saavn
39. Rovi
40. Salesforce/Radian6
41. SeaChange International
42. Serotek Corp.
43. Shape Services
44. Smarsh
45. Snap
46. Social SafeGuard
47. Socialeyes LLC
48. SocialNewsdesk
49. Socialware / Proofpoint
50. SoundayMusic
51. Spotify
52. Spredfast
53. Sprinklr / Sprinklr Japan
54. Storyful Limited / News Corp
55. Tagboard
56. Telescope
57. Tradable Bits, TradableBits Media Inc.
58. UPS
59. Vidpresso
60. Vizrt Group AS
61. Wayin

NB: Number 5 on the list — AOL — is a former brand of TechCrunch’s parent company, Oath. 

Facebook also reveals that as part of its ongoing app audit, announced in the wake of the Cambridge Analytica scandal, it has found a “very small” number of companies “that theoretically could have accessed limited friends’ data as a result of API access that they received in the context of a beta test”.

It names these as:

1. Activision / Bizarre Creations
2. Fun2Shoot
3. Golden Union Co.
4. IQ Zone / PicDial
5. PeekSocial

“We are not aware that any of this handful of companies used this access, and we have now revoked any technical capability they may have had to access any friends’ data,” it adds.

Update: Facebook has just announced some additional API restrictions which it says it’s putting in place “to better protect people’s information”.  It’s detailed the changes here.

It says it will work with developers as it deprecates or changes APIs.

Social – TechCrunch

Customer Experience, Where It is Now, and Where It Will Go Next

Leo Bertelli was gracious enough to invite me for an interview on the state and future of CX.  The conversation was so engrossing that I wanted to share it with you here. I hope it helps you!

What do you think have been the most significant advances in CX in previous years?

You can’t talk about customer experience if you don’t appreciate what the word experience means – and as the author of a book on experience design, I struggled a lot to find some satisfying definitions and build on them.

You look at all these experts wielding their CX wands, trying to inspire companies to change; most of them have never taken a step back and asked themselves: what does experience mean, what do customers actually experience and what do we want them to experience? These are the most important questions to start your conversation.

A person’s experience can be summarized in a series of moments – if you want to understand the entire experience you have to address every feeling that is expressed in a certain moment instead of looking at it as a whole. If these moments are left to chance the picture is distorted – that’s what I think we’re missing at the moment.

As an analyst, anthropologist and a human being, I think the best experiences in our lives are recorded as memories – and that’s what we like talking about.

The main challenges around CX can seem obvious at the moment. What do you think is the most under-acknowledged challenge that brands are facing?

What’s happening today is many brands are trying to either improve their moments or modernize them. That could be through mobile, chatbots, AI, even VR and AR.

We tend to associate these new features with good experiences – so we do our best to make them frictionless and delightful. But I think we need to understand the meaning of ‘experience’ before we start executing – I want people to feel the moments first, and then design the experience around them.

Last year I wrote a series of follow-on articles after publishing X: The Experience When Business Meets Design, which introduces the idea of an experience style guide. We already have brand style guides in place which tell you what a brand should look like, but as consumers become more demanding and discerning, we will have to consider brands as a sum of experiences.

For example, if you go to Disneyland, you'll see the park is a manifestation of experience architecture. I've never seen a purer form of CX before, from the trashcans and the concrete to the building facades, all the way to the uniforms in the park, everything is designed to evoke a profound emotional response.

But step outside the park and you lose that depth and design of imagineering. If you go into a Disney hotel it feels like you're completely disconnected from that magical experience. It's easy to realize that the people who designed the e-commerce experience aren't the same as the ones who created their “small world”.

I think we have a lot of work to do when it comes to designing and connecting experience design – very few brands think about how it works, how people interact with each other to create these moments.

How do you design the ideal experience? Is it possible to over-engineer this?

Any experience should be natural, desirable and sought after – I don’t think you could over-engineer something that matters to people if you understand what’s important to them.

What we’re seeing at the moment is people who are trying to create something that woos you, engages you or entertains you – we are witnessing the age of manufactured engagement.

Every experience has to begin with ‘what does the individual prefer and value?’ in order to work – just think about how many emails you have to delete every day; people are trying to reach you without understanding you and that’s a problem.

How about joining the dots between online and offline experiences?

This is an area where too many brands are still paying lip service without executing their actions properly. We can’t have the conversation about online, offline and the transcendent experience between the two because we haven’t looked at what our customers value, love, desire and how they behave. Once we do, we can find opportunities to design new, better, unified experiences online and in the real world.

This is what I usually call the experience divide. We should start by asking what’s our brand promise and then mapping the customer experience against that promise. If you do this you’ll quickly unravel the experience divides your company has. You’ll find out exactly where you’re failing and why.

This divide represents the reason why consumers are bypassing traditional means to get the experience they want.

I've done a lot of work with Google in the past couple of years, for example, discussing the way in which empowered customers make decisions and how their core values motivate them to take the next step. From their standpoint, brands are at dire risk of losing the next generation of consumers.

What are the best examples of innovation in customer experience that you have seen?

To be honest I always defer this question because it’s important to understand the mission in CX, which is designing your customer’s experience.

I think a better question to ask or answer is: What is valuable to the people we’re trying to reach, and how do we create experiences that matter to them?

When we look at brands and how they are delivering the experiences, we tend to get very tactical and technological very fast. We don’t explore enough about the trends happening behind the scenes.

This is very important as it shapes our digital transformation now. We are adding to existing experiences without fixing what's broken, and without inventing what's new on a blank slate.

If we don't fix the core problems that lie underneath the experience, such as a lack of internal communication or collaboration, dated and disconnected touchpoints, or aimless journeys, we risk operating in a culture that is detached from what's actually happening and focused only on conversions, reach, and shareholder value. There's much to fix and even more opportunities for innovation on every front.

These are the questions I’m exploring at the moment – how can we fix that? How can we build something new? What can we borrow from our archetypes of experience in order to create something new, and better?

Where do you see CX growing and evolving in the next five years?

As experience architects, we have to start looking into the smallest places where CX is already benchmarked and build them into entire customer journeys.

Uber is often cited as an example of innovation and disruption. But what they've actually done, a breakthrough that's rarely mentioned, is change the benchmark for a great customer experience when using an app as an integrated service.

So in many ways, whether you’re a dentist or a bank, you’re competing with Uber in terms of experiential standards. It’s fast, transparent, personal, frictionless, and evolving.

In the next few years, we have to look at where people are having great experiences individually in order to integrate that into our services, especially outside of our industry. We will see our customers from entirely new viewpoints, and that alone will be a massive step forward.

Tell us a bit more about your latest book X: The Experience When Business Meets Design, and what you’re working on next?

It feels like the book just came out yesterday as I haven’t been able to move on from that moment. In a way, the book was transformative for myself as an author because it forced me to change – you can’t talk about designing experiences for a digital economy if you’re not willing to disrupt yourself.

I found it ironic that I was going to ask readers to challenge their own conventions and beliefs and re-imagine experiences for a new world yet I was going to do so in basic book form.

X to me was a culmination of innovation – it took me about three and a half years to turn the concept into something tangible, and the process changed every aspect of how I operate, think and write. I learned from mobile behaviors and app UX and UI and translated those insights onto paper.

To follow that book up is almost too great a task – in fact, I’ve been trying to figure out what to do next that surpasses the effort. In the meantime, X has never been more timely and I’m giving it my full support.

Brian Solis

Brian Solis is principal analyst and futurist at Altimeter, the digital analyst group at Prophet. He is a world-renowned keynote speaker and seven-time best-selling author. His latest book, X: The Experience When Business Meets Design, explores the future of brand and customer engagement through experience design. Invite him to speak at your event or bring him in to inspire and change executive mindsets.

Connect with Brian!

Twitter: @briansolis
Facebook: TheBrianSolis
LinkedIn: BrianSolis
Instagram: BrianSolis
YouTube: BrianSolisTV
Snapchat: BrianSolis

The post Customer Experience, Where It is Now, and Where It Will Go Next appeared first on Brian Solis.
