Instagram Stories now lets its 400M users add soundtracks

The right music can make a boring photo or video epic, so Instagram is equipping users with a way to add popular songs to their Stories. TechCrunch had the scoop on the music feature’s prototype in early May, and now it’s launching to iOS and Android users in six countries, including the U.S. Thanks to Facebook’s recent deals with record labels, users will be able to choose from thousands of songs from artists including Bruno Mars, Dua Lipa, Calvin Harris and Guns N’ Roses. The launch could make Instagram Stories more fun to post and watch in a way that copyrights won’t allow on Snapchat, while giving the app a way to compete with tween favorites.

And just a week after revealing its app has 1 billion monthly users, the company also announced today that Instagram Stories has 400 million daily users, up from 300 million in November and 250 million a year ago. That means Instagram Stories is growing about six times faster than Snapchat’s whole app, which only added 13 million daily users over the six months of Q4 2017 and Q1 2018 to reach 191 million.

Snapchat’s growth rate fell to its slowest pace ever last quarter amidst a despised redesign, while Instagram Stories has steadily added unique and popular features like Highlights, Superzoom and resharing of public feed posts. Instagram said last September that it had 500 million total daily users, so it’s likely that a majority of its community is now hooked on the Stories format Snapchat invented.

Instagram Stories music

“Now you can add a soundtrack to your story that fits any moment and helps you express how you’re feeling,” Instagram writes. To access the new music feature, users will be able to choose a special song sticker after they shoot a photo or video. They can search for a specific song or artist, or browse by mood, genre or what’s popular. Once they select a song, they can pick the specific snippet they want to have accompany their content. Alternatively, iOS users can switch to the Music shutter mode in the Stories camera to pick a song before they capture a photo or video so they can sync up their actions to the music. That will come to Android eventually, and the whole feature will roll out to more countries soon following today’s launch in Australia, New Zealand, France, Germany, Sweden, the UK and the U.S. [Correction: The feature is launching in version 51 of Instagram, not in 51 countries.]

When friends watch a music-equipped Story, the song will play automatically. They’ll also be able to tap on the sticker to see artist and song title info, but for now these stickers won’t link out to a musician’s Instagram page or their presence on streaming services — though that would certainly be helpful. I also suggest that Instagram create deeplinks that artists can share with their fans that automatically open the Stories camera with that song’s sticker added.

It’s easy to imagine users lip syncing to their favorite jams, adding clashing background music for comedic effect or earnestly trying to compose something emotionally powerful. Suddenly, ’Gramming from home will be a new way for people to entertain themselves and their pals.

Instagram tells me that musicians and rights holders will be compensated for the use of their songs, but wouldn’t specify how those payments would work. Facebook secured deals with all the major record labels and many independents to pave the way for this feature. Facebook has since announced that users can also add copyrighted music soundtracks to their own videos before uploading, without having them taken down as they would have been before. It’s also started testing a Lip Sync Live feature with a collection of chart-topping hits.

The big question will be whether the “thousands” of songs available through today’s launch will cover what most users want to hear; otherwise they might just be disappointed. With a few enhancements and a widened catalog, Instagram Music could become a powerful way for artists to go viral. All those shaky phone camera clips are going to start looking more like indie music videos you’ll watch till the end.

Social – TechCrunch

Twitter gets a re-org and new product head

Twitter has a new head of product in the wake of a large re-org of the company announced this week. The changes will see Twitter dividing its business into groups including engineering, product, revenue product, design and research, and more, while also elevating Kayvon Beykpour, the GM of video and former Periscope CEO, to product head.

Beykpour will replace Ed Ho, vice president of product and engineering, as Ho steps down into a part-time role. In a series of tweets, Ho explains his decision was based on a family loss, and says he hopes to return full-time in the future. He had been on leave from Twitter since May.

As Recode noted, these changes will make Beykpour the sixth exec to head up product since early 2014.

Meanwhile, Ho’s other role — head of engineering — will now be overseen by Mike Montano, who is stepping up from product engineering.

The Twitter CEO’s announcement of the changes was tweeted out on Thursday.


WTF is dark pattern design?

If you’re a UX designer you won’t need this article to tell you about dark pattern design. But perhaps you chose to tap here out of a desire to reaffirm what you already know — to feel good about your professional expertise.

Or was it that your conscience pricked you? Go on, you can be honest… or, well, can you?

A third possibility: Perhaps an app you were using presented this article in a way that persuaded you to tap on it rather than on some other piece of digital content. And it’s those sorts of little imperceptible nudges — what to notice, where to tap/click — that we’re talking about when we talk about dark pattern design.

But not just that. The darkness comes into play because UX design choices are being selected to be intentionally deceptive. To nudge the user to give up more than they realize. Or to agree to things they probably wouldn’t if they genuinely understood the decisions they were being pushed to make.

To put it plainly, dark pattern design is deception and dishonesty by design… Still sitting comfortably?

The technique, as it’s deployed online today, often feeds off and exploits the fact that content-overloaded consumers skim-read stuff they’re presented with, especially if it looks dull and they’re in the midst of trying to do something else — like sign up to a service, complete a purchase, get to something they actually want to look at, or find out what their friends have sent them.

Manipulative timing is a key element of dark pattern design. In other words, when you see a notification can determine how you respond to it — or whether you even notice it. Interruptions generally pile on the cognitive overload, and deceptive design deploys them to make it harder for a web user to be fully in control of their faculties during a key moment of decision.

Dark patterns used to obtain consent to collect users’ personal data often combine unwelcome interruption with a built-in escape route — offering an easy way to get rid of the dull-looking menu getting in the way of what you’re actually trying to do.

Brightly colored ‘agree and continue’ buttons are a recurring feature of this flavor of dark pattern design. These eye-catching signposts appear near universally across consent flows — to encourage users not to read or contemplate a service’s terms and conditions, and therefore not to understand what they’re agreeing to.

It’s ‘consent’ by the spotlit backdoor.

This works because humans are lazy in the face of boring and/or complex looking stuff. And because too much information easily overwhelms. Most people will take the path of least resistance. Especially if it’s being reassuringly plated up for them in handy, push-button form.

At the same time, dark pattern design will ensure the opt out — if there is one — is near invisible: greyscale text on a grey background is the usual choice.

Some deceptive designs even include a call to action displayed on the colorful button they do want you to press — with text that says something like ‘Okay, looks great!’ — to further push a decision.

Likewise, the less visible opt out option might use a negative suggestion to imply you’re going to miss out on something or are risking bad stuff happening by clicking here.

The horrible truth is that deceptive designs can be awfully easy to paint.

Where T&Cs are concerned, it really is shooting fish in a barrel. Humans hate being bored or confused, and there are countless ways to make decisions look off-puttingly boring or complex: presenting reams of impenetrable legalese in tiny greyscale lettering so no one will bother reading it, combined with defaults set to opt in when people click ‘ok’; deploying intentionally confusing phrasing and/or confusing button/toggle design that makes it impossible for the user to be sure what’s on and what’s off (and thus what’s an opt out and what’s an opt in), or even whether opting out might actually mean opting into something you really don’t want…

Friction is another key tool of this dark art: for example, designs that require many more clicks/taps and interactions if you want to opt out. Such as toggles for every single data share transaction — potentially running to hundreds of individual controls a user has to tap on, versus just a few taps or even a single button to agree to everything. The weighting is intentionally all one way. And it’s not in the consumer’s favor.

Deceptive designs can also make it appear that opting out is not even possible. Such as default opting users in to sharing their data and, if they try to find a way to opt out, requiring they locate a hard-to-spot alternative click — and then also requiring they scroll to the bottom of lengthy T&Cs to unearth a buried toggle where they can in fact opt out.

Facebook used that technique to carry out a major data heist by linking WhatsApp users’ accounts with Facebook accounts in 2016. Despite prior claims that such a privacy u-turn could never happen. The vast majority of WhatsApp users likely never realized they could say no — let alone understood the privacy implications of consenting to their accounts being linked.

Ecommerce sites also sometimes suggestively present an optional (priced) add-on in a way that makes it appear like an obligatory part of the transaction. Such as using a brightly colored ‘continue’ button during a flight checkout process that also automatically bundles an optional extra like insurance, instead of plainly asking people if they want to buy it.

Or using pre-selected checkboxes to sneak low-cost items or a small charity donation into a basket when a user is busy going through the checkout flow — meaning many customers won’t notice until after the purchase has been made.

Airlines have also been caught using deceptive design to upsell pricier options, such as by obscuring cheaper flights and/or masking prices so it’s harder to figure out what the most cost effective choice actually is.

Dark patterns to thwart attempts to unsubscribe are horribly, horribly common in email marketing. Such as an unsubscribe UX that requires you to click a ridiculous number of times and keep reaffirming that yes, you really do want out.

Often these additional screens are deceptively designed to resemble the ‘unsubscribe successful’ screens that people expect to see when they’ve pulled the marketing hooks out. But if you look very closely, at the typically very tiny lettering, you’ll see they’re actually still asking if you want to unsubscribe. The trick is to get you not to unsubscribe by making you think you already have.

Another oft-used deceptive design that aims to manipulate online consent flows works against users by presenting a few selectively biased examples — which gives the illusion of helpful context around a decision. But actually this is a turbocharged attempt to manipulate the user by presenting a self-servingly skewed view that is in no way a full and balanced picture of the consequences of consent.

At best it’s disingenuous. More plainly it’s deceptive and dishonest.

Here’s just one example of selectively biased examples presented during a Facebook consent flow used to encourage European users to switch on its face recognition technology. Clicking ‘continue’ leads the user to the decision screen — but only after they’ve been shown a biased interstitial.

Facebook is also using emotional manipulation here, in the wording of its selective examples, by playing on people’s fears (claiming its tech will “help protect you from a stranger”) and playing on people’s sense of goodwill (claiming your consent will be helpful to people with visual impairment) — to try to squeeze agreement by making people feel fear or guilt.

You wouldn’t like this kind of emotionally manipulative behavior if a human was doing it to you. But Facebook frequently tries to manipulate its users’ feelings to get them to behave how it wants.

For instance to push users to post more content — such as by generating an artificial slideshow of “memories” from your profile and a friend’s profile, and then suggesting you share this unasked for content on your timeline (pushing you to do so because, well, what’s your friend going to think if you choose not to share it?). Of course this serves its business interests because more content posted to Facebook generates more engagement and thus more ad views.

Or — in a last ditch attempt to prevent a person from deleting their account — Facebook has been known to use the names and photos of their Facebook friends to claim such and such a person will “miss you” if you leave the service. So it’s suddenly conflating leaving Facebook with abandoning your friends.

Distraction is another deceptive design technique deployed to sneak more from the user than they realize. For example, cutesy-looking cartoons that are served up to make you feel warm and fuzzy about a brand — such as when it’s periodically asking you to review your privacy settings.

Again, Facebook uses this technique. The cartoony look and feel around its privacy review process is designed to make you feel reassured about giving the company more of your data.

You could even argue that Google’s entire brand is a dark pattern design: Childishly colored and sounding, it suggests something safe and fun. Playful even. The feelings it generates — and thus the work it’s doing — bear no relation to the business the company is actually in: Surveillance and people tracking to persuade you to buy things.

Another example of dark pattern design: Notifications that pop up just as you’re contemplating purchasing a flight or hotel room, say, or looking at a pair of shoes — which urge you to “hurry!” as there’s only X number of seats or pairs left.

This plays on people’s FOMO, trying to rush a transaction by making a potential customer feel like they don’t have time to think about it or do more research — and thus thwart the more rational and informed decision they might otherwise have made.

The kicker is there’s no way to know if there really were just two seats left at that price. Much like the ghost cars Uber was caught displaying in its app — which it claimed were for illustrative purposes, rather than being exactly accurate depictions of cars available to hail — web users are left having to trust that what they’re being told is genuinely true.

But why should you trust companies that are intentionally trying to mislead you?

Dark patterns point to an ethical vacuum

The phrase dark pattern design is pretty antique in Internet terms, though you’ll likely have heard it being bandied around quite a bit of late. Wikipedia credits UX designer Harry Brignull with the coinage, back in 2010, when he registered a website to chronicle and call out the practice as unethical.

“Dark patterns tend to perform very well in A/B and multivariate tests simply because a design that tricks users into doing something is likely to achieve more conversions than one that allows users to make an informed decision,” wrote Brignull in 2011 — highlighting exactly why web designers were skewing towards being so tricksy: Superficially it works. The anger and mistrust come later.
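Brignull’s point about why dark patterns win A/B tests can be sketched with a toy simulation (the visitor counts and agree rates below are entirely hypothetical, purely for illustration): judged on conversions alone, a variant that nudges users to agree will beat an honest opt-in every time, because the metric never captures the mistrust that follows.

```python
import random

def simulate_consent_flow(visitors: int, agree_rate: float, seed: int) -> float:
    """Simulate a consent screen: each visitor independently agrees
    with probability `agree_rate`; returns the conversion rate."""
    rng = random.Random(seed)
    conversions = sum(rng.random() < agree_rate for _ in range(visitors))
    return conversions / visitors

# Hypothetical rates: a preselected, brightly signposted 'Agree and
# continue' button vs. an honest, unchecked opt-in users must read.
dark_rate = simulate_consent_flow(10_000, agree_rate=0.90, seed=1)
honest_rate = simulate_consent_flow(10_000, agree_rate=0.25, seed=2)

print(f"dark variant: {dark_rate:.1%}, honest variant: {honest_rate:.1%}")

# A test that optimizes conversions alone will always ship the dark
# variant -- the later anger and mistrust never enter the metric.
assert dark_rate > honest_rate
```

The point of the sketch is that the “winner” is determined entirely by which probability is higher, and a design engineered to extract agreement will always carry the higher one.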

Close to a decade later, Brignull’s website is still valiantly calling out deceptive design. So perhaps he should rename its hall of shame ‘the hall of eternal shame’. (And yes, before you point it out, you can indeed find brands owned by TechCrunch’s parent entity Oath among those being called out for dark pattern design… It’s fair to say that dark pattern consent flows are shamefully widespread among media entities, many of which aim to monetize free content with data-thirsty ad targeting.)

Of course the underlying concept of deceptive design has roots that run right through human history. See, for example, the original Trojan horse. (A sort of ‘reverse’ dark pattern design — given the Greeks built an intentionally eye-catching spectacle to pique the Trojans’ curiosity, getting them to lower their guard and take it into the walled city, allowing the fatal trap to be sprung.)

Basically, the more tools that humans have built, the more possibilities they’ve found for pulling the wool over other people’s eyes. The Internet just kind of supercharges the practice and amplifies the associated ethical concerns because deception can be carried out remotely and at vast, vast scale. Here the people lying to you don’t even have to risk a twinge of personal guilt because they don’t have to look into your eyes while they’re doing it.

Nowadays falling foul of dark pattern design most often means you’ll have unwittingly agreed to your personal data being harvested and shared with a very large number of data brokers who profit from background trading people’s information — without making it clear they’re doing so nor what exactly they’re doing to turn your data into their gold. So, yes, you are paying for free consumer services with your privacy.

Another aspect of dark pattern design has been bent towards encouraging Internet users to form addictive habits attached to apps and services. Often these kinds of addiction-forming dark patterns are less visually obvious on a screen — unless you start counting the number of notifications you’re being plied with, or the emotional blackmail triggers you’re feeling to send a message for a ‘friendversary’, or not miss your turn in a ‘streak game’.

This is the Nir Eyal ‘hooked’ school of product design. Which has actually run into a bit of a backlash of late, with big tech now competing — at least superficially — to offer so-called ‘digital well-being’ tools to let users unhook. Yet these are tools the platforms are still very much in control of. So there’s no chance you’re going to be encouraged to abandon their service altogether.

Dark pattern design can also cost you money directly. For example, if you get tricked into signing up for or continuing a subscription you didn’t really want. Though such blatantly egregious subscription deceptions are harder to get away with, because consumers soon notice they’re getting stung for $50 a month they never intended to spend.

That’s not to say ecommerce is clean of deceptive crimes now. The dark patterns have generally just got a bit more subtle. Pushing you to transact faster than you might otherwise, say, or upselling stuff you don’t really need.

Although consumers will usually realize they’ve been sold something they didn’t want or need eventually. Which is why deceptive design isn’t a sustainable business strategy, even setting aside ethical concerns.

In short, it’s short term thinking at the expense of reputation and brand loyalty. Especially as consumers now have plenty of online platforms where they can vent and denounce brands that have tricked them. So trick your customers at your peril.

That said, it takes longer for people to realize their privacy is being sold down the river. If they even realize at all. Which is why dark pattern design has become such a core enabling tool for the vast, non-consumer-facing ad tech and data brokering industry that’s grown fat by quietly sucking on people’s data.

Think of it as a bloated vampire octopus wrapped invisibly around the consumer web, using its myriad tentacles and suckers to continuously manipulate decisions and close down user agency in order to keep data flowing — with all the A/B testing techniques and gamification tools it needs to win.

“It’s become substantially worse,” agrees Brignull, discussing the practice he began critically chronicling almost a decade ago. “Tech companies are constantly in the international news for unethical behavior. This wasn’t the case 5-6 years ago. Their use of dark patterns is the tip of the iceberg. Unethical UI is a tiny thing compared to unethical business strategy.”

“UX design can be described as the way a business chooses to behave towards its customers,” he adds, saying that deceptive web design is therefore merely symptomatic of a deeper Internet malaise.

He argues the underlying issue is really about “ethical behavior in US society in general”.

The deceitful obfuscation of commercial intention certainly runs all the way through the data brokering and ad tech industries that sit behind much of the ‘free’ consumer Internet. Here consumers have plainly been kept in the dark so they cannot see and object to how their personal information is being handed around, sliced and diced, and used to try to manipulate them.

From an ad tech perspective, the concern is that manipulation doesn’t work when it’s obvious. And the goal of targeted advertising is to manipulate people’s decisions based on intelligence about them gleaned via clandestine surveillance of their online activity (so inferring who they are via their data). This might be a purchase decision. Equally it might be a vote.

The stakes have been raised considerably now that data mining and behavioral profiling are being used at scale to try to influence democratic processes.

So it’s not surprising that Facebook is so coy about explaining why a certain user on its platform is seeing a specific advert. Because if the huge surveillance operation underpinning the algorithmic decision to serve a particular ad was made clear, the person seeing it might feel manipulated. And then they would probably be less inclined to look favorably upon the brand they were being urged to buy. Or the political opinion they were being pushed to form. And Facebook’s ad tech business stands to suffer.

The dark pattern design that’s trying to nudge you to hand over your personal information is, as Brignull says, just the tip of a vast and shadowy industry that trades on deception and manipulation by design — because it relies on the lie that people don’t care about their privacy.

But people clearly do care about privacy. Just look at the lengths to which ad tech entities go to obfuscate and deceive consumers about how their data is being collected and used. If people don’t mind companies spying on them, why not just tell them plainly it’s happening?

And if people were really cool about sharing their personal and private information with anyone, and totally fine about being tracked everywhere they go and having a record kept of all the people they know and have relationships with, why would the ad tech industry need to spy on them in the first place? They could just ask up front for all your passwords.

The deception enabled by dark pattern design doesn’t just erode privacy, with the chilling effect of putting web users under pervasive, clandestine surveillance; it also risks enabling damaging discrimination at scale. Because non-transparent decisions made on the back of inferences gleaned from data taken without people’s consent can mean that — for example — only certain types of people are shown certain types of offers and prices, while others are not.

Facebook was forced to make changes to its ad platform after it was shown that an ad-targeting category called ‘ethnic affinity’ — aka Facebook users whose online activity indicates an interest in “content relating to particular ethnic communities” — could be used to run housing and employment ads that discriminate against protected groups.

More recently, the major political ad scandals relating to Kremlin-backed disinformation campaigns targeting the US and other countries via Facebook’s platform, and the massive Facebook user data heist involving the controversial political consultancy Cambridge Analytica deploying quiz apps to improperly suck out people’s data in order to build psychographic profiles for political ad targeting, have shone a spotlight on the risks that flow from platforms that operate by systematically keeping their users in the dark.

As a result of these scandals, Facebook has started offering a level of disclosure around who is paying for and running some of the ads on its platform. But plenty of aspects of its platform and operations remain shrouded. Even those components that are being opened up a bit are still obscured from view of the majority of users — thanks to the company’s continued use of dark patterns to manipulate people into acceptance without actual understanding.

And yet while dark pattern design has been the slickly successful oil in the engines of the ad tech industry for years, allowing it to get away with so much consent-less background data processing, gradually, gradually some of the shadier practices of this sector are being illuminated and shut down — including as a consequence of shoddy security practices, with so many companies involved in the trading and mining of people’s data. There are just more opportunities for data to leak. 

Laws around privacy are also being tightened. And changes to EU data protection rules are a key reason why dark pattern design has bubbled back up into online conversations lately. The practice is under far greater legal threat now as GDPR tightens the rules around consent.

This week a study by the Norwegian Consumer Council criticized Facebook and Google for systematically deploying design choices that nudge people towards making decisions which negatively affect their own privacy — such as data sharing defaults, and friction injected into the process of opting out so that fewer people will.

Another manipulative design decision flagged by the report is especially illustrative of the deceptive levels to which companies will stoop to get users to do what they want — with the watchdog pointing out how Facebook paints fake red dots onto its UI in the midst of consent decision flows in order to encourage the user to think they have a message or a notification. Thereby rushing people to agree without reading any small print.

Fair and ethical design is design that requires people to opt in affirmatively to any actions that benefit the commercial service at the expense of the user’s interests. Yet all too often it’s the other way around: Web users have to go through sweating toil and effort to try to safeguard their information or avoid being stung for something they don’t want.

You might think the types of personal data that Facebook harvests are trivial — and so wonder what’s the big deal if the company is using deceptive design to obtain people’s consent? But the purposes to which people’s information can be put are not at all trivial — as the Cambridge Analytica scandal illustrates.

One of Facebook’s recent data grabs in Europe also underlines how it’s using dark patterns on its platform to attempt to normalize increasingly privacy hostile technologies.

Earlier this year it began asking Europeans to consent to the processing of their selfies for facial recognition purposes — a highly controversial technology that regulatory intervention in the region had previously blocked. Yet now, as a consequence of Facebook’s confidence in crafting manipulative consent flows, it’s essentially figured out a way to circumvent EU citizens’ fundamental rights — by socially engineering Europeans to override their own best interests.

Nor is this type of manipulation exclusively meted out to certain, more tightly regulated geographies; Facebook is treating all its users like this. European users just received its latest set of dark pattern designs first, ahead of a global rollout, thanks to the bloc’s new data protection regulation coming into force on May 25.

CEO Mark Zuckerberg even went so far as to gloat about the success of this deceptive modus operandi on stage at a European conference in May — claiming the “vast majority” of users were “willingly” opting in to targeted advertising via its new consent flow.

In truth the consent flow is manipulative, and Facebook does not even offer an absolute opt out of targeted advertising on its platform. The ‘choice’ it gives users is to agree to its targeted advertising or to delete their account and leave the service entirely. Which isn’t really a choice when balanced against the power of Facebook’s platform and the network effect it exploits to keep people using its service.

‘Forced consent’ is an early target for privacy campaign groups making use of GDPR, which opens the door, in certain EU member states, to collective enforcement of individuals’ data rights.

Of course if you read Facebook or Google’s PR around privacy they claim to care immensely — saying they give people all the controls they need to manage and control access to their information. But controls with dishonest instructions on how to use them aren’t really controls at all. And opt outs that don’t exist smell rather more like a lock-in.

Platforms certainly remain firmly in the driving seat because — until a court tells them otherwise — they control not just the buttons and levers but the positions, sizes, colors, and ultimately the presence or otherwise of the buttons and levers.

And because these big tech ad giants have grown so dominant as services they are able to wield huge power over their users — even tracking non-users over large swathes of the rest of the Internet, and giving them even fewer controls than the people who are de facto locked in, even if, technically speaking, service users might be able to delete an account or abandon a staple of the consumer web. 

Big tech platforms can also leverage their size to analyze user behavior at vast scale and A/B test the dark pattern designs that trick people the best. So the notion that users have been willingly agreeing en masse to give up their privacy remains the big lie squatting atop the consumer Internet.

People are merely choosing the choice that’s being pre-selected for them.

That’s where things stand as is. But the future is looking increasingly murky for dark pattern design.

Change is in the air.

What’s changed is there are attempts to legally challenge digital disingenuousness, especially around privacy and consent. This after multiple scandals have highlighted some very shady practices being enabled by consent-less data-mining — making both the risks and the erosion of users’ rights clear.

Europe’s GDPR has tightened requirements around consent — and is creating the possibility of redress via penalties worth the enforcement. It has already caused some data-dealing businesses to pull the plug entirely or exit Europe.

New laws with teeth make legal challenges viable, which was simply not the case before. Though major industry-wide change will take time, as it will require waiting for judges and courts to rule.

“It’s a very good thing,” says Brignull of GDPR. Though he’s not yet ready to call it the death blow that deceptive design really needs, cautioning: “We’ll have to wait to see whether the bite is as strong as the bark.”

In the meanwhile, every data protection scandal ramps up public awareness about how privacy is being manhandled and abused, and the risks that flow from that — both to individuals (e.g. identity fraud) and to societies as a whole (be it election interference or more broadly attempts to foment harmful social division).

So while dark pattern design is essentially ubiquitous across the consumer web of today, the deceptive practices it has been used to shield and enable are on borrowed time. The direction of travel — and the direction of innovation — is pro-privacy, pro-user control and therefore anti-deceptive-design. Even if the most embedded practitioners are far too vested to abandon their dark arts without a fight.

What, then, does the future look like? What is ‘light pattern design’? The way forward — at least where privacy and consent are concerned — must be user centric. This means genuinely asking for permission — using honesty to win trust by enabling rather than disabling user agency.

Designs must champion usability and clarity, presenting a genuine, good faith choice. That means no privacy-hostile defaults: opt-ins, not opt-outs, and consent that is freely given because it’s based on genuine information, not self-serving deception — and because it can always be revoked at will.

Design must also be empathetic. It must understand and be sensitive to diversity — offering clear options without being intentionally overwhelming. The goal is to close the perception gap between what’s being offered and what the customer thinks they’re getting.

Those who want to see a shift towards light patterns and plain dealing also point out that online transactions honestly achieved will be happier and healthier for all concerned — because they will reflect what people actually want. So rather than grabbing short term gains deceptively, companies will be laying the groundwork for brand loyalty and organic and sustainable growth.

The alternative to the light pattern path is also clear: Rising mistrust, rising anger, more scandals, and — ultimately — consumers abandoning brands and services that creep them out and make them feel used. Because no one likes feeling exploited. And even if people don’t delete an account entirely they will likely modify how they interact, sharing less, being less trusting, less engaged, seeking out alternatives that they do feel good about using.

Also inevitable if the mass deception continues: More regulation. If businesses don’t behave ethically on their own, laws will be drawn up to force change.

Because sure, you can trick people for a while. But it’s not a sustainable strategy. Just look at the political pressure now being piled on Zuckerberg by US and EU lawmakers. Deception is the long game that almost always fails in the end.

The way forward must be a new ethical deal for consumer web services — moving away from business models that monetize free access via deceptive data grabs.

This means trusting your users to put their faith in you because your business provides an innovative and honest service that people care about.

It also means rearchitecting systems to bake in privacy by design. Blockchain-based micro-payments may offer one way of opening up usage-based revenue streams that can offer an alternative or supplement to ads.

Where ad tech is concerned, there are also some interesting projects being worked on — such as the blockchain-based Brave browser which is aiming to build an ad targeting system that does local, on-device targeting (only needing to know the user’s language and a broad-brush regional location), rather than the current, cloud-based ad exchange model that’s built atop mass surveillance.
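To make the contrast concrete, here is a minimal, hypothetical sketch of what “local, on-device targeting” can mean in practice: the ad catalog ships to the device and matching happens entirely locally against coarse signals (language, broad region), so no browsing data is sent to an exchange. None of this is Brave’s actual code; the data model and function names are invented for illustration.

```python
# Hypothetical sketch of on-device ad matching: the full catalog lives
# on the device, and selection uses only coarse, non-identifying signals.
from dataclasses import dataclass


@dataclass
class Ad:
    ad_id: str
    languages: set  # languages the creative is available in
    regions: set    # broad-brush regions it may be shown in


def pick_ads(catalog, user_language, user_region):
    """Match ads locally; nothing about the user leaves the device."""
    return [ad for ad in catalog
            if user_language in ad.languages and user_region in ad.regions]


catalog = [
    Ad("a1", {"en"}, {"EU", "US"}),
    Ad("a2", {"de"}, {"EU"}),
]
print([ad.ad_id for ad in pick_ads(catalog, "en", "EU")])  # ['a1']
```

The key design point is that the server only ever distributes the catalog; it never learns which ads matched, which is the inverse of the cloud-based exchange model the article describes.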

Technologists are often proud of their engineering ingenuity. But if all goes to plan, they’ll have lots more opportunities to crow about what they’ve built in future — because they won’t be too embarrassed to talk about it.

Social – TechCrunch

Facebook gives US lawmakers the names of 52 firms it gave deep data access to

In a major Friday night data dump, Facebook handed Congress a ~750-page document with responses to the 2,000 or so questions it received from US lawmakers sitting on two committees in the Senate and House back in April.

The document (which condensed into a tellingly apt essence — “people data… Facebook information” — above, when we ran it through Word It Out‘s word cloud tool) would probably come in handy if you needed to put a small child to sleep, given Facebook repeats itself a distressing number of times.

TextMechanic‘s tool spotted 3,434 lines of duplicate text in its answers — including Facebook’s current favorite line to throw at politicians, where it boldly states: “Facebook is generally not opposed to regulation but wants to ensure it is the right regulation”, followed by the company offering to work with regulators like Congress “to craft the right regulations”. Riiiiight.
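For the curious, a duplicate-line count like this is trivial to reproduce. The sketch below (with an invented sample string, not Facebook’s actual document) counts how many lines are repeats of an earlier line:

```python
# Rough sketch of the kind of duplicate-line check a tool like
# TextMechanic performs: count lines that are repeats of an earlier line.
from collections import Counter


def count_duplicate_lines(text):
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    counts = Counter(lines)
    # every repeat beyond a line's first occurrence counts as a duplicate
    return sum(n - 1 for n in counts.values() if n > 1)


sample = "a\nb\na\nc\nb\na\n"
print(count_duplicate_lines(sample))  # 3
```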

While much of what Facebook’s policy staffers have inked here is an intentional nightcap made of misdirection and equivocation (with lashings of snoozy repetition), one nugget of new intel that jumps out is a long list of partners Facebook gave special data access to — via API agreements it calls “integration partnerships”.

Some names on the list have previously been reported by the New York Times. And as the newspaper pointed out last month, the problem for scandal-hit Facebook is these data-sharing arrangements appear to undermine some of its claims about how it respects privacy because users were not explicitly involved in consenting to the data sharing.

Below is the full list of 52 companies Facebook has now provided to US lawmakers — though it admits the list might not actually be comprehensive, writing: “It is possible we have not been able to identify some integrations, particularly those made during the early days of our company when our records were not centralized. It is also possible that early records may have been deleted from our system”. 

The listed companies are also by no means just device makers — including also the likes of mobile carriers, software makers, security firms, even the chip designer Qualcomm. So it’s an illustrative glimpse of quite how much work Facebook did to embed into services across the mobile web — predicated upon being able to provide so many third party businesses with user data.

Company names below that are appended with * denote partnerships that Facebook says it is “still in the process of ending” (it notes three exceptions: Tobii, Apple and Amazon, which it says will continue beyond October 2018), while ** denotes data partnerships that will continue but without access to friends’ data.

1. Accedo
2. Acer
3. Airtel
4. Alcatel/TCL
5. Alibaba**
6. Amazon*
7. Apple*
8. AT&T
9. Blackberry
10. Dell
11. DNP
12. Docomo
13. Garmin
14. Gemalto*
15. HP/Palm
16. HTC
17. Huawei
18. INQ
19. Kodak
20. LG
21. MediaTek/ Mstar
22. Microsoft
23. Miyowa /Hape Esia
24. Motorola/Lenovo
25. Mozilla**
26. Myriad*
27. Nexian
28. Nokia*
29. Nuance
30. O2
31. Opentech ENG
32. Opera Software**
33. OPPO
34. Orange
35. Pantech
36. PocketNet
37. Qualcomm
38. Samsung*
39. Sony
40. Sprint
41. T-Mobile
42. TIM
43. Tobii*
44. U2topia*
45. Verisign
46. Verizon
47. Virgin Mobile
48. Vodafone*
49. Warner Bros
50. Western Digital
51. Yahoo*
52. Zing Mobile*

NB: Number 46 on the list — Verizon — is the parent company of TechCrunch’s parent, Oath. 

Last month the New York Times revealed that Facebook had given device makers deep access to data on Facebook users and their friends, via device-integrated APIs.

The number and scope of the partnerships raised fresh privacy concerns about how Facebook (man)handles user data, casting doubt on its repeat claims to have “locked down the platform” in 2014/15, when it changed some of its APIs to prevent other developers doing a ‘Kogan‘ and sucking out masses of data via its Friends API.

After the Cambridge Analytica story (re)surfaced in March Facebook’s crisis PR response to the snowballing privacy scandal was to claim it had battened down access to user data back in 2015, when it shuttered the friends’ data API.

But the scope of its own data sharing arrangements with other companies show it was in fact continuing to quietly pass over people’s data (including friend data) to a large number of partners of its choosing — without obtaining users’ express consent.

This is especially pertinent because of a 2011 consent decree that Facebook signed with the Federal Trade Commission — agreeing it would avoid misrepresenting the privacy or security of user data — to settle charges that it had deceived its customers by “telling them they could keep their information on Facebook private, and then repeatedly allowing it to be shared and made public”.

Yet, multiple years later, Facebook had inked data-sharing API integrations with ~50 companies that afforded ongoing access to Facebook users’ data — and apparently only started to wind down some of these partnerships this April, right after Cambridge Analytica blew up into a major global scandal.

Facebook says in the document that 38 of the 52 have now been discontinued — though it does not specify exactly when they were ended — adding that an additional seven will be shut down by the end of July, and another one will be closed by the end of October.

“Three partnerships will continue: (1) Tobii, an accessibility app that enables people with ALS to access Facebook; (2) Amazon; and (3) Apple, with whom we have agreements that extend beyond October 2018,” it adds, omitting to say what exactly Amazon does with Facebook data. (Perhaps an integration with its Fire line of mobile devices.)

“We also will continue partnerships with Mozilla, Alibaba and Opera — which enable people to receive notifications about Facebook in their web browsers — but their integrations will not have access to friends’ data,” it adds, so presumably the three companies were previously getting access to friend data.

Facebook claims its integration partnerships “differed significantly” from third-party app developers’ use of its published APIs to build apps for consumers on its developer platform — because its staff were approving the applications its partners could build. 

It further says partners “were not permitted to use data received through Facebook APIs for independent purposes unrelated to the approved integration without user consent” — specifying that staff in its partnerships and engineering teams managed the arrangements, including by reviewing and approving how licensed APIs were integrated into the partner’s products.

“By contrast, our Developer Operations (“Dev Ops”) team oversees third-party developers, which determine for themselves how they will build their apps — subject to Facebook’s general Platform Policies and Dev Ops approval for apps seeking permission to use most published APIs,” it writes, essentially admitting it was running a two-tier system related to user data access, with third party developers on its platform not being subject to the same kind of in-house management and reviews as its chosen integration partners. 

Aleksandr Kogan, the Cambridge University academic who made the quiz app which harvested Facebook users’ data in 2014 so that he could sell the information to Cambridge Analytica, has argued Facebook did not have a valid developer policy because it was not actively enforcing its T&Cs.

And certainly the company is admitting it made fewer checks on what developers were doing with user data vs companies it selectively gave access to.

In further responses to US lawmakers — who had asked Facebook to explain what “integrated with” means, vis-a-vis its 2016 data policy, where it stated: “When you use third-party apps, websites or other services that use, or are integrated with, our Services, they may receive information about what you post or share” — Facebook also makes a point of writing that integration partnerships were “typically… defined by specially-negotiated agreements that provided limited rights to use APIs to create specific integrations approved by Facebook, not independent purposes determined by the partner”.

The word “typically” is a notable choice there — suggesting some of these partnerships were rather more bounded than others. Though Facebook does not go into further detail.

We asked the company for more information — such as whether it will be listing the purposes for each of these integration partnerships, including the types of user and friends data each partner received, and the dates/durations for each arrangement — but a spokesman said it has nothing more to add at the moment.

In the document, Facebook lists four uses for people’s information as being among the purposes its integration partners had for the data:

1. Some partners built versions of its app for their device, OS or product that “replicated essential Facebook features that we built directly on the Facebook website and in our mobile apps”;
2. some built social networking ‘hubs’ that aggregated messages from multiple social services;
3. some built syncing integrations so people could sync their Facebook data with their device — such as uploading pictures to Facebook, downloading their Facebook pictures to their phones, or integrating their Facebook contacts into their address book;
4. and some developed USSD services to provide Facebook notifications and content via text message, such as for feature phone users without mobile Internet access.

So we can but speculate what other Facebook-approved integrations were built as a result of the partnerships.

Also notably, Facebook does not specify exactly when the integration partnerships began — writing instead that they:

“[B]egan before iOS and Android had become the predominant ways people around the world accessed the internet on their mobile phones. People went online using a wide variety of text-only phones, feature phones, and early smartphones with varying capabilities. In that environment, the demand for internet services like Facebook, Twitter, and YouTube outpaced our industry’s ability to build versions of our services that worked on every phone and operating system. As a solution, internet companies often engaged device manufacturers and other partners to build ways for people to access their experiences on a range of devices and products.”

Which sounds like a fairly plausible explanation for why some of the data-sharing arrangements began. What’s less clear is why many were apparently continuing until just a few weeks ago. 

Facebook faces another regulatory risk related to its user data-sharing arrangements because it’s a signatory of the EU-US Privacy Shield, using the data transfer mechanism to authorize exporting hundreds of millions of EU users’ information to the US for processing.

However legal pressure has been mounting on this mechanism for some time. And just last month an EU parliament committee called for it to be suspended — voicing specific concerns about the Facebook Cambridge Analytica scandal, and saying companies that fail to safeguard EU citizens’ data should be removed from Privacy Shield.

Facebook remains a signatory of Privacy Shield for now but the company can be removed by US oversight bodies if it is deemed not to have fulfilled its obligations to safeguard EU users’ data.

And in March the FTC confirmed it had opened a fresh investigation into its privacy practices following revelations that data on tens of millions of Facebook users had been passed to third parties without most people’s knowledge or consent.

If the FTC finds Facebook violated the consent decree because it mishandled people’s data, there would be huge pressure for Facebook to be removed from Privacy Shield — which would mean the company has to scramble to put in place alternative legal mechanisms to transfer EU users’ data. Or potentially risk major fines, given the EU’s new GDPR data protection regime.

Facebook’s current use of one alternative data transfer method — called Standard Contractual Clauses — is also already under separate legal challenge.

Extra data-sucking time for all sorts of apps

In the document, Facebook also lists 61 developers (below) that it granted a data-access extension after ending the friends data API in May 2015 — saying they were given a “one-time extension of less than six months beyond May 2015 to come into compliance” — with one exception: Serotek, an accessibility app, which was given an eight-month extension, to January 2016.

Among the developers getting extra time to suck on Facebook friend data were dating apps, chat apps, games, music streaming apps, data analytics apps, news aggregator apps to name a few…

1. ABCSocial, ABC Television Network
2. Actiance
3. Adium
4. Anschutz Entertainment Group
5. AOL
6. Arktan / Janrain
7. Audi
8. biNu
9. Cerulean Studios
10. Coffee Meets Bagel
11. DataSift
12. Dingtone
13. Double Down Interactive
14. Endomondo
15. Flowics, Zauber Labs
16. Garena
17. Global Relay Communications
18. Hearsay Systems
19. Hinge
20. HiQ International AB
21. Hootsuite
22. Krush Technologies
23. LiveFyre / Adobe Systems
25. MiggoChat
26. Monterosa Productions Limited
27. AS
28. NIKE
29. Nimbuzz
30. NISSAN MOTOR CO / Airbiquity Inc.
31. Oracle
32. Panasonic
33. Playtika
34. Postano, TigerLogic Corporation
35. Raidcall
36. RealNetworks, Inc.
37. RegED / Stoneriver RegED
38. Reliance/Saavn
39. Rovi
40. Salesforce/Radian6
41. SeaChange International
42. Serotek Corp.
43. Shape Services
44. Smarsh
45. Snap
46. Social SafeGuard
47. Socialeyes LLC
48. SocialNewsdesk
49. Socialware / Proofpoint
50. SoundayMusic
51. Spotify
52. Spredfast
53. Sprinklr / Sprinklr Japan
54. Storyful Limited / News Corp
55. Tagboard
56. Telescope
57. Tradable Bits, TradableBits Media Inc.
58. UPS
59. Vidpresso
60. Vizrt Group AS
61. Wayin

NB: Number 5 on the list — AOL — is a former brand of TechCrunch’s parent company, Oath. 

Facebook also reveals that as part of its ongoing app audit, announced in the wake of the Cambridge Analytica scandal, it has found a “very small” number of companies “that theoretically could have accessed limited friends’ data as a result of API access that they received in the context of a beta test”.

It names these as:

1. Activision / Bizarre Creations
2. Fun2Shoot
3. Golden Union Co.
4. IQ Zone / PicDial
5. PeekSocial

“We are not aware that any of this handful of companies used this access, and we have now revoked any technical capability they may have had to access any friends’ data,” it adds.

Update: Facebook has just announced some additional API restrictions which it says it’s putting in place “to better protect people’s information”.  It’s detailed the changes here.

It says it will work with developers as it deprecates or changes APIs.

Study calls out ‘dark patterns’ in Facebook and Google that push users toward less privacy

More scrutiny than ever is being applied to the tech industry, and while high-profile cases like Mark Zuckerberg’s appearance in front of lawmakers garner headlines, there are subtler forces at work. This study from a Norwegian watchdog group eloquently and painstakingly describes the ways companies like Facebook and Google push their users towards making choices that negatively affect their own privacy.

It was spurred, like many other new inquiries, by Europe’s GDPR, which has caused no small amount of consternation among companies for whom collecting and leveraging user data is their main source of income.

The report (PDF) goes into detail on exactly how these companies create an illusion of control over your data while simultaneously nudging you towards making choices that limit that control.

Although the companies and their products will be quick to point out that they are in compliance with the requirements of the GDPR, there are still plenty of ways in which they can be consumer-unfriendly.

In going through a set of privacy popups put out in May by Facebook, Google and Microsoft, the researchers found that the first two especially feature “dark patterns, techniques and features of interface design meant to manipulate users… used to nudge users towards privacy intrusive options.”

Flowchart illustrating the Facebook privacy options process – the green boxes are the “easy” route.

It’s not big obvious things — in fact, that’s the point of these “dark patterns”: that they are small and subtle yet effective ways of guiding people towards the outcome preferred by the designers.

For instance, in Facebook and Google’s privacy settings process, the more private options are simply disabled by default, and users not paying close attention will not know that there was a choice to begin with. You’re always opting out of things, not in. To enable these options is also a considerably longer process: 13 clicks or taps versus 4 in Facebook’s case.

That’s especially troubling when the companies are also forcing this action to take place at a time of their choosing, not yours. And Facebook added a cherry on top, almost literally, with the fake red dots that appeared behind the privacy popup, suggesting users had messages and notifications waiting for them even if that wasn’t the case.

When choosing the privacy-enhancing option, such as disabling face recognition, users are presented with a tailored set of consequences: “we won’t be able to use this technology if a stranger uses your photo to impersonate you,” for instance, to scare the user into enabling it. But nothing is said about what you will be opting into, such as how your likeness could be used in ad targeting or automatically matched to photos taken by others.

Disabling ad targeting on Google, meanwhile, warns you that you will not be able to mute some ads going forward. People who don’t understand the mechanism of muting being referred to here will be scared of the possibility — what if an ad pops up at work or during a show and I can’t mute it? So they agree to share their data.

Before you make a choice, you have to hear Facebook’s case.

In this way users are punished for choosing privacy over sharing, and are always presented only with a carefully curated set of pros and cons intended to cue the user to decide in favor of sharing. “You’re in control,” the user is constantly told, though those controls are deliberately designed to undermine what control you do have and exert.

Microsoft, while guilty of some of the same biased phrasing, received much better marks in the report. Its privacy setup process put the less and more private options right next to each other, presenting them as equally valid choices rather than some tedious configuration tool that might break something if you’re not careful. Subtle cues do push users towards sharing more data or enabling voice recognition, but users aren’t punished or deceived the way they are elsewhere.

You may already have been aware of some of these tactics, as I was, but it makes for interesting reading nevertheless. We tend to discount these things when it’s just one screen here or there, but seeing them all together along with a calm explanation of why they are the way they are makes it rather obvious that there’s something insidious at play here.

Tinder bolsters its security to ward off hacks and blackmail

This week, Tinder responded to a letter from Oregon Senator Ron Wyden calling for the company to seal up security loopholes in its app that could lead to blackmail and other privacy incursions.

In a letter to Sen. Wyden, Match Group General Counsel Jared Sine describes recent changes to the app, noting that as of June 19, “swipe data has been padded such that all actions are now the same size.” Sine added that images on the mobile app are fully encrypted as of February 6, while images on the web version of Tinder were already encrypted.

The Tinder issues were first called out in a report by a research team at Checkmarx describing the app’s “disturbing vulnerabilities” and their propensity for blackmail:

The vulnerabilities, found in both the app’s Android and iOS versions, allow an attacker using the same network as the user to monitor the user’s every move on the app. It is also possible for an attacker to take control over the profile pictures the user sees, swapping them for inappropriate content, rogue advertising or other type of malicious content (as demonstrated in the research).

While no credential theft and no immediate financial impact are involved in this process, an attacker targeting a vulnerable user can blackmail the victim, threatening to expose highly private information from the user’s Tinder profile and actions in the app.

In February, Wyden called for Tinder to address the vulnerability by encrypting all data that moves between its servers and the app and by padding data to obscure it from hackers. In a statement to TechCrunch at the time, Tinder indicated that it had heard Sen. Wyden’s concerns and had recently implemented encryption for profile photos as part of a broader effort to deepen its privacy practices.
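The padding fix is conceptually simple: if every action sent over the network occupies the same number of bytes, packet sizes stop leaking which button was pressed. The sketch below is a generic illustration of that idea (the block size, framing and payloads are invented, not Tinder’s actual protocol):

```python
# Illustrative padding scheme: every message is framed with a 4-byte
# length prefix and zero-padded to a fixed block size, so an eavesdropper
# can't distinguish actions by packet length alone.
BLOCK = 2048  # bytes; invented constant for illustration


def pad(payload: bytes, block: int = BLOCK) -> bytes:
    """Frame a payload with a length prefix and zero-pad to a fixed size."""
    if len(payload) > block - 4:
        raise ValueError("payload too large for padded block")
    return len(payload).to_bytes(4, "big") + payload + b"\x00" * (block - 4 - len(payload))


def unpad(blob: bytes) -> bytes:
    """Recover the original payload from a padded block."""
    n = int.from_bytes(blob[:4], "big")
    return blob[4:4 + n]


# A 'pass' and a 'like' become indistinguishable by length on the wire:
left = pad(b'{"action":"pass"}')
right = pad(b'{"action":"like"}')
assert len(left) == len(right) == BLOCK
assert unpad(right) == b'{"action":"like"}'
```

In a real deployment the padded frame would of course also be encrypted; padding without encryption hides lengths but not content.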

“Like every technology company, we are constantly working to improve our defenses in the battle against malicious hackers and cyber criminals,” Sine said in the letter. “… Our goal is to have protocols and systems that not only meet, but exceed industry best practices.”

Benchmark’s Mitch Lasky will reportedly step down from Snap’s board of directors

Benchmark partner Mitch Lasky, who has served on Snap’s board of directors since December 2012, is not expected to stand for re-election to Snap’s board of directors and will thus be stepping down, according to a report by The Information.

Early investors stepping down from the board of directors — or at least not seeking re-election — isn’t that uncommon as once-private companies grow into larger public ones. Benchmark partner Peter Fenton did not seek re-election to Twitter’s board of directors in April last year. And as Snap continues to navigate its future (its valuation has declined precipitously since going public, to around $16.5 billion), partners with expertise in the early-stage and later-stage startup life cycle may find themselves more useful taking a back seat and focusing on other investments. The voting process for board member re-election happens during the company’s annual meeting, so we’ll get more information when an additional proxy filing comes out ahead of the meeting later this year.

Benchmark is, or at least was at the time of going public last year, one of Snap’s biggest shareholders. According to the company’s 424B filing prior to going public in March last year, Benchmark held ownership of 23.1% of Snap’s Class B common stock and 8.2% of Snap’s Class A common stock. Lasky has been with Benchmark since April 2007, and also serves on the boards of a number of gaming companies like Riot Games and thatgamecompany, the creators of PlayStation titles flower and Journey. At the time, Snap said in its filing that Lasky was “qualified to serve as a member of our board of directors due to his extensive experience with social media and technology companies, as well as his experience as a venture capitalist investing in technology companies.”

The timing could be totally coincidental, but an earlier Recode report suggested Lasky had been talking about stepping back from future Benchmark funds. The firm only recently wrapped up a very public battle with Uber, which ended with Benchmark selling a significant stake in the company and a new CEO coming in to replace co-founder Travis Kalanick. Benchmark hired its first female general partner, Sarah Tavel, earlier this year.

We’ve reached out to both Snap and a representative from Benchmark for comment and will update the story when we hear back.

Instagram tests questions in Stories

Instagram has been incredibly busy of late, announcing IGTV, Instagram Lite and a slate of features including Stories Soundtracks. But the Facebook-owned photo-sharing service doesn’t show any signs of letting up.

Android Police today noted that Instagram is testing a feature that would allow users to post questions to their followers and receive answers.

Instagram already offers the ability to publish polls to followers with multiple-choice options for answering. But this test seems to point toward the option to offer lengthier responses to users’ questions.

One user in Indonesia sent Android Police a screencap of the feature (pictured above), and a user in Spain also spotted it. That said, we still have very little information on just how this might work.

Right now, when a user posts to their Story, their followers can respond via DM. With more open-ended questions and responses, it’s unclear if responses will still come in via DM or be bundled together as part of the story.

The latter seems more in keeping with Instagram’s push to make Stories as interactive as possible. The open-ended question could serve as a jumping-off point for a collaborative story made up of everyone’s responses.

That said, this feature hasn’t been confirmed by Instagram, though we’ve reached out and will update the post when we learn more.

Instagram’s Do Not Disturb and ‘Caught Up’ deter overgramming

Instagram is turning the Time Well Spent philosophy into features to help users avoid endless scrolling and distraction by notifications. Today, Instagram is rolling out its “You’re All Caught Up – You’ve seen all new posts from the past 2 days” warning in the feed, which TechCrunch broke the news about in May. Past that notice will only be posts that iOS and Android users have already seen or that were posted more than 48 hours ago. This will help Instagram’s 1 billion monthly users stop fiendishly scrolling in search of new posts scattered by the algorithm. While sorting the feed has made it much better at displaying the most interesting posts, it also can make people worry they’ve missed something. This warning should give them peace of mind.

Meanwhile, TechCrunch has learned that both Facebook and Instagram are prototyping Do Not Disturb features that let users shut off notifications from the apps for 30 minutes, one hour, two hours, eight hours, one day or until they’re turned back on manually. WhatsApp Beta and Matt Navarra spotted the Instagram and Facebook Do Not Disturb features. Facebook is also considering allowing users to turn off sound or vibration on its notifications. Both apps have these Do Not Disturb features buried in their code and may have begun testing them.

Both Facebook and Instagram declined to comment on building new Do Not Disturb features. “You’re All Caught Up” could cut out the extra scrolling that doesn’t provide much value but makes Instagram show up atop your list of biggest time sinks. And an in-app Do Not Disturb mode with multiple temporary options could keep you from permanently disabling Instagram or Facebook notifications.


We referenced Instagram Do Not Disturb in our scoop about Instagram building a Usage Insights dashboard detailing how much time you spend on the app. Both Facebook and Instagram are preparing these screens, which show you how much time you’ve spent on their apps per day and on average over the past week, and which let you set a daily limit after which you’ll get a notification reminding you to look up from your screen.

When we first reported on Usage Insights, Instagram CEO Kevin Systrom tweeted a link to the article, confirming that Instagram was getting behind the Time Well Spent movement. “It’s true . . . We’re building tools that will help the IG community know more about the time they spend on Instagram – any time should be positive and intentional . . . Understanding how time online impacts people is important, and it’s the responsibility of all companies to be honest about this. We want to be part of the solution. I take that responsibility seriously.”

Now we’re seeing this perspective manifest itself in Instagram’s product. Instagram’s interest conveniently comes just as Apple and Google are releasing screen time and digital well-being tools as part of the next versions of their mobile operating systems. These will show you which apps you’re spending the most time in, and set limits on their use. By self-policing now, Instagram and Facebook could avoid being outed by iOS and Android as the enemies of your attention.

In other recent Instagram news:

Social – TechCrunch

Facebook is shutting down Hello, Moves and the anonymous teen app tbh due to ‘low usage’

Facebook, the world’s largest social network with 2.2 billion users, is all about capitalizing on scale, and so today it announced that it would be sunsetting three apps in its stable that simply weren’t keeping up. After failing to gain traction, Hello, Moves and tbh will all be deprecated in the coming weeks. The three apps are being shut down at varying times, which we note below. Facebook says that all user data from the three apps will be deleted within 90 days.

“We regularly review our apps to assess which ones people value most. Sometimes this means closing an app and its accompanying APIs,” said Facebook. “We know some people are still using these apps and will be disappointed — and we’d like to take this opportunity to thank them for their support. But we need to prioritize our work so we don’t spread ourselves too thin. And it’s only by trial and error that we’ll create great social experiences for people.”

But “low usage” is a pretty wide range, it turns out. Sensor Tower notes that Hello had only 570,000 installs — that is, total downloads — but tbh had 6.4 million and Moves 13 million. Still, these numbers are all just blips in comparison to billions of downloads and users of Facebook and the other popular apps that it owns: Instagram, WhatsApp and Messenger.

The three apps being sunset are all examples of the different angles Facebook has explored over the years to evolve its business into newer areas — not all of which have panned out.

Moves came to Facebook by way of its acquisition of the fitness-tracking app four years ago. At the time, Facebook appeared to be interested in exploring how people might use their Facebook social graphs to share more data about their own fitness regimes, and to possibly use Facebook not just as a place to share but to track progress. With its acquisition of Moves, Facebook may have believed it could take a more direct role in that process.

Early on, there was promise: Moves had already amassed four million downloads before the acquisition. However, things simply did not continue to bulk up much after that point, either because Facebook saw that there wasn’t a large enough critical mass of people interested in making fitness social, or because its own spin on how to do that wasn’t where the market was headed. (You could argue that there has always been a huge social element in exercise — gyms and exercise classes being two obvious examples — but these are more about people in physical spaces doing things together.)

In the end, Moves the app hasn’t been updated in more than a year, and it languishes at around No. 616 in the fitness category today. It will be shut down in the coming weeks, Facebook said.

Hello, launched in 2015, was part of Facebook’s wider strategy to build more communications services to bridge the gap with users, specifically targeting those in emerging markets.

In the case of Hello, the app was Android-only and worked in the U.S., Nigeria and Brazil. It was a bit like TrueCaller: people could link their Facebook accounts to a dialer, which would then show the Facebook identity of a caller so you could decide whether or not to take the call.

As with Moves, Hello came at a time when many thought Facebook had big plans for communications, with rumors abounding of Facebook phones and of Facebook taking on carriers with its own voice services. Hello, however, never expanded — neither in geography nor in features — and so now we say goodbye. The Hello app and its API are both being deprecated on July 31. The app was actually removed from the Android store on June 26, when it had a ranking of 509.

Lastly, tbh is the youngest of the apps getting the chop, in more ways than one. The “anonymous compliment” app was made specifically for teens, a relatively new category for Facebook, and was acquired by the social network only in October 2017. Indeed, tbh was young and hardly ubiquitous when Facebook snapped it up, and although the company seemed interested in letting it run its course, to be honest, it’s no surprise to see it go.

Facebook is not giving a date for its shutdown: the app is still live at the moment. App Annie, however, notes that it currently ranks No. 205 in social networking in the U.S.

Facebook is no stranger to spring cleaning and clearing out unpopular apps, as well as a wide swathe of other services such as APIs that are no longer core to what it’s working on. Other dead app efforts have included M, the personal assistant app; its Snapchat clone Lifestage; and its Groups app. And just today, it issued a notice of several APIs that would be shut down to better rein in how its user data is tapped by third parties.