Social Anxiety

The last 12 months or so have been interesting for the biggest social platforms, with Facebook, Twitter and Snapchat all facing a variety of challenges and controversies relating to everything from Russia to mental health, from advertising to fake news. Tyrone Stewart reports.

The big social media platforms will all be hoping for a far more positive year in 2018, following what can only be described as a nightmare within the world of internet socialising. Russia had Facebook and Twitter on the ropes, and even managed to drag Google into the mix, by taking advantage of the platforms to influence both the US presidential election and the Brexit vote in the UK. Meanwhile, Snapchat failed to really get off the ground financially following its initial public offering (IPO) and saw its stock slump.

From Russia with bots
The biggest controversy of the year within the social spectrum – and one that continues to rage on – is the way in which Russia managed to use Facebook, Twitter and Google to influence both the US presidential election and the UK’s Brexit referendum, despite all three companies insisting that their platforms had not been exploited.

The three tech giants quickly changed their tunes, however, when Facebook revealed in September 2017 that it had identified 470 phony accounts and pages that had run political ads between June 2015 and May 2017. These ads were found to have reached approximately 126m Americans, despite Facebook initially stating they had only reached around 10m. Interestingly, none of the ads directly referenced the presidential election, instead addressing divisive topics such as race, LGBTQ issues, immigration and gun rights.

The discovery of the ads prompted Google and Twitter to launch investigations into their own platforms. These revealed that Facebook wasn’t alone in being used by Russia – well, mainly Russia’s propaganda machine, the Internet Research Agency, anyway – in attempting to sway voters on major political decisions in other countries.

These findings led to the three tech companies being forced to answer questions from US lawmakers in congressional hearings. The trio were asked why it took them so long to discover that Russia had been using their platforms to influence the presidential election. The three admitted their mistakes, and promised they were doing everything in their power to fix them.

The hearings also saw the House Intelligence Committee release a whole load of Facebook and Instagram ads and Twitter handles linked to Russia-registered accounts. These accounts were found to be impersonating news organisations, political parties, and groups focused on social and political issues. Meanwhile, the ads and posts from these accounts targeted anybody and everybody – including those on the far left and far right, Christians, Muslims, the LGBTQ community, Black Lives Matter activists, gun owners, people with differing views on immigration, and more.

“Russia exploited real vulnerabilities that exist across online platforms, and we must identify, expose and defend ourselves against similar covert influence operations in the future,” said Adam Schiff, ranking member of the House Intel Committee, during the hearing. “The companies here today must play a central role as we seek to better protect legitimate political expression, while preventing cyberspace from being misused by our adversaries.”

The internet giants have since had similar run-ins with the UK government over the revelation that Russia also exploited their platforms in order to influence the Brexit vote – although Twitter has claimed that only 1 per cent of the fake accounts surrounding the EU referendum originated in Russia.

In the midst of all this disapproval from the US and UK governments, both Twitter and Facebook have made changes to the way political ads are managed, and have promised to be more transparent.

Among the changes made by Twitter was the introduction of a transparency centre where everyone is able to see who is advertising on the microblogging platform. Here, Twitter users can see all the ads currently running on the platform, how long each of the ads has been running, any creative associated with the ads and which ads are targeted at a specific kind of user. In addition, it enables users to report inappropriate ads and give negative feedback.

To take it a step further, Twitter now requires all political ads to be identified as such and has also put stricter rules in place for who can serve these ads. These political ads have their own special section within the transparency centre.

Facebook followed suit with the announcement that it would introduce a ‘View Ads’ button on pages this year, ahead of the US mid-term elections in November. This button will show the active ads each page has running on Facebook, Instagram and Messenger.

The social network has also, like Twitter, put more controls in place for who can run political ads – now requiring advertisers to go through a verification process.

Face off
Russia hasn’t been the only problem facing Mark Zuckerberg’s social media platform in the past year. Facebook has also been heavily criticised for its failures in stopping the spread of fake news – which does have some overlap with the Russian fake accounts and bots – as well as its failures in protecting its users from abuse and hate speech.

The issues surrounding Russia, fake news and abuse led to Zuckerberg setting himself the ‘personal challenge’ of addressing these failings this year and fixing the monster he has created.

The Facebook founder has a tough task ahead of him, though he and his platform have already started addressing the rise of fake news with the overhaul of the social network’s core feature: the news feed.

Under the changes, Facebook has begun prioritising content from family members and friends over that of brands and publishers. On top of that, the internet behemoth has been asking users to let it know which news outlets they deem to be ‘trustworthy’, and has decided to push more local stories to users.

This update, as you might expect, hasn’t gone down particularly well with publishers, with Rupert Murdoch among those speaking out against the changes. And even those outside of the publishing industry have questioned whether the changes will actually be successful in eradicating fake news and misinformation.

“Facebook came up against unprecedented challenges in 2017 with the rise of fake news and Russian ads,” says Theo Watt, senior copywriter at Social Chain. “Despite this, Facebook’s user growth continues to dwarf Twitter and Snapchat, and Mark Zuckerberg will be expecting similar results when crunching Q4 2017’s numbers. You have to ask yourself whether anyone, with the exception of Google, can hurt Facebook in 2018?

“The platform’s latest move to relegate publishers in the news feed should come as no surprise – the news industry needs Facebook more than Facebook needs it, especially when regulating this space is nigh on impossible. Instead, the Silicon Valley behemoth will be putting its efforts into creating a long-form video platform (Watch) to rival YouTube, Netflix and Amazon Prime, along with several AI home devices to strengthen its value to brands in the brave new world of voice search.”

Elsewhere, Facebook has in the past year put more moderators in place to review content, as it tries to tackle its other problems, such as abuse, hate speech, cyberbullying, revenge porn and extremism. These issues have also seen the social network turn to artificial intelligence (AI) to detect potentially unsavoury content before users need to report it.

Examples of the use of AI by Facebook include using the technology to detect when copies of revenge porn have been shared across its platforms, and to detect individuals who are showing patterns of suicidal thoughts.

Despite Facebook’s efforts to rid its platforms of cyberbullying and help those facing mental health issues, respective reports from the Royal Society for Public Health and anti-bullying charity Ditch the Label found that Facebook’s image-sharing platform, Instagram, is both the worst social platform for mental health and the one with the highest incidence of cyberbullying among young people. So, its supposed best efforts don’t seem to be good enough.

Beginning to snap
Before jumping back to Twitter – which, outside of Russia’s use of its platform, hasn’t had the worst year ever – it’s only right to take a look back at Snapchat’s last 12 or so months, which have arguably been the worst of the lot. No Russian involvement here, more a case of Facebook and Instagram copying its features.

The first few months of 2017 went pretty well for Snapchat. But ever since March, when it went public, more or less everything has been on a downward trajectory.

Despite seeing its daily active user numbers soar, the company has struggled to convert these numbers into revenue, running up losses in the hundreds of millions of dollars in each quarter since its IPO, while seeing its stock price tumble.

In a bid to address these financial woes, Snap has begun rolling out a major overhaul of its app. This change sees a ‘Friends’ page appear to the left of the app’s camera, showing friends’ stories and Bitmojis as well as chats with them. Meanwhile, to the right, there is a redesigned ‘Discover’ section, with stories from publishers, creators and the Snap community.

Snap hopes this redesign will enable it to retain its current users and bring in new ones, while also helping to differentiate it from Instagram, making its platform more appealing to advertisers.

“Snapchat’s announcement at the end of last year that it has redesigned the app for greater personalisation and relevance was definitely a step in the right direction, if it wants to size up to Instagram,” says Yuval Ben-Itzhak, CEO at Socialbakers. “In order to keep growing its user base, Snapchat needs to make sure it maintains the quality and relevance of its content. When the quality drops, users take their attention elsewhere, and Snapchat has that challenge right now.

“This redesign, if it delivers on its promise to serve quality content in a targeted way, could be just what Snapchat needs to increase eyeball time on the app. It could also be a great opportunity for marketers to reach and engage their audiences with well-targeted promoted content.”

Adding to Snapchat’s woes has been the performance of many of its major features, as leaked data obtained by The Daily Beast recently revealed. The data from April to September 2017 showed that users were as much as 64 per cent more likely to send a Snap directly to a friend than to post to stories, while only 20 per cent of users visited the Discover section daily. Furthermore, the data revealed that just 11 per cent of users accessed the Snap Map feature – which was heavily criticised upon launch earlier in 2017 – each day.

The tough period for Snap was rounded off with the company making a number of layoffs from its content, engineering and partnership divisions in January.

“If you talk to anyone in the social media sphere, they’ll either tell you that Snapchat is dead or that Generation Z is its last great hope,” says Social Chain’s Watt. “And while the latter may be true in some instances, it’s still but a speck on the landscape compared to Instagram. Curiosity around Snapchat’s new redesign may drive a few extra users in 2018, but early reviews will worry CEO Evan Spiegel and investors.”

Twit for tat
Twitter’s problems probably haven’t been quite as bad as the other major social platforms but, along with the Russia issues, it has still had plenty of problems with hate speech and abuse.

Over the course of 2017, the microblogging platform introduced tools and made several product changes in an attempt to limit the abuse and hate speech that have long been rife on the platform. These updates included new safety settings, updated policies and the use of more technology to detect potentially inappropriate tweets.

“Twitter has been through the mill recently when it comes to changes to its executive team, hate speech on the platform and the influence Russian bots on the platform may have had on the US election,” says Socialbakers’ Ben-Itzhak. “While it has taken some necessary steps to toughen up on hate speech and cyberbullying, it is going to be crucial for Twitter to maintain its authenticity as the platform where everyone has a voice, while also making sure that it is not used as a platform for spreading hatred.”

On a more positive note for Twitter, 2017 saw the introduction of the increased 280-character limit, with the added room making the platform even more appealing to marketers.

“Twitter is definitely on the right path to regaining commitment from marketers,” says Ben-Itzhak. “But it needs to continue proving it is still a worthwhile investment by innovating and adding new features such as the interest-based notifications, new topic modules in the Explore tab and the recent 280-character tweet trial.

“But why stop at 280? 560 characters would add even more context to Twitter’s algorithms to better understand the audience, improve targeting quality and help marketers personalise their message. Changes like this will ultimately reassure marketers that the platform can appeal to their audiences and offer ROI.”

A social future
Looking ahead to the rest of the year, we can expect Instagram to become the social location of choice for brands, especially with the introduction of post scheduling, according to Ben-Itzhak.

At Facebook, brands and publishers will have to get smarter with the content they are posting, and increase their social ad spend, if they want to guarantee reaching the platform’s audience.

Meanwhile, Snapchat will need to improve its viewability metrics and work to create true programmatic access for marketers and advertisers as, according to Ben-Itzhak, “this will be a barrier to their success in 2018”.

YouTube
YouTube might not be considered a social network by any traditional measure, but it has joined Facebook, Twitter and Snapchat in having a tumultuous 2017, and as one of Google’s largest advertising channels, it’s also become a lightning rod for concerns over brand safety, abuse and more, writes Tim Maytom.

YouTube began 2017 in a fairly strong position. To most people, the video-sharing platform was synonymous with cute cats, poorly executed pranks and teenage ‘influencers’ talking directly to the camera from their bedrooms. Alongside mobile search, it formed one of Google’s key revenue sources, and as 2017 kicked off, it was focused on broadening its appeal, introducing new features like mobile live-streaming and a lighter version of its app for emerging markets.

But this tranquillity was not to last. In early February 2017, The Times published a damning front-page article demonstrating that ads for a number of household brands had appeared next to videos of extremist content. Screencaps showed Mercedes-Benz ads next to videos praising Islamic State, and campaigns for Argos and Sony next to anti-Semitic hate speech.

The next few weeks were disastrous for YouTube. Brands and ad networks suspended their ad spend, not just across YouTube but across Google’s entire ecosystem, as faith in programmatic buying sank. While Google claimed that incidents of advertising appearing next to extremist content were rare and isolated, tough questions were raised about how, if at all, the firm could regulate a platform where 400 hours of video were uploaded every minute.

By the end of March, the brand safety crisis was estimated to have cost Google over $750m (£525m) in lost revenues, as five of the top 20 US advertisers, representing around 7.5 per cent of total US ad spend, decided to freeze their business with Google. Representatives from the company faced questions from lawmakers in both the US and the UK over how extremist content had been allowed to proliferate on its platform, and why it was allowed to be monetised.

In the wake of the crisis, Google has joined other internet firms in establishing firm plans to deal with extremism on its platforms. YouTube has pledged to bring the total number of people reviewing content to 10,000 over the course of this year, with the safety team also helping to train machine learning technology that YouTube already has in place.

The aim is to eliminate extreme or abusive content before it is even published, and reports published by the firm suggest that as many as 83 per cent of extremist videos were automatically caught by its machine learning tech before receiving a single human flag. Tougher stances are also being taken on videos that don’t infringe YouTube’s policies but do contain “inflammatory religious or supremacist content”.

However, the fightback against extremist content was only the start of YouTube’s problems with brand safety. Over the course of 2017, more issues emerged, demonstrating how unwieldy the platform had become for both marketers and Google itself.

In October, following a tragic mass shooting in Las Vegas, YouTube was criticised when videos peddling debunked conspiracy theories around the event rose to the top of search results, and the firm had to tweak its search algorithm in response. In November, several reports revealed the bizarre videos being served to children on the YouTube Kids platform, many of which exploited YouTube’s algorithms to generate views. Then, just a few weeks later, another report revealed that YouTube had hosted content designed to appeal to sexual predators, and even served ads next to it.

All of these events were followed by the inevitable departure of a number of advertisers, and assurances from Google and YouTube that they were doing everything they could to prevent such content from being posted on the platform. But the truth is that many of these issues stemmed from problems inherent to YouTube’s scale and design. Any platform for user-generated content as big as YouTube (or any other social network) is going to have to rely on algorithms to police behaviour, and those algorithms can be fooled, exploited or made to backfire.

The flip side of the brand safety concerns can be seen in one of YouTube’s missteps from earlier in 2017. The platform’s ‘Restricted Mode’ filters out “more mature content” and is aimed at institutions like schools that want their students to be able to use YouTube as a resource without being able to access explicit content.

However, in mid-March, it emerged that Restricted Mode was also blocking out videos on LGBTQ topics that contained no explicit material, including videos on mental health concerns for LGBTQ youth and even music videos. YouTube claims to have fixed the problem now, but several LGBTQ figures criticised Google for how long it took to remedy the issue, as well as its half-hearted apology for the initial mistake.

“Why was this sort of thing not tested?” asked Rose Ellen Dix, an LGBTQ YouTuber, speaking at Advertising Week Europe. “The Restricted Mode has been around for several years, so how did the engineers who built it not realise that it was affecting videos this way? They need to test it more thoroughly before they roll these sorts of changes out. They need to realise that if they’re making changes to the algorithms that govern these things, and they do make big changes, it has a huge impact on people who are making videos and those that are looking for them.”

Whether it’s being too strict or too permissive, YouTube has struggled for over a year with managing its own operations and balancing its obligations to both users and advertisers. As the servant of two masters, it is increasingly pulled in different directions. When prominent YouTuber Logan Paul attracted criticism for showing a dead body on his channel, YouTube responded by tightening the requirements for monetisation. However, in response, many smaller content producers have begun discussing moving to different platforms where they can be more assured of views and advertising revenues.

Even months later, advertisers who were part of the initial brand safety controversy remain dissatisfied, with the Financial Times reporting that several had received refunds of just a “couple of dollars” through an automated system designed to credit advertisers whose campaigns had been served alongside content from accounts that YouTube later terminated for violating ad policies.

YouTube’s approach to brand safety is increasingly resembling a high-wire act, and advertisers are beginning to tire of operating without a safety net.