Fake News: Brands and Social Media Credibility

With the advent of social media came hope that consumers would be treated to a world where the truth prevailed – a brave new world powered by the people against global media moguls peddling propaganda. Now, it appears the situation has been turned on its head, with the likes of Pepsi, New Balance, Macy’s, Grubhub, and Oreo falling victim to fake news that spread rapidly via social media.

Social media – A breeding ground for viral (but fake) content

Brands have a real problem with platforms such as Facebook. Social media is now so ingrained in our lives that consumers seem to believe every last word of every shared post. The trouble is, we truly trust it.

Like lambs to the slaughter, brands can be killed off by a single post from those who profiteer from clickbait. At least we knew the clear positions of traditional media outlets – vested interests, self-declared political persuasions and (with a little research) their connections to the world of big business. Now this no-holds-barred free speech is looking less like a utopia and more like an Orwellian nightmare for brands doing battle with news fakery.

Compared to mainstream news, this fakery is altogether more difficult to decipher – both its origins and its intent. There are those who create clickbait for a rush of penny-click profit (something Google has vowed to get to grips with when it comes to AdSense) – but these are the easy ones to tackle. Then there are those doing it just for kicks – the pre-teen in his bedroom, bored between tea and bedtime.

And the potential result? Outcomes that range from the bizarre to the financially devastating.

Fake news can (allegedly) be held at least partly to blame for electing a megalomaniac businessman – now the most powerful man in the world (see Buzzfeed’s take on this). It can also be credited with Kylie Jenner having been “awarded” the Medal of Freedom for “realizing stuff”. Amazing. OK, you could argue that the target market of the Kardashian clan is less discerning than average, but even market traders (who we’d like to think are pretty smart) aren’t immune: shareholders of two companies suffered £1.05m in losses following fake tweets claiming both were under US government investigation.

Brands – you’ve got a challenge on your hands. And you may be on your own with this one.

There’s been much heat on the likes of Facebook to get on top of this issue. Mark Zuckerberg champions the importance of free speech, yet his position on its antithesis – propaganda and brand BS – was until recently a flat-out denial that the problem existed. Bottom line? It’s debatable whether this is the responsibility of social media platforms, and even if it were, brands can’t rely on them for protection from fake news.

PR guru Tony Telloni says that it takes anywhere from six weeks to six months for brands to recover from the impact of a fake news story. A reactive, finely honed strategy is therefore non-negotiable for brands in every industry. But when it comes to the crunch, brands face two big problems: the speed at which these stories spread, and the difficulty of countering their impact by reaching people en masse (and even then – will the consumer listen?).

So, how can brands manage the risk of fake news?

As a first step, straight-from-the-horse’s-mouth communications from brand to consumer need to be innovative and engaging enough to contend with the allure of fake news. But beyond this official messaging, brands need to respond to such stories with a voice that is authentic and credible. And there may be no more authentic or credible voice than that of real, fellow social media users. Yet when faced with a viral fake story, some brands are missing a trick by ruling out thousands of potential mouthpieces who could aptly counteract mistruths and rumours – their employees.

Whilst employee advocacy has been harnessed and accepted as important for productivity, recruiting, brand awareness, social selling, event attendance and more, brands have been slow to catch on to it as a tool for fighting fake news.

One example that others may learn a lot from is healthcare company Humana, where staff aren’t only granted access to social media, but are encouraged to become original content creators themselves. This is the kind of innovation that could be capable of truly tackling social media news fakery.

By comparison, 54% of employees in the wider working world are banned entirely from social media. It’s all pretty archaic, to say the least; in fact, 45% of companies ban it precisely because they fear it will damage business reputation (Lewis Communications and HCL Technologies), when quite the opposite could be true.

Whilst we can hope that Facebook and its peers get to grips with the issue of fake stories, brands need to presume that all responsibility currently lies at their own door.

When it comes to voices consumers may actually listen to, employee advocacy is a critical tool to be harnessed – and yet all too many companies fall at the first hurdle by not even allowing their staff online, all whilst the world may well be falling down around their ears.

Ultimately, companies in every industry need to think hard about their strategies if they’re to limit the impact of any future fake story – a challenge so imposing that global businesses may need to completely rip up corporation-wide communication policies.

Should we be scared of artificial intelligence?

Recent technology news coverage has been dominated by growing concerns over the development of artificial intelligence (AI). My interest was piqued by a particular Forbes article that frames this topic in terms of recent advancements and explores a variety of viewpoints from industry heavyweights. For your own reference, the article can be found here: http://www.forbes.com/sites/theopriestley/2015/09/07/musk-and-hawking-are-wrong-we-should-fear-facebook-building-an-artificial-intelligence/

The author, Theo Priestley, hooked me and prompted me to read the full article – one in which he fundamentally disagrees with big technology players such as Elon Musk and Stephen Hawking. Hang on, why would anyone disagree with two of the most forward-thinking and well-respected human beings on our planet?! If anyone is going to know something about this growing phenomenon, surely it would be one of those two? We should be paying attention, not calling them ‘wrong’! Right?

Elon Musk, founder and CEO of Space Exploration Technologies Corp, says: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that.”

Eminent physicist Stephen Hawking’s view is that “The development of full artificial intelligence could spell the end of the human race.”

Sure, the unknown is a scary thing, and the movie industry has done much to perpetuate the widely accepted ‘future’ of Artificial Intelligence – we’ve all seen the movies. There is a formulaic and uniform outcome – the Artificially Intelligent being turns on its creator, struggles for power, and is ultimately a force of destruction.

Theo describes how, in the real world, artificial intelligence handles queries, helps users and solves problems. This is possible through ‘understanding human behaviour, rather than the traditional method of building artificial intelligence by mimicking how the brain works through algorithms.’

Artificial intelligence products already on the market include Siri, Cortana and Google (OK Google). The newest AI to launch is ‘M’, the Facebook virtual assistant.

‘M’ would work in much the same way as Ava, the artificially intelligent character in the movie ‘Ex Machina’. (Side note: highly recommended viewing!)

The data gets pulled, including personal information: names, nicknames, ages and so on. An important point Theo raises is that Facebook also owns WhatsApp and Instagram, meaning its data pull is far wider than that of any other network. It has access to phone numbers, photos, filter preferences, bio words and much, much more that the average user may not have considered before. This should make dealing with human queries an easy task, as Facebook already holds most of a user’s information and can deliver tailored results specific to that individual.

Part of the training and development of ‘M’ relies heavily on human ‘assistants’, whose own problem-solving endeavours are monitored and tracked – which websites they visit to glean the best information, which keywords they type, and so on. This leads to further precise adjustments that align ‘M’ more closely with human behaviour patterns.

You may think this will lead to a brand new kind of breach of personal data, or even of personal freedom. However, companies such as Facebook have already manipulated users’ news feeds as part of psychology experiments – according to Facebook, to measure how emotions spread across social media. Personally, I feel Facebook may have done this to understand human behaviour in a deeper context. Our once tacit and private thoughts, reactions and emotions are now on the open market to be measured, monitored and, ultimately, mimicked.

With that in mind, I share some of Theo Priestley’s concerns. We should proceed with caution and prudence when allowing certain companies to create AI, and carefully examine their motives and end goals.

What are your thoughts on this? Do you think we should fear Artificial Intelligence? Or is the development of AI safe if the right organisations are at the forefront?

Leave a comment below, even if it’s just about how much you love/hate Ex Machina!