Are algorithms better than editors?

Technology apparently knows us better than we know ourselves. So maybe we should let it tell us what to publish too?

‘Big data’ is a phrase that’s getting a lot of airplay at the moment. The basic premise is that we now have the computing power at our fingertips to crunch massive quantities of disparate information and use it to unearth previously unappreciated things – about people.

According to various media reports, the US election ultimately came down to just a few counties in a few swing states. That’s because both parties had huge databases of information gathered about voters, which allowed them to predict very accurately how states and even counties would vote long before polling day. This let them micromanage the campaign, spending their resources only on the people who ‘mattered’.

On the one hand this seems really smart. This kind of thing is a bit of a holy grail for marketers. To roll out the oft-quoted phrase attributed to John Wanamaker: “Half the money I spend on advertising is wasted; the trouble is I don't know which half.” Increasingly, with the web and big data to help them, marketers DO know which half is wasted. (Or they think they do.)

Take this thinking to its conclusion and you can see that in politics this approach could be massively undemocratic. People who live in states that are bound to vote a particular way regardless (according to the data wonks) aren’t worth talking to at all. Leave them alone – don’t even bother telling them anything. Conversely, imagine if you just happen to live in one of the key swing-state counties that the data wonks have worked out really matter – your vote is suddenly worth exponentially more than the votes of millions of people elsewhere. It would be worth moving to one of these counties just to have that kind of influence.

I’ve recently read The Filter Bubble: What the Internet Is Hiding from You by Eli Pariser and it’s just brilliant. Its basic premise is quite similar. As algorithms get better and better at knowing what you want (or what they think you want), they’ll just keep dishing that up and you’ll never see anything else. So, click on the same friend a few more times on Facebook and their updates get pushed up in your newsfeed – do that enough and it’s possible you’ll increasingly see only the updates of the same few friends that Facebook thinks ‘really’ matter to you, at the expense of all the others. Likewise for Google. Keep searching for, say, information that suggests you have a bias towards voting Republican (stuff that’s pro gun ownership, maybe?) and slowly the search engine will start serving up more of the same, and you’ll see less and less stuff that’s Democrat-leaning (stuff that’s pro gay marriage, for example). You’ll begin to be locked inside a bubble of stuff that is highly ‘relevant’ to you, to the extent that it shuts out all conflicting points of view.
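None of this is any platform’s actual code, of course, but the feedback loop Pariser describes is simple enough to sketch. The toy feed below (all topics, posts and numbers are made up) just boosts a topic’s weight every time you click on it and ranks posts by those weights – which is all it takes for one kind of story to crowd out everything else.

```python
# A toy illustration of the filter-bubble feedback loop (not any real
# platform's algorithm): clicks boost a topic's weight, and the feed is
# ranked by those weights, so what you click is what you keep seeing.
from collections import defaultdict

class ToyFeed:
    def __init__(self, posts):
        self.posts = posts                       # list of (topic, text) tuples
        self.weights = defaultdict(lambda: 1.0)  # every topic starts equal

    def click(self, topic):
        # Each click makes that topic a little more 'relevant'.
        self.weights[topic] *= 1.5

    def ranked(self):
        # Highest-weighted topics float to the top; the rest sink out of view.
        return sorted(self.posts, key=lambda p: self.weights[p[0]], reverse=True)

posts = [("guns", "Pro gun-ownership op-ed"),
         ("marriage", "Pro gay-marriage op-ed"),
         ("sport", "Match report")]
feed = ToyFeed(posts)
for _ in range(3):
    feed.click("guns")                           # keep clicking one kind of story...
print([text for _, text in feed.ranked()])       # ...and it dominates the feed
```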

Both of these concepts are fundamentally about the ability of technology to know us better than we know ourselves, or to be better at deducing subtle connections than we can ever hope to be. And maybe (maybe) it is better. But is that a good thing?

You know all that stuff you stick on your Facebook page? It’s not just Facebook using it. It gets sold on to huge third-party companies that use it to model and predict behaviour. They are getting better and better at it. Soon they will know that because you like Homeland you are more likely to buy one brand of soft drink over another, or one car over another. Did you know that when you read a book on Kindle, Amazon is watching what you read? So you spent the afternoon reading a novel with a chapter featuring a car chase where the hero drives a BMW? It won’t be long before the ad you get served up next time you hit Amazon will be for… a BMW.

So… you might have guessed that I’m not a huge fan of big data. I believe the Net should be about helping us make more interesting and unexpected connections – about serendipity and about humanity ahead of technology and profit.

What has all this got to do with content then? This week a company called Percolate raised US$9 million in Series A funding. Percolate has developed a platform to help brands use social media more effectively. It uses an algorithm to suggest content ideas for social media community managers (i.e. the people who run a brand’s Facebook page, blog and so on). This means they can spend their time producing more ‘appropriate’ content, more quickly.

To quote some of the stuff on their website: “Percolate’s goal is to make content creation easy by prompting community managers with ideas and inspiration… Percolate is constantly scanning for interesting areas for the brand to explore for stock content, whether that’s a long-form blog post, an infographic or a video.”
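Percolate doesn’t publish how its algorithm actually works, but you can imagine a crude version of the general idea: scan a stream of recent posts, count which topics keep coming up, and prompt the community manager with the most frequent ones. The sketch below is purely hypothetical – the post stream, stopword list and function are all made up for illustration.

```python
# A crude, hypothetical sketch of 'scanning for interesting areas' --
# not Percolate's actual method, just the general idea: count which
# keywords keep appearing in a stream of recent posts and suggest the
# most frequent as content prompts.
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of", "for", "to", "in", "is", "on", "how"}

def suggest_topics(recent_posts, n=3):
    words = Counter()
    for post in recent_posts:
        for word in post.lower().split():
            if word.isalpha() and word not in STOPWORDS:
                words[word] += 1
    return [word for word, _ in words.most_common(n)]

# Made-up example stream of posts the tool might be 'scanning'.
stream = ["New infographic on coffee trends",
          "Coffee shops and the rise of remote work",
          "Video: how remote teams stay productive"]
print(suggest_topics(stream))  # e.g. ['coffee', 'remote', ...]
```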

Who needs research and consideration to write content? Just get an algorithm to come up with the ideas for you. But how would you feel if you were reading stuff created off the back of prompts from a piece of technology rather than by a real person?
