In reply to thomasadixon:
> So they used data to target people and then tried to convince these people
Yes, that's established.
> who they thought would likely agree with them, due to the data they had
That's a big extrapolation, and it's certainly inaccurate. Millions are not spent convincing people who already agree with you. The linked article makes it quite clear that the aim of this endeavour was to find 'slivers of influence that can tip an election.'
> What exactly is the big deal here?
Really? Ok. Due to almost-opaque business practices and privacy policies, very few users of Facebook, Google and a plethora of other sites fully understand how completely the combined data set from these entities penetrates their lives. Where they've been. Who they're friends with. Where they like to hang out. Their interests, sexual preferences, employment history. Who they admire. And in truth, that's just the tip of the iceberg. Similar methodology has already shown clearly that people's political leanings can be accurately gleaned from their Facebook 'likes' alone. Not to mention the other data:
'...the company also (perfectly legally) bought consumer datasets – on everything from magazine subscriptions to airline travel – and uniquely it appended these with the psych data to voter files. It matched all this information to people’s addresses, their phone numbers and often their email addresses. “The goal is to capture every single aspect of every voter’s information environment...”'
This huge, diverse data set was assembled and used to identify and influence key swing voters through a stream of individualised, targeted advertisements *without the individual's knowledge or consent*. Were this set up as an interventional experiment, it would be extremely unlikely to pass review by a research ethics committee. The methodology violates several key points of well-established codes of ethics, first and foremost the requirement for informed consent.
I'd further suggest that the effectiveness of such methods is likely closely linked to how unaware the recipient is.
> That companies that do this are getting better? In politics data has *always* been used to target people. Parties target seats, and then canvas based on where they think they have a chance of winning
Yes. Until now, however, political parties have not poked around in people's individual lives to identify susceptible voters to target specifically. They've put themselves out there via canvassing and traditional advertising and hoped their 'product' was good enough to buy, just like everyone else.
It's plain to see that psychological priming via individualised, targeted advertising, built on a data set the consumer likely has no idea they have even allowed to be sold (let alone its extent), affords entities with access to these methods an unfair and unethical advantage.