Elections warn about ethical issues in algorithms

I tweeted recently about this article on how Big Data was used in the last American presidential campaign.

Concordia Summit, New York 2016

“At Cambridge,” he said, “we were able to form a model to predict the personality of every single adult in the United States of America.” The hall is captivated. According to Nix, the success of Cambridge Analytica’s marketing is based on a combination of three elements: behavioral science using the OCEAN Model, Big Data analysis, and ad targeting. Ad targeting is personalized advertising, aligned as accurately as possible to the personality of an individual consumer.

Nix candidly explains how his company does this. First, Cambridge Analytica buys personal data from a range of different sources, like land registries, automotive data, shopping data, bonus cards, club memberships, what magazines you read, what churches you attend. Nix displays the logos of globally active data brokers like Acxiom and Experian—in the US, almost all personal data is for sale. […] Now Cambridge Analytica aggregates this data with the electoral rolls of the Republican party and online data and calculates a Big Five personality profile. Digital footprints suddenly become real people with fears, needs, interests, and residential addresses.
[…]

Nix shows how psychographically categorized voters can be differently addressed, based on the example of gun rights, the 2nd Amendment: “For a highly neurotic and conscientious audience the threat of a burglary—and the insurance policy of a gun.” An image on the left shows the hand of an intruder smashing a window. The right side shows a man and a child standing in a field at sunset, both holding guns, clearly shooting ducks: “Conversely, for a closed and agreeable audience. People who care about tradition, and habits, and family.”
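The targeting logic Nix describes — score each voter on the Big Five (OCEAN) traits, then serve the ad creative that matches the profile — can be made concrete with a toy sketch. Everything below is invented for illustration: the scores, the thresholds, and the ad copy; Cambridge Analytica's actual models are not public.

```python
# Toy sketch of psychographic ad targeting on Big Five (OCEAN) scores.
# All scores, thresholds, and ad copy are invented for illustration.

def choose_gun_rights_ad(profile):
    """Pick the 2nd Amendment ad variant matching an OCEAN profile.

    `profile` maps each trait (openness, conscientiousness, extraversion,
    agreeableness, neuroticism) to a score in [0, 1].
    """
    # Anxious, dutiful voters: frame the gun as insurance against a threat.
    if profile["neuroticism"] > 0.6 and profile["conscientiousness"] > 0.6:
        return "threat: a burglar at your window -- a gun as an insurance policy"
    # Closed, agreeable voters: frame the gun as tradition and family.
    if profile["openness"] < 0.4 and profile["agreeableness"] > 0.6:
        return "tradition: a father and son duck hunting at sunset"
    # Everyone else gets a generic message.
    return "generic: protect your constitutional rights"

anxious_voter = {"openness": 0.5, "conscientiousness": 0.8,
                 "extraversion": 0.4, "agreeableness": 0.3, "neuroticism": 0.7}
traditional_voter = {"openness": 0.2, "conscientiousness": 0.5,
                     "extraversion": 0.5, "agreeableness": 0.8, "neuroticism": 0.3}

print(choose_gun_rights_ad(anxious_voter))      # threat framing
print(choose_gun_rights_ad(traditional_voter))  # tradition framing
```

The point of the sketch is how little machinery is needed: once the personality scores exist, matching a message to a psyche is a handful of if-statements.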

Now I came across this other article by Peter Diamandis, describing what we can expect in four years' time for future election campaigns.

5 Big Tech Trends That Will Make This Election Look Tame

If you think this election is insane, wait until 2020.

I want you to imagine how, in four years’ time, technologies like AI, machine learning, sensors and networks will accelerate.

Political campaigns are about to get hyper-personalized thanks to advances in a few exponential technologies.

Imagine a candidate who now knows everything about you, who can reach you wherever you happen to be looking, and who can use info scraped from social media (and intuited by machine learning algorithms) to speak directly to you and your interests.

[…] For example, imagine I’m walking down the street to my local coffee shop and a photorealistic avatar of the presidential candidate on the bus stop advertisement I pass turns to me and says:

“Hi Peter, I’m running for president. I know you have two five-year-old boys going to kindergarten at XYZ school. Do you know that my policy means that we’ll be cutting tuition in half for you? That means you’ll immediately save $10,000 if you vote for me…”

If you pause and listen, the candidate’s avatar may continue: […] “I’d really appreciate your vote. Every vote and every dollar counts. Do you mind flicking me a $1 sticker to show your support?”

I know, this last article is from SingularityHub, and they do tend to be alarmist; but given how fast technology advances, the predictions they make are not far-fetched…

In any case, this reminds me how important it is to ACT on the ethical issues of algorithms. (The capital letters are to stress the word: we need to take action.) There are many issues that need to be identified, discussed, raised awareness of, and regulated, and on some of them we can already act at the company level.

I talked in May last year at the Data Innovation Summit about the biases that can be (and usually are) replicated by new algorithms based on data. Since then I have been working on a training program to help identify and correct those biases when designing and using algorithms, and the articles above remind me that this cannot be delayed; it is needed right now.
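One concrete check that kind of training can teach is measuring whether a model's decisions differ across groups. Here is a minimal sketch that computes a demographic-parity gap (the difference in approval rates between groups) — one of several possible bias metrics, and the records and the 0.1 tolerance below are invented for illustration:

```python
# Minimal sketch: checking a model's decisions for a demographic-parity gap.
# The records and the 0.1 tolerance are invented for illustration.

def selection_rate(decisions, group, value):
    """Fraction of people with `group == value` who got a positive decision."""
    members = [d for d in decisions if d[group] == value]
    return sum(d["approved"] for d in members) / len(members)

def parity_gap(decisions, group):
    """Largest difference in approval rate between any two groups."""
    values = {d[group] for d in decisions}
    rates = [selection_rate(decisions, group, v) for v in values]
    return max(rates) - min(rates)

# Invented loan decisions produced by some model.
decisions = [
    {"gender": "F", "approved": 1}, {"gender": "F", "approved": 0},
    {"gender": "F", "approved": 0}, {"gender": "F", "approved": 0},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 1},
    {"gender": "M", "approved": 1}, {"gender": "M", "approved": 0},
]

gap = parity_gap(decisions, "gender")
print(f"approval-rate gap between genders: {gap:.2f}")  # 0.50
if gap > 0.1:  # arbitrary illustrative tolerance
    print("warning: model may be replicating a bias in its training data")
```

A check like this says nothing about *why* the gap exists — that is where the harder work of auditing the training data and the features begins — but it is cheap enough to run on every model before it ships.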

So if you are interested in making your people and organization aware of biases (both human and digital ones), and in training them to fix these issues, contact me!


We are creating our future. Let's not close our eyes: we can take control and assume our responsibility by setting the guardrails that will guide the path to our future society.
