CNIL Conclusions on Google’s privacy policy

Just a few days ago, the European data protection authorities published their conclusions on Google’s compliance with the European Directive on Privacy. In short, they found that Google fails to respect essential principles of the Directive, such as limiting the use of personal data, minimising the personal data requested, and the right to object:

  • There is not enough information on the nature and usage of the collected data.
  • The way users can control their level of privacy is too complicated.
  • The data collected is not minimised for the purpose.
  • The retention periods are not specified.

As the CNIL puts it:

[…] it is not possible to ascertain from the analysis that Google respects the key data protection principles of purpose limitation, data quality, data minimization, proportionality and right to object. […]

Under the current Policy, a Google service’s user is unable to determine which categories of personal data are processed for this service, and the exact purposes for which these data are processed.

E.g.: the Privacy Policy makes no difference in terms of processing between the innocuous content of a search query and the credit card number or the telephone communications of the user; all these data can be used equally for all the purposes in the Policy.

Moreover, passive users (i.e. those that interact with some of Google’s services like advertising or ‘+1’ buttons on third-party websites) have no information at all.

On the combination of data across services, the change Google just made, the CNIL says:

Combination of data across services has been generalized with the new Privacy Policy: in practice, any online activity related to Google (use of its services, of its system Android or consultation of third-party websites using Google’s services) can be gathered and combined.

The European DPAs note that this combination pursues different purposes such as the provision of a service requested by the user, product development, security, advertising, the creation of the Google account or academic research. The investigation also showed that the combination of data is extremely broad in terms of scope and age of the data.

E.g.: the mere consultation of a website including a ‘+1’ button is recorded and kept during at least 18 months and can be associated with the uses of Google’s services; data collected with the DoubleClick cookie are associated to an identifying number valid during 2 years and renewable.

Here are the recommendations they made to Google to tackle the combination of data across services:

  • reinforce users’ consent to the combination of data for the purposes of service improvements, development of new services, advertising and analytics. This could be realized by giving users the opportunity to choose when their data are combined, for instance with dedicated buttons in the services (cf. the “Search Plus Your World” button),
  • offer an improved control over the combination of data by simplifying and centralizing the right to object (opt-out) and by allowing users to choose for which service their data are combined,
  • adapt the tools used by Google for the combination of data so that it remains limited to the authorized purposes, e.g. by differentiating the tools used for security and those used for advertising.

But there is good news for us citizens in this affair:

This letter [a letter to Google with the recommendations of the EU Data protection authorities] is individually signed by 27 European Data protection authorities for the first time and it is a significant step forward in the mobilization of European authorities.

Let’s hope Google’s next Data Privacy Policy arrives soon and adopts these recommendations.

 

Europe against Google’s new data privacy

Rumors are that the European data protection commissioners will soon act: it seems they have determined that Google’s new data privacy rules breach European law. Viviane Reding, the EU Justice chief, had already voiced her doubts in March, but there are still no details on what has been found illegal.

It is still not clear what they can do to enforce European law, as each country’s data protection commissioner has a different level of power, but as an example, they could oblige Google to go back to its previous system. I don’t see, practically, how Google could go back and disentangle the unified data, but it feels good to have somebody keeping an eye on this issue to protect us.

Can a big leak on privacy stop Big Data?

 

Bill Franks is Chief Analytics Officer for Teradata’s global alliance programs, so he knows more than a little about what’s going on in the advanced analytics space. In the International Institute for Analytics’ 2012 predictions, he said that the evolution of big data will depend on how the privacy issue is handled: ‘I have wondered what the “big moment” will be that causes everyone to realize how much about them is exposed and leads to a major popular revolt. Honestly, I thought the big blow up in December around Carrier IQ would be that moment.’

The [Carrier IQ] software collects usage information aimed at helping telecommunications companies and mobile device manufacturers identify hardware or network issues. […] The phone was even capturing key presses such as when you entered a password on a secure website. Naturally, this caused a huge uproar. (You can view this series of articles from CNNMoney for more detail: Part 1, Part 2, Part 3, and Part 4.)

As you can see, the intention of the software may be completely valid, but any electronic recording entails a risk for privacy. It is critical to create awareness of this risk, and to regulate access to and usage of all recorded information.

[…] The extent to which the tracking of behavior on the internet occurs – with Facebook, Google, and other public sites capturing data about who you are, what you are doing, where you are going, and what you want – is not known to most people. Even though many privacy policies technically declare intentions to collect and use data, the dozens of pages of “legalese” terms used aren’t read or understood by most people.

[…] I believe that privacy concerns will be a major influence on how big data itself, and the use of it, evolves. There will need to be an extremely high level of trust between organizations who want our data and those of us who provide it. That trust must be earned and maintained. All it will take will be a few cases of violated trust, intentional or not, to derail the relationship and set us all back.

 

Though I don’t think a privacy leak will ‘set us back’ as Bill Franks says, I do think there is a big need for a trusted organisation, institution or other group to regulate the privacy issue. If there were such a ‘Trusted Privacy Organisation’, there would be a way to work only with the applications that adhere to its standards and/or allow audits by such an institution.

 

TEDxBrussels, la suite…

Julie Meyer, founder and CEO of Ariadne Capital, was also very interesting, calling the economy of our 21st century ‘Individualist Capitalism’. Her very clever message to all entrepreneurs was: look at your ‘natural allies’ and ask yourself who has an interest in your success. They are going to be there and help you succeed!

On privacy and the Internet’s inability to forget, Mikko Hypponen left me thinking with his question: ‘Do you trust your future governments with what you said today?’

Alain De Taeye, co-founder of Tele Atlas, gave a very down-to-earth presentation. He talked about TomTom, and the fact that they are predicting their users’ immediate future by letting them know what is ahead of them: traffic jams, accidents, so that the driver can decide to take another route and avoid that near future. Who can positively say that being informed of the possibilities on the path in front of us is not, as he said, predicting our future? It is like learning a magician’s trick: it is no longer magic. Will we have the same feeling once we are able to know our future? Will it seem just as obvious to us?

That’s all for today, folks!

Is Deleted ‘removed from your sight’? Or ‘removed from every support’?

Max Schrems is a law student from Vienna who is filing complaints against Facebook over data privacy issues. What he discovered is that Facebook not only keeps everything you typed in or uploaded to your account (they have about 1,200 A4 pages of information about him, and he is only 24 years old!), but also keeps deleted information. In his case, he found messages he had deleted, with the added note: ‘Deleted: Yes’.

Since 2009, Facebook has had a second headquarters in Ireland, and every new European user signs their contract with that Irish Facebook company. So they are subject to European laws, which are much stricter regarding data privacy than those in the US. He filed complaints against Facebook with the Irish data protection authority, which is analysing them. Facebook’s answer to Max’s ‘Deleted’ message issue is that they removed it from a certain part of the site… If the information is no longer public, but kept on their servers, is that enough?
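The pattern behind a ‘Deleted: Yes’ note is what developers call a soft delete: the record is flagged rather than erased, so it vanishes from your view but not from the server. A minimal sketch in Python (the table and column names are my own illustration, not Facebook’s actual schema):

```python
import sqlite3

# In-memory table of messages with a soft-delete flag,
# mirroring the "Deleted: Yes" marker Schrems found.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE messages (id INTEGER PRIMARY KEY, body TEXT, "
    "deleted INTEGER DEFAULT 0)"
)
conn.execute("INSERT INTO messages (body) VALUES ('hello')")

# "Deleting" only sets the flag: the row disappears from the user's view...
conn.execute("UPDATE messages SET deleted = 1 WHERE id = 1")
visible = conn.execute("SELECT body FROM messages WHERE deleted = 0").fetchall()

# ...but the data is still stored on the server.
stored = conn.execute("SELECT body, deleted FROM messages").fetchall()
print(visible)  # []
print(stored)   # [('hello', 1)]
```

A hard delete would instead issue `DELETE FROM messages WHERE id = 1`, actually removing the row (though even then, backups may linger).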

For the whole story, check this video.

Can we predict social behavior from Internet data?

The New York Times posted an article, ‘Government Aims to Build a “Data Eye in the Sky”’, reporting that a US intelligence unit launched a research program to analyse public data and find predictions of social and political relevance.

Now social scientists are trying to mine the vast resources of the Internet — Web searches and Twitter messages, Facebook and blog posts, the digital location trails generated by billions of cellphones — to do the same thing [i.e. combine mathematics and psychology to predict the future, like the ‘psychohistory’ of Isaac Asimov].

The most optimistic researchers believe that these storehouses of “big data” will for the first time reveal sociological laws of human behavior — enabling them to predict political crises, revolutions and other forms of social and economic instability, just as physicists and chemists can predict natural phenomena. […]

This summer a little-known intelligence agency began seeking ideas from academic social scientists and corporations for ways to automatically scan the Internet in 21 Latin American countries for “big data,” according to a research proposal being circulated by the agency. The three-year experiment, to begin in April, is being financed by the Intelligence Advanced Research Projects Activity, or Iarpa (pronounced eye-AR-puh), part of the office of the director of national intelligence.

The automated data collection system is to focus on patterns of communication, consumption and movement of populations. It will use publicly accessible data, including Web search queries, blog entries, Internet traffic flow, financial market indicators, traffic webcams and changes in Wikipedia entries.

Needless to say, the article also touches on the data privacy issue. There are many comments on this news; I extracted an important part from Ike Solem’s first comment:

The fundamental flaw in Asimov’s notion of “predicting history” involves the mathematical concept of chaos, otherwise known as “sensitive dependence on initial conditions.”

[…] certain features of physical (and biological) systems exhibit sensitive dependence on initial conditions, such that a microscopic change in a key variable leads to a radically different outcome. While this has been heavily studied in areas like meteorology and orbital physics, it surely applies to ecology, economics, and human behavioral sciences too.

Thus, it’s a false notion that by collecting all this data on human societies, one can accurately predict future events. Some general trends might be evident, but even that is very uncertain. Just look at the gross failure of econometric models to predict economic collapses, if you want an example.
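Sensitive dependence on initial conditions is easy to demonstrate numerically. A small Python sketch using the logistic map (a standard textbook example of chaos, not taken from the article): two trajectories starting a hair’s breadth apart quickly stop resembling each other.

```python
# Two trajectories of the logistic map x -> r*x*(1-x), in its chaotic
# regime (r = 4), starting only 1e-10 apart.
r = 4.0
x, y = 0.2, 0.2 + 1e-10

gap = 0.0
for _ in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    gap = max(gap, abs(x - y))

# The tiny initial difference is amplified at every step; within a few
# dozen iterations the trajectories are completely decorrelated.
print(gap)  # a gap of order 1, despite the 1e-10 difference at the start
```

This is exactly why “measure everything, predict everything” fails: no measurement of initial conditions is precise enough to pin down the long-term trajectory.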

So there is always the possibility of an unforeseen agent that changes the predicted behaviour. Still, many more trends will be uncovered from the available big data sets than human minds have discovered up to now. But what about the ‘quantum effect’? If a trend is announced publicly, would that announcement make people follow it just because they are expected to do so? Or, on the contrary, would it make them change their behaviour radically? I think we are still far away from predicting human behaviour.

Internet interaction in the spotlight

Until now, you should have considered everything you wrote on Facebook as public. Now, with Open Graph, the new application Mark Zuckerberg presented, everything YOU DO on Facebook will be public too.

[…]

First, Facebook observed that asking people to manually Like, Share, or Comment on content requires an extra step that actually inhibits sharing and interaction. Rather than introduce changes to the buttons, it will simply change the technical framework for apps within Facebook so that rather than requiring you to click to share, comment or express sentiment, the app automatically broadcasts a status update for you. For example, with the new Facebook and Spotify integration, simply listening to music automagically updates my News Feed (eventually my timeline). Depending on how much interaction it triggers, that activity may also show up in your News Feed.

So now you know. As your Internet Reputation is becoming increasingly important, be aware of the capabilities of the new tools!

Now, as in quantum physics, knowing that you are being observed may change your behaviour 🙂 Unfortunately, it also increases the stress of the person being watched.

Check the full article: Whoops, I didn’t mean for you to read this

Data Philanthropy is Good for Business

Give data as you give blood. Global Pulse, an innovation initiative in the Executive Office of the UN Secretary-General, wants to analyse private data for the public good. The idea is to find patterns in data coming from private companies, and share those findings. For that, they have to find a way for enterprises to deliver trends about their users while guaranteeing the anonymity of the information (to protect users’ privacy), and without losing corporate competitiveness.
Check the Forbes article ‘Data Philanthropy is Good for Business’:

Corporations today are mining this data to gain a real-time understanding of their customers, identify new markets, and make investment decisions. This is the data that powers business, which the World Economic Forum has described as a new asset class. […]

Consider: MIT researchers have found evidence that changes in mobile phone calling patterns can be used to detect flu outbreaks; A Telefónica Research team has demonstrated that calling patterns can be used to identify the socioeconomic level of a population, which in turn may be used to infer its access to housing, education, healthcare, and basic services such as water and electricity; and researchers from Sweden’s Karolinska Institute and Columbia University have used data from Digicel, Haiti’s largest cell phone provider, to determine the movement of displaced populations after the earthquake, aiding the distribution of resources.

At Global Pulse, an innovation initiative of the UN Secretary-General, we believe that analysis of patterns within big data could revolutionize the way we respond to events such as global economic shocks, disease outbreaks, and natural disasters. Our team of data scientists, open source hackers, and international development experts functions the way an R&D lab does: asking questions, formulating and testing hypotheses, building prototypes and collaborating with partners within and outside the United Nations to develop methods for harnessing real-time data to gain a real-time understanding of human well being.

We’re in discussions with corporations about how their digital services could be used as human sensor networks to detect the early warning signs that communities are losing jobs, getting sick, not getting enough food, or struggling to make ends meet. Now we need to find a way for the private sector to share, safely and anonymously, some of what it knows about its customers to help give the public sector a badly needed edge in protecting citizens. It’s the concept that has been called “data philanthropy.”[…]

The companies that engage with us, however, don’t regard this work as an act of charity. They recognize that population well being is key to the growth and continuity of business. For example, what if you were a company that invested in a promising emerging market that is now being threatened by a food crisis that could leave your customers unable to afford your products and services? And what if it turned out that expert analysis of patterns in your own data could have revealed all along that people were in trouble, while there was still time to act?

Data philanthropy could make a real difference, and it makes good business sense as well.
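One simple way to share trends without exposing individuals, in the spirit of what Global Pulse describes, is to release only aggregates over groups above a minimum size, a k-anonymity-style rule. A sketch with made-up records (the regions, labels and threshold are purely illustrative):

```python
from collections import Counter

# Hypothetical per-user records a company might hold: (region, status).
records = [
    ("north", "job_lost"), ("north", "job_lost"), ("north", "job_lost"),
    ("north", "employed"), ("north", "employed"),
    ("south", "job_lost"),  # a group of one: too identifying to release
]

K = 3  # minimum group size before an aggregate may be shared

counts = Counter(records)
# Only counts covering at least K individuals leave the company.
shareable = {group: n for group, n in counts.items() if n >= K}
print(shareable)  # {('north', 'job_lost'): 3}
```

Real anonymisation is harder than this (combinations of attributes can still re-identify people), but the suppression-below-a-threshold idea is the basic building block.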

20GB Data leak from missing dots!

Ed Borasky (@znmeb) pointed me to this article: Missing dots from email addresses opens 20GB data leak. In it, Mark Stockley explains an easy-to-follow scheme to obtain private information, based on the errors users make when typing an email address.

Security researchers have captured 120,000 emails intended for Fortune 500 companies by exploiting a basic typo. The emails included trade secrets, business invoices, personal information about employees, network diagrams and passwords.

Researchers Peter Kim and Garrett Gee did this by buying 30 internet domains they thought people would send emails to by accident (a practice known as typosquatting).
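The ‘missing dot’ trick is mechanical: many companies route mail through subdomains, and dropping the dot between the subdomain and the domain yields a registrable look-alike. A sketch of generating such doppelganger domains (the names below are hypothetical, not the researchers’ actual targets):

```python
# A mistyped "user@us.example.com" becomes "user@usexample.com" when the
# dot is dropped; whoever registers "usexample.com" receives the mail.
def doppelganger(subdomain: str, domain: str) -> str:
    """Join subdomain and domain without the separating dot."""
    return subdomain + domain

# Hypothetical subdomain/domain pairs a squatter might target.
targets = [("us", "example.com"), ("mail", "example.co.uk")]
squats = [doppelganger(s, d) for s, d in targets]
print(squats)  # ['usexample.com', 'mailexample.co.uk']
```

The researchers simply registered such domains, set up a catch-all mailbox, and waited for mistyped mail to arrive.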

And in all this, they are not even doing anything illegal! But they could easily have gone further, crossing the line and exploiting these misspelled addresses to put themselves between the originator and the recipient (a man-in-the-middle attack). That way they would have intercepted all correspondence. How? By forwarding the original mail to the intended recipient (with the address corrected) and expecting him to simply click ‘Reply’. That would send them the answer, and, so that the originator would not get suspicious, they would pass the answer back on.
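That relay can be sketched as a toy simulation (all names and the hard-coded typo fix are illustrative; a real attack would sit at the mail-server level):

```python
def fix_typo(address: str) -> str:
    # Hypothetical correction: "bob@usexample.com" -> "bob@us.example.com".
    return address.replace("@usexample.com", "@us.example.com")

intercepted_log = []

def relay(sender, mistyped_to, body, deliver):
    """Forward to the intended recipient, capture the reply, pass it back.

    Neither side notices, yet both directions of the exchange are logged.
    """
    real_to = fix_typo(mistyped_to)
    intercepted_log.append((sender, real_to, body))   # outbound captured
    reply = deliver(real_to, body)                    # recipient hits "Reply"
    intercepted_log.append((real_to, sender, reply))  # inbound captured too
    return reply

# A stand-in for the real recipient's mail client.
reply = relay("alice@corp.com", "bob@usexample.com", "Q3 invoice attached",
              deliver=lambda to, body: f"Received: {body}")

print(len(intercepted_log))  # 2 -- both directions of the exchange were logged
```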

So when sending your next big idea, be sure to check your spelling!

By which country’s law is your Cloud ruled?

I don’t think I’m the only one surprised by this possibility, am I?

Microsoft admitted to a ZDNet reporter that they would turn over data from European companies, in European based servers, if the data was the subject of a U.S. Patriot Act investigation or request.

Now, everyone has always known that data held in U.S. based server locations could be the subject of a Patriot Act request and that the businesses and persons that were the subject of the request might not ever know if their data had been turned over. But I think a lot of companies, especially overseas, were surprised to find that the U.S. government could request their non-U.S. based data simply if the company running the service was U.S. based.

Jim Rapoza’s article, No Data Privacy In The Cloud, ends with sound advice: if you are using the cloud, use strong encryption, and keep the key safe!

If the cloud provider can’t decrypt your data, they can’t turn it over.
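The principle of ‘encrypt before you upload’ can be sketched with a toy keystream cipher: if only ciphertext ever reaches the provider, there is nothing meaningful for them to turn over. This is an illustration of the idea only, not a vetted cipher; in practice use an authenticated scheme such as AES-GCM from a real crypto library.

```python
import hashlib
import secrets

# Derive a keystream from a secret key via SHA-256 in counter mode.
# (Sketch of the principle only -- NOT production cryptography.)
def keystream(key: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    # XOR is symmetric: the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)          # stays with you, never uploaded
plaintext = b"quarterly figures"
uploaded = xor_cipher(key, plaintext)  # all the provider ever sees

assert uploaded != plaintext
assert xor_cipher(key, uploaded) == plaintext  # only the key holder can read it
```

The operational point is the last line of the post: the key, not the ciphertext, is what must be guarded.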