New computer interface using radar technology

Thanks to Otticamedia.com


Have you seen this article?  It’s about Project Soli from Google’s Advanced Technologies and Projects (ATAP) group.  They have implemented a new way to communicate with a computer: through radar.  The radar captures the slight movements of the hand, as in this picture, where just moving your fingers in the air lets you move a ‘virtual’ slider.

Fantastic, can’t wait to try it!

Be Sociable, Share!

How to lie with charts

I hope you didn’t miss the article on visualization from the Harvard Business Review.  It is called ‘Vision statement: How to lie with charts’, and it’s full of clearly stated examples.

http://en.wikipedia.org/wiki/United_States_presidential_election,_2008

Source: Wikipedia

This color-coded map is one of their examples: coloring each county with the political color of its majority vote is misleading.  The map represents the 2008 election (Obama versus McCain), and roughly 80% of the US appears colored in red (the Republican color), yet the Republican candidate John McCain received only about 46% of the votes.  The mismatch between the natural expectation formed by looking at this map and the real outcome comes from representing on a map information that is not geographic: the number of votes in a county or a state is not proportional to its geographical size.
[…] you could call it the New York City problem: 0.01% of the area but 2.7% of the population.

A suggested better representation uses bubbles with sizes proportional to the number of votes, yielding this map, which more correctly shows a majority of blue instead.
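To make the idea concrete, here is a minimal sketch of the sizing rule such a bubble map relies on, with made-up vote counts rather than the real 2008 data: the bubble’s *area*, not its radius, must scale with the vote count, so the radius grows with the square root of the votes.

```python
import math

# Hypothetical county-level vote counts (illustrative numbers only).
counties = {
    "Rural County A": 12_000,
    "Suburban County B": 150_000,
    "New York County": 750_000,
}

def bubble_radius(votes, scale=0.01):
    """Radius such that the bubble's AREA is proportional to the vote count.

    Sizing by area (radius ~ sqrt(votes)) avoids visually exaggerating
    populous counties, which happens if the radius itself is made
    proportional to the votes."""
    return scale * math.sqrt(votes)

for name, votes in counties.items():
    print(f"{name:>18}: {votes:>7} votes -> radius {bubble_radius(votes):.2f}")
```

With this rule, a county with ten times the votes gets a bubble with ten times the area, which is what the eye actually compares.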

Source: hbr.org


Visualization is growing in importance now that so much data surrounds us.  It can help identify trends, find patterns, and show relations within the data.  It can show what the data represents, presenting it in an intuitive way.

But as this article shows, used in the wrong way, visualization can mislead you just as well.

To be on the safe side, it’s better to check the numbers or data behind a representation in order to confirm what the image is showing you… or to find out whether somebody is trying to trick you!


Correlation and Causation in Big Data

Big data began as a term for extremely large data sets that cannot be managed or analyzed with conventional database programs.  It is not only a matter of volume exceeding the capacity of standard data management tools, but also of the variety and unstructured nature of the data (it comes from different sources such as the sales department, the customer contact center, social media, mobile devices and so on) and of the velocity at which it moves (imagine what it takes for a GPS to continually recalculate the next move to stay on the best route and avoid traffic jams: processing all the traffic information coming from official sources as well as from other drivers in real time, and transmitting the details before the car reaches the next crossroad).

The term ‘Big Data’ is also used for the new technology needed to process such data and reveal patterns, trends, and associations.  Furthermore, the term is now synonymous with big data’s analytical power and its business potential to help companies and organizations improve operations and make faster, more intelligent decisions.

What is big data used for?

The first and most evident use is statistics: how many chocolates have we sold? What are the global sales around the world, split per country? Where do the customers come from?
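A minimal sketch of this first, statistical use, with invented sales records and plain Python: a global total and a per-country split.

```python
from collections import defaultdict

# Hypothetical sales records: (country, units_sold). Illustrative only.
sales = [
    ("Belgium", 120), ("Switzerland", 200), ("Belgium", 80),
    ("Germany", 150), ("Switzerland", 50),
]

# Aggregate units per country, then report the global total and the split.
totals = defaultdict(int)
for country, units in sales:
    totals[country] += units

print("global:", sum(totals.values()))  # 600
for country in sorted(totals):
    print(f"{country}: {totals[country]}")
```

In practice the same aggregation runs over billions of records with distributed tools, but the question being answered is exactly this simple.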

Then correlation comes into play: things that share the same tendency, that go or move together.  For example, countries with strong chocolate sales also have a lot of PhDs.

Thanks to http://tylervigen.com


Correlation is not causality. Eating chocolate doesn’t make you earn a PhD (nor the other way around: having a PhD doesn’t make you more likely to love chocolate).  Analyzing correlations is still a big deal.  A correlation can be a conjunction, like thunder and lightning, or a causal relation; and even when there is causality, it is hard to determine the direction of the relationship: which is the cause and which the effect.  Nevertheless, big data predictive behaviour analysis is doing a great job, even when the ‘why’s behind it, the underlying causes, remain hidden and unexplained.
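A small sketch (with invented numbers, not Tyler Vigen’s actual data) of how two unrelated quantities that merely trend together over time produce a near-perfect correlation coefficient:

```python
# Two unrelated series that both trend upward over the same years:
# hypothetical per-capita chocolate sales and PhDs awarded.
years = list(range(2000, 2010))
chocolate_kg = [4.0, 4.2, 4.5, 4.6, 5.0, 5.1, 5.4, 5.6, 5.9, 6.1]
phds_awarded = [310, 320, 335, 340, 355, 360, 372, 380, 395, 405]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(chocolate_kg, phds_awarded)
print(f"r = {r:.3f}")  # close to 1: strong correlation, yet no causation
```

Any two series driven upward by a shared third factor (here, simply time) will correlate strongly, which is exactly why correlation alone proves nothing about cause.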

The great potential in Big data is that it helps us discover correlations, patterns and trends where we couldn’t see them before, but it’s up to us to create theories and models that can explain the relations behind the correlations.


Can An Algorithm be “Racist”?

Library of Congress Classification - Reading Room

David Auerbach has written this article pointing out that some classification algorithms may be racist:

Can a computer program be racist? Imagine this scenario: A program that screens rental applicants is primed with examples of personal history, debt, and the like. The program makes its decision based on lots of signals: rental history, credit record, job, salary. Engineers “train” the program on sample data. People use the program without incident until one day, someone thinks to put through two applicants of seemingly equal merit, the only difference being race. The program rejects the black applicant and accepts the white one. The engineers are horrified, yet say the program only reflected the data it was trained on. So is their algorithm racially biased?

Yes.  A classification algorithm can not only be racist: since humans write them, or, in the case of learning algorithms, since they are trained on human-chosen examples and counter-examples, algorithms can carry any bias that we have.  With the abundance of data, we are training programs on examples from the real world; the resulting program will be an image of how we act, not of how we would like to be.  Exactly like the saying about educating kids: they do as they see, not as they are told :-)

To make things worse, when dealing with learning algorithms, not even the programmer can predict the resulting classification. So, knowing that there may be errors, who is there to ensure their correctness?
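A toy sketch of how this happens.  Everything below is made up (applicants, scores, zip codes); race never appears as a feature, and yet a simple rule learned from deliberately biased historical labels reproduces the bias through a correlated proxy:

```python
import random

random.seed(0)

# Synthetic, deliberately biased training data (illustrative only).
# Each applicant: (credit_score, zip_code, approved). The historical
# labels were biased: applicants from zip "B" were rejected unless they
# had much higher scores. Race is NOT in the data; zip acts as its proxy.
def make_applicant(zip_code):
    score = random.gauss(650, 50)
    if zip_code == "A":
        approved = score > 600
    else:  # historical bias against zip "B"
        approved = score > 700
    return (score, zip_code, approved)

train = ([make_applicant("A") for _ in range(500)] +
         [make_applicant("B") for _ in range(500)])

def learned_threshold(data, zip_code):
    """Per-zip approval threshold, the way a decision-tree split would
    learn it: approve only scores above every historical rejection."""
    rejected = sorted(s for s, z, ok in data if z == zip_code and not ok)
    return rejected[-1] if rejected else 0.0

print("zip A threshold:", round(learned_threshold(train, "A")))
print("zip B threshold:", round(learned_threshold(train, "B")))
# The learned rule treats identical scores differently by zip code:
# the model faithfully reproduces the bias baked into its training labels.
```

Nothing in the “model” is malicious; it simply minimizes disagreement with biased examples, which is the scenario Auerbach’s rental-application story describes.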

What about the everyday profiling that goes on without anyone noticing? [… ]
Their goal is chiefly “microtargeting,” knowing enough about users so that ads can be customized for tiny segments like “soccer moms with two kids who like Kim Kardashian” or “aging, cynical ex-computer programmers.”

Some of these categories are dicey enough that you wouldn’t want to be a part of them. Pasquale writes that some third-party data-broker microtargeting lists include “probably bipolar,” “daughter killed in car crash,” “rape victim,” and “gullible elderly.” […]

There is no clear process for fixing these errors, making the process of “cyberhygiene” extraordinarily difficult.[…]

For example, just because someone has access to the source code of an algorithm does not always mean he or she can explain how a program works. It depends on the kind of algorithm. If you ask an engineer, “Why did your program classify Person X as a potential terrorist?” the answer could be as simple as “X had used ‘sarin’ in an email,” or it could be as complicated and nonexplanatory as, “The sum total of signals tilted X out of the ‘non-terrorist’ bucket into the ‘terrorist’ bucket, but no one signal was decisive.” It’s the latter case that is becoming more common, as machine learning and the “training” of data create classification algorithms that do not behave in wholly predictable manners.

Further on, the author mentions the dangers of this kind of programming, which is not fully predictable.

Philosophy professor Samir Chopra has discussed the dangers of such opaque programs in his book A Legal Theory for Autonomous Artificial Agents, stressing that their autonomy from even their own programmers may require them to be regulated as autonomous entities.

Chopra sees these algorithms as autonomous entities.  They may be unpredictable, but so far there is no will or conscious choice to go down one path instead of another.  Programs are told to maximize a particular benefit, and how that benefit is measured is calculated by a human-written function.  As time goes by and technology advances, I can easily see the benefit function coming to include feedback the program gets from the ‘real world’, which could make the algorithm’s behavior even more unpredictable than it is now.  At that point, can we imagine algorithms that evaluate or ‘choose’ whether to stay on the regulated side… or not? Will it reach the point of them having a kind of survival instinct?  Where that may lead… we’ll know soon enough.


The value of Reflection in Learning


I just read Stephen M. Fleming‘s article “The Power of Reflection” in Scientific American Mind.  It talks about the importance of metacognition, that is, the ability to know our own thoughts and capacities.

This skill, which allows us to evaluate our level of competence in a particular domain, is quite independent of our actual competence in that domain.  We can be bad at evaluating a particular skill and still be good at it.  We can also know that we don’t know anything about a specific subject, but that doesn’t make us know more about it.  Still, knowing our lack of knowledge is very important! It allows us to evaluate the situation correctly and act accordingly; in this last case the proper action would be to look for help in that domain :-)  A very typical action we take based on our knowledge of ourselves is writing lists when we tend to forget things. I fully recognize myself here, do you?

Having good insight into our internal thoughts and processes is very important; it can even be more important than the knowledge itself, because it drives our actions. Not being aware of reality, as the article points out, can be very damaging not only for us but for our social relationships and family. Not knowing that we have a particular medical condition, and thus not taking the medication, can make it impossible to live unassisted, even if the condition itself is not so impairing.

Metacognition plays a particular role in learning, and the article mentions a study where researchers tried to boost this ability among students:

[…] Thomas O. Nelson and his student John Dunlosky, then at the University of Washington, reported an intriguing effect. When volunteers were asked to reflect on how well they had learned a list of word pairs after a short delay, they were more self-aware than if asked immediately.  Many studies have since replicated this finding.  Encouraging a student to take a break before deciding how well he or she has studied for an upcoming test could aid learning in a simple but effective way.

Learners could also trigger better insight by coming up with their own subject keywords. Educational psychologist Keith Thiede of Boise State University and his colleagues found that asking students to generate a few words summarizing a particular topic led to greater metacognitive accuracy.  The students then allocated their study time better by focusing on material that was less well understood.

This method of studying should be taught at school, passing on this meta-skill so that students learn more effectively.


Free Search Engines, says the EU!

The European Parliament is asking for the “unbundling [of] search engines from other commercial services”, issuing a message reminiscent of the ‘Free Willy’ movie, or of any other cause you may support :-)

The Economist made this the subject of its front-page article ‘Should governments break up digital monopolies?’ (Nov. 29th, 2014).  Is this issue so important?  Yes, I believe so.  The Economist’s writer dismisses it, arguing that lately no dominant company has kept its position for very long. On this particular issue he notes that technology is shifting again: browsing is not as relevant as it was, as everybody is going mobile and using apps more than browsers. He also says that, in his view, the EU’s main interest is to protect European companies rather than to benefit the consumer, because the consumer is offered a better service through the additional functionalities attached to search results.

Giving people flight details, dictionary definitions or a map right away saves them time. And while advertisers often pay hefty rates for clicks, users get Google’s service for nothing—rather as plumbers and florists fork out to be listed in Yellow Pages which are given to readers gratis, and nightclubs charge men steep entry prices but let women in free.

Even though as consumers we may be happy to have those additional features, I don’t fully agree:  I still believe it is very important to ensure a correct result to a search, or as correct as it can be, at least not too obviously biased.  And I certainly don’t want to leave it in the hands of a few (the managers of Google, for instance) to decide what is shown to the rest of us as the result of a search, how the choices are ranked, and how our attention is directed only toward their friends’ interests (in products or in views).

On the other hand, we may have a bigger impact by educating the user: what he receives from a search result may be biased because of the business model or the intertwined interests of the search engine providing the answers. Technology moves very fast, and by the time a resolution of this type is issued, the manipulative aspect of marketing may have moved elsewhere.

As for the other aspect, the collection of all the users’ data and its privacy, the issue is becoming urgent; the whole world would benefit from a just and feasible way to deal with it:

The good reason for worrying about the internet giants is privacy. It is right to limit the ability of Google and Facebook to use personal data: their services should, for instance, come with default settings guarding privacy, so companies gathering personal information have to ask consumers to opt in. Europe’s politicians have shown more interest in this than American ones.


Changing schools with gaming techniques

Could you imagine a world where children ask you to take them to school?  Well, that world doesn’t seem so far away… at least I know my son would be happy to go to the school Ian Livingstone is planning to open in 2016 in Hammersmith, London.  Read what technology reporter Dave Lee wrote in his article for BBC News:

By bringing gaming elements into the learning process, Mr Livingstone argued, students would learn how to problem-solve rather than just how to pass exams.


Mr Livingstone said he wanted to bring the principles of his interactive books to the classroom

[…] Mr Livingstone is best known for being the man behind huge franchises such as Tomb Raider and tabletop game Warhammer.

In the 80s, his Fighting Fantasy books brought an interactive element to reading that proved extremely popular.

Speaking to the BBC about the plans, Mr Livingstone said he wanted to bring those interactive principles to schooling, but stressed the school would provide learning across all core subjects.

There is more behind his idea than just making children want to go to school.  It fosters a ‘hands-on’ approach that allows students not only to know, but to know how to use what they have learned.  Plus the added benefit of allowing diverse paths to reach the goal:


[…] “There needs to be a shift in the pedagogy of learning in classrooms because there’s still an awful lot of testing and conformity instead of diversity.

“I’m not saying knowledge is bad – I’m just trying to get a bit more know-how into the curriculum.”

He said he considers the trial-and-error nature of creating games as a key model for learning.

“For my mind, failure is just success work-in-progress. Look at any game studio and the way they iterate. Angry Birds was Rovio’s 51st game.

“You’re allowed to fail. Games-based learning allows you to fail in a safe environment.”

Let’s wish him a great success!


About Internet of Things and Privacy

InternetofThings

Innovation is creating new materials and new sensors, ever smaller, cheaper, more flexible, more powerful and, at the same time, less power-hungry. This allows us to put them everywhere: we are surrounded by devices crowded with sensors, like our phones with their cameras, gyroscopes and GPS. And all the measurements captured by these sensors are used by applications, many of which are connected to the cloud and to the Internet.

The Internet of Things (as this technology is called) is becoming ubiquitous, leaving us ever more exposed in our daily life.  How many of us have our whereabouts known by the GPS company, the phone provider and even the car manufacturer?  Our personal biometric information is also being left all over our running paths, not to mention at the new gym centers.

On the other hand, Nicole Dewandre reminds us, in this recorded presentation, of two basic human needs: our need for privacy, and the fact that we construct ourselves through the public eye.

We need privacy to express our internal thoughts without public judgement; we need a safe place to test our lines of reasoning before confronting others with them.  In our hyper-connected world, the spaces where we can enjoy this privacy are vanishing.

As for our second need, the image others have of us is very important. The information we leave behind shapes this public image, and it has a great effect not only on what others think of us, but also on our own perception of ourselves and on our self-esteem; in the end it reflects on our happiness.

Living in this hyper-connected world in which we are immersed is a real challenge!


Our 2 ways of thinking: Fast and Slow

From Jim Holt’s review in The New York Times. Illustration by David Plunkert.

I just came back from holidays, and I want to share my latest reading with you: “Thinking, Fast and Slow” by Daniel Kahneman.  He describes our mind as having two different ways of functioning: a fast one, based on our ‘intuition’, and a slower one, where we have to make the effort of reasoning.

  • The fast one is the intuitive way, used for everyday tasks, and is what psychologists also call our ‘unconscious mind’.  It is based on the inputs of our senses (hearing, sight, smell…), which trigger a search in our memory and, through associations, bring up a representation of our situation together with an immediate response to it.
  • The slower way comes into play when we focus our attention on the inputs at hand and follow a line of reasoning, based on our knowledge, to reach a conclusion.  This method requires more energy: we must direct our attention to each piece of information, and since we evaluate things sequentially (one after the other), it is slower.

As our body is lazy by nature, this second, ‘slow’ way of reasoning is only used if needed, that is, if the situation requires our ‘special attention’.  It is a great thing that the faster, energy-saving way is our default… except that Daniel Kahneman presents very interesting experiments showing the pitfalls of our intuition!

One great example he presents is the ambiguity resolution that goes on behind our awareness: when a sentence or image could be interpreted in different ways, our ‘fast mind’ resolves the ambiguity using the most recent context, which is good in many situations.  The problem is that it doesn’t even let us know there was another interpretation at all!  We are not aware that our mind took only one of the possible alternatives. Moreover, it uses the most easily available memory to give sense to the world as we perceive it, so recent events that are more vivid in our memory have a greater impact on our interpretation of the world.  This is called the ‘availability bias’.

Not only do our memories play games on us: our whole body is linked to our intuitive way of functioning.  He mentions an experiment performed in the United States where participants were asked to look at photos and words related to the elderly, and were then asked to move to another room.  That was the real aim of the experiment: the researchers measured the time it took participants to walk from one location to the other, and found that those who had been shown pictures related to the elderly walked more slowly than the others, as if the body echoed what the mind had been thinking.  This is called the ‘priming’ effect.


And, perhaps more surprisingly, this body-mind link also works the other way around: people asked to hold a pencil in their mouth had their mood shaped by the grimace they were forced into.  Here are the details of the experiment: some participants were asked to hold the pencil across the middle, with the point sticking out on one side of the mouth and the eraser on the other; others were asked to hold the pencil with their lips around the eraser end.  The two groups were then shown the same cartoon images, and the first group found them, on average, funnier than the second.  The first group seemed to be in a happier mood, as if they had been smiling; the second group was less positive, having been forced into a frown before looking at the cartoons.

The conclusion is that we have to be really careful with our mind’s evaluation of a situation when we have left it to our unconscious, intuitive mind.  It is biased by design!  The more aware we are of those biases, the better we can counter them.
