The Value of Emotional Connection


Scott Magids, Alan Zorfas and Daniel Leemon tell us that research on motivational values is paying off:

Our research across hundreds of brands in dozens of categories shows that it’s possible to rigorously measure and strategically target the feelings that drive customers’ behavior. We call them “emotional motivators.” They provide a better gauge of customers’ future value to a firm than any other metric, including brand awareness and customer satisfaction, and can be an important new source of growth and profitability.

The article guides you through a detailed process for discovering your customers’ motivators, which begins with:

Online surveys can help you quantify the relevance of individual motivators. Are your customers more driven by life in the moment or by future goals? Do they place greater value on social acceptance or on individuality? Don’t assume you know what motivates customers just because you know who they are. Young parents may be motivated by a desire to provide security for their families—or by an urge to escape and have some fun (you will probably find both types in your customer base). And don’t undermine your understanding of customers’ emotions by focusing on how people feel about your brand or how they say it makes them feel. You need to understand their underlying motivations separate from your brand.

See the full Harvard Business Review article for the complete description. What is surprising is this finding:

To increase revenue and market share, many companies focus on turning dissatisfied customers into satisfied ones. However, our analysis shows that moving customers from highly satisfied to fully connected can have three times the return of moving them from unconnected to highly satisfied. And the highest returns we’ve seen have come from focusing on customers who are already fully connected to the category—from maximizing their value and attracting more of them to your brand.

It is analogous to the different strategies used in education:

  • In secondary school you must acquire a minimum level of knowledge in every course, so students often have to focus on the subjects for which they are not naturally talented.
  • In higher education, it pays to focus on your strengths, your best skills, and to improve them until you are really good at them.

It is not common for youngsters to be very motivated by the courses they don’t really like, even when they do well enough to pass the year. It is no surprise that the second group is easier to motivate, so it seems reasonable that the acquired knowledge or skill will be more impressive in the second group than in the first. What is surprising is that we lacked this intuition and needed research to show it with data.


The European Data Innovation Hub

What began as a community of like-minded people, with nice meetups around data science and get-togethers, is now taking the form of the European Data Innovation Hub.  Its mission is to be an active actor in the data innovation ecosystem and to support data professionals throughout Belgium and Europe with networking activities, events, training and meeting facilities, learning platforms, co-working space and mentorship. It will foster grassroots community initiatives and take the burden out of realising and organising them. The idea is to set the conditions in which people with the right skills and organisations in the right positions have the option to move forward.

Here are some of the activities of the Hub:

  • To organise data innovation events
  • To provide co-working space for data professionals
  • To support the education and training of the data workforce, from academics to data scientists to managers to data end-users

I’m very happy to be part of this ecosystem, participating not only in the training courses in Big Data and Machine Learning, but hopefully opening as many opportunities as I can to women in this domain.


The rise of the Self-Tuning Enterprise


As you may know, I am a fan of Machine Learning, a subfield of Artificial Intelligence (AI) that encompasses computer programs exhibiting some kind of intelligent behavior. The first AI researchers began by analyzing how we humans perform intelligent tasks in order to create programs that reproduced our behavior. So look at the irony of this HBR article, “The Self-Tuning Enterprise”, where the authors Martin Reeves, Ming Zeng and Amin Venjara use the analogy of how machine-learning programs work to transpose that behavior to tuning enterprise strategy:

[…] These enterprises [he’s talking about internet companies like Google, Netflix, Amazon, and Alibaba] have become extraordinarily good at automatically retooling their offerings for millions of individual customers, leveraging real-time data on their behavior. Those constant updates are, in fact, driven by algorithms, but the processes and technologies underlying the algorithms aren’t magic: It’s possible to pull them apart, see how they operate, and use that know-how in other settings. And that’s just what some of those same companies have started to do.

In this article we’ll look first at how self-tuning algorithms are able to learn and adjust so effectively in complex, dynamic environments. Then we’ll examine how some organizations are applying self-tuning across their enterprises, using the Chinese e-commerce giant Alibaba as a case example.”

You may have noticed those new programs at work recommending books or other products each time you buy something on the internet (and in fact, even if you are just looking and didn’t buy anything ;-). Those programs are based on Machine Learning algorithms, and they improve over time with new information about success (you bought the proposed article) or failure (you didn’t).

How do they work?

There is a ‘learning’ part that finds similarities between customers so it can propose products that customers similar to you have bought. But it’s not that simple: these programs are coupled with other learning modules, such as one that does some ‘experimentation’ so the system doesn’t get stuck proposing always the same kind of products. This module will propose something different from time to time. Even if you like crime novels, after the tenth one you would probably like to read something else, wouldn’t you? So the trick is to find an equilibrium between showing you books you have a great chance of liking and novelties that open up new horizons. You need to feel that the system knows what it is doing when it proposes a book (so it fine-tunes to be good at similarities), but you may want a change from time to time so you don’t get bored, and the system is also very interested in getting you to discover another category of literature, say poetry. If you don’t like it, you won’t accept the next recommendation so easily, so here comes the next ‘tuning’: how often to experiment.
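The exploration/exploitation balance described above can be sketched with a simple epsilon-greedy rule. This is a minimal toy illustration, not how any real retailer implements it; all names, genres and data are invented:

```python
import random

def recommend(purchase_history, catalog, epsilon=0.1):
    """Pick a title to recommend.

    purchase_history: list of genres the customer has bought from.
    catalog: dict mapping genre -> list of titles.
    epsilon: how often to 'explore' a genre the customer hasn't tried.
    """
    known = set(purchase_history)
    unexplored = [g for g in catalog if g not in known]
    if unexplored and random.random() < epsilon:
        # explore: propose a novelty from an untried genre
        genre = random.choice(unexplored)
    else:
        # exploit: the genre the customer buys most often
        genre = max(known, key=purchase_history.count)
    return random.choice(catalog[genre])

catalog = {
    "crime": ["The Snowman", "Gone Girl"],
    "poetry": ["Leaves of Grass"],
}
history = ["crime", "crime", "crime"]
print(recommend(history, catalog, epsilon=0.0))  # always a crime novel
```

Raising `epsilon` makes the system propose poetry more often; the ‘tuning’ the article talks about is exactly the adjustment of such parameters based on whether the novelties get accepted.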

And that’s where self-tuning comes in. Self-tuning is related to the concepts of agility (rapid adjustment), adaptation (learning through trial and error), and ambidexterity (balancing exploration and exploitation). Self-tuning algorithms incorporate elements of all three—but in a self-directed fashion.

The ‘self-tuning’ process they are talking about adjusts the tool to newly available information without any reprogramming. The authors’ analogy is to apply to organizations the same kind of automatic tuning that Machine Learning systems perform: to ‘self-tune’ companies without top-down directives, achieving agility, adaptation through trial and error, and ambidexterity that balances exploration and exploitation.

To understand how this works, think of the enterprise as a nested set of strategic processes. At the highest level, the vision articulates the direction and ambition of the firm as a whole. As a means to achieving the vision, a company deploys business models and strategies that bring together capabilities and assets to create advantageous positions. And it uses organizational structure, information systems, and culture to facilitate the effective operation of those business models and strategies.

In the vast majority of organizations, the vision and the business model are fixed axes around which the entire enterprise revolves. They are often worked out by company founders and, once proven successful, rarely altered. Consequently, the structure, systems, processes, and culture that support them also remain static for long periods. Experimentation and innovation focus mostly on product or service offerings within the existing model, as the company leans on its established recipe for success in other areas.

The self-tuning enterprise, in contrast, takes an evolutionary approach at all levels. The vision, business model, and supporting components are regularly calibrated to the changing environment by applying the three learning loops. The organization is no longer viewed as a fixed means of transmitting intentions from above but, rather, as a network that shifts and develops in response to external feedback. To see what this means in practice, let’s look at Alibaba.[…]

Keep resetting the vision.

When Alibaba began operations, internet penetration in China was less than 1%. While most expected that figure to grow, it was difficult to predict the nature and shape of that growth. So Alibaba took an experimental approach: At any given time, its vision would be the best working assumption about the future. As the market evolved, the company’s leaders reevaluated the vision, checking their hypotheses against reality and revising them as appropriate.

In the early years, Alibaba’s goal was to be “an e-commerce company serving China’s small exporting companies.” This led to an initial focus on Alibaba.com, which created a platform for international sales. However, when the market changed, so did the vision. As Chinese domestic consumption exploded, Alibaba saw an opportunity to expand its offering to consumers. Accordingly, it launched the online marketplace Taobao in 2003. Soon Alibaba realized that Chinese consumers needed more than just a site for buying and selling goods. They needed greater confidence in internet business—for example, to be sure that online payments were safe. So in 2004, Alibaba created Alipay, an online payment service. […] Ultimately, this led Alibaba to change its vision again, in 2008, to fostering “the development of an e-commerce ecosystem in China.” It started to offer more infrastructure services, such as a cloud computing platform, microfinancing, and a smart logistics platform. More recently, Alibaba recalibrated that vision in response to the rapid convergence between digital and physical channels. Deliberately dropping the “e” from e-commerce, its current vision statement reads simply, “We aim to build the future infrastructure of commerce.”

Experiment with business models.

Alibaba could not have built a portfolio of companies that spanned virtually the entire digital spectrum without making a commitment to business model experimentation from very early on.

[…]At each juncture in its evolution, Alibaba continued to generate new business model options, letting them run as separate units. After testing them, it would scale up the most promising ones and close down or reabsorb those that were less promising.[…]

Again there was heated debate within the company about which direction to take and which model to build. Instead of relying on a top-down decision, Alibaba chose to place multiple bets and let the market pick the winners.[…]

Increasing experimentation at the height of success runs contrary to established managerial wisdom, but for Alibaba it was necessary to avoid rigidity and create options. Recalibrating how and how much to experiment was fundamental to its ability to capitalize on nascent market trends.

Focus on seizing and shaping strategic opportunities, not on executing plans.

In volatile environments, plans can quickly become out-of-date. In Alibaba’s case, rapid advances in technology, shifting consumer expectations in China and beyond, and regulatory uncertainty made it difficult to predict the future. […]

Alibaba does have a regular planning cycle, in which business unit leaders and the executive management team iterate on plans in the fourth quarter of each year. However, it’s understood that this is only a starting point. Whenever a unit leader sees a significant market change or a new opportunity, he or she can initiate a “co-creation” process, in which employees, including senior business leaders and lead implementers, develop new directions for the business directly with customers.

At Alibaba co-creation involves four steps. The first is establishing common ground: identifying signals of change (based on data from the market and insights from customers or staff) and ensuring that the right people are present and set up to work together. This typically happens at a full-day working session. The second step is getting to know the customer. Now participants explore directly with customers their evolving needs or pain points and brainstorm potential solutions. The third step entails developing an action plan based on the outcome of customer discussions. An action plan must identify a leader who can champion the opportunity, the supporting team (or teams) that will put the ideas into motion, and the mechanisms that will enable the work to get done. The final step is gathering regular customer feedback as the plan is implemented, which can, in turn, trigger further iterations.

So now you know how Alibaba does it. How is it in your company? What ideas of theirs would you adopt?


New computer interface using radar technology


Have you seen this article?  It’s about Project Soli from Google’s Advanced Technologies and Projects (ATAP) group.  They have implemented a new way to communicate with a computer: through radar.  The radar captures slight movements of the hand, so just moving your fingers in the air lets you operate a ‘virtual’ slider.

Fantastic, can’t wait to try it!


How to lie with charts

I hope you didn’t miss the article on visualization from the Harvard Business Review.  It is called ‘Vision Statement: How to Lie with Charts’, and it’s full of clearly stated examples.

Source: Wikipedia

This color-coded map is one of the examples they show where coloring each county with the political color of the majority vote in that county is misleading.  The map represents the 2008 election (Obama versus McCain), and we can see roughly 80% of the US colored in red (the Republican color), when in fact the Republican candidate John McCain received less than half of the votes.  The mismatch between the natural expectation after looking at this map and the real outcome comes from using a map to represent information that is not geographic in nature.  The number of votes in a county or state is not proportional to its geographical area.
[…] you could call it the New York City problem: 0.01% of the area but 2.7% of the population.

A suggested better representation uses bubbles with sizes proportional to the number of votes, resulting in a map that more correctly shows a majority of blue instead.
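To see with numbers why coloring by land area misleads, here is a toy sketch. The counties and figures are invented for illustration only; they just exaggerate the real pattern of sparse red areas and dense blue cities:

```python
# Three imaginary "counties": red wins most of the LAND AREA,
# blue wins most of the VOTES.
counties = [
    # (name, area_km2, red_votes, blue_votes)
    ("Rural A", 50_000,  40_000,  30_000),
    ("Rural B", 45_000,  35_000,  25_000),
    ("Metro C",  1_000, 200_000, 500_000),
]

red_area    = sum(a for _, a, r, b in counties if r > b)
total_area  = sum(a for _, a, _, _ in counties)
red_votes   = sum(r for _, _, r, _ in counties)
total_votes = sum(r + b for _, _, r, b in counties)

print(f"share of the map colored red: {red_area / total_area:.0%}")      # 99%
print(f"share of the votes that are red: {red_votes / total_votes:.0%}")  # 33%
```

An area-colored map of this data would look overwhelmingly red even though red lost by a wide margin, which is exactly the distortion the bubble map corrects.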



Visualization is growing in importance now that we have so much data all around us.  Visualization can help identify trends, find patterns, and show relations between data.  It can show what the data represents in an intuitive way.

But as this article shows, used in the wrong way, visualization can mislead you just as well.

To be on the safe side, it’s better to check the numbers or data behind the representation in order to confirm what the image is showing you … or whether somebody is trying to trick you!


Correlation and Causation in Big Data

Big data began as a term used for extremely large data sets. These data sets cannot be managed or analyzed with conventional database programs, not only because their size exceeds the capacities of standard data management tools, but also because of the variety and unstructured nature of the data (it comes from different sources, such as the sales department, the customer contact center, social media, and mobile devices) and because of the velocity at which it moves (imagine what it takes for a GPS to continually recalculate the next move to stay on the best route and avoid traffic jams: processing all traffic information coming in real time from official sources as well as from other drivers, and transmitting the details before the car reaches the next crossroad).

The term ‘Big Data’ is also used to identify the new technology needed to process the data and reveal patterns, trends, and associations.  Furthermore, the term is now synonymous with big data’s analytical power and its business potential to help companies and organizations improve operations and make faster, more intelligent decisions.

What is big data used for?

The first and most evident use is statistics: How many chocolates have we sold? What are the global sales around the world, split per country? Where do the customers come from?

Then correlation comes into play: things that have the same tendency, that go together or move together. For example, countries with strong chocolate sales also have a lot of PhDs.


Correlation is not causality. Eating chocolate doesn’t make you a PhD (nor the other way around: having a PhD doesn’t make you more likely to love chocolate).  Analyzing correlations is still a big deal.  A correlation can be a conjunction, as with thunder and lightning. It can be a causal relation, and even when there is causality, it is hard to say in which direction it runs: which is the cause and which the effect.  Nevertheless, big data predictive behavior analysis is doing a great job, even when the ‘whys’ behind it, the underlying causes, remain hidden and unexplained.

The great potential of big data is that it helps us discover correlations, patterns and trends where we couldn’t see them before, but it’s up to us to create the theories and models that explain the relations behind the correlations.
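A tiny sketch of how a hidden common cause produces a strong correlation without any causal link between the two measured variables. The numbers are invented; the point is only that both series are driven by the same confounder (say, national wealth):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

wealth = [10, 20, 30, 40, 50]              # hidden common cause
chocolate = [w * 0.4 + 1 for w in wealth]  # kg per capita (invented)
phds = [w * 2.5 + 3 for w in wealth]       # per 100k people (invented)

print(round(pearson(chocolate, phds), 2))  # 1.0: perfectly correlated
```

The correlation between chocolate and PhDs is perfect, yet neither variable appears anywhere in the formula generating the other; only the confounder does. That is exactly the structure the data alone cannot reveal.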


Can an Algorithm Be “Racist”?


David Auerbach has written this article pointing out that some classification algorithms may be racist:

Can a computer program be racist? Imagine this scenario: A program that screens rental applicants is primed with examples of personal history, debt, and the like. The program makes its decision based on lots of signals: rental history, credit record, job, salary. Engineers “train” the program on sample data. People use the program without incident until one day, someone thinks to put through two applicants of seemingly equal merit, the only difference being race. The program rejects the black applicant and accepts the white one. The engineers are horrified, yet say the program only reflected the data it was trained on. So is their algorithm racially biased?

Yes, and not only could a classification algorithm be racist but, since humans write them (or, more accurately for learning algorithms, since they are built upon human examples and counter-examples), algorithms may carry any bias that we have.  With the abundance of data, we are training programs with examples from the real world; the resulting program will be an image of how we actually act, not a reflection of how we would like to be.  Exactly like the saying about educating kids: they do as they see, not as they are told :-)

To make things worse, with learning algorithms not even the programmer can predict the resulting classification. So, knowing that there may be errors, who is there to ensure correctness?
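A minimal sketch of how this happens, with invented data (not from the article): a 1-nearest-neighbour “screening” model trained on historically biased human decisions reproduces the bias, even though the sensitive attribute is never an explicit input; a correlated proxy (here, a postcode) carries it in.

```python
def nearest_label(train, applicant):
    """Return the decision of the most similar past case.

    train: list of ((income, postcode), accepted) past human decisions.
    applicant: (income, postcode) of the new case.
    """
    def dist(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(train, key=lambda row: dist(row[0], applicant))[1]

# Past (biased) decisions: identical incomes, but applicants from
# postcode 1 were systematically rejected by the humans who
# produced these training labels.
train = [
    ((50, 0), True),
    ((50, 1), False),
    ((60, 0), True),
    ((60, 1), False),
]

print(nearest_label(train, (55, 0)))  # True:  accepted
print(nearest_label(train, (55, 1)))  # False: rejected, same income
```

Two applicants with the same income get opposite decisions, and no line of the code mentions race: the model has simply learned the pattern in its labels, which is exactly why the engineers in the quoted scenario are “horrified, yet say the program only reflected the data it was trained on.”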

What about the everyday profiling that goes on without anyone noticing? [… ]
Their goal is chiefly “microtargeting,” knowing enough about users so that ads can be customized for tiny segments like “soccer moms with two kids who like Kim Kardashian” or “aging, cynical ex-computer programmers.”

Some of these categories are dicey enough that you wouldn’t want to be a part of them. Pasquale writes that some third-party data-broker microtargeting lists include “probably bipolar,” “daughter killed in car crash,” “rape victim,” and “gullible elderly.” […]

There is no clear process for fixing these errors, making the process of “cyberhygiene” extraordinarily difficult.[…]

For example, just because someone has access to the source code of an algorithm does not always mean he or she can explain how a program works. It depends on the kind of algorithm. If you ask an engineer, “Why did your program classify Person X as a potential terrorist?” the answer could be as simple as “X had used ‘sarin’ in an email,” or it could be as complicated and nonexplanatory as, “The sum total of signals tilted X out of the ‘non-terrorist’ bucket into the ‘terrorist’ bucket, but no one signal was decisive.” It’s the latter case that is becoming more common, as machine learning and the “training” of data create classification algorithms that do not behave in wholly predictable manners.

Further on, the author mentions the dangers of this kind of programming, which is not fully predictable.

Philosophy professor Samir Chopra has discussed the dangers of such opaque programs in his book A Legal Theory for Autonomous Artificial Agents, stressing that their autonomy from even their own programmers may require them to be regulated as autonomous entities.

Chopra sees these algorithms as autonomous entities.  They may be unpredictable, but so far there is no will or conscious choice to take one path instead of another.  Programs are told to maximize a particular benefit, and how that benefit is measured is calculated by a human-written function.  As time goes by and technology advances, I can easily see the benefit function including feedback the program gets from the ‘real world’, which could make the algorithm’s behavior even more unpredictable than it is now.  At that point we can imagine algorithms that evaluate, or ‘choose’, whether to stay on the regulated side… or not. Will it reach the point of them having a kind of survival instinct?  Where that may lead, we’ll know soon enough.


The value of Reflection in Learning


I just read Stephen M. Fleming‘s article “The Power of Reflection” in Scientific American Mind.  It talks about the importance of metacognition, that is, the ability to know our own thoughts and capacities.

This skill, which allows us to evaluate our level of competence in a particular domain, is totally independent of our effective competence in that domain. We can be bad at evaluating a particular skill and still be good at the skill itself.  We can also know that we don’t know anything about a subject, but that doesn’t make us know more about it.  Still, knowing our lack of knowledge is very important! It allows us to evaluate the situation correctly and act accordingly; in this case, the proper action would be to look for help in that domain :-)  A very typical action we take based on knowledge of ourselves is writing lists when we tend to forget things. I fully recognize myself here; do you?

Having good insight into our internal thoughts and processes is very important; it can even be more important than the knowledge itself, because it drives our actions. Not being aware of reality, as the article points out, can be very damaging not only for us but also for our social relationships and family. Not knowing that we have a particular medical condition, and thus not taking the medication, can make it impossible to live unattended, even if the condition itself is not so impairing.

Metacognition plays a particular role in learning, and the article mentions a study that tried to boost this ability among students:

[…] Thomas O. Nelson and his student John Dunlosky, then at the University of Washington, reported an intriguing effect. When volunteers were asked to reflect on how well they had learned a list of word pairs after a short delay, they were more self-aware than if asked immediately.  Many studies have since replicated this finding.  Encouraging a student to take a break before deciding how well he or she has studied for an upcoming test could aid learning in a simple but effective way.

Learners could also trigger better insight by coming up with their own subject keywords. Educational psychologist Keith Thiede of Boise State University and his colleagues found that asking students to generate a few words summarizing a particular topic led to greater metacognitive accuracy.  The students then allocated their study time better by focusing on material that was less well understood.

This method of studying should be taught at school, thereby teaching students this meta-skill for learning more effectively.


Free Search Engines, says the EU!

The European Parliament is calling for “unbundling search engines from other commercial services”, issuing a message as in the ‘Free Willy’ movie, or whatever other cause you may support :-)

The Economist made it a front-page story: ‘Should governments break up digital monopolies?’, Nov. 29th, 2014.  Is this issue so important?  Yes, I believe so.  The Economist’s writer dismisses the issue, arguing that lately no dominant company has kept its position for very long. On this particular point he mentions that technology is shifting again: browsing is not as relevant as it was, since everybody is going mobile and using apps more than browsers. He also says that, in his view, the EU’s main interest is to protect European companies rather than to benefit the consumer, because the consumer gets a better service when additional functionalities are attached to search results.

Giving people flight details, dictionary definitions or a map right away saves them time. And while advertisers often pay hefty rates for clicks, users get Google’s service for nothing—rather as plumbers and florists fork out to be listed in Yellow Pages which are given to readers gratis, and nightclubs charge men steep entry prices but let women in free.

Even though as consumers we may be happy to have those additional features, I don’t fully agree: I still believe it is very important to ensure a correct result to a search, or as close to correct as it can be, and at least not too obviously biased.  And I certainly don’t want to leave it in the hands of a few (the managers of Google, for instance) to decide what is shown to the majority of us as the result of a search, how to rank the choices, and how to direct our attention only towards their friends’ interests (in products or in views).

On the other hand, we may have a bigger impact by educating the user: what he receives from a search result may be biased because of the business model or the intertwined interests of the search engine providing the answers. Because technology moves very fast, by the time a resolution of this type is issued, the manipulative aspect of marketing may have moved somewhere else.

As for the other aspect, the collection of all the users’ data and its privacy, the issue is becoming urgent; the whole world would benefit from a just and feasible way to deal with it:

The good reason for worrying about the internet giants is privacy. It is right to limit the ability of Google and Facebook to use personal data: their services should, for instance, come with default settings guarding privacy, so companies gathering personal information have to ask consumers to opt in. Europe’s politicians have shown more interest in this than American ones.
