Pre-Crime unit for tracking Terrorists?

Due to the recent events in Belgium, the terrorist bomb attacks in Zaventem and Brussels, I couldn't help but remember the Bloomberg Businessweek article about pre-crime: 'China Tries Its Hand at Pre-Crime'. It refers us to the film Minority Report, with Tom Cruise, set in a future society where three mutants foresee all crime before it occurs. Plugged into a great machine, these "precogs" form the basis of a police unit (the Pre-Crime unit) that arrests murderers before they commit their crimes.

The China Electronics Technology company recently won the contract to build the 'United information environment', as they call it, an 'antiterrorism' platform, according to the Chinese government:

The Communist Party has directed [them] to develop software to collate data on jobs, hobbies, consumption habits, and other behavior of ordinary citizens to predict terrorist acts before they occur.

This may seem a lot to ask; if you think about it, you would need every daily detail to be able to predict terrorist behaviour. But this is a country like China, where the state has kept control over its citizens for many decades, where there are no privacy limits to respect and a good network of informants is already in place…

A draft cybersecurity law unveiled in July grants the government almost unbridled access to user data in the name of national security. “If neither legal restrictions nor unfettered political debate about Big Brother surveillance is a factor for a regime, then there are many different sorts of data that could be collated and cross-referenced to help identify possible terrorists or subversives,” says Paul Pillar, a nonresident fellow at the Brookings Institution.

See how there is now also a new target: subversives. The article continues:

China was a surveillance state long before Edward Snowden clued Americans in to the extent of domestic spying. Since the Mao era, the government has kept a secret file, called a dang’an, on almost everyone. Dang’an contain school reports, health records, work permits, personality assessments, and other information that might be considered confidential and private in other countries. The contents of the dang’an can determine whether a citizen is eligible for a promotion or can secure a coveted urban residency permit. The government revealed last year that it was also building a nationwide database that would score citizens on their trustworthiness.

Wait a second: who is defining 'trustworthiness', and what happens if you're deemed not trustworthy?

New antiterror laws that went into effect on Jan. 1 allow authorities to gain access to bank accounts, telecommunications, and a national network of surveillance cameras called Skynet. Companies including Baidu, China’s leading search engine; Tencent, operator of the popular social messaging app WeChat; and Sina, which controls the Weibo microblogging site, already cooperate with official requests for information, according to a report from the U.S. Congressional Research Service. A Baidu spokesman says the company wasn’t involved in the new antiterror initiative.

So Skynet is here now (remember Terminator Genisys?). Even if, right after a horrendous crime, you may be tempted to welcome such a 'pre-crime' initiative, there are far too many troubling questions still to answer before building such a tool: whose hands will it be in, who defines what counts as a crime, and what about your free will to change your mind, to mention a few. Let's begin thinking about how to tackle them.

The rise of the Self-Tuning Enterprise


As you may know, I am a fan of Machine Learning, a subfield of Artificial Intelligence (AI) that encompasses computer programs exhibiting some kind of intelligent behavior. The first AI researchers began by analyzing how we (humans) perform intelligent tasks in order to create programs that reproduced our behavior. So look at the irony of this HBR article, "The Self-Tuning Enterprise", where the authors Martin Reeves, Ming Zeng and Amin Venjara use the analogy of how machine learning programs work to transpose that behavior to enterprise strategy tuning:

[…] These enterprises [he’s talking about internet companies like Google, Netflix, Amazon, and Alibaba] have become extraordinarily good at automatically retooling their offerings for millions of individual customers, leveraging real-time data on their behavior. Those constant updates are, in fact, driven by algorithms, but the processes and technologies underlying the algorithms aren’t magic: It’s possible to pull them apart, see how they operate, and use that know-how in other settings. And that’s just what some of those same companies have started to do.

In this article we’ll look first at how self-tuning algorithms are able to learn and adjust so effectively in complex, dynamic environments. Then we’ll examine how some organizations are applying self-tuning across their enterprises, using the Chinese e-commerce giant Alibaba as a case example.”

You may have noticed those programs at work recommending books or other products each time you buy something on the Internet (and in fact even if you were just browsing and didn't buy anything ;-). Those programs are based on Machine Learning algorithms, and they improve over time with new information about success (you bought the proposed article) or failure (you didn't).

How do they work?

There is a 'learning' part that finds similarities between customers in order to propose products that other customers similar to you have bought. But it's not that simple: these programs are coupled with other learning modules, like one that does some 'experimentation' so as not to get stuck on always the same kind of products. That module will propose something different from time to time. Even if you like crime novels, after the tenth one you would probably like to read something else, wouldn't you? So the trick is to find an equilibrium between showing you books you have a great chance of liking and novelties that let you discover new horizons. You have to feel that they know what they are doing when they propose a book (so they fine-tune to be good at similarities), but you may also like a change from time to time so as not to get bored, and they are very interested in making you discover another genre of literature, let's say poetry. If you don't like it, you won't accept the next recommendation so easily, and here comes the next 'tuning': how often to experiment.
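For the curious, here is a minimal sketch of that balancing act, assuming a made-up toy recommender (the function names and the epsilon-greedy rule are my own illustration, not the actual code of any retailer): most of the time it proposes the category with the best acceptance rate so far, and once in a while it 'experiments' with something different.

```python
import random

def recommend(accept_counts, show_counts, epsilon=0.1):
    """Pick a product category to recommend.

    accept_counts / show_counts: dicts mapping category -> how many times a
    recommendation in that category was accepted / shown so far.
    epsilon: how often we 'experiment' with a random category.
    """
    categories = list(show_counts)
    if random.random() < epsilon:
        # Exploration: propose something different from time to time.
        return random.choice(categories)
    # Exploitation: propose the category with the best acceptance rate so far.
    return max(categories,
               key=lambda c: accept_counts.get(c, 0) / max(show_counts[c], 1))

def update(accept_counts, show_counts, category, accepted):
    """Learn from the outcome: a success if the customer took the suggestion."""
    show_counts[category] = show_counts.get(category, 0) + 1
    if accepted:
        accept_counts[category] = accept_counts.get(category, 0) + 1
```

Every purchase, or every ignored suggestion, updates the counts, so the very next recommendation is already adjusted to the new information.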

And that’s where self-tuning comes in. Self-tuning is related to the concepts of agility (rapid adjustment), adaptation (learning through trial and error), and ambidexterity (balancing exploration and exploitation). Self-tuning algorithms incorporate elements of all three—but in a self-directed fashion.

The 'self-tuning' process they are talking about adjusts the tool to the new information available to it, without any need for reprogramming. The authors' analogy is to apply to organizations the same kind of automatic tuning that Machine Learning systems perform: to 'self-tune' companies without any top-down directive, gaining agility, adaptation through trial and error, and ambidexterity in balancing exploration and exploitation.
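Sticking with my toy sketch from above (again purely illustrative, not something from the HBR article): a 'self-tuning' version would not even keep the exploration rate fixed by a programmer, but would adjust it from feedback, experimenting more when novel suggestions are welcomed and less when they keep being rejected.

```python
def tune_epsilon(epsilon, recent_novelty_outcomes,
                 min_eps=0.02, max_eps=0.30, step=0.02):
    """Adjust the exploration rate from the last few 'novelty' suggestions.

    recent_novelty_outcomes: list of booleans, True when a novel suggestion
    was accepted. The rate drifts up when novelty works and down when it
    falls flat: no reprogramming, just new information.
    """
    if not recent_novelty_outcomes:
        return epsilon
    acceptance = sum(recent_novelty_outcomes) / len(recent_novelty_outcomes)
    if acceptance > 0.5:
        return min(max_eps, epsilon + step)  # novelty is welcome: explore more
    return max(min_eps, epsilon - step)      # novelty is rejected: explore less
```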

To understand how this works, think of the enterprise as a nested set of strategic processes. At the highest level, the vision articulates the direction and ambition of the firm as a whole. As a means to achieving the vision, a company deploys business models and strategies that bring together capabilities and assets to create advantageous positions. And it uses organizational structure, information systems, and culture to facilitate the effective operation of those business models and strategies.

In the vast majority of organizations, the vision and the business model are fixed axes around which the entire enterprise revolves. They are often worked out by company founders and, once proven successful, rarely altered. Consequently, the structure, systems, processes, and culture that support them also remain static for long periods. Experimentation and innovation focus mostly on product or service offerings within the existing model, as the company leans on its established recipe for success in other areas.

The self-tuning enterprise, in contrast, takes an evolutionary approach at all levels. The vision, business model, and supporting components are regularly calibrated to the changing environment by applying the three learning loops. The organization is no longer viewed as a fixed means of transmitting intentions from above but, rather, as a network that shifts and develops in response to external feedback. To see what this means in practice, let’s look at Alibaba.[…]

Keep resetting the vision.

When Alibaba began operations, internet penetration in China was less than 1%. While most expected that figure to grow, it was difficult to predict the nature and shape of that growth. So Alibaba took an experimental approach: At any given time, its vision would be the best working assumption about the future. As the market evolved, the company’s leaders reevaluated the vision, checking their hypotheses against reality and revising them as appropriate.

In the early years, Alibaba’s goal was to be “an e-commerce company serving China’s small exporting companies.” This led to an initial focus on Alibaba.com, which created a platform for international sales. However, when the market changed, so did the vision. As Chinese domestic consumption exploded, Alibaba saw an opportunity to expand its offering to consumers. Accordingly, it launched the online marketplace Taobao in 2003. Soon Alibaba realized that Chinese consumers needed more than just a site for buying and selling goods. They needed greater confidence in internet business—for example, to be sure that online payments were safe. So in 2004, Alibaba created Alipay, an online payment service. […] Ultimately, this led Alibaba to change its vision again, in 2008, to fostering “the development of an e-commerce ecosystem in China.” It started to offer more infrastructure services, such as a cloud computing platform, microfinancing, and a smart logistics platform. More recently, Alibaba recalibrated that vision in response to the rapid convergence between digital and physical channels. Deliberately dropping the “e” from e-commerce, its current vision statement reads simply, “We aim to build the future infrastructure of commerce.”

Experiment with business models.

Alibaba could not have built a portfolio of companies that spanned virtually the entire digital spectrum without making a commitment to business model experimentation from very early on.

[…]At each juncture in its evolution, Alibaba continued to generate new business model options, letting them run as separate units. After testing them, it would scale up the most promising ones and close down or reabsorb those that were less promising.[…]

Again there was heated debate within the company about which direction to take and which model to build. Instead of relying on a top-down decision, Alibaba chose to place multiple bets and let the market pick the winners.[…]

Increasing experimentation at the height of success runs contrary to established managerial wisdom, but for Alibaba it was necessary to avoid rigidity and create options. Recalibrating how and how much to experiment was fundamental to its ability to capitalize on nascent market trends.

Focus on seizing and shaping strategic opportunities, not on executing plans.

In volatile environments, plans can quickly become out-of-date. In Alibaba’s case, rapid advances in technology, shifting consumer expectations in China and beyond, and regulatory uncertainty made it difficult to predict the future. […]

Alibaba does have a regular planning cycle, in which business unit leaders and the executive management team iterate on plans in the fourth quarter of each year. However, it’s understood that this is only a starting point. Whenever a unit leader sees a significant market change or a new opportunity, he or she can initiate a “co-creation” process, in which employees, including senior business leaders and lead implementers, develop new directions for the business directly with customers.

At Alibaba co-creation involves four steps. The first is establishing common ground: identifying signals of change (based on data from the market and insights from customers or staff) and ensuring that the right people are present and set up to work together. This typically happens at a full-day working session. The second step is getting to know the customer. Now participants explore directly with customers their evolving needs or pain points and brainstorm potential solutions. The third step entails developing an action plan based on the outcome of customer discussions. An action plan must identify a leader who can champion the opportunity, the supporting team (or teams) that will put the ideas into motion, and the mechanisms that will enable the work to get done. The final step is gathering regular customer feedback as the plan is implemented, which can, in turn, trigger further iterations.

So now you know how Alibaba does it; how does it work in your company? Which of their ideas would you adopt?

New computer interface using radar technology

Thanks to Otticamedia.com


Have you seen this article? It's about Project Soli from Google's Advanced Technologies and Projects (ATAP) group. They have implemented a new way to communicate with a computer: through radar. The radar captures the slight movements of the hand, as in this picture, where just moving your fingers in the air lets you move a 'virtual' slider.

Fantastic, can’t wait to try it!

Can an Algorithm be "Racist"?

Library of Congress Classification - Reading Room

David Auerbach has written this article pointing out that some classification algorithms may be racist:

Can a computer program be racist? Imagine this scenario: A program that screens rental applicants is primed with examples of personal history, debt, and the like. The program makes its decision based on lots of signals: rental history, credit record, job, salary. Engineers “train” the program on sample data. People use the program without incident until one day, someone thinks to put through two applicants of seemingly equal merit, the only difference being race. The program rejects the black applicant and accepts the white one. The engineers are horrified, yet say the program only reflected the data it was trained on. So is their algorithm racially biased?

Yes, and not only racist: since humans write them, or, more accurately for learning algorithms, since they are built on human examples and counter-examples, algorithms may carry any bias that we humans have. With the abundance of data, we are training programs on examples from the real world; the resulting program will be an image of how we actually act, not a reflection of how we would like to be. Exactly like the saying about educating kids: they do as they see, not as they are told :-)

To make things worse, when dealing with learning algorithms, not even the programmer can predict the resulting classification. So, knowing that there may be errors, who is there to ensure their correctness?
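To see how that happens, here is a deliberately tiny sketch with synthetic data and invented feature names (my own toy example, not something from Auerbach's article): a model trained on biased historical rental decisions, where a neighborhood code acts as a proxy for race, reproduces the bias even though race never appears as a feature.

```python
# Toy illustration: the training data encodes past (biased) human decisions.
from sklearn.tree import DecisionTreeClassifier

# Each applicant: [income_band, neighborhood]. In the historical data,
# applicants from neighborhood 1 were rejected regardless of income.
X_train = [[3, 0], [2, 0], [1, 0], [3, 1], [2, 1], [1, 1]]
y_train = [1, 1, 0, 0, 0, 0]  # 1 = approved, 0 = rejected

model = DecisionTreeClassifier().fit(X_train, y_train)

# Two applicants of equal merit, differing only in neighborhood:
print(model.predict([[3, 0], [3, 1]]))  # -> [1 0]: the old bias is learned
```

Nobody wrote "reject neighborhood 1" anywhere; the program simply learned it from the examples it was given.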

What about the everyday profiling that goes on without anyone noticing? […]
Their goal is chiefly “microtargeting,” knowing enough about users so that ads can be customized for tiny segments like “soccer moms with two kids who like Kim Kardashian” or “aging, cynical ex-computer programmers.”

Some of these categories are dicey enough that you wouldn’t want to be a part of them. Pasquale writes that some third-party data-broker microtargeting lists include “probably bipolar,” “daughter killed in car crash,” “rape victim,” and “gullible elderly.” […]

There is no clear process for fixing these errors, making the process of “cyberhygiene” extraordinarily difficult.[…]

For example, just because someone has access to the source code of an algorithm does not always mean he or she can explain how a program works. It depends on the kind of algorithm. If you ask an engineer, “Why did your program classify Person X as a potential terrorist?” the answer could be as simple as “X had used ‘sarin’ in an email,” or it could be as complicated and nonexplanatory as, “The sum total of signals tilted X out of the ‘non-terrorist’ bucket into the ‘terrorist’ bucket, but no one signal was decisive.” It’s the latter case that is becoming more common, as machine learning and the “training” of data create classification algorithms that do not behave in wholly predictable manners.

Further on, the author mentions the dangers of this kind of programming that is not fully predictable.

Philosophy professor Samir Chopra has discussed the dangers of such opaque programs in his book A Legal Theory for Autonomous Artificial Agents, stressing that their autonomy from even their own programmers may require them to be regulated as autonomous entities.

Chopra sees these algorithms as autonomous entities. They may be unpredictable, but so far there is no will or conscious choice to go down one path instead of another. Programs are told to maximize a particular benefit, and how that benefit is measured is calculated by a human-written function. As time goes by and technology advances, I can easily see the benefit function starting to include feedback the program gets from the 'real world', which could make the behavior of the algorithm even more unpredictable than it is now. At that point, could we imagine algorithms that evaluate or 'choose' whether to stay on the regulated side... or not? Will it reach the point where they have a kind of survival instinct? Where that may lead... we'll find out soon enough.
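To make the 'benefit function' point concrete, here is a small hedged sketch (the names and weights are invented, not taken from any real system): today the objective a program maximizes is a formula a human wrote down and can inspect; the worry is the variant where part of that score comes from feedback the program collects from the world on its own.

```python
def benefit_today(option):
    """A benefit function as typically written today: every term and every
    weight was chosen by a human and can be read and audited."""
    return 2.0 * option["expected_revenue"] - 1.0 * option["expected_risk"]

def benefit_with_feedback(option, learned_signals):
    """A variant where part of the score comes from feedback gathered in the
    'real world' (clicks, reactions, sensor readings): even the author can no
    longer fully predict what the program will end up favoring."""
    learned_bonus = sum(learned_signals.get(tag, 0.0) for tag in option["tags"])
    return benefit_today(option) + learned_bonus
```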


Changing schools with gaming techniques

Could you imagine a world where children ask you to take them to school? Well, that world doesn't seem so far away… at least I know my son would be happy to go to the school Ian Livingstone is planning to open in 2016 in Hammersmith, London. Read what technology reporter Dave Lee wrote in his article for BBC News:

By bringing gaming elements into the learning process, Mr Livingstone argued, students would learn how to problem-solve rather than just how to pass exams.


Mr Livingstone said he wanted to bring the principles of his interactive books to the classroom

[…] Mr Livingstone is best known for being the man behind huge franchises such as Tomb Raider and tabletop game Warhammer.

In the 80s, his Fighting Fantasy books brought an interactive element to reading that proved extremely popular.

Speaking to the BBC about the plans, Mr Livingstone said he wanted to bring those interactive principles to schooling, but stressed the school would provide learning across all core subjects.

There is more behind his idea than just making children want to go to school. It fosters a 'hands-on' approach that allows students not only to know things, but to know how to use that knowledge. Plus the added benefit of allowing diverse paths to reach the goal:


[…] “There needs to be a shift in the pedagogy of learning in classrooms because there’s still an awful lot of testing and conformity instead of diversity.

“I’m not saying knowledge is bad – I’m just trying to get a bit more know-how into the curriculum.”

He said he considers the trial-and-error nature of creating games as a key model for learning.

“For my mind, failure is just success work-in-progress. Look at any game studio and the way they iterate. Angry Birds was Rovio’s 51st game.

“You’re allowed to fail. Games-based learning allows you to fail in a safe environment.”

Let’s wish him a great success!

About Internet of Things and Privacy


Innovation keeps creating new materials and new sensors that are ever smaller, cheaper, more flexible, more powerful and, at the same time, less power-hungry. This allows them to be put everywhere: we are surrounded by devices crowded with sensors, such as our phones with their cameras, gyroscopes and GPS. And all the measurements captured by those sensors are being used by applications, many of which are connected to the cloud and to the Internet.

The Internet of Things (as this technology is called) is becoming ubiquitous, leaving us ever more exposed in our daily lives. How many of us have our whereabouts known by the GPS company, the phone provider and even the car manufacturer? Our personal biometric information is also being left all along our running paths, not to mention at the new gyms.

On the other hand, Nicole Dewandre reminds us in this recorded presentation of two basic human needs: our need for privacy, and the fact that we construct ourselves through the public eye.

We need privacy to express our internal thoughts without public judgement; we need a safe place to test our lines of reasoning and confront them with others'. In our hyper-connected world, the spaces where we can enjoy this privacy are vanishing.

As for our second need, the image others have of us matters a great deal. The information we leave behind shapes this public image, and it has a great effect not only on what others think of us, but also on our own perception of ourselves and on our self-esteem, and it ends up reflecting on our happiness.

Living in this hyper-connected world is a real challenge!

Our 2 ways of thinking: Fast and Slow

From Jim Holt's review in The New York Times. Illustration by David Plunkert.


I just came back from holidays, and I want to share with you my latest read: “Thinking, Fast and Slow” by Daniel Kahneman. He describes our mind as having two different ways of functioning: a fast one, based on our 'intuition', and a slower one, where we have to make the effort of reasoning.

  • The fast one is the intuitive way, used for everyday tasks; it is what psychologists also call our 'unconscious mind'. It is driven by the inputs of our senses (hearing, sight, smell…), which trigger a search in our memory and bring up, through associations, a representation of our situation and an immediate response to it.
  • The slower way of functioning is when we focus our attention on the inputs at hand and follow a line of reasoning based on our knowledge to come to a conclusion. This method requires more energy: we must direct our attention to each piece of information, and since we evaluate things sequentially (one thing after the other), it is slower.

As our body is lazy by nature, this second, 'slow' way of reasoning is only used if needed, that is, if the situation requires our 'special attention'. It is a great thing that our faster, energy-saving mode is our 'default'… except for the fact that Daniel Kahneman presents very interesting experiments showing the pitfalls of our intuition!

One great example he presents is the ambiguity resolution that goes on beneath our awareness: when a sentence or image could be interpreted in different ways, our 'fast mind' resolves the ambiguity using the most recent context, which is fine in many situations. The problem is that it doesn't even let us know there was another interpretation at all! We are not aware that our mind picked only one of the possible alternatives. Moreover, it uses the most easily available memory to make sense of the world as we perceive it, so recent events that are more vivid in our memory have a greater impact on our interpretation of the world. This is called the 'availability bias'.

Not only do our memories play tricks on us, but our whole body is linked to our intuitive way of functioning. He mentions an experiment performed in the United States where participants were asked to look at photos and words related to the elderly and were then asked to move to another room; that was the real aim of the experiment: the researchers measured the time it took them to walk from their current location to the other one. They found that the participants who had been shown pictures related to the elderly walked more slowly than the others, as if our body reflected what we had been thinking about. This is called the 'priming' effect.


And, perhaps more surprisingly, this body-mind link also works the other way around: people asked to hold a pencil in their mouth had their mood adapt to the expression they had been forced into. Here are the details of the experiment: some participants were asked to hold the pencil sideways by its middle, with the point on one side of the mouth and the eraser on the other, while others were asked to hold the pencil with their lips around the eraser end. The two groups were then shown the same cartoons, and the first group found them, on average, funnier than the second group. The first group seemed to be in a happier mood, as if they had been smiling; the second group was less positive, having been forced into a frown before looking at the cartoons.

The conclusion is that we have to be really careful with our mind's evaluation of a situation if we have left it to our unconscious, intuitive mind. It is biased by design! The more aware we are of those biases, the better we can counter them.

Games for breakthrough thinking

Using games for brainstorming is really great. Instead of holding a standard meeting, the idea is to set a series of rules and then play that game. There is a clear beginning, once the rules have been explained and everybody agrees to play by them. Then, while the game is being played, the participants are free to explore the 'game space', which is the set of all possible situations that can be reached by applying the predefined rules. And there is an end, when the declared goal is reached.
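For the programmers among you, the 'game space' can be pictured quite literally: start from an initial situation and keep applying the allowed rules until no new situation appears. Here is a small sketch with toy rules of my own (nothing from the Gamestorming book), using a breadth-first search to enumerate everything the rules can reach.

```python
from collections import deque

def game_space(start, rules, max_states=10_000):
    """Enumerate every situation reachable from 'start' by applying 'rules'.

    rules: functions that take a state and return the next state, or None
    when the rule does not apply. States must be hashable.
    """
    seen, frontier = {start}, deque([start])
    while frontier and len(seen) < max_states:
        state = frontier.popleft()
        for rule in rules:
            nxt = rule(state)
            if nxt is not None and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

# Toy game: start at 1; the allowed moves are 'double' and 'add three',
# without going past 50. The result is every number the rules can ever reach.
rules = [lambda n: n * 2 if n * 2 <= 50 else None,
         lambda n: n + 3 if n + 3 <= 50 else None]
print(sorted(game_space(1, rules)))
```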

Image from the book Gamestorming by Dave Gray, Sunni Brown and James Macanufo


Some goals are clearly defined, like those limited by time: for example, coming up in three minutes with as many ideas or words around a subject as possible. Others have no time constraint; the end is reaching a desired final situation, as in Connect Four or chess.

But typically, in real situations where brainstorming is needed, the goal is not so clear. For problems that call for creativity, new ideas or innovation, the goal usually cannot be fully defined; it's more like a general purpose. We may have a general direction in which we want to go, and we rely on measures to see whether we have succeeded.

But why play a game for brainstorming? Because we just love playing games 🙂 but, more importantly, because when we are in a game we feel free to explore all the alternatives and go beyond conventions. And that helps innovative ideas come up. We free ourselves from the standard, agreed conventions and cover all the possible alternatives that the rules of the game offer us.

As an example, I can mention the story of Timothy Ferriss, author of 'The 4-Hour Workweek', who won the gold medal at the Chinese Kickboxing National Championships. He had not been practicing kickboxing, but he read the rules of the sport and explored the 'game space' of the championship. He then took advantage of two loopholes to compete with only four weeks of preparation! One rule said that if a fighter fell off the platform three times in a row, his opponent won by default. Another allowed him to weigh in at a class below the one he would normally have fought in. Those two rules combined made him national champion. He was not really boxing; he was simply pushing his opponents off the platform, and he won with that technique. You can certainly argue it is not a fair way of winning, but it's an interesting way of thinking in order to reach the goal of the game.

Another example comes from my son, who had an assignment last year at university: program a robot to follow a circuit, then throw a piece of wood as far as it could, and finish by going back to its parking place. There were points for each action: reaching the start line, following the path without going off the route, throwing the piece of wood in a predetermined place, and returning to the garage. The path was unknown, only revealed at exam time. When the fateful day came, the path presented to them was quite complicated and most of the robots failed. But one of the teams had a 'plan B', a different set of programming instructions: they only programmed the robot to do the tasks that earned points with minimum risk: go to the starting point, go to the predetermined place, throw the piece of wood, and then return to the garage. The robot didn't even try to do the circuit, but with that strategy they were one of the five finalists! Again, it's the same situation as with Timothy Ferriss: it doesn't feel fair even though it played by the rules, but it worked for the assignment.

Now, if your survival were at stake, say in a planetary catastrophe, wouldn't it be good to have a plan B up your sleeve?

MOOCs: the new learning style

Last week I presented MOOCs (Massive Open Online Courses) at the Professional Women International association in Brussels, Belgium.

I had the pleasure of talking with the participants afterwards. They told me they were so pleased to learn about such an easy way of taking good-quality courses that they were going to look for their preferred subjects that same night 🙂

Happy to have contributed to spreading the word about the availability of MOOCs, which put all that encapsulated knowledge at any user's fingertips!

On the last slide, I just dropped a few words on the main implications of this trend; I encourage you to leave a comment if any of the subjects I mention resonates with you: