Big Data and Ethics

The Big Data and Ethics Meetup was held a few weeks ago in the new premises of DigitYser, in downtown Brussels.

It was a great Meetup, with interesting speakers and an engaged audience 😉 It’s always a pleasure when the public can contribute and presentations spark great discussions, and that matters all the more at a gathering on ethics, where people still have to position themselves on the different aspects of the topic.

I was particularly surprised when Michael Ekstrand from Boise State University mentioned a use of recommender systems that I hadn’t thought of: as a tool to tackle the intention-behaviour gap, ‘I don’t do what I want to do’ (for example, failing to stick to a diet). Recommenders can be used to help you change your behaviour, giving you nudges as incentives.

Jochanan Eynikel also mentioned the use of technology as a morality enforcer.

Still, there are possible drawbacks.

Another area discussed was the ethical tension between personalisation and insurance: personalisation works directly against risk mitigation, which relies on mutualising risk among customers. There are sensitive domains where a ‘human’ approach should be taken.
How can we ensure that ethical and moral concerns are taken into account? One approach is participatory design, a framework for bringing users’ voices in during the design phase. MIT is strongly pushing participatory design to tackle many basic dilemmas.

Solving and clarifying our human position on these kinds of dilemmas is all the more relevant when we are talking about autonomous technology, that is, technology that teaches itself, like self-driving cars learning from users.
Can we imagine not having human supervision in all domains? How do we introduce ethics so that the system itself can choose the ‘good’ decision and discard the others?

Pierre-Nicolas Schwab presented us the General Data Protection Regulation as “the only thing that the EC can do to force companies to take data privacy into account: fine them if they don’t”:

At the end of the meeting, this question was raised: “Do data scientists and programmers need a Hippocratic oath?” Something like the ACM’s code of conduct: ‘don’t harm with your code’.
What’s your opinion on this?

Elections warn about ethical issues in algorithms

I tweeted recently about this article on how Big Data was used in the last American presidential campaign.

Concordia Summit, New York 2016

“At Cambridge,” he said, “we were able to form a model to predict the personality of every single adult in the United States of America.” The hall is captivated. According to Nix, the success of Cambridge Analytica’s marketing is based on a combination of three elements: behavioral science using the OCEAN Model, Big Data analysis, and ad targeting. Ad targeting is personalized advertising, aligned as accurately as possible to the personality of an individual consumer.

Nix candidly explains how his company does this. First, Cambridge Analytica buys personal data from a range of different sources, like land registries, automotive data, shopping data, bonus cards, club memberships, what magazines you read, what churches you attend. Nix displays the logos of globally active data brokers like Acxiom and Experian—in the US, almost all personal data is for sale. […] Now Cambridge Analytica aggregates this data with the electoral rolls of the Republican party and online data and calculates a Big Five personality profile. Digital footprints suddenly become real people with fears, needs, interests, and residential addresses.

Nix shows how psychographically categorized voters can be differently addressed, based on the example of gun rights, the 2nd Amendment: “For a highly neurotic and conscientious audience the threat of a burglary—and the insurance policy of a gun.” An image on the left shows the hand of an intruder smashing a window. The right side shows a man and a child standing in a field at sunset, both holding guns, clearly shooting ducks: “Conversely, for a closed and agreeable audience. People who care about tradition, and habits, and family.”

Now I came across this other article by Peter Diamandis, featuring what we can expect in four years’ time, for the next election campaign.

5 Big Tech Trends That Will Make This Election Look Tame

If you think this election is insane, wait until 2020.

I want you to imagine how, in four years’ time, technologies like AI, machine learning, sensors and networks will accelerate.

Political campaigns are about to get hyper-personalized thanks to advances in a few exponential technologies.

Imagine a candidate who now knows everything about you, who can reach you wherever you happen to be looking, and who can use info scraped from social media (and intuited by machine learning algorithms) to speak directly to you and your interests.

[…] For example, imagine I’m walking down the street to my local coffee shop and a photorealistic avatar of the presidential candidate on the bus stop advertisement I pass turns to me and says:

“Hi Peter, I’m running for president. I know you have two five-year-old boys going to kindergarten at XYZ school. Do you know that my policy means that we’ll be cutting tuition in half for you? That means you’ll immediately save $10,000 if you vote for me…”

If you pause and listen, the candidate’s avatar may continue: […] “I’d really appreciate your vote. Every vote and every dollar counts. Do you mind flicking me a $1 sticker to show your support?”

I know, this last article is from SingularityHub, and even though they tend to be alarmist, knowing how fast technology advances, the predictions they put forward are not too exaggerated…

In any case, that reminds me how important it is to ACT on the ethical issues of algorithms (the capital letters are there to stress that the point is to take action). There are many issues to identify, discuss, raise awareness of, and regulate, and on some of them we can already act at company level.

I talked in May last year at the Data Innovation Summit about the biases that can be (and usually are) replicated by new data-driven algorithms. Since then I have been working on a training program to help identify and correct those biases when designing and using algorithms, and the articles above remind me that this cannot be delayed; it’s needed right now.

So if you are interested in making your people and organization aware of biases (human and digital alike), and in training them to fix these issues, contact me!


We are creating our future. Let’s not close our eyes: we can take control and assume our responsibility by setting the guardrails that will guide the path to our future society.


AI and Machine Learning in business: use it everywhere!

How One Clothing Company Blends AI and Human Expertise, HBR nov-16

Last week Bev from the PWI group on LinkedIn pointed me to a great HBR article: “How One Clothing Company Blends AI and Human Expertise”, by H. James Wilson, Paul Daugherty and Prashant Shukla.

It describes how the company Stitch Fix works, using machine learning insights to assist its designers, and, as you will see, it uses machine learning at many levels throughout the company.

The company offers a subscription clothing and styling service that delivers apparel to its customers’ doors. But users of the service don’t actually shop for clothes; in fact, Stitch Fix doesn’t even have an online store. Instead, customers fill out style surveys, provide measurements, offer up Pinterest boards, and send in personal notes. Machine learning algorithms digest all of this eclectic and unstructured information. An interface communicates the algorithms’ results along with more-nuanced data, such as the personal notes, to the company’s fashion stylists, who then select five items from a variety of brands to send to the customer. Customers keep what they like and return anything that doesn’t suit them.

The key success factor for the company is being good at recommending clothes that not only fit the customer but that they’ll like enough to keep, and better than just ‘like’: like enough to be happy with their subscription.

Stitch Fix, which lives and dies by the quality of its suggestions, has no choice but to do better [than Amazon and Netflix].

Unlike Amazon and Netflix, which recommend products directly to customers, Stitch Fix uses machine learning methods to provide digested information to its human stylists and designers.

[…] companies can use machines to supercharge the productivity and effectiveness of workers in unprecedented ways […]

Algorithms analyse the measurements, for example, to find other clients with the same body shape, so stylists can use the knowledge of what fitted those other clients: the clothes those clients kept. Algorithms are also used to extract information about clients’ taste in styles, from brand preferences and their comments on collections. Human stylists, using the results of that data analysis and reading the client’s notes, are better equipped to choose clothes that will suit the customer.
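The body-shape matching described here is essentially a nearest-neighbour search over client measurements. Here is a minimal sketch of the idea; the measurements, client names and items are all invented for illustration, not Stitch Fix’s actual data or pipeline:

```python
# Hypothetical sketch: find clients with similar measurements, then
# recommend the items those "body doubles" actually kept.
from math import dist

# (height cm, waist cm, inseam cm) -> items the client kept
clients = {
    "anna":  ((165, 70, 74), ["wrap dress", "slim jeans"]),
    "beth":  ((166, 71, 75), ["wrap dress", "midi skirt"]),
    "clara": ((180, 90, 88), ["bomber jacket"]),
}

def recommend(measurements, k=2):
    # rank existing clients by Euclidean distance in measurement space
    nearest = sorted(clients.values(), key=lambda c: dist(c[0], measurements))[:k]
    # pool the items the closest clients kept
    return sorted({item for _, kept in nearest for item in kept})

print(recommend((164, 69, 73)))  # pooled from the two closest clients
```

In a real system the features would be far richer (fit feedback, returns, style survey answers), but the principle is the same: what suited your nearest neighbours in measurement space will probably suit you.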

Next, it’s time to pick the actual [item of clothing] to be shipped. This is up to the stylist, who takes into account a client’s notes or the occasion for which the client is shopping. In addition, the stylist can include a personal note with the shipment, fostering a relationship, which Stitch Fix hopes will encourage even more useful feedback.

This human-in-the-loop recommendation system uses multiple information streams to help it improve.

Note how stylists maintain a human dialogue with their clients through the included note. This personalised contact is usually well appreciated by customers, and it has a positive effect for the company because it opens the door to feedback that helps tailor the next delivery.

The company is testing natural language processing for reading and categorizing notes from clients — whether it received positive or negative feedback, for instance, or whether a client wants a new outfit for a baby shower or for an important business meeting. Stylists help to identify and summarize textual information from clients and catch mistakes in categorization.

The machine learning systems are ‘learning through experience’ (i.e. adapting with the feedback) as usual, but in a humanly ‘supervised’ way. This supervision allows the company to try new algorithms without the risk of losing clients if results are not as good as expected.
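That human-supervised loop for categorizing notes could be sketched like this; the keyword rules stand in for the real NLP model, and the category labels are invented for the example:

```python
# Toy sketch of human-in-the-loop categorisation: the machine proposes a
# label, the stylist confirms or corrects it, and corrections are logged
# so the model can be retrained later.
corrections = []  # (note, machine_label, human_label) triples for retraining

def machine_label(note):
    # Stand-in for the real NLP classifier: crude keyword matching
    text = note.lower()
    if "baby shower" in text:
        return "occasion:baby-shower"
    if "meeting" in text or "interview" in text:
        return "occasion:business"
    return "general"

def review(note, stylist_label=None):
    proposed = machine_label(note)
    if stylist_label and stylist_label != proposed:
        corrections.append((note, proposed, stylist_label))  # mistake caught
        return stylist_label
    return proposed

print(review("Need an outfit for my sister's baby shower"))
print(review("Something smart for a client meeting", stylist_label="occasion:business"))
print(review("Shoes for a gala", stylist_label="occasion:formal"))
print(len(corrections), "correction(s) queued for retraining")
```

The point of the pattern is the safety net: the model’s mistakes are caught by a human before they reach the customer, and each correction becomes training data.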

Stitch Fix employs more than 2,800 stylists, dispersed across the country, all of them working from home and setting their own hours. In this distributed workforce, stylists are measured by a variety of metrics, including the amount of money a client spends, client satisfaction, and the number of items a client keeps per delivery. But one of the most important factors is the rate at which a stylist puts together a collection of clothes for a client.

Speed is an important factor in satisfying customers’ demands, and machine learning gives stylists the needed insight so much quicker than if they had to go through all the raw data!

This is where the work interface comes into effect. To enable fast decision making, the screen on which a stylist views recommendations shows the relevant information the company keeps about a client, including apparel and feedback history, measurements, and tolerance for fashion risks — it’s all readily accessible.

The interface itself, which shows the information to the stylist, is also adapting through feedback, being tested for better performance. And they go one step further and check for bias in the stylists:

Stitch Fix’s system can vary the information a stylist sees to test for bias. For instance, how might a picture of a client affect a stylist’s choices? Or knowledge about a client’s age? Does it help or hinder to know where a client lives?

By measuring the impact of modified information in the stylist interface, the company is developing a systematic way to measure improvements in human judgment

And there are many other machine learning algorithms throughout the company:

[…]the company has hundreds of algorithms, like a styling algorithm that matches products to clients; an algorithm that matches stylists with clients; an algorithm that calculates how happy a customer is with the service; and one that figures out how much and what kind of inventory the company should buy.

The company is also using the information of the kept and returned items to find fashion trends:

From this seemingly simple data, the team has been able to uncover which trends change with the seasons and which fashions are going out of style.

The data they are collecting is also helping advance research on computer vision systems:

[…] system that can interpret style and extract a kind of style measurement from images of clothes. The system itself would undergo unsupervised learning, taking in a huge number of images and then extracting patterns or features and deciding what kinds of styles are similar to each other. This “auto-styler” could be used to automatically sort inventory and improve selections for customers.

In addition to developing an algorithmic trend-spotter and an auto-styler, Stitch Fix is developing brand new styles — fashions born entirely from data. The company calls them frankenstyles. These new styles are created from a “genetic algorithm,” modeled after the process of natural selection in biological evolution. The company’s genetic algorithm starts with existing styles that are randomly modified over the course of many simulated “generations.” Over time, a sleeve style from one garment and a color or pattern from another, for instance, “evolve” into a whole new shirt.
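The ‘frankenstyle’ process maps onto a textbook genetic algorithm: a garment is a tuple of genes, a fitness function stands in for customer appeal, and crossover plus mutation breed new generations. A deliberately simplified sketch, with invented attributes and an invented fitness function:

```python
import random

random.seed(42)

# A "garment" is a tuple of genes: (sleeve, colour, pattern)
SLEEVES  = ["short", "long", "bell"]
COLOURS  = ["navy", "coral", "olive"]
PATTERNS = ["solid", "floral", "stripe"]
GENES = [SLEEVES, COLOURS, PATTERNS]

def fitness(garment):
    # Invented proxy for appeal: pretend clients kept coral florals most often
    sleeve, colour, pattern = garment
    return (colour == "coral") + (pattern == "floral") + 0.5 * (sleeve == "bell")

def evolve(population, generations=20):
    for _ in range(generations):
        population = sorted(population, key=fitness, reverse=True)
        parents = population[: len(population) // 2]             # selection
        children = []
        while len(children) < len(population):
            a, b = random.sample(parents, 2)
            child = [random.choice(pair) for pair in zip(a, b)]  # crossover
            if random.random() < 0.1:                            # mutation
                i = random.randrange(3)
                child[i] = random.choice(GENES[i])
            children.append(tuple(child))
        population = children
    return max(population, key=fitness)

start = [tuple(random.choice(g) for g in GENES) for _ in range(12)]
print(evolve(start))  # drifts toward something like ('bell', 'coral', 'floral')
```

Stitch Fix’s real fitness signal is of course customer behaviour (what gets kept), not a hand-written function, but the selection-crossover-mutation loop is the same idea.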

What does a company using so many machine learning systems look like at the employee level? How is it perceived by the employees? This is what the article says:

Even with the constant monitoring and algorithms that guide decision making, according to internal surveys, Stitch Fix stylists are mostly satisfied with the work. And this type of work, built around augmented creativity and flexible schedules, will play an important role in the workforce of the future.

Machine learning and AI (artificial intelligence) systems are changing the way companies do business.  They provide insight that either could not be grasped before, or could, but not at that speed, nor as a tool accessible to each and every employee.

The least that can be said is that this will improve productivity in all sectors. Just as today almost everyone has access to the Internet to check a word, look up a translation or a recipe, check the weather, and countless other uses, the new generation of employees will be assisted by tons of algorithms that analyse data and deduce, induce or summarize information to support them in their work and their decision-making.

DIS2016 Restore the balance of data

Two weeks ago was the Data Innovation Summit 2016.  I was due to speak using the ‘ignite’ presentation format.  For those who don’t know this format: it’s a nightmare! Joking aside, it means that slides advance automatically at regular intervals (15 seconds in my case).  You cannot stop it, you don’t control the flow… so to stay synchronized, you really have to prepare your speech in advance; you must know exactly how much time it takes to explain each of your points and which examples you’ll be presenting (try it, 15 seconds go very quickly when you’re looking for your words :-)).

So here it is, my 5-minute presentation, if you only count the time on stage…

Pre-Crime unit for tracking Terrorists?

Due to the recent events in Belgium, the terrorist bomb attacks in Zaventem and Brussels, I couldn’t help but remember the Bloomberg Businessweek article about pre-crime: ‘China Tries Its Hand at Pre-Crime’.  It refers to the film Minority Report, with Tom Cruise, which takes place in a future society where three mutants foresee all crime before it occurs. Plugged into a great machine, these “precogs” are the basis of a police unit (the Pre-Crime unit) that arrests murderers before they commit their crimes.

China Electronics Technology recently won the contract to build the ‘united information environment’, as they call it, an ‘antiterrorism’ platform, as declared by the Chinese government:

The Communist Party has directed [them] to develop software to collate data on jobs, hobbies, consumption habits, and other behavior of ordinary citizens to predict terrorist acts before they occur.

This may seem a little too much to ask: if you think about it, you may need every daily detail to be able to predict terrorist behaviour. But in a country like China, where the state has controlled its citizens for decades, where there are no privacy limits to respect, and where there is a good network of informants…

A draft cybersecurity law unveiled in July grants the government almost unbridled access to user data in the name of national security. “If neither legal restrictions nor unfettered political debate about Big Brother surveillance is a factor for a regime, then there are many different sorts of data that could be collated and cross-referenced to help identify possible terrorists or subversives,” says Paul Pillar, a nonresident fellow at the Brookings Institution.

Note how there is now also a new target: subversives.  The article continues:

China was a surveillance state long before Edward Snowden clued Americans in to the extent of domestic spying. Since the Mao era, the government has kept a secret file, called a dang’an, on almost everyone. Dang’an contain school reports, health records, work permits, personality assessments, and other information that might be considered confidential and private in other countries. The contents of the dang’an can determine whether a citizen is eligible for a promotion or can secure a coveted urban residency permit. The government revealed last year that it was also building a nationwide database that would score citizens on their trustworthiness.

Wait a second: who defines what ‘trustworthiness’ is, and what happens if you’re deemed not trustworthy?

New antiterror laws that went into effect on Jan. 1 allow authorities to gain access to bank accounts, telecommunications, and a national network of surveillance cameras called Skynet. Companies including Baidu, China’s leading search engine; Tencent, operator of the popular social messaging app WeChat; and Sina, which controls the Weibo microblogging site, already cooperate with official requests for information, according to a report from the U.S. Congressional Research Service. A Baidu spokesman says the company wasn’t involved in the new antiterror initiative.

So Skynet is here now (remember Terminator Genisys?). Even if, right after a horrendous crime, you may be tempted to welcome this ‘pre-crime’ initiative, there are far too many negative aspects still to consider before having such a tool: in whose hands it will be, who defines what a crime is, and what about your free will to change your mind, to mention a few.  Let’s begin thinking about how to tackle them.

The rise of the Self-Tuning Enterprise


As you may know, I am a fan of machine learning, a subfield of Artificial Intelligence (AI) that encompasses computer programs exhibiting some kind of intelligent behavior. The first AI researchers began by analyzing how we humans perform intelligent tasks in order to create programs that reproduced our behavior. So look at the irony of this HBR article, “The Self-Tuning Enterprise”, where the authors Martin Reeves, Ming Zeng and Amin Venjara use the analogy of how machine learning programs work to transpose that behavior to enterprise strategy tuning:

[…] These enterprises [he’s talking about internet companies like Google, Netflix, Amazon, and Alibaba] have become extraordinarily good at automatically retooling their offerings for millions of individual customers, leveraging real-time data on their behavior. Those constant updates are, in fact, driven by algorithms, but the processes and technologies underlying the algorithms aren’t magic: It’s possible to pull them apart, see how they operate, and use that know-how in other settings. And that’s just what some of those same companies have started to do.

In this article we’ll look first at how self-tuning algorithms are able to learn and adjust so effectively in complex, dynamic environments. Then we’ll examine how some organizations are applying self-tuning across their enterprises, using the Chinese e-commerce giant Alibaba as a case example.”

You may have noticed those programs at work recommending books or other products each time you buy something on the Internet (and in fact, even if you are just looking and didn’t buy anything ;-)). Those programs are based on machine learning algorithms, and they improve over time with new information about success (you bought the proposed article) or failure (you didn’t).

How do they work?

There is a ‘learning’ part that finds similarities between customers in order to propose products that similar customers bought. But it’s not so simple: these programs are coupled with other learning modules, like one that does some ‘experimentation’ so as not to get stuck with always the same kind of products. This module will propose something different from time to time. Even if you like crime novels, after the tenth one you would like to read something else, wouldn’t you? So the trick is to find an equilibrium between showing you books you have a great chance of liking and novelties that make you discover new horizons. You have to feel that they know what they are doing when they propose a book (so they fine-tune to be good at similarities), but you may like a change from time to time so as not to get bored. They are also very interested in making you discover another genre of literature, say poetry; if you don’t like it, you won’t accept the next recommendation so easily, so here comes the next ‘tuning’: how often to do it.
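This balance between safe recommendations and novelty is what an epsilon-greedy policy does: exploit the best-known option most of the time, and explore something different with a small probability. A toy sketch with invented genres and keep rates:

```python
import random

random.seed(0)

# Estimated "keep rate" per genre, updated as feedback arrives
keep_rate = {"crime": 0.9, "sci-fi": 0.5, "poetry": 0.1}
EPSILON = 0.15  # how often we gamble on something different

def recommend_genre():
    if random.random() < EPSILON:
        return random.choice(list(keep_rate))    # explore: try a novelty
    return max(keep_rate, key=keep_rate.get)     # exploit: best-known genre

def feedback(genre, kept, lr=0.1):
    # nudge the estimate toward the observed outcome (kept: 1 or 0)
    keep_rate[genre] += lr * (kept - keep_rate[genre])

picks = [recommend_genre() for _ in range(1000)]
print(picks.count("crime"))  # mostly crime, with occasional surprises
```

Real recommenders use more sophisticated bandit algorithms, but tuning EPSILON is exactly the “how often to surprise you” knob described above, and the feedback update is the learning loop.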

And that’s where self-tuning comes in. Self-tuning is related to the concepts of agility (rapid adjustment), adaptation (learning through trial and error), and ambidexterity (balancing exploration and exploitation). Self-tuning algorithms incorporate elements of all three—but in a self-directed fashion.

The ‘self-tuning’ process they describe adjusts the tool to the new information available to it, without any need for reprogramming. The authors’ analogy is to apply in organizations the same kind of automatic tuning that machine learning systems perform: to ‘self-tune’ companies without any top-down directive, with agility, adaptation through trial and error, and ambidexterity balancing exploration and exploitation.

To understand how this works, think of the enterprise as a nested set of strategic processes. At the highest level, the vision articulates the direction and ambition of the firm as a whole. As a means to achieving the vision, a company deploys business models and strategies that bring together capabilities and assets to create advantageous positions. And it uses organizational structure, information systems, and culture to facilitate the effective operation of those business models and strategies.

In the vast majority of organizations, the vision and the business model are fixed axes around which the entire enterprise revolves. They are often worked out by company founders and, once proven successful, rarely altered. Consequently, the structure, systems, processes, and culture that support them also remain static for long periods. Experimentation and innovation focus mostly on product or service offerings within the existing model, as the company leans on its established recipe for success in other areas.

The self-tuning enterprise, in contrast, takes an evolutionary approach at all levels. The vision, business model, and supporting components are regularly calibrated to the changing environment by applying the three learning loops. The organization is no longer viewed as a fixed means of transmitting intentions from above but, rather, as a network that shifts and develops in response to external feedback. To see what this means in practice, let’s look at Alibaba.[…]

Keep resetting the vision.

When Alibaba began operations, internet penetration in China was less than 1%. While most expected that figure to grow, it was difficult to predict the nature and shape of that growth. So Alibaba took an experimental approach: At any given time, its vision would be the best working assumption about the future. As the market evolved, the company’s leaders reevaluated the vision, checking their hypotheses against reality and revising them as appropriate.

In the early years, Alibaba’s goal was to be “an e-commerce company serving China’s small exporting companies.” This led to an initial focus on, which created a platform for international sales. However, when the market changed, so did the vision. As Chinese domestic consumption exploded, Alibaba saw an opportunity to expand its offering to consumers. Accordingly, it launched the online marketplace Taobao in 2003. Soon Alibaba realized that Chinese consumers needed more than just a site for buying and selling goods. They needed greater confidence in internet business—for example, to be sure that online payments were safe. So in 2004, Alibaba created Alipay, an online payment service. […] Ultimately, this led Alibaba to change its vision again, in 2008, to fostering “the development of an e-commerce ecosystem in China.” It started to offer more infrastructure services, such as a cloud computing platform, microfinancing, and a smart logistics platform. More recently, Alibaba recalibrated that vision in response to the rapid convergence between digital and physical channels. Deliberately dropping the “e” from e-commerce, its current vision statement reads simply, “We aim to build the future infrastructure of commerce.”

Experiment with business models.

Alibaba could not have built a portfolio of companies that spanned virtually the entire digital spectrum without making a commitment to business model experimentation from very early on.

[…]At each juncture in its evolution, Alibaba continued to generate new business model options, letting them run as separate units. After testing them, it would scale up the most promising ones and close down or reabsorb those that were less promising.[…]

Again there was heated debate within the company about which direction to take and which model to build. Instead of relying on a top-down decision, Alibaba chose to place multiple bets and let the market pick the winners.[…]

Increasing experimentation at the height of success runs contrary to established managerial wisdom, but for Alibaba it was necessary to avoid rigidity and create options. Recalibrating how and how much to experiment was fundamental to its ability to capitalize on nascent market trends.

Focus on seizing and shaping strategic opportunities, not on executing plans.

In volatile environments, plans can quickly become out-of-date. In Alibaba’s case, rapid advances in technology, shifting consumer expectations in China and beyond, and regulatory uncertainty made it difficult to predict the future. […]

Alibaba does have a regular planning cycle, in which business unit leaders and the executive management team iterate on plans in the fourth quarter of each year. However, it’s understood that this is only a starting point. Whenever a unit leader sees a significant market change or a new opportunity, he or she can initiate a “co-creation” process, in which employees, including senior business leaders and lead implementers, develop new directions for the business directly with customers.

At Alibaba co-creation involves four steps. The first is establishing common ground: identifying signals of change (based on data from the market and insights from customers or staff) and ensuring that the right people are present and set up to work together. This typically happens at a full-day working session. The second step is getting to know the customer. Now participants explore directly with customers their evolving needs or pain points and brainstorm potential solutions. The third step entails developing an action plan based on the outcome of customer discussions. An action plan must identify a leader who can champion the opportunity, the supporting team (or teams) that will put the ideas into motion, and the mechanisms that will enable the work to get done. The final step is gathering regular customer feedback as the plan is implemented, which can, in turn, trigger further iterations.

So now you know how Alibaba does it; how is it in your company? What ideas of theirs would you adopt?

New computer interface using radar technology


Have you seen this article?  It’s about Project Soli from Google’s Advanced Technologies and Projects (ATAP) group.  They have implemented a new way to communicate with a computer: through radar.  The radar captures the slight movements of the hand, as in this picture, where just moving your fingers in the air lets you move a ‘virtual’ slider.

Fantastic, can’t wait to try it!

Can an Algorithm Be “Racist”?

Library of Congress Classification - Reading Room

David Auerbach has written this article pointing out that some classification algorithms may be racist:

Can a computer program be racist? Imagine this scenario: A program that screens rental applicants is primed with examples of personal history, debt, and the like. The program makes its decision based on lots of signals: rental history, credit record, job, salary. Engineers “train” the program on sample data. People use the program without incident until one day, someone thinks to put through two applicants of seemingly equal merit, the only difference being race. The program rejects the black applicant and accepts the white one. The engineers are horrified, yet say the program only reflected the data it was trained on. So is their algorithm racially biased?

Yes, and a classification algorithm could not only be racist: since humans write them, or, more accurately for learning algorithms, since they are built upon human examples and counter-examples, algorithms may carry any human bias that we have.  With the abundance of data, we are training programs with examples from the real world; the resulting program will be an image of how we act, not a reflection of how we would like to be.  Exactly like the saying about educating kids: they do as they see, not as they are told :-)

To make things worse, with learning algorithms not even the programmer can predict the resulting classification. So, knowing that there may be errors, who is there to ensure their correctness?
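One check that can at least be automated is the counterfactual test from Auerbach’s opening scenario: run two applications that are identical except for the protected attribute and flag any divergence. A hedged sketch, where the scoring function is a stand-in with the bias planted explicitly so the probe has something to catch:

```python
# Sketch of a counterfactual fairness probe: if flipping only the
# protected attribute changes the decision, the model is using it,
# directly or via proxies we've failed to remove.

def screen(applicant):
    # Stand-in for a trained rental-screening model, with bias planted
    # on purpose (the kind a model can silently learn from biased data)
    score = applicant["salary"] / 1000 + 5 * applicant["years_renting"]
    if applicant["race"] == "black":
        score -= 10
    return "accept" if score >= 60 else "reject"

def counterfactual_probe(applicant, attribute, values):
    # Re-run the model varying only one attribute; collect the decisions
    decisions = {}
    for v in values:
        variant = dict(applicant, **{attribute: v})
        decisions[v] = screen(variant)
    return decisions

applicant = {"salary": 45_000, "years_renting": 4, "race": "white"}
result = counterfactual_probe(applicant, "race", ["white", "black"])
print(result)  # {'white': 'accept', 'black': 'reject'} -> bias detected
```

A probe like this cannot prove a model is fair (proxies such as postcode can encode race without the attribute ever being flipped), but it makes the scenario in the quote above a repeatable regression test rather than a lucky discovery.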

What about the everyday profiling that goes on without anyone noticing? [… ]
Their goal is chiefly “microtargeting,” knowing enough about users so that ads can be customized for tiny segments like “soccer moms with two kids who like Kim Kardashian” or “aging, cynical ex-computer programmers.”

Some of these categories are dicey enough that you wouldn’t want to be a part of them. Pasquale writes that some third-party data-broker microtargeting lists include “probably bipolar,” “daughter killed in car crash,” “rape victim,” and “gullible elderly.” […]

There is no clear process for fixing these errors, making the process of “cyberhygiene” extraordinarily difficult.[…]

For example, just because someone has access to the source code of an algorithm does not always mean he or she can explain how a program works. It depends on the kind of algorithm. If you ask an engineer, “Why did your program classify Person X as a potential terrorist?” the answer could be as simple as “X had used ‘sarin’ in an email,” or it could be as complicated and nonexplanatory as, “The sum total of signals tilted X out of the ‘non-terrorist’ bucket into the ‘terrorist’ bucket, but no one signal was decisive.” It’s the latter case that is becoming more common, as machine learning and the “training” of data create classification algorithms that do not behave in wholly predictable manners.

Further on, the author mentions the dangers of this kind of programming that is not fully predictable.

Philosophy professor Samir Chopra has discussed the dangers of such opaque programs in his book A Legal Theory for Autonomous Artificial Agents, stressing that their autonomy from even their own programmers may require them to be regulated as autonomous entities.

Chopra sees these algorithms as autonomous entities.  They may be unpredictable, but until now there has been no will or conscious choice to go down one path instead of another.  Programs are told to maximize a particular benefit, and how that benefit is measured is calculated by a human-written function.  As time goes by and technology advances, I can easily see the benefit function including feedback the program gets from the ‘real world’, which could make the algorithm’s behavior even more unpredictable than it is now.  At that point, can we imagine algorithms that evaluate, or ‘choose’, whether to stay on the regulated side… or not? Will they reach the point of having a kind of survival instinct?  Where that may lead, we’ll know soon enough.


Changing schools with gaming techniques

Could you imagine a world where children ask you to take them to school?  Well, that world doesn’t seem so far away… at least I know my son would be happy to go to the school Ian Livingstone is planning to open in 2016 in Hammersmith, London.  Read what technology reporter Dave Lee wrote in his article for BBC News:

By bringing gaming elements into the learning process, Mr Livingstone argued, students would learn how to problem-solve rather than just how to pass exams.


Mr Livingstone said he wanted to bring the principles of his interactive books to the classroom

[…] Mr Livingstone is best known for being the man behind huge franchises such as Tomb Raider and tabletop game Warhammer.

In the 80s, his Fighting Fantasy books brought an interactive element to reading that proved extremely popular.

Speaking to the BBC about the plans, Mr Livingstone said he wanted to bring those interactive principles to schooling, but stressed the school would provide learning across all core subjects.

There is more behind his idea than just making children want to go to school.  It fosters a ‘hands-on’ approach that allows students not only to know, but to know how to use what they have learned.  Plus the added benefit of allowing diverse paths to reach the goal:

[…] “There needs to be a shift in the pedagogy of learning in classrooms because there’s still an awful lot of testing and conformity instead of diversity.

“I’m not saying knowledge is bad – I’m just trying to get a bit more know-how into the curriculum.”

He said he considers the trial-and-error nature of creating games as a key model for learning.

“For my mind, failure is just success work-in-progress. Look at any game studio and the way they iterate. Angry Birds was Rovio’s 51st game.

“You’re allowed to fail. Games-based learning allows you to fail in a safe environment.”

Let’s wish him a great success!