How our economy is shifting towards network-centric players

Managing our hub economy, HBR

I loved this article from the Harvard Business Review, Managing Our Hub Economy, by Marco Iansiti and Karim R. Lakhani. The authors explain very clearly what we have already been experiencing over the last decade at the macro level of the economy.

The global economy is coalescing around a few digital superpowers. We see unmistakable evidence that a winner-take-all world is emerging in which a small number of “hub firms”—including Alibaba, Alphabet/Google, Amazon, Apple, Baidu, Facebook, Microsoft, and Tencent—occupy central positions. While creating real value for users, these companies are also capturing a disproportionate and expanding share of the value, and that’s shaping our collective economic future. The very same technologies that promised to democratize business are now threatening to make it more monopolistic.

Beyond dominating individual markets, hub firms create and control essential connections in the networks that pervade our economy. Google’s Android and related technologies form “competitive bottlenecks”; that is, they own access to billions of mobile consumers that other product and service providers want to reach. Google can not only exact a toll on transactions but also influence the flow of information and the data collected.

These big ‘Hub’ companies, as the authors call them, are companies that you cannot ignore when you want to do business in many markets today. The interesting point of this article is that those same companies have a great competitive advantage over traditional companies in many other markets. And each time they come to dominate a different market, their competitive advantage grows, making it even easier to capture the next market they wish to enter.

This is a paradox: one of the great advantages we all saw in being connected through the Internet and being heard by (almost) everybody was the democratisation of power, the opening of opportunities for everybody… and what is really happening is that the same companies that offer the interconnections are growing so much that they cannot be avoided, monopolising the communication channels.

Hub firms don’t compete in a traditional fashion—vying with existing products or services, perhaps with improved features or lower cost. Rather, they take the network-based assets that have already reached scale in one setting and then use them to enter another industry and “re-architect” its competitive structure—transforming it from product-driven to network-driven. They plug adjacent industries into the same competitive bottlenecks they already control.

For example […] Google’s automotive strategy does not simply entail creating an improved car; it leverages technologies and data advantages (many already at scale from billions of mobile consumers and millions of advertisers) to change the structure of the auto industry itself.[…]

If current trends continue, the hub economy will spread across more industries, further concentrating data, value, and power in the hands of a small number of firms employing a tiny fraction of the workforce.[…]

To remain competitive, companies will need to use their assets and capabilities differently, transform their core businesses, develop new revenue opportunities, and identify areas that can be defended from encroaching hub firms and others rushing in from previously disconnected economic sectors. Some companies have started on this path—Comcast, with its new Xfinity platform, is a notable example—but the majority, especially those in traditional sectors, still need to master the implications of network competition.

In this article, the authors encourage the ‘hub’ companies to realize the impact they have on society, and the resentment that could arise if their power is not used wisely.

Most importantly, the very same hub firms that are transforming our economy must be part of the solution—and their leaders must step up. As Mark Zuckerberg articulated in his Harvard commencement address in May 2017, “we have a level of wealth inequality that hurts everyone.” Business as usual is not a good option. Witness the public concern about the roles that Facebook and Twitter played in the recent U.S. presidential election, Google’s challenges with global regulatory bodies, criticism of Uber’s culture and operating policies, and complaints that Airbnb’s rental practices are racially discriminatory and harmful to municipal housing stocks, rents, and pricing.

Thoughtful hub strategies will create effective ways to share economic value, manage collective risks, and sustain the networks and communities we all ultimately depend on. If carmakers, major retailers, or media companies continue to go out of business, massive economic and social dislocation will ensue. And with governments and public opinion increasingly attuned to this problem, hub strategies that foster a more stable economy and united society will drive differentiation among the hub firms themselves.[…]

A real opportunity exists for hub firms to truly lead our economy. This will require hubs to fully consider the long-term societal impact of their decisions and to prioritize their ethical responsibilities to the large economic ecosystems that increasingly revolve around them. At the same time, the rest of us—whether in established enterprises or start-ups, in institutions or communities—will need to serve as checks and balances, helping to shape the hub economy by providing critical, informed input and, as needed, pushback.

They explain that, with growing connectivity, we share information at near-zero marginal cost, and thus networks create value:

Metcalfe’s law states that a network’s value increases with the number of nodes (connection points) or users—the dynamic we think of as network effects. This means that digital technology is enabling significant growth in value across our economy, particularly as open-network connections allow for the recombination of business offerings[…]
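
To make the arithmetic concrete, here is a tiny sketch of my own (not from the article): counting the pairwise connections among n nodes shows why value, under Metcalfe's assumption, grows roughly with the square of the network's size.

```python
# Metcalfe's law in miniature: a network's value is assumed to grow with the
# number of possible pairwise connections, i.e. roughly with n**2.

def potential_links(n: int) -> int:
    """Distinct pairwise connections among n nodes: n choose 2."""
    return n * (n - 1) // 2

for n in [10, 100, 1_000, 10_000]:
    print(f"{n:>6} nodes -> {potential_links(n):>12,} potential links")
# Doubling the number of users roughly quadruples the possible connections,
# which is why scale compounds so quickly for network businesses.
```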

But that value is not evenly distributed among players to begin with; moreover, the bigger the network, the stronger the attraction it exerts, which exacerbates the differences:

But while value is being created for everyone, value capture is getting more skewed and concentrated. This is because in networks, traffic begets more traffic, and as certain nodes become more heavily used, they attract additional attachments, which further increases their importance. This brings us to the third principle, a lesser-known dynamic originally posited by the physicist Albert-László Barabási: the notion that digital-network formation naturally leads to the emergence of positive feedback loops that create increasingly important, highly connected hubs. As digital networks carry more and more economic transactions, the economic power of network hubs, which connect consumers, firms, and even industries to one another, expands. Once a hub is highly connected (and enjoying increasing returns to scale) in one sector of the economy (such as mobile telecommunications), it will enjoy a crucial advantage as it begins to connect in a new sector (automobiles, for example). This can, in turn, drive more and more markets to tip, and the many players competing in traditionally separate industries get winnowed down to just a few hub firms that capture a growing share of the overall economic value created—a kind of digital domino effect.
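
Barabási's dynamic is easy to see in a toy simulation. The sketch below is my own illustration (not the authors' model): each new node attaches preferentially to already well-connected nodes, and a few hubs end up holding most of the links.

```python
import random

def preferential_attachment(n_nodes: int, seed: int = 42) -> list[int]:
    """Grow a network where new nodes attach to already popular ones."""
    random.seed(seed)
    degrees = [1, 1]    # two founding nodes joined by one link
    targets = [0, 1]    # each node appears here once per link endpoint
    for new_node in range(2, n_nodes):
        partner = random.choice(targets)   # picked proportionally to degree
        degrees.append(1)
        degrees[partner] += 1
        targets.extend([new_node, partner])
    return degrees

degrees = preferential_attachment(10_000)
print("max degree:   ", max(degrees))
print("median degree:", sorted(degrees)[len(degrees) // 2])
# A handful of hubs accumulate most of the links while the typical node
# stays sparsely connected: the winner-take-all pattern of hub firms.
```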

They then give some well-known examples from our recent past:

Just a few years ago, cell phone manufacturers competed head-to-head for industry leadership in a traditional product market without appreciable network effects. [..] But with the introduction of iOS and Android, the industry began to tip away from its hardware centricity to network structures centered on these multisided platforms. The platforms connected smartphones to a large number of apps and services. Each new app makes the platform it sits on more valuable, creating a powerful network effect that in turn creates a more daunting barrier to entry for new players. Today Motorola, Nokia, BlackBerry, and Palm are out of the mobile phone business, and Google and Apple are extracting the lion’s share of the sector’s value. The value captured by the large majority of complementors—the app developers and third-party manufacturers—is generally modest at best.

The domino effect is now spreading to other sectors and picking up speed. Music has already tipped to Apple, Google, and Spotify. […] On-premise computer and software offerings are losing ground to the cloud services provided by Amazon, Microsoft, Google, and Alibaba. In financial services, the big players are Ant, Paytm, Ingenico, and the independent start-up Wealthfront; in home entertainment, Amazon, Apple, Google, and Netflix dominate.

Where are powerful hub firms likely to emerge next? Health care, industrial products, and agriculture are three contenders. But let’s examine how the digital domino effect could play out in another prime candidate, the automotive sector […].

The authors then describe their analysis of the transformation that is going on in the automotive sector:

As with many other products and services, cars are now connected to digital networks, essentially becoming rolling information and transaction nodes. This connectivity is reshaping the structure of the automotive industry. When cars were merely products, car sales were the main prize. But a new source of value is emerging: the connection to consumers in transit. […] If consumers embrace self-driving vehicles, that one hour of consumer access could be worth hundreds of billions of dollars in the U.S. alone.

Which companies will capitalize on the vast commercial potential of a new hour of free time for the world’s car commuters? Hub firms like Alphabet and Apple are first in line. They already have bottleneck assets like maps and advertising networks at scale, and both are ready to create super-relevant ads pinpointed to the car’s passengers and location. […]

The transformation will also upend a range of connected sectors—including insurance, automotive repairs and maintenance, road construction, law enforcement, and infrastructure—as the digital dominos continue to fall. […]

In conclusion:

To reach the scale required to be competitive, automotive companies that were once fierce rivals may need to join together. […]

Of course, successful collaboration depends on a common, strongly felt commitment. So as traditional enterprises position themselves for a fight, they must understand how the competitive dynamics in their industries have shifted.

I think this analysis is highly accurate, and we can expect similar developments in other industries. They give good advice to bear in mind when defining the best strategy for the long term.

Embrace difficulties to stay mentally fit and happy!

I just came across this old article by Ian Leslie in The Economist magazine. It is about one thought: embrace difficulties when they arise; they force us to be more creative and bring more satisfaction when we overcome them.

There are two ideas intertwined here. The first is that when things come too easily, we don't savor them enough. In French I would say « Il faut de la pluie pour faire le beau temps » ("it takes rain to make good weather").

This article brought up a memory of my childhood: we had the means to eat good meat every day. Yes, you can argue that having meat every day is not healthy, but having been brought up in Argentina, well, meat (of any kind) was mandatory on the menu! The thing is that I remember a period when we ate beef tenderloin, a very tender cut of beef. Obviously we appreciated that cut, and for a long period every dish at home containing beef was made with it. In the oven, as a steak, or in a wok, it was always tenderloin.

Believe me, you can get tired of it! After a while, whenever I went to friends' for dinner and they served another cut, I really savored it, even if it was not as tender.

What about having no money limitations? Yes, I'm sure I would go on a shopping spree for a while… until I ended up with more than I need, more than I could wear in a season! And what after that? Shopping would not taste the same.

It's the same on other levels. At work, if there is no challenge, we lose interest and emotion.

But not only that; here is the second idea: challenges force us to think, guide our imagination, and help us come up with innovative solutions. And after the exercise, we end up with a sense of satisfaction at having solved the problem that we would not have experienced without the problem in the first place. This satisfaction from having stretched our brain muscle is the equivalent of the endorphins after physical exercise!

Our brains respond better to difficulty than we imagine. In schools, teachers and pupils alike often assume that if a concept has been easy to learn, then the lesson has been successful. But numerous studies have now found that when classroom material is made harder to absorb, pupils retain more of it over the long term, and understand it on a deeper level. Robert Bjork, of the University of California, coined the phrase “desirable difficulties” to describe the counter-intuitive notion that learning should be made harder by, for instance, spacing sessions further apart so that students have to make more effort to recall what they learnt last time. Psychologists at Princeton found that students remembered reading material better when it was printed in an ugly font.

So remember, next time you encounter a pebble in your way: embrace the opportunity for some brain gymnastics and enjoy life!

Using The Past To Discover What The Customer Will Want Next

I loved the article What's your best innovation bet? by Melissa Schilling in this summer's issue of the Harvard Business Review; guessing the future has always been very hard:

Image from Magda Kochanowicz

Melissa Schilling says that “By mapping a technology’s past, you can predict what future customers will want.”  For that she explains her method:

  • 1 – Identify the key dimensions

What she means here is to examine the different aspects along which the technology has evolved (processing speed or precision, to mention some typical dimensions) and to relate them to the needs of users: how much has the technology satisfied each need? She gives a clear example with the recording industry, where for many years the basic dimension was audio fidelity:

By the mid-1990s, both industries were eager to introduce a next-generation audio format. In 1996 Toshiba, Hitachi, Time Warner, and others formed a consortium to back a new technology, called DVD-Audio, that offered superior fidelity and surround sound. They hoped to do an end run around Sony and Philips, which owned the compact disc standard and extracted a licensing fee for every CD and player sold.

Sony and Philips, however, were not going to go down without a fight. They counterattacked with a new format they had jointly developed, Super Audio CD. Those in the music industry gave a collective groan; manufacturers, distributors, and consumers all stood to lose big if they bet on the wrong format. Nonetheless, Sony launched the first Super Audio players in late 1999; DVD-Audio players hit the market in mid-2000. A costly format war seemed inevitable.

You may be scratching your head at this point, wondering why you’ve never heard about this format war. What happened? MP3 happened. While the consumer electronics giants were pursuing new heights in audio fidelity, an algorithm that slightly depressed fidelity in exchange for reduced audio file size was taking off. Soon after the file-sharing platform Napster launched in 1999, consumers were downloading free music files by the millions, and Napster-like services were sprouting up like weeds.

If you wonder, "who could have predicted the disruptive arrival of MP3? How could the consumer electronics giants have known that a format on a trajectory of ever-increasing fidelity would be overtaken by a technology with less fidelity?", well, that is exactly the method she presents in this article, whose first step is identifying the different dimensions at play.

For example, computers became faster and smaller in tandem; speed was one dimension, size another. Developments in any dimension come with specific costs and benefits and have measurable and changing utility for customers. Identifying the key dimensions of a technology’s progression is the first step in predicting its future.

To determine these dimensions, trace the technology’s evolution to date, starting as far back as possible. Consider what need the technology originally fulfilled, and then for each major change in its form and function, think about what fundamental elements were affected.

Tracing its [the recording industry] history reveals six dimensions that have been central to its development: desynchronization, cost, fidelity, music selection, portability, and customizability. Before the invention of the phonograph, people could hear music or a speech only when and where it was performed. When Thomas Edison and Alexander Graham Bell began working on their phonographs in the late 1800s, their primary objective was to desynchronize the time and place of a performance so that it could be heard anytime, anywhere. Edison’s device—a rotating cylinder covered in foil—was a remarkable achievement, but it was cumbersome, and making copies was difficult. Bell’s wax-covered cardboard cylinders, followed by Emile Berliner’s flat, disc-shaped records and, later, the development of magnetic tape, made it significantly easier to mass-produce recordings, lowering their cost while increasing the fidelity and selection of music available.

For decades, however, players were bulky and not particularly portable. It was not until the 1960s that eight-track tape cartridges dramatically increased the portability of recorded music, as players became common in automobiles. Cassette tapes rose to dominance in the 1970s, further enhancing portability but also offering, for the first time, customizability—the ability to create personalized playlists. Then, in 1982, Sony and Philips introduced the compact disc standard, which offered greater fidelity than cassette tapes and rapidly became the dominant format.

[…] I usually ask teams to agree on three to six key dimensions for their technology.

The recurring dimensions across industries are ease of use, durability, and cost. To foresee the future, it is also worth imagining new dimensions worth exploring. A good tip for coming up with those new aspects is to think big, with no constraints: what could the customer want in an ideal world?

Folklore has it that Henry Ford once said, “If I had asked people what they wanted, they would have said faster horses.” If any car maker at the time had really probed people about exactly what their dream conveyance would provide, they probably would have said “instantaneous transportation.” Both consumer responses highlight that speed is a high-level dimension valued in transportation, but the latter helps us think more broadly about how it can be achieved. There are only limited ways to make horses go faster—but there are many ways to speed up transportation

  • 2 – Locate your position

For each dimension, examine the value consumers are receiving from the current technology.

This will help reveal where the greatest opportunity for improvement lies.

[..] For example, the history of audio formats suggests that the selection of music available has a concave parabolic utility curve: Utility increases as selection expands, but at a decreasing rate, and not indefinitely. When there’s little music to choose from, even a small increase in selection significantly enhances utility. Consider that when the first phonographs appeared, there were few recordings to play on them. As more became available, customers eagerly bought them, and the appeal of owning a player grew. Increasing selection even a little had a powerful impact on utility. Over the ensuing decades, selection grew exponentially, and the utility curve ultimately began to flatten; people still valued new releases, but each new recording added less additional value. Today digital music services like iTunes, Amazon Prime Music, and Spotify offer tens of millions of songs. With this virtually unlimited selection, most customers’ appetites are sated—and we are probably approaching the top of the curve.

Many dimensions have S-shaped curves: Below some threshold of performance there is no utility, but utility increases quickly above that threshold and then maxes out somewhere beyond that.
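
To make the two curve shapes concrete, here is a small sketch of my own (the functions and parameters are illustrative assumptions, not from the article): a logarithm for diminishing returns, and a logistic for the S-curve with its threshold.

```python
import math

def concave_utility(x: float) -> float:
    """Diminishing returns: each increment adds less utility (e.g. selection)."""
    return math.log1p(x)

def s_curve_utility(x: float, threshold: float = 5.0, steepness: float = 1.5) -> float:
    """Logistic S-curve: near zero below the threshold, saturating above it."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - threshold)))

for x in [0, 2, 5, 8, 20]:
    print(f"x={x:>2}  concave={concave_utility(x):.2f}  s_curve={s_curve_utility(x):.2f}")
```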

  • 3 – Determine your focus

Once you know the dimensions along which your firm’s technology has (or can be) improved and where you are on the utility curves for those dimensions, it should be straightforward to identify where the most room for improvement exists. But it’s not enough to know that performance on a given dimension can be enhanced; you need to decide whether it should be. So first assess which of the dimensions you’ve identified are most important to customers. Then assess the cost and difficulty of addressing each dimension.

For example, of the four dimensions that have been central to automobile development—speed, cost, comfort, and safety—which do customers value most, and which are easiest or most cost-effective to address?

[..] Tata Motors’ experience with the Nano is instructive. The Nano was designed as an affordable car for drivers in India, so it needed to be cheap enough to compete with two-wheeled scooters. The manufacturer cut costs in several ways: The Nano had only a two-cylinder engine and few amenities—no radio, electric windows or locks, antilock brakes, power steering, or airbags. Its seats had a simple three-position recline, the windshield had a single wiper, and there was only one rearview mirror. In 2014, after the Nano received zero stars for safety in crash tests, analysts pointed out that adding airbags and making simple adjustments to the frame could significantly improve the car’s safety for less than $100 per vehicle. Tata took this under advisement—and placed its bets on comfort. All 2017 models include air-conditioning and power steering but not airbags.

Once you have identified the dimensions, the author suggests scoring them to help you prioritize where to put the innovation effort: how much users care about the dimension, how much room for improvement the technology has, and how easy (in cost and difficulty) it is to improve on that dimension. See this example for blood-sugar monitoring devices:

Dimension     Importance to     Room for            Ease of             Total
              customers (1–5)   improvement (1–5)   improvement (1–5)   score
Reliability         5                  1                   1              7
Comfort             4                  4                   3             11
Cost                4                  2                   2              8
Ease of use         3                  2                   3              8

This matrix is very helpful for making explicit the need to change a company's traditional strategy:

It can also help overcome the bias and inertia that tend to keep an organization’s attention locked on technology dimensions that are less important to consumers than they once were.

Depending on your company's situation (lack of cash, strong market position, …) you can weight parts of the scoring to get your 'personalised' score, as in the sketch below. You can also use this method to analyse your competitors' positioning and expected future products. Knowing their current market strength and their potential future directions will let you see the best 'bet' for your company in an ever-evolving industry.
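
As an illustration of the scoring and weighting, here is a small sketch of my own using the blood-sugar-monitor numbers quoted above; the weights are hypothetical.

```python
# The prioritization matrix as code: sum the three criteria per dimension,
# optionally weighting them to reflect the company's situation.

scores = {
    # dimension: (importance to customers, room for improvement, ease of improvement)
    "reliability": (5, 1, 1),
    "comfort":     (4, 4, 3),
    "cost":        (4, 2, 2),
    "ease of use": (3, 2, 3),
}

def total(vals, weights=(1, 1, 1)):
    return sum(v * w for v, w in zip(vals, weights))

# Unweighted totals reproduce the table above (comfort leads with 11).
for dim, vals in sorted(scores.items(), key=lambda kv: -total(kv[1])):
    print(f"{dim:<12} {total(vals):>3}")

# A hypothetical cash-strapped company might double the weight on ease of
# improvement, shifting priorities toward cheap wins.
for dim, vals in scores.items():
    print(f"{dim:<12} {total(vals, weights=(1, 1, 2)):>3}")
```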

The technology assessment exercise can help companies anticipate competitors’ moves. Because competitors may differ in their capabilities (making particular technology dimensions harder or easier for them to address), or because they may focus on different segments (influencing which dimensions seem most important or have the most room for improvement), they are likely to come up with different rankings for a given set of dimensions.

The great insight of the method presented in this article is not about generating the innovation idea itself; it works at a strategic level, showing where it is best to put the effort for your company, considering its current circumstances in the present market (the evolution of the industry and the existing competition).

Perhaps more valuable is the big-picture perspective it can give managers—shedding new light on market dynamics and the larger-scale or longer-term opportunities before them. Only then will they be able to lead innovation in their industries rather than scramble to respond to it.

Big Data and Ethics

The meetup BIG Data and Ethics was held a few weeks ago in the new premises of DigitYser, in downtown Brussels.

It was a great meetup, with interesting speakers and an interested public 😉 It's always a pleasure when the public can contribute and presentations spark great discussions, and it matters even more at a gathering on ethics, as people still have to position themselves on the different aspects of this topic.

I was particularly surprised when Michael Ekstrand from Boise State University mentioned a use of recommender systems that I hadn't thought of: as a tool to tackle the intention-behaviour gap, 'I don't do what I want to do' (for example, not keeping to a diet). Recommenders can be used to help you change your behaviour, giving you nudges as incentives.

Jochanan Eynikel also mentioned the use of technology as a morality enforcer.

Still, there are possible drawbacks:

Another area that was discussed was the ethical issue that personalisation has a direct negative impact on insurance, as it goes against risk mitigation (mutualising risk among customers). There are sensitive domains where a 'human' approach should be taken.
How do we ensure ethical and moral concerns are taken into account? One approach is participatory design, that is, a framework for bringing users' voices into the design phase. MIT is strongly pushing participatory design to tackle many basic dilemmas.

Solving and clarifying our human position on these kinds of dilemmas is more than relevant when we are talking about autonomous technology, that is, when technology is teaching itself, like self-driving cars learning from users.
Can we imagine not having human supervision in all domains? How do we introduce ethics so that the system itself can choose the 'good' decision and discard the others?

Pierre-Nicolas Schwab presented the General Data Protection Regulation to us as "the only thing that the EC can do to force companies to take data privacy into account: fine them if they don't":

At the end of the meeting, this question was raised: "Do data scientists and programmers need a Hippocratic oath?" Like the ACM, which has a code of conduct: something like 'don't harm with your code'.
What’s your opinion on this?

Elections warn about ethical issues in algorithms

I recently tweeted about this article on how Big Data was used in the last American presidential campaign.

Concordia Summit, New York 2016

“At Cambridge,” he said, “we were able to form a model to predict the personality of every single adult in the United States of America.” The hall is captivated. According to Nix, the success of Cambridge Analytica’s marketing is based on a combination of three elements: behavioral science using the OCEAN Model, Big Data analysis, and ad targeting. Ad targeting is personalized advertising, aligned as accurately as possible to the personality of an individual consumer.

Nix candidly explains how his company does this. First, Cambridge Analytica buys personal data from a range of different sources, like land registries, automotive data, shopping data, bonus cards, club memberships, what magazines you read, what churches you attend. Nix displays the logos of globally active data brokers like Acxiom and Experian—in the US, almost all personal data is for sale. […] Now Cambridge Analytica aggregates this data with the electoral rolls of the Republican party and online data and calculates a Big Five personality profile. Digital footprints suddenly become real people with fears, needs, interests, and residential addresses.
[…]

Nix shows how psychographically categorized voters can be differently addressed, based on the example of gun rights, the 2nd Amendment: “For a highly neurotic and conscientious audience the threat of a burglary—and the insurance policy of a gun.” An image on the left shows the hand of an intruder smashing a window. The right side shows a man and a child standing in a field at sunset, both holding guns, clearly shooting ducks: “Conversely, for a closed and agreeable audience. People who care about tradition, and habits, and family.”

Now I came across this other article by Peter Diamandis, describing what we can expect in four years' time for the next election campaigns.

5 Big Tech Trends That Will Make This Election Look Tame

If you think this election is insane, wait until 2020.

I want you to imagine how, in four years’ time, technologies like AI, machine learning, sensors and networks will accelerate.

Political campaigns are about to get hyper-personalized thanks to advances in a few exponential technologies.

Imagine a candidate who now knows everything about you, who can reach you wherever you happen to be looking, and who can use info scraped from social media (and intuited by machine learning algorithms) to speak directly to you and your interests.

[…] For example, imagine I’m walking down the street to my local coffee shop and a photorealistic avatar of the presidential candidate on the bus stop advertisement I pass turns to me and says:

“Hi Peter, I’m running for president. I know you have two five-year-old boys going to kindergarten at XYZ school. Do you know that my policy means that we’ll be cutting tuition in half for you? That means you’ll immediately save $10,000 if you vote for me…”

If you pause and listen, the candidate’s avatar may continue: […] “I’d really appreciate your vote. Every vote and every dollar counts. Do you mind flicking me a $1 sticker to show your support?”

I know, this last article is from SingularityHub, and even though they tend to be alarmist, knowing how fast technology advances, the predictions they make are not too exaggerated…

In any case, that reminds me how important it is to ACT on the ethical issues of algorithms (notice the capital letters, to stress that the point is to take action). There are many issues that need to be identified, discussed, publicised, and regulated, and on some of them we can already act at the company level.

I spoke in May last year at the Data Innovation Summit about the biases that can be (and usually are) replicated by new data-driven algorithms. Since then I have been working on a training program to help identify and correct those biases when designing and using algorithms, and the above-mentioned articles remind me that this cannot be delayed; it's needed right now.

So if you are interested in making your people and organization aware of biases (human ones and digital ones), and in training them to fix these issues, contact me!


We are creating our future. Let's not close our eyes: we can take control and assume our responsibility, setting the guardrails that will guide the path to our future society.

 

AI and Machine Learning in business: use it everywhere!

How One Clothing Company Blends AI and Human Expertise, HBR nov-16

Last week Bev from the PWI group on LinkedIn pointed me to a great HBR article: "How One Clothing Company Blends AI and Human Expertise", by H. James Wilson, Paul Daugherty and Prashant Shukla.

It describes how the company Stitch Fix works, using machine learning insights to assist its designers, and as you will see, it uses machine learning at many levels throughout the company.

The company offers a subscription clothing and styling service that delivers apparel to its customers’ doors. But users of the service don’t actually shop for clothes; in fact, Stitch Fix doesn’t even have an online store. Instead, customers fill out style surveys, provide measurements, offer up Pinterest boards, and send in personal notes. Machine learning algorithms digest all of this eclectic and unstructured information. An interface communicates the algorithms’ results along with more-nuanced data, such as the personal notes, to the company’s fashion stylists, who then select five items from a variety of brands to send to the customer. Customers keep what they like and return anything that doesn’t suit them.

The key success factor for the company is being good at recommending clothes that will not only fit the customer and be liked enough to be kept, but, better than just 'liked', be liked enough that customers stay happy with their subscription.

Stitch Fix, which lives and dies by the quality of its suggestions, has no choice but to do better [than Amazon and Netflix].

Unlike Amazon and Netflix, which recommend products directly to customers, Stitch Fix uses machine learning methods to provide digested information to its human stylists and designers.

[…] companies can use machines to supercharge the productivity and effectiveness of workers in unprecedented ways […]

For example, algorithms analyse the measurements to find other clients with the same body shape, so the company can use the knowledge of what fitted those other clients: the clothes they kept. Algorithms are also used to extract information about clients' taste in styles, from brand preferences and their comments on collections. Human stylists, using the results of that data analysis and reading the client's notes, are better equipped to choose clothes that will suit the customer.
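
To give a rough idea of how such similarity matching could work, here is a nearest-neighbour sketch of my own (not Stitch Fix's actual code; all measurements and item names are invented): find a client's closest neighbours in measurement space and surface the items those neighbours kept.

```python
import numpy as np

# Hypothetical measurements: height (cm), waist (cm), hip (cm) per client.
measurements = np.array([
    [170.0, 74.0, 98.0],
    [168.0, 72.0, 96.0],
    [182.0, 88.0, 104.0],
    [169.0, 73.0, 97.0],
])
kept_items = [{"shirt_A"}, {"dress_B", "shirt_A"}, {"jacket_C"}, {"dress_B"}]

def suggest(client_idx: int, k: int = 2) -> set[str]:
    """Items kept by the k clients closest in normalized measurement space."""
    scaled = (measurements - measurements.mean(0)) / measurements.std(0)
    dists = np.linalg.norm(scaled - scaled[client_idx], axis=1)
    neighbours = np.argsort(dists)[1 : k + 1]   # index 0 is the client itself
    return set().union(*(kept_items[i] for i in neighbours))

print(suggest(0))   # items kept by the two most similar clients
```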

Next, it’s time to pick the actual [items of clothing] to be shipped. This is up to the stylist, who takes into account a client’s notes or the occasion for which the client is shopping. In addition, the stylist can include a personal note with the shipment, fostering a relationship, which Stitch Fix hopes will encourage even more useful feedback.

This human-in-the-loop recommendation system uses multiple information streams to help it improve.

Note how stylists maintain a human dialogue with their clients through the included note. This personalised contact is usually much appreciated by customers, and it has a positive effect for the company because it opens the door to feedback that helps better tailor the next delivery.

The company is testing natural language processing for reading and categorizing notes from clients — whether it received positive or negative feedback, for instance, or whether a client wants a new outfit for a baby shower or for an important business meeting. Stylists help to identify and summarize textual information from clients and catch mistakes in categorization.

The machine learning systems are 'learning through experience' (that is, adapting with the feedback) as usual, but in a humanly 'supervised' way. This supervision allows the company to try new algorithms without the risk of losing clients if the results are not as good as expected.
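
To illustrate the kind of humanly supervised loop described above, here is a toy sketch of my own (not the company's system; the notes and labels are invented): a simple text classifier over client notes, where a stylist's correction is fed back as new training data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "loved the dress, kept everything",
    "the jacket did not fit at all",
    "need an outfit for a baby shower",
    "looking for something for a business meeting",
]
labels = ["positive", "negative", "occasion", "occasion"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(notes, labels)

new_note = "I want a dress for my sister's baby shower"
print(model.predict([new_note])[0])

# Human in the loop: if a stylist flags the prediction as wrong, the
# corrected label joins the training set and the model is refit.
notes.append(new_note)
labels.append("occasion")   # the stylist's judgment
model.fit(notes, labels)
```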

Stitch Fix employs more than 2,800 stylists, dispersed across the country, all of them working from home and setting their own hours. In this distributed workforce, stylists are measured by a variety of metrics, including the amount of money a client spends, client satisfaction, and the number of items a client keeps per delivery. But one of the most important factors is the rate at which a stylist puts together a collection of clothes for a client.

Speed is an important factor in satisfying customers' demands, and machine learning gives stylists the needed insight much more quickly than if they had to go through all the raw data!

This is where the work interface comes into effect. To enable fast decision making, the screen on which a stylist views recommendations shows the relevant information the company keeps about a client, including apparel and feedback history, measurements, and tolerance for fashion risks — it’s all readily accessible

The interface itself, which shows the information to the stylist, also adapts through feedback and is tested for better performance. And you can go one step further and check for bias in the stylists:

Stitch Fix’s system can vary the information a stylist sees to test for bias. For instance, how might a picture of a client affect a stylist’s choices? Or knowledge about a client’s age? Does it help or hinder to know where a client lives?

By measuring the impact of modified information in the stylist interface, the company is developing a systematic way to measure improvements in human judgment

And there are many other machine learning algorithms throughout the company:

[…]the company has hundreds of algorithms, like a styling algorithm that matches products to clients; an algorithm that matches stylists with clients; an algorithm that calculates how happy a customer is with the service; and one that figures out how much and what kind of inventory the company should buy.

The company is also using the information from kept and returned items to find fashion trends:

From this seemingly simple data, the team has been able to uncover which trends change with the seasons and which fashions are going out of style.

The data they are collecting is also helping advance research on computer vision systems:

[…] system that can interpret style and extract a kind of style measurement from images of clothes. The system itself would undergo unsupervised learning, taking in a huge number of images and then extracting patterns or features and deciding what kinds of styles are similar to each other. This “auto-styler” could be used to automatically sort inventory and improve selections for customers.

In addition to developing an algorithmic trend-spotter and an auto-styler, Stitch Fix is developing brand new styles — fashions born entirely from data. The company calls them frankenstyles. These new styles are created from a “genetic algorithm,” modeled after the process of natural selection in biological evolution. The company’s genetic algorithm starts with existing styles that are randomly modified over the course of many simulated “generations.” Over time, a sleeve style from one garment and a color or pattern from another, for instance, “evolve” into a whole new shirt.
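
The description above maps directly onto a textbook genetic algorithm. Here is a toy sketch of my own (the garment attributes and the fitness stand-in are invented; the real system would score styles against keep/return data):

```python
import random

ATTRS = {
    "sleeve":  ["short", "long", "puff"],
    "color":   ["navy", "red", "floral"],
    "pattern": ["plain", "stripes", "dots"],
}

def fitness(style: dict) -> float:
    # Stand-in for real keep/return data: how well the style matches a
    # hypothetical current trend.
    trend = {"sleeve": "puff", "color": "floral", "pattern": "plain"}
    return sum(style[k] == v for k, v in trend.items())

def crossover(a: dict, b: dict) -> dict:
    """Mix two parent styles attribute by attribute."""
    return {k: random.choice([a[k], b[k]]) for k in ATTRS}

def mutate(style: dict, rate: float = 0.1) -> dict:
    """Occasionally swap an attribute for a random alternative."""
    return {k: (random.choice(ATTRS[k]) if random.random() < rate else v)
            for k, v in style.items()}

population = [{k: random.choice(v) for k, v in ATTRS.items()} for _ in range(20)]
for generation in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                   # selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
    population = parents + children

print(fitness(population[0]), population[0])   # best evolved "frankenstyle"
```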

What does a company using so many machine learning systems look like at the employee level? How is it perceived by the employees? This is what the authors say:

Even with the constant monitoring and algorithms that guide decision making, according to internal surveys, Stitch Fix stylists are mostly satisfied with the work. And this type of work, built around augmented creativity and flexible schedules, will play an important role in the workforce of the future.

Machine learning and AI (artificial intelligence) systems are changing the way companies do business. They provide insight that either could not be grasped before, or could, but not at that speed, nor in a form accessible as a tool to assist each and every employee.

The least that can be said is that this will improve productivity in all sectors. Just as today almost everyone has access to the Internet to check a word, look up a translation or a recipe, check the weather, and countless other uses, the new generation of employees will be assisted by tons of algorithms that analyse data and deduce, induce or summarize information to support them in their work and in their decision-making.

DIS2016 Restore the balance of data

Two weeks ago was the Data Innovation Summit 2016. I was due to speak in the 'ignite' presentation format. For those who don't know this format, it's a nightmare! Joking aside, it means that slides advance automatically at regular intervals (15 seconds in my case). You cannot stop it, you don't control the flow… so to stay synchronized you really have to prepare your speech in advance; you must know exactly how much time it takes to explain each of your points and what examples you'll present (try it: 15 seconds go by very quickly when you're looking for your words :-)).

So here it is, my 5-minute presentation, if you only count the time on stage…

Pre-Crime unit for tracking Terrorists?

Due to the recent events in Belgium, the terrorist bomb attacks in Zaventem and Brussels, I couldn't help but remember the Bloomberg Businessweek article about pre-crime: 'China Tries Its Hand at Pre-Crime'. It refers to the film Minority Report, with Tom Cruise, which takes place in a future society where three mutants foresee all crime before it occurs. Plugged into a great machine, these "precogs" are the basis of a police unit (the Pre-Crime unit) that arrests murderers before they commit their crimes.

The company China Electronics Technology recently won the contract to build the 'United information environment', as they call it, an 'antiterrorism' platform as described by the Chinese government:

The Communist Party has directed [them] to develop software to collate data on jobs, hobbies, consumption habits, and other behavior of ordinary citizens to predict terrorist acts before they occur.

This may seem a little too much to ask; if you think about it, you would need every daily detail to be able to predict terrorist behaviour. But in a country like China, where the state has had control over its citizens for many decades, where there are no privacy limits to respect and there is a good network of informants…

A draft cybersecurity law unveiled in July grants the government almost unbridled access to user data in the name of national security. “If neither legal restrictions nor unfettered political debate about Big Brother surveillance is a factor for a regime, then there are many different sorts of data that could be collated and cross-referenced to help identify possible terrorists or subversives,” says Paul Pillar, a nonresident fellow at the Brookings Institution.

Notice how there is now also a new target: subversives. The article continues:

China was a surveillance state long before Edward Snowden clued Americans in to the extent of domestic spying. Since the Mao era, the government has kept a secret file, called a dang’an, on almost everyone. Dang’an contain school reports, health records, work permits, personality assessments, and other information that might be considered confidential and private in other countries. The contents of the dang’an can determine whether a citizen is eligible for a promotion or can secure a coveted urban residency permit. The government revealed last year that it was also building a nationwide database that would score citizens on their trustworthiness.

Wait a second: who defines what 'trustworthiness' is, and what happens if you are deemed not trustworthy?

New antiterror laws that went into effect on Jan. 1 allow authorities to gain access to bank accounts, telecommunications, and a national network of surveillance cameras called Skynet. Companies including Baidu, China’s leading search engine; Tencent, operator of the popular social messaging app WeChat; and Sina, which controls the Weibo microblogging site, already cooperate with official requests for information, according to a report from the U.S. Congressional Research Service. A Baidu spokesman says the company wasn’t involved in the new antiterror initiative.

So Skynet is here now (remember Terminator Genisys?). Even if, right after a horrendous crime, you may be tempted to be glad that this 'pre-crime' initiative is being built, there are far too many negative aspects still to consider before having such a tool: whose hands it will be in, who defines what counts as a crime, and what about your free will to change your mind, to mention a few. Let's begin thinking about how to tackle them.

The rise of the Self-Tuning Enterprise

Alibaba

As you may know, I am a fan of machine learning, a subfield of Artificial Intelligence (AI) that encompasses computer programs exhibiting some kind of intelligent behavior. The first AI researchers analyzed how we humans perform intelligent tasks in order to create programs that reproduce our behavior. So look at the irony of this HBR article, "The Self-Tuning Enterprise", where the authors Martin Reeves, Ming Zeng and Amin Venjara use the analogy of how machine learning programs work to transpose that behavior to enterprise strategy tuning:

[…] These enterprises [he’s talking about internet companies like Google, Netflix, Amazon, and Alibaba] have become extraordinarily good at automatically retooling their offerings for millions of individual customers, leveraging real-time data on their behavior. Those constant updates are, in fact, driven by algorithms, but the processes and technologies underlying the algorithms aren’t magic: It’s possible to pull them apart, see how they operate, and use that know-how in other settings. And that’s just what some of those same companies have started to do.

In this article we’ll look first at how self-tuning algorithms are able to learn and adjust so effectively in complex, dynamic environments. Then we’ll examine how some organizations are applying self-tuning across their enterprises, using the Chinese e-commerce giant Alibaba as a case example.”

You may have noticed those new programs at work recommending books or other products each time you buy something on the Internet (and in fact, even if you are just looking and don't buy anything ;-)). Those programs are based on machine learning algorithms, and they improve over time with the new information of success (you bought the proposed article) or failure (you didn't).

How do they work?

There is a 'learning' part that finds similarities between customers in order to propose products that other customers similar to you have bought. But it's not so simple: these programs are coupled with other learning modules, like one that does some 'experimentation' so as not to get stuck on always the same kind of products. This module will propose something different from time to time. Even if you like detective novels, after the tenth one you would like to read something else, wouldn't you? So the trick is to find an equilibrium between showing you books you have a great chance of liking and novelties that make you discover new horizons. You need to feel that they know what they are doing when they propose a book (so they fine-tune for similarity), but you may like a change from time to time so as not to get bored. They are also very interested in making you discover another category of literature, say poetry; if you don't like it, you won't accept the next recommendation so easily, so here comes the next 'tuning': how often to do it.
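
This balance between safe recommendations and novelty is the classic exploration/exploitation trade-off. Here is a minimal epsilon-greedy sketch of my own (not from the article) showing the mechanics:

```python
import random

genres = ["detective", "sci-fi", "poetry"]
accepted = {g: 1 for g in genres}   # recommendations the user took
shown = {g: 2 for g in genres}      # recommendations made (priors avoid /0)

def recommend(epsilon: float = 0.1) -> str:
    if random.random() < epsilon:                    # explore: try a novelty
        return random.choice(genres)
    return max(genres, key=lambda g: accepted[g] / shown[g])   # exploit

def feedback(genre: str, took_it: bool) -> None:
    shown[genre] += 1
    accepted[genre] += took_it
    # A 'self-tuning' system would also adjust epsilon itself over time,
    # based on how often exploration succeeds.

g = recommend()
feedback(g, took_it=True)
```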

And that’s where self-tuning comes in. Self-tuning is related to the concepts of agility (rapid adjustment), adaptation (learning through trial and error), and ambidexterity (balancing exploration and exploitation). Self-tuning algorithms incorporate elements of all three—but in a self-directed fashion.

The 'self-tuning' process they are talking about adjusts the tool to new information as it becomes available, without the need for reprogramming. The authors' analogy is to apply, in organizations, the same kind of automatic tuning that machine learning systems do: to 'self-tune' companies without any top-down directive, with agility, adaptation through trial and error, and ambidexterity balancing exploration and exploitation.

To understand how this works, think of the enterprise as a nested set of strategic processes. At the highest level, the vision articulates the direction and ambition of the firm as a whole. As a means to achieving the vision, a company deploys business models and strategies that bring together capabilities and assets to create advantageous positions. And it uses organizational structure, information systems, and culture to facilitate the effective operation of those business models and strategies.

In the vast majority of organizations, the vision and the business model are fixed axes around which the entire enterprise revolves. They are often worked out by company founders and, once proven successful, rarely altered. Consequently, the structure, systems, processes, and culture that support them also remain static for long periods. Experimentation and innovation focus mostly on product or service offerings within the existing model, as the company leans on its established recipe for success in other areas.

The self-tuning enterprise, in contrast, takes an evolutionary approach at all levels. The vision, business model, and supporting components are regularly calibrated to the changing environment by applying the three learning loops. The organization is no longer viewed as a fixed means of transmitting intentions from above but, rather, as a network that shifts and develops in response to external feedback. To see what this means in practice, let’s look at Alibaba.[…]

Keep resetting the vision.

When Alibaba began operations, internet penetration in China was less than 1%. While most expected that figure to grow, it was difficult to predict the nature and shape of that growth. So Alibaba took an experimental approach: At any given time, its vision would be the best working assumption about the future. As the market evolved, the company’s leaders reevaluated the vision, checking their hypotheses against reality and revising them as appropriate.

In the early years, Alibaba’s goal was to be “an e-commerce company serving China’s small exporting companies.” This led to an initial focus on Alibaba.com, which created a platform for international sales. However, when the market changed, so did the vision. As Chinese domestic consumption exploded, Alibaba saw an opportunity to expand its offering to consumers. Accordingly, it launched the online marketplace Taobao in 2003. Soon Alibaba realized that Chinese consumers needed more than just a site for buying and selling goods. They needed greater confidence in internet business—for example, to be sure that online payments were safe. So in 2004, Alibaba created Alipay, an online payment service. […] Ultimately, this led Alibaba to change its vision again, in 2008, to fostering “the development of an e-commerce ecosystem in China.” It started to offer more infrastructure services, such as a cloud computing platform, microfinancing, and a smart logistics platform. More recently, Alibaba recalibrated that vision in response to the rapid convergence between digital and physical channels. Deliberately dropping the “e” from e-commerce, its current vision statement reads simply, “We aim to build the future infrastructure of commerce.”

Experiment with business models.

Alibaba could not have built a portfolio of companies that spanned virtually the entire digital spectrum without making a commitment to business model experimentation from very early on.

[…]At each juncture in its evolution, Alibaba continued to generate new business model options, letting them run as separate units. After testing them, it would scale up the most promising ones and close down or reabsorb those that were less promising.[…]

Again there was heated debate within the company about which direction to take and which model to build. Instead of relying on a top-down decision, Alibaba chose to place multiple bets and let the market pick the winners.[…]

Increasing experimentation at the height of success runs contrary to established managerial wisdom, but for Alibaba it was necessary to avoid rigidity and create options. Recalibrating how and how much to experiment was fundamental to its ability to capitalize on nascent market trends.

Focus on seizing and shaping strategic opportunities, not on executing plans.

In volatile environments, plans can quickly become out-of-date. In Alibaba’s case, rapid advances in technology, shifting consumer expectations in China and beyond, and regulatory uncertainty made it difficult to predict the future. […]

Alibaba does have a regular planning cycle, in which business unit leaders and the executive management team iterate on plans in the fourth quarter of each year. However, it’s understood that this is only a starting point. Whenever a unit leader sees a significant market change or a new opportunity, he or she can initiate a “co-creation” process, in which employees, including senior business leaders and lead implementers, develop new directions for the business directly with customers.

At Alibaba co-creation involves four steps. The first is establishing common ground: identifying signals of change (based on data from the market and insights from customers or staff) and ensuring that the right people are present and set up to work together. This typically happens at a full-day working session. The second step is getting to know the customer. Now participants explore directly with customers their evolving needs or pain points and brainstorm potential solutions. The third step entails developing an action plan based on the outcome of customer discussions. An action plan must identify a leader who can champion the opportunity, the supporting team (or teams) that will put the ideas into motion, and the mechanisms that will enable the work to get done. The final step is gathering regular customer feedback as the plan is implemented, which can, in turn, trigger further iterations.

So now you know how Alibaba does it. How is it in your company? What ideas of theirs would you adopt?

New computer interface using radar technology

Thanks to Otticamedia.com

Have you seen this article? It's about Project Soli from Google's Advanced Technologies and Projects (ATAP) group. They have implemented a new way to communicate with a computer: through radar. The radar captures the slight movements of the hand, as in this picture, where just moving your fingers in the air moves a 'virtual' slider.

Fantastic, can’t wait to try it!