A response to the fear of “AI could spell the end of the human race“

Artificial intelligence programs that predict, suggest and act by extrapolating from our requests are already embedded in everyday tools, and this technology cannot be stopped. I just changed my car, and it's incredible how many of its options are based on AI, providing so many functions to assist me, the driver, that it is almost unthinkable I could do without them.

At the same time, we are starting to hear famous voices such as Elon Musk, Stephen Hawking and Bill Gates warning us that "AI could spell the end of the human race".

Their concerns about the potential issues raised by the rise of AI are real, and they will need to be addressed. This is why I liked this video from Stuart Russell, where he proposes three principles for creating a safer AI.

 

The King Midas Problem

Midas' request was to be able to transform everything he touched into gold. His wish was granted, but then he died, because EVERYTHING he touched was transformed into gold, even his food.

Current AIs face that same dilemma: they require us (the programmers) to be very specific and careful with the objectives we put into them. As Stuart Russell says: "better be quite sure that the purpose put on the robots is what we want".
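To make the Midas problem concrete, here is a toy sketch (my own illustration, not from Russell's talk) of an agent that literally maximizes the objective it was given, ignoring everything we forgot to write down:

```python
def midas_objective(state):
    # Reward only counts gold; food, safety, etc. are simply not in the formula.
    return state["gold"]

def greedy_agent(state, actions):
    # The agent picks whichever action maximizes the stated objective,
    # regardless of side effects we never wrote down.
    return max(actions, key=lambda a: midas_objective(a(state)))

def touch_food(state):
    return {"gold": state["gold"] + 1, "food": 0}          # food becomes gold

def leave_food_alone(state):
    return {"gold": state["gold"], "food": state["food"]}  # no gold gained

state = {"gold": 0, "food": 1}
best = greedy_agent(state, [touch_food, leave_food_alone])
print(best(state))  # {'gold': 1, 'food': 0} -- literal objective, starved king
```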

He proposes implementing the following principles to make sure AI programs will be helpful to humans:

The laws for ‘Human compatible AI’

1. AI Goal Is To Maximize The Realization of Human Values

Robots should not have an objective per se, except maximizing the realization of human values. This law will overrule Asimov's self-preservation rule, making the AI truly altruistic.

2. The Law of Humility

Since our human values will never be completely defined, AIs will need some humility to understand that they may not know the values they are trying to maximise. This will force them to observe us and adapt those values to their observations.

This law is important because it avoids the problem of mindless pursuit by eliminating the certainty of a single known objective to be maximised.

3. Human Behavior, The Information Source of Human Values

AIs should try to understand the motivations behind our behavior, instead of copying that behavior literally. And they should be designed to satisfy the desires of everybody, not of any one person in particular.

But There Is Still Room For Improvement

Even following these rules, not everything may work as well as expected:

The Go master did not want to lose; he just couldn't foresee the result of his move. But to understand this, the AI would have to know the mental limitations of us humans.

Or the emotional value of something may not be correctly weighed against an unfulfilled need, as in the example of cooking the cat for dinner.

We have huge incentives to get it right, because one bad example will make people mistrust AIs, bringing their development to a halt.

 

 

Big Data and Ethics

The Big Data and Ethics meetup was held a few weeks ago in the new premises of DigitYser, in downtown Brussels.

It was a great Meetup, with interesting speakers and an interested public 😉 It's always a pleasure when the audience can contribute and presentations spark great discussions, and that matters even more at a gathering on ethics, where people still have to position themselves on the different aspects of the topic.

I was particularly surprised when Michael Ekstrand from Boise State University mentioned a use of recommendation systems that I hadn't thought of: using them as a tool to tackle the intention-behaviour gap, 'I don't do what I want to do' (for example, not sticking to a diet). Recommenders can be used to help you change your behaviour, giving you nudges as incentives.

Jochanan Eynikel also mentioned the use of technology as a morality enforcer.

Still, there are possible drawbacks:

Another area that was discussed was the ethical issue that personalisation has a direct negative impact on insurance, as it works against risk mitigation (mutualising risk among customers). There are sensitive domains where a 'human' approach should be taken.
How do we ensure ethical and moral concerns are taken into account? One approach is participatory design, a framework for bringing users' voices into the design phase. MIT is strongly pushing participatory design to tackle many basic dilemmas.

Solving and clarifying our human position on these kinds of dilemmas is more than relevant when we are talking about autonomous technology, that is, technology that teaches itself, like self-driving cars learning from their users.
Can we imagine not having human supervision in all domains? How do we introduce ethics so that the system itself can choose the 'good' decision and discard the others?

Pierre-Nicolas Schwab presented the General Data Protection Regulation to us as "the only thing that the EC can do to force companies to take data privacy into account: fine them if they don't":

At the end of the meeting, this question was raised: "Do data scientists and programmers need a Hippocratic oath?" Something like the ACM's code of conduct: 'don't harm with your code'.
What’s your opinion on this?

Elections warn about ethical issues in algorithms

I tweeted recently about this article on how Big Data was used in the last American presidential campaign.

Concordia Summit, New York 2016

“At Cambridge,” he said, “we were able to form a model to predict the personality of every single adult in the United States of America.” The hall is captivated. According to Nix, the success of Cambridge Analytica’s marketing is based on a combination of three elements: behavioral science using the OCEAN Model, Big Data analysis, and ad targeting. Ad targeting is personalized advertising, aligned as accurately as possible to the personality of an individual consumer.

Nix candidly explains how his company does this. First, Cambridge Analytica buys personal data from a range of different sources, like land registries, automotive data, shopping data, bonus cards, club memberships, what magazines you read, what churches you attend. Nix displays the logos of globally active data brokers like Acxiom and Experian—in the US, almost all personal data is for sale. […] Now Cambridge Analytica aggregates this data with the electoral rolls of the Republican party and online data and calculates a Big Five personality profile. Digital footprints suddenly become real people with fears, needs, interests, and residential addresses.
[…]

Nix shows how psychographically categorized voters can be differently addressed, based on the example of gun rights, the 2nd Amendment: “For a highly neurotic and conscientious audience the threat of a burglary—and the insurance policy of a gun.” An image on the left shows the hand of an intruder smashing a window. The right side shows a man and a child standing in a field at sunset, both holding guns, clearly shooting ducks: “Conversely, for a closed and agreeable audience. People who care about tradition, and habits, and family.”

Now I came across this other article by Peter Diamandis, describing what we can expect in four years' time, for the next election campaign.

5 Big Tech Trends That Will Make This Election Look Tame


If you think this election is insane, wait until 2020.

I want you to imagine how, in four years’ time, technologies like AI, machine learning, sensors and networks will accelerate.

Political campaigns are about to get hyper-personalized thanks to advances in a few exponential technologies.

Imagine a candidate who now knows everything about you, who can reach you wherever you happen to be looking, and who can use info scraped from social media (and intuited by machine learning algorithms) to speak directly to you and your interests.

[…] For example, imagine I’m walking down the street to my local coffee shop and a photorealistic avatar of the presidential candidate on the bus stop advertisement I pass turns to me and says:

“Hi Peter, I’m running for president. I know you have two five-year-old boys going to kindergarten at XYZ school. Do you know that my policy means that we’ll be cutting tuition in half for you? That means you’ll immediately save $10,000 if you vote for me…”

If you pause and listen, the candidate’s avatar may continue: […] “I’d really appreciate your vote. Every vote and every dollar counts. Do you mind flicking me a $1 sticker to show your support?”

I know, this last article is from SingularityHub, and they do tend to be alarmist; but knowing how fast technology advances, the predictions they put forward are not too exaggerated…

In any case, this reminds me how important it is to ACT on the ethical issues of algorithms. Notice the capital letters to stress the point, which is to take action. There are many issues that need to be identified, discussed, raised awareness about and regulated, and on some of them we can already act at the company level.

I talked in May last year at the Data Innovation Summit about the biases that can be (and usually are) replicated by the new algorithms based on data. Since then I have been working on a training program to help identify and correct those biases when designing and using algorithms, and the above-mentioned articles remind me that this cannot be delayed; it's needed right now.

So if you are interested in making your people and your organization aware of biases (human ones and digital ones), and in being trained to fix these issues, contact me!


We are creating our future. Let's not close our eyes: we can take control and assume our responsibility by setting the guardrails that will guide the path to our future society.

 

AI and Machine Learning in business: use it everywhere!

How One Clothing Company Blends AI and Human Expertise, HBR nov-16


Last week Bev from the PWI group on LinkedIn pointed me to a great HBR article: "How One Clothing Company Blends AI and Human Expertise", by H. James Wilson, Paul Daugherty and Prashant Shukla.

It describes how the company Stitch Fix works, using machine learning insights to assist its designers, and as you will see, they use machine learning at many levels throughout the company.

The company offers a subscription clothing and styling service that delivers apparel to its customers’ doors. But users of the service don’t actually shop for clothes; in fact, Stitch Fix doesn’t even have an online store. Instead, customers fill out style surveys, provide measurements, offer up Pinterest boards, and send in personal notes. Machine learning algorithms digest all of this eclectic and unstructured information. An interface communicates the algorithms’ results along with more-nuanced data, such as the personal notes, to the company’s fashion stylists, who then select five items from a variety of brands to send to the customer. Customers keep what they like and return anything that doesn’t suit them.

The key success factor for the company is being good at recommending clothes that not only fit the customer and are liked enough to be kept, but that customers like enough to stay happy with their subscription.

Stitch Fix, which lives and dies by the quality of its suggestions, has no choice but to do better [than Amazon and Netflix].

Unlike Amazon and Netflix, which recommend products directly to customers, Stitch Fix uses machine learning methods to provide digested information to its human stylists and designers.

[…] companies can use machines to supercharge the productivity and effectiveness of workers in unprecedented ways […]

Algorithms analyse, for example, the measurements to find other clients with the same body shape, so they can use the knowledge of which items fitted those other clients: the clothes those clients kept. Algorithms are also used to extract information about clients' taste in styles, from brand preferences and their comments on collections. Human stylists, using the results of that data analysis and reading the client's notes, are better equipped to choose clothes that will suit the customer.
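As a rough illustration of that first step (the article doesn't describe Stitch Fix's actual implementation, so the measurements, item IDs and the nearest-neighbour approach below are my own assumptions), matching a new client to similar-bodied clients and pooling what they kept could look something like this:

```python
import numpy as np

# Hypothetical client measurements: [height_cm, waist_cm, chest_cm]
clients = np.array([
    [170, 72, 90],
    [165, 68, 88],
    [180, 85, 100],
    [168, 70, 89],
])
# Items each of those clients kept (hypothetical item ids)
kept_items = [{"A12", "B07"}, {"B07", "C33"}, {"D45"}, {"A12", "C33"}]

def similar_clients(new_client, k=2):
    """Return the indices of the k clients with the closest measurements."""
    distances = np.linalg.norm(clients - np.asarray(new_client), axis=1)
    return np.argsort(distances)[:k]

def candidate_items(new_client):
    """Pool the items that similar-bodied clients actually kept."""
    pool = set()
    for idx in similar_clients(new_client):
        pool |= kept_items[idx]
    return pool

print(candidate_items([167, 69, 88]))  # items kept by the two closest clients
```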

Next, it’s time to pick the actual [item of clothing] to be shipped. This is up to the stylist, who takes into account a client’s notes or the occasion for which the client is shopping. In addition, the stylist can include a personal note with the shipment, fostering a relationship, which Stitch Fix hopes will encourage even more useful feedback.

This human-in-the-loop recommendation system uses multiple information streams to help it improve.

See how the stylists maintain a human dialogue with their clients through the included note. This personalised contact is usually much appreciated by customers, and it has a positive effect for the company because it opens the door to feedback that helps tailor the next delivery even better.

The company is testing natural language processing for reading and categorizing notes from clients — whether it received positive or negative feedback, for instance, or whether a client wants a new outfit for a baby shower or for an important business meeting. Stylists help to identify and summarize textual information from clients and catch mistakes in categorization.

The machine learning systems are 'learning through experience' (i.e., adapting with the feedback) as usual, but in a humanly 'supervised' way. This supervision allows the company to try new algorithms without the risk of losing clients if the results are not as good as expected.
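Here is a minimal sketch of such a human-in-the-loop text categorizer. The notes, labels and the simple bag-of-words model are all my own invention, standing in for whatever Stitch Fix actually uses; the point is only the loop where a stylist's correction becomes the next training example:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical client notes with hand-written labels (not Stitch Fix's real data).
notes = [
    "Loved the blue dress, keeping it!",
    "The jeans did not fit at all, sending them back",
    "Need an outfit for a baby shower next month",
    "Looking for something for an important business meeting",
]
labels = ["positive", "negative", "occasion", "occasion"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(notes, labels)

new_note = "The sweater was itchy, returning it"
predicted = model.predict([new_note])[0]

# Human-in-the-loop step: a stylist confirms or corrects the label,
# and the corrected example is added to the training set for the next fit.
stylist_label = "negative"           # the stylist's verdict
notes.append(new_note)
labels.append(stylist_label)
model.fit(notes, labels)             # retrain with the corrected example
print(predicted, "->", stylist_label)
```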

Stitch Fix employs more than 2,800 stylists, dispersed across the country, all of them working from home and setting their own hours. In this distributed workforce, stylists are measured by a variety of metrics, including the amount of money a client spends, client satisfaction, and the number of items a client keeps per delivery. But one of the most important factors is the rate at which a stylist puts together a collection of clothes for a client.

Speed is an important factor in satisfying their customers' demands, and machine learning gives the stylists the needed insight so much quicker than if they had to go through all the raw data!

This is where the work interface comes into effect. To enable fast decision making, the screen on which a stylist views recommendations shows the relevant information the company keeps about a client, including apparel and feedback history, measurements, and tolerance for fashion risks — it’s all readily accessible

The interface itself, which shows the information to the stylist, also adapts through feedback and is tested for better performance. And you could go one step further and check for bias in the stylists themselves:

Stitch Fix’s system can vary the information a stylist sees to test for bias. For instance, how might a picture of a client affect a stylist’s choices? Or knowledge about a client’s age? Does it help or hinder to know where a client lives?

By measuring the impact of modified information in the stylist interface, the company is developing a systematic way to measure improvements in human judgment

And there are many other machine learning algorithms throughout the company:

[…]the company has hundreds of algorithms, like a styling algorithm that matches products to clients; an algorithm that matches stylists with clients; an algorithm that calculates how happy a customer is with the service; and one that figures out how much and what kind of inventory the company should buy.

The company is also using the information of the kept and returned items to find fashion trends:

From this seemingly simple data, the team has been able to uncover which trends change with the seasons and which fashions are going out of style.

The data they are collecting is also helping advance research on computer vision systems:

[…] system that can interpret style and extract a kind of style measurement from images of clothes. The system itself would undergo unsupervised learning, taking in a huge number of images and then extracting patterns or features and deciding what kinds of styles are similar to each other. This “auto-styler” could be used to automatically sort inventory and improve selections for customers.

In addition to developing an algorithmic trend-spotter and an auto-styler, Stitch Fix is developing brand new styles — fashions born entirely from data. The company calls them frankenstyles. These new styles are created from a “genetic algorithm,” modeled after the process of natural selection in biological evolution. The company’s genetic algorithm starts with existing styles that are randomly modified over the course of many simulated “generations.” Over time, a sleeve style from one garment and a color or pattern from another, for instance, “evolve” into a whole new shirt.
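The "genetic algorithm" idea in that quote is easy to sketch. The garment attributes, the population size and, above all, the fitness function below are made up for illustration; in reality the fitness signal would come from how clients respond:

```python
import random

# Hypothetical garment "genes"; the real attributes and fitness signal are Stitch Fix's.
ATTRIBUTES = {
    "sleeve":  ["long", "short", "three-quarter"],
    "color":   ["navy", "burgundy", "white", "olive"],
    "pattern": ["solid", "striped", "floral"],
}

def random_style():
    return {attr: random.choice(options) for attr, options in ATTRIBUTES.items()}

def crossover(parent_a, parent_b):
    # Take each attribute (sleeve, color, pattern...) from one parent or the other.
    return {attr: random.choice([parent_a[attr], parent_b[attr]]) for attr in ATTRIBUTES}

def mutate(style, rate=0.1):
    return {attr: (random.choice(ATTRIBUTES[attr]) if random.random() < rate else value)
            for attr, value in style.items()}

def fitness(style):
    # Stand-in for "how well clients respond"; here, a made-up preference for navy stripes.
    return (style["color"] == "navy") + (style["pattern"] == "striped")

population = [random_style() for _ in range(20)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                     # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

print(max(population, key=fitness))   # an evolved "frankenstyle"
```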

What does a company that uses so many machine learning systems look like at the employee level? How is it perceived by the employees? This is what they say:

Even with the constant monitoring and algorithms that guide decision making, according to internal surveys, Stitch Fix stylists are mostly satisfied with the work. And this type of work, built around augmented creativity and flexible schedules, will play an important role in the workforce of the future.

Machine learning and AI (artificial intelligence) systems are changing the way companies do business. They provide insight that either could not be grasped before, or could, but not at that speed, nor as a tool accessible to each and every employee.

The least that can be said is that this will improve productivity in all sectors. Just as today almost everyone has access to the Internet to check a word, look up a translation or a recipe, check the weather and countless other things, the new generation of employees will be assisted by tons of algorithms that analyse data and deduce, induce or summarize information to support them in their work and in their decision-making.

Sexism spotted with Maths!


I gave a talk in May this year called 'Restore the balance of data' at the Data Innovation Summit. It was about sexism and other biases that are implicit in our existing electronic traces (current and historical data), and my concern that we are using that data as baseline information to create the new prediction algorithms.

I discussed this many times at home while preparing the talk; we had vivid discussions with my husband and lovely sons over our family Sunday lunches. That's why it didn't surprise me that my eldest son, Alex, thought of me when reading this article in the MIT Technology Review about sexism in our language.

The article is about a dataset of texts that researchers are using to “better understand everything from machine translation to intelligent Web searching.” They transform the words in the text into vectors, and then use mathematical properties of the vector space to derive meaning:

It turned out that words with similar meanings occupied similar parts of this vector space. And the relationships between words could be captured by simple vector algebra. For example, “man is to king as woman is to queen” or, using the common notation, “man : king :: woman : queen.” Other relationships quickly emerged too such as  “sister : woman :: brother : man,” and so on. These relationships are known as word embeddings.
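The vector arithmetic behind those analogies can be shown with a tiny hand-made example. The vectors below are invented just to show the mechanics; real word embeddings have hundreds of dimensions and are learned from huge text corpora:

```python
import numpy as np

# Toy 3-dimensional "embeddings" chosen by hand just to show the arithmetic.
vectors = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([0.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.1, 0.9]),
    "queen": np.array([0.1, 1.0, 0.9]),
    "apple": np.array([0.3, 0.3, 0.0]),
}

def closest(target, exclude):
    """Find the vocabulary word whose vector is closest (cosine) to `target`."""
    def cosine(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vectors if w not in exclude),
               key=lambda w: cosine(vectors[w], target))

# "man : king :: woman : x"  ->  x = king - man + woman
x = vectors["king"] - vectors["man"] + vectors["woman"]
print(closest(x, exclude={"king", "man", "woman"}))  # -> queen
```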

The article is about the problem that researchers have identified in this data set; as they say, "it is blatantly sexist." Here are some examples they provide:

But ask the database “father : doctor :: mother : x” and it will say x = nurse. And the query “man : computer programmer :: woman : x” gives x = homemaker.

Thinking about it, isn't it obvious that if our behavior is biased, the writings about our world will be biased too? And anything derived from our biased written traces will reflect our views, with all our biases.

So we learned to extrapolate from our old behavior to predict our future behaviour… just to discover that we don't like what we are getting out of it! Our old behavior, amplified by the algorithm, doesn't look so good, does it? It's clearer than ever that we don't want to keep behaving like that in the future… Well, that's a positive point: it's good that this uncovers our blind spots, isn't it?

Now the good news: it can be fixed!

The Boston team has a solution. Since a vector space is a mathematical object, it can be manipulated with standard mathematical tools.

The solution is obvious. Sexism can be thought of as a kind of warping of this vector space. Indeed, the gender bias itself is a property that the team can search for in the vector space. So fixing it is just a question of applying the opposite warp in a way that preserves the overall structure of the space.

Oh, it seems so easy… for mathematicians anyway 😉 But no, even for mathematicians it is difficult to find and measure the distortions:

That’s the theory. In practice, the tricky part is measuring the nature of this warping. The team does this by searching the vector space for word pairs that produce a similar vector to “she : he.” This reveals a huge list of gender analogies. For example, she:he::midwife:doctor; sewing:carpentry; registered_nurse:physician; whore:coward; hairdresser:barber; nude:shirtless; boobs:ass; giggling:grinning; nanny:chauffeur, and so on.

Having compiled a comprehensive list of gender biased pairs, the team used this data to work out how it is reflected in the shape of the vector space and how the space can be transformed to remove this warping. They call this process  “hard de-biasing.”

Finally, they use the transformed vector space to produce a new list of gender analogies[…]
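Stripped of the practical difficulties, the core of that "hard de-biasing" step for gender-neutral words amounts to projecting their vectors off the gender direction. A toy sketch with hand-made vectors (the real method of Bolukbasi et al. works in hundreds of dimensions and identifies the direction from many word pairs):

```python
import numpy as np

# Toy vectors again, chosen only to illustrate the projection.
he, she = np.array([1.0, 0.0, 0.3]), np.array([0.0, 1.0, 0.3])
programmer = np.array([0.8, 0.2, 0.9])       # tilted toward "he" -- the bias

gender_direction = he - she
gender_direction /= np.linalg.norm(gender_direction)

# "Hard de-biasing" for a neutral word: subtract its component along the gender axis.
debiased = programmer - (programmer @ gender_direction) * gender_direction

print(programmer @ gender_direction)   # ~0.42: the original leans "male"
print(debiased @ gender_direction)     # ~0.0: the projection removes that lean
```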

Read the full article if you are interested in their de-biasing process. Their conclusion, with which I completely agree, is:

“One perspective on bias in word embeddings is that it merely reflects bias in society, and therefore one should attempt to debias society rather than word embeddings,” say Bolukbasi and co. “However, by reducing the bias in today’s computer systems (or at least not amplifying the bias), which is increasingly reliant on word embeddings, in a small way debiased word embeddings can hopefully contribute to reducing gender bias in society.”

That seems a worthy goal. As the Boston team concludes: “At the very least, machine learning should not be used to inadvertently amplify these biases.”

 

The rise of the Self-Tuning Enterprise


As you may know, I am a fan of machine learning, a subfield of Artificial Intelligence (AI) that encompasses computer programs exhibiting some kind of intelligent behavior. The first AI researchers began by analyzing how we (humans) perform intelligent tasks in order to create programs that reproduce our behavior. So look at the irony of this HBR article, "The Self-Tuning Enterprise", where the authors Martin Reeves, Ming Zeng and Amin Venjara use the analogy of how machine learning programs work and transpose that behavior to enterprise strategy tuning:

[…] These enterprises [he’s talking about internet companies like Google, Netflix, Amazon, and Alibaba] have become extraordinarily good at automatically retooling their offerings for millions of individual customers, leveraging real-time data on their behavior. Those constant updates are, in fact, driven by algorithms, but the processes and technologies underlying the algorithms aren’t magic: It’s possible to pull them apart, see how they operate, and use that know-how in other settings. And that’s just what some of those same companies have started to do.

In this article we’ll look first at how self-tuning algorithms are able to learn and adjust so effectively in complex, dynamic environments. Then we’ll examine how some organizations are applying self-tuning across their enterprises, using the Chinese e-commerce giant Alibaba as a case example.”

You may have noticed those programs at work, recommending books or other products each time you buy something on the Internet (and in fact, even if you are just looking and don't buy anything ;-). Those programs are based on machine learning algorithms, and they improve over time with new information about success (you bought the proposed article) or failure (you didn't).

How do they work?

There is a 'learning' part that finds similarities between customers in order to propose products that other customers similar to you bought. But it's not so simple: these programs are coupled with other learning modules, like one that does some 'experimentation' so as not to get stuck with always the same kind of products. This module will propose something different from time to time. Even if you like detective novels, after the tenth one you would like to read something else, wouldn't you? So the trick is to find an equilibrium between showing you books you have a great chance of liking and novelties that make you discover new horizons. You have to have the feeling that they know what they are doing when they propose a book (so they fine-tune to be good at similarities), but you may like a change from time to time so as not to get bored; and they are also very interested in making you discover another genre of literature, let's say poems. If you don't like it, you won't accept the next recommendation so easily, so here comes the next 'tuning': how often to do it.
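One common way to implement that balance is an 'epsilon-greedy' strategy: exploit the category that has worked best so far, but explore something else a small fraction of the time. A minimal sketch follows; the categories, acceptance rates and the simulated reader are all invented, and real recommenders are far more sophisticated:

```python
import random

# Made-up "chance you accept" numbers for each book category.
acceptance_rate = {"detective": 0.6, "sci-fi": 0.3, "poetry": 0.1}
stats = {cat: {"shown": 0, "accepted": 0} for cat in acceptance_rate}

def recommend(epsilon=0.1):
    """Mostly exploit the best-performing category, sometimes explore another."""
    if random.random() < epsilon or all(s["shown"] == 0 for s in stats.values()):
        return random.choice(list(stats))                  # explore: try something new
    return max(stats, key=lambda c: stats[c]["accepted"] / max(stats[c]["shown"], 1))

for _ in range(1000):
    category = recommend()
    stats[category]["shown"] += 1
    if random.random() < acceptance_rate[category]:        # did the reader accept it?
        stats[category]["accepted"] += 1                   # feedback tunes the next choice

print(stats)   # most recommendations converge on the category the reader actually likes
```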

And that’s where self-tuning comes in. Self-tuning is related to the concepts of agility (rapid adjustment), adaptation (learning through trial and error), and ambidexterity (balancing exploration and exploitation). Self-tuning algorithms incorporate elements of all three—but in a self-directed fashion.

The 'self-tuning' process they are talking about adjusts the tool to the new information available to it, without the need for reprogramming. The analogy the authors draw is to apply in organizations the same kind of automatic tuning that machine learning systems do: to 'self-tune' companies without any top-down directive, with agility, adaptation through trial and error, and ambidexterity balancing exploration and exploitation.

To understand how this works, think of the enterprise as a nested set of strategic processes. At the highest level, the vision articulates the direction and ambition of the firm as a whole. As a means to achieving the vision, a company deploys business models and strategies that bring together capabilities and assets to create advantageous positions. And it uses organizational structure, information systems, and culture to facilitate the effective operation of those business models and strategies.

In the vast majority of organizations, the vision and the business model are fixed axes around which the entire enterprise revolves. They are often worked out by company founders and, once proven successful, rarely altered. Consequently, the structure, systems, processes, and culture that support them also remain static for long periods. Experimentation and innovation focus mostly on product or service offerings within the existing model, as the company leans on its established recipe for success in other areas.

The self-tuning enterprise, in contrast, takes an evolutionary approach at all levels. The vision, business model, and supporting components are regularly calibrated to the changing environment by applying the three learning loops. The organization is no longer viewed as a fixed means of transmitting intentions from above but, rather, as a network that shifts and develops in response to external feedback. To see what this means in practice, let’s look at Alibaba.[…]

Keep resetting the vision.

When Alibaba began operations, internet penetration in China was less than 1%. While most expected that figure to grow, it was difficult to predict the nature and shape of that growth. So Alibaba took an experimental approach: At any given time, its vision would be the best working assumption about the future. As the market evolved, the company’s leaders reevaluated the vision, checking their hypotheses against reality and revising them as appropriate.

In the early years, Alibaba’s goal was to be “an e-commerce company serving China’s small exporting companies.” This led to an initial focus on Alibaba.com, which created a platform for international sales. However, when the market changed, so did the vision. As Chinese domestic consumption exploded, Alibaba saw an opportunity to expand its offering to consumers. Accordingly, it launched the online marketplace Taobao in 2003. Soon Alibaba realized that Chinese consumers needed more than just a site for buying and selling goods. They needed greater confidence in internet business—for example, to be sure that online payments were safe. So in 2004, Alibaba created Alipay, an online payment service. […] Ultimately, this led Alibaba to change its vision again, in 2008, to fostering “the development of an e-commerce ecosystem in China.” It started to offer more infrastructure services, such as a cloud computing platform, microfinancing, and a smart logistics platform. More recently, Alibaba recalibrated that vision in response to the rapid convergence between digital and physical channels. Deliberately dropping the “e” from e-commerce, its current vision statement reads simply, “We aim to build the future infrastructure of commerce.”

Experiment with business models.

Alibaba could not have built a portfolio of companies that spanned virtually the entire digital spectrum without making a commitment to business model experimentation from very early on.

[…]At each juncture in its evolution, Alibaba continued to generate new business model options, letting them run as separate units. After testing them, it would scale up the most promising ones and close down or reabsorb those that were less promising.[…]

Again there was heated debate within the company about which direction to take and which model to build. Instead of relying on a top-down decision, Alibaba chose to place multiple bets and let the market pick the winners.[…]

Increasing experimentation at the height of success runs contrary to established managerial wisdom, but for Alibaba it was necessary to avoid rigidity and create options. Recalibrating how and how much to experiment was fundamental to its ability to capitalize on nascent market trends.

Focus on seizing and shaping strategic opportunities, not on executing plans.

In volatile environments, plans can quickly become out-of-date. In Alibaba’s case, rapid advances in technology, shifting consumer expectations in China and beyond, and regulatory uncertainty made it difficult to predict the future. […]

Alibaba does have a regular planning cycle, in which business unit leaders and the executive management team iterate on plans in the fourth quarter of each year. However, it’s understood that this is only a starting point. Whenever a unit leader sees a significant market change or a new opportunity, he or she can initiate a “co-creation” process, in which employees, including senior business leaders and lead implementers, develop new directions for the business directly with customers.

At Alibaba co-creation involves four steps. The first is establishing common ground: identifying signals of change (based on data from the market and insights from customers or staff) and ensuring that the right people are present and set up to work together. This typically happens at a full-day working session. The second step is getting to know the customer. Now participants explore directly with customers their evolving needs or pain points and brainstorm potential solutions. The third step entails developing an action plan based on the outcome of customer discussions. An action plan must identify a leader who can champion the opportunity, the supporting team (or teams) that will put the ideas into motion, and the mechanisms that will enable the work to get done. The final step is gathering regular customer feedback as the plan is implemented, which can, in turn, trigger further iterations.

So now you know how Alibaba does it; how is it done in your company? What ideas from them would you adopt?

Can An Algorithm Be “Racist”?


David Auerbach has written this article pointing out that some classification algorithms may be racist:

Can a computer program be racist? Imagine this scenario: A program that screens rental applicants is primed with examples of personal history, debt, and the like. The program makes its decision based on lots of signals: rental history, credit record, job, salary. Engineers “train” the program on sample data. People use the program without incident until one day, someone thinks to put through two applicants of seemingly equal merit, the only difference being race. The program rejects the black applicant and accepts the white one. The engineers are horrified, yet say the program only reflected the data it was trained on. So is their algorithm racially biased?

Yes, and a classification algorithm can not only be racist but, since humans write them (or, more accurately for learning algorithms, since they are built upon human examples and counter-examples), algorithms may carry any of the human biases we have. With the abundance of data, we are training programs with examples from the real world; the resulting program will be an image of how we act, not a reflection of how we would like to be. Exactly like the saying about educating kids: they do as they see, not as they are told :-)
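Auerbach's rental scenario is easy to reproduce in miniature. In this toy sketch the 'historical' decisions are invented and deliberately biased, and a standard classifier dutifully learns exactly that bias:

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical historical decisions: [income, protected_attribute] -> accepted?
# The past decisions encode a bias: identical incomes, different outcomes.
X = [[50, 0], [50, 1], [60, 0], [60, 1], [40, 0], [40, 1]]
y = [1, 0, 1, 0, 1, 0]   # accepted only when the protected attribute is 0

model = DecisionTreeClassifier().fit(X, y)

# Two applicants of "seemingly equal merit", differing only in the protected attribute:
print(model.predict([[55, 0], [55, 1]]))   # [1 0] -- the model has learned the bias
```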

To make things worse, when dealing with learning algorithms, not even the programmer can predict the resulting classification. So, knowing that there may be errors, who is there to ensure their correctness?

What about the everyday profiling that goes on without anyone noticing? [… ]
Their goal is chiefly “microtargeting,” knowing enough about users so that ads can be customized for tiny segments like “soccer moms with two kids who like Kim Kardashian” or “aging, cynical ex-computer programmers.”

Some of these categories are dicey enough that you wouldn’t want to be a part of them. Pasquale writes that some third-party data-broker microtargeting lists include “probably bipolar,” “daughter killed in car crash,” “rape victim,” and “gullible elderly.” […]

There is no clear process for fixing these errors, making the process of “cyberhygiene” extraordinarily difficult.[…]

For example, just because someone has access to the source code of an algorithm does not always mean he or she can explain how a program works. It depends on the kind of algorithm. If you ask an engineer, “Why did your program classify Person X as a potential terrorist?” the answer could be as simple as “X had used ‘sarin’ in an email,” or it could be as complicated and nonexplanatory as, “The sum total of signals tilted X out of the ‘non-terrorist’ bucket into the ‘terrorist’ bucket, but no one signal was decisive.” It’s the latter case that is becoming more common, as machine learning and the “training” of data create classification algorithms that do not behave in wholly predictable manners.
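That "sum total of signals" explanation can be illustrated with a trivial sketch. All the signals, weights and the threshold below are invented; the point is only that many weak signals add up to a decision that no single one of them explains:

```python
# A toy "sum of signals" classifier: many weak signals, none decisive on its own.
signals = {"used word 'sarin'": 0.9, "late-night logins": 0.2,
           "foreign transfers": 0.3, "encrypted email": 0.25}

score = sum(signals.values())
label = "flagged" if score > 1.5 else "not flagged"

print(score, label)   # 1.65 -> flagged, yet no single signal explains the decision
```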

Further on, the author mentions the dangers of this kind of programming that is not fully predictable.

Philosophy professor Samir Chopra has discussed the dangers of such opaque programs in his book A Legal Theory for Autonomous Artificial Agents, stressing that their autonomy from even their own programmers may require them to be regulated as autonomous entities.

Chopra sees these algorithms as autonomous entities. They may be unpredictable, but so far there is no will or conscious choice to go down one path instead of another. Programs are being told to maximize a particular benefit, and how to measure that benefit is calculated by a human-written function. As time goes by and technology advances, I can easily see the benefit function incorporating certain feedback the program gets from the 'real world', which could make the behavior of the algorithm even more unpredictable than it is now. At that point we can think of algorithms that can evaluate, or 'choose', whether to stay on the regulated side… or not. Will it reach the point of them having a kind of survival instinct? Where that may lead us… we'll know soon enough.


Citizen Science hits again with EyeWire

Have you heard of this crowdsourcing success story, EyeWire?


 

Crowd-sourced science isn’t just fun and games anymore; it has produced a scientific discovery new and important enough to be published in the journal Nature.

The social gaming venture EyeWire lured citizen scientists to follow retinal neurons across multiple two-dimensional photos with the chance to level up and outperform competitors. And with their help, EyeWire has solved a longstanding mystery about how mammals perceive motion.

The use of gamification in conjunction with collaboration techniques, and the multiplication factor of reaching a motivated worldwide crowd,  is giving great results! 


Computers are not very good at identifying objects in an image (seeing where one object ends and another begins), something humans do at a glance. In this particular game, EyeWire, more than 120,000 players from 100 countries are coloring the presented neuron cells. The players are doing the job of identifying, cell by cell, the path from the eye to the brain.

But that's not the only thing the crowd contributes, because the players' results are also used to train 'learning algorithms' to identify objects in an image. Learning algorithms are a very special kind of program that can adapt through feedback: when we give the algorithm a positive (or negative) example of output, the program changes some internal parameters in order to adapt and produce the desired outcome. In this game, the images with the cells colored by the players are used as positive examples. The next generation of image recognition programs will be more powerful thanks, in part, to crowdsourcing.
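Here is a minimal sketch of that 'adapting internal parameters from labelled examples' idea, using a perceptron. The features and labels are invented (EyeWire's real models are far larger), but the feedback loop is the same:

```python
import numpy as np

# Invented pixel features and crowd labels: 1 = "part of the neuron", 0 = "background".
examples = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.7], [0.1, 0.9]])
labels   = np.array([1, 1, 0, 0])

weights, bias = np.zeros(2), 0.0
for _ in range(10):                                  # a few passes over the crowd's labels
    for x, target in zip(examples, labels):
        prediction = 1 if x @ weights + bias > 0 else 0
        error = target - prediction                  # feedback from the labelled example
        weights += error * x                         # adjust the internal parameters
        bias += error

print(weights, bias)
print([1 if x @ weights + bias > 0 else 0 for x in examples])   # now matches the labels
```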

Is your Robot feeling lonely? Connect it to RoboEarth

It (or he/she?) doesn't need to; there is now a platform for connecting to others. I wouldn't call it the Facebook for robots, it's more like a giant academy 🙂 but RoboEarth enables robots to share their experiences, their learnings. It is a cloud environment that also allows them to use external storage and computation capabilities, which means freeing them from physically carrying the extra kilos of storage or processing power needed to execute their tasks.
See the official definition:

What is RoboEarth?

At its core, RoboEarth is a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment. Bringing a new meaning to the phrase “experience is the best teacher”, the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately, for more subtle and sophisticated human-machine interaction.

RoboEarth offers a Cloud Robotics infrastructure, which includes everything needed to close the loop from robot to the cloud and back to the robot. RoboEarth’s World-Wide-Web style database stores knowledge generated by humans – and robots – in a machine-readable format. Data stored in the RoboEarth knowledge base include software components, maps for navigation (e.g., object locations, world models), task knowledge (e.g., action recipes, manipulation strategies), and object recognition models (e.g., images, object models).
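Purely as an illustration of the idea of a shared, machine-readable "action recipe" (RoboEarth's real knowledge base uses its own representations; the record below, its fields and the upload/download helpers are all invented):

```python
import json

shared_knowledge = {}   # stand-in for the cloud database

def upload(robot_id, recipe):
    shared_knowledge[recipe["task"]] = {"learned_by": robot_id, **recipe}

def download(task):
    return shared_knowledge.get(task)

upload("robot_A", {
    "task": "open_door",
    "steps": ["locate handle", "grasp handle", "rotate 30 degrees", "push"],
    "object_model": "door_handle_v1",
})

# A second robot that has never seen a door can reuse robot_A's experience:
print(json.dumps(download("open_door"), indent=2))
```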

I think this platform will enable an exponential leap in robots' capabilities. It is sometimes hard for humans to learn by example, but it is not so for robots.

And isn’t this like crowdsourcing between robots?

 

Innovative robot takes stock

Andyvision is the name of this ET-looking robot. You can find it at the CMU store near Carnegie Mellon University, checking the inventory.

Andyvision[…] scans the shelves to generate a real-time interactive map of the store, which customers can browse via an in-store screen. At the same time, the robot performs a detailed inventory check, identifying each item on the shelves, and alerting employees if stock is low or if an item has been misplaced.

The prototype has been rolling around the floors of the store since mid-May. This Tuesday, Priya Narasimhan, a professor at CMU who heads the Intel Science and Technology Center in Embedded Computing, demonstrated the system to attendees at an Intel Research Labs event in San Francisco.

While making its rounds, the robot uses a combination of image-processing and machine-learning algorithms; a database of 3-D and 2-D images showing the store’s stock; and a basic map of the store’s layout—for example, where the T-shirts are stacked, and where the mugs live. The robot has proximity sensors so that it doesn’t run into anything.

The map generated by the robot is sent to a large touch-screen system in the store and a real-time inventory list is sent to iPad-carrying staff.

This is not a breakthrough discovery; there is nothing technologically new. It is a great example of innovation, of what can be done just by combining existing types of algorithms in a novel way. It is based on many computer-vision programs, such as scanning barcodes, reading text, and using visual information about shape, size or color to identify an item. But it can also infer an item's identity from its knowledge of the structure of the shop and the item's proximity to other items:

“If an unidentified bright orange box is near Clorox bleach, it will infer that the box is Tide detergent,” she says.
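That kind of inference can be sketched very simply: combine the scores of a (hypothetical) visual classifier with a prior based on what usually sits next to the recognised neighbour. All the items and numbers below are invented, not from the CMU system:

```python
# Hypothetical visual classifier scores for the unidentified orange box.
visual_scores = {"Tide detergent": 0.35, "orange juice": 0.40, "cheddar": 0.25}

# How often each item sits next to Clorox bleach in this (made-up) store layout:
next_to_bleach = {"Tide detergent": 0.70, "orange juice": 0.05, "cheddar": 0.01}

def identify(neighbour_prior):
    combined = {item: visual_scores[item] * neighbour_prior[item] for item in visual_scores}
    return max(combined, key=combined.get)

# The camera alone slightly prefers "orange juice", but the shelf context tips it:
print(identify(next_to_bleach))   # -> Tide detergent
```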

Narasimhan’s group developed the system after interviewing retailers about their needs. Stores lose money when they run low on a popular item, and when a customer puts down a jar of salsa in the detergent aisle where it won’t be found by someone who wants to buy it; or when customers ask where something is and clerks don’t know. So far, the robotic inventory system seems to have helped increase the staff’s knowledge of where everything is. By the fall, Narasimhan expects to learn whether it has also saved the store money.

Narasimhan thinks computer-vision inventory systems will be easier to implement than wireless RFID tags, which don’t work well in stores with metal shelves and need to be affixed to every single item, often by hand. A computer vision system doesn’t need to be carried on a robot; the same job could be done by cameras mounted in each aisle of a store. [..]  The biggest challenge for such a system, she says, is whether it “can deal with different illuminations and adapt to different environments.”

After its initial test at the campus store, Narasimhan says, the Carnegie Mellon system will be put to this test in several local stores sometime next year.

I particularly find it cute to have an ET wandering around, so let's hope their economic expectations are fulfilled, and let's think of more innovative ideas of this kind!