The BIG Data and Ethics Meetup was held a few weeks ago in the new premises of DigitYser, in downtown Brussels.
It was a great Meetup, with interesting speakers and an engaged audience. It’s always a pleasure when the audience can contribute and presentations spark great discussions, and that matters even more for a gathering on ethics, as people still have to position themselves on the different aspects of this topic.
I was particularly surprised when Michael Ekstrand from Boise State University mentioned a use of recommender systems that I hadn’t thought of: using them as a tool to tackle the intention-behaviour gap, the “I don’t do what I want to do” problem (for example, not sticking to a diet). Recommenders can be used to help you change your behaviour, giving you nudges as incentives.
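To make the nudge idea concrete, here is a minimal, purely illustrative sketch (not from the talk): candidate items are re-ranked by blending the usual relevance score with how well each item matches a goal the user has declared, such as eating healthier. All item data, field names, and weights below are hypothetical.

```python
# Illustrative sketch only: re-rank recommendations toward a user-declared goal.
# All item data, field names, and weights are hypothetical.

def nudge_rank(items, goal_weight=0.3):
    """Blend predicted relevance with alignment to the user's stated goal.

    items: list of dicts with 'name', 'relevance' (0-1, how likely the user
    is to pick it) and 'goal_alignment' (0-1, how well it matches the
    behaviour the user says they want, e.g. eating healthier).
    """
    def score(item):
        return (1 - goal_weight) * item["relevance"] + goal_weight * item["goal_alignment"]

    return sorted(items, key=score, reverse=True)


candidates = [
    {"name": "double cheeseburger", "relevance": 0.9, "goal_alignment": 0.1},
    {"name": "grilled salmon salad", "relevance": 0.6, "goal_alignment": 0.9},
    {"name": "veggie wrap", "relevance": 0.5, "goal_alignment": 0.8},
]

for item in nudge_rank(candidates):
    print(item["name"])
```

The `goal_weight` knob is where the ethical question lives: how strongly is the system allowed to steer you away from what you would otherwise choose?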
Jochanan Eynikel also mentioned the use of technology as a morality enforcer.
Still, there are possible drawbacks.
Another area discussed was the ethical concern that personalisation has a direct negative impact on insurance, as it goes against risk mitigation (mutualising risk among customers). There are sensitive domains where a “human” approach should be taken.
How can we ensure ethical and moral concerns are taken into account? One approach is participatory design, a framework for bringing users’ voices into the design phase. MIT is strongly pushing participatory design to tackle many basic dilemmas.
Solving and clarifying our human position on these kinds of dilemmas is more than relevant when we are talking about autonomous technology, that is, technology that teaches itself, such as self-driving cars learning from users.
Can we imagine not having human supervision in all domains? How can we introduce ethics so that the system itself can choose the “good” decision and discard the others?
Pierre-Nicolas Schwab presented the General Data Protection Regulation to us as “the only thing that the EC can do to force companies to take data privacy into account: fine them if they don’t”.
At the end of the meeting, this question was raised: “Do data scientists and programmers need a Hippocratic oath?” Like the ACM, which has a code of conduct, something along the lines of “don’t harm with your code”.
What’s your opinion on this?