
The problem of ethical decision making, which has long been a grand challenge for AI (Wallach and Allen 2008), has recently caught the public imagination. Perhaps its best-known manifestation is a modern variant of the classic trolley problem (Jarvis Thomson 1985):


An autonomous vehicle has a brake failure, leading to an accident with inevitably tragic consequences; due to the vehicle’s superior perception and computation capabilities, it can make an informed decision. Should it stay its course and hit a wall, killing its three passengers, one of whom is a young girl? Or swerve and kill a male athlete and his dog, who are crossing the street on a red light?


There’s been enormous change in communication technology, as noted throughout this website in archived posts. But it is often hard to even imagine what these changes might produce in the way of doing things differently. One particularly challenging aspect is how groups might “govern themselves” when the groups in question could be massive, as in a Massive Open Online Course (MOOC).

It’s a truism that groups derive their decision-making authority from the consent of the governed.

One of the “new realities” emerges when we realize that today, groups can make decisions for themselves in dramatically different ways when powered by real-time big data and genuinely smart, innovative algorithms.

A social learning construct (what PSA calls a “learning group” using cloud tools) has to set up a protocol that defines how people will behave inside the group, and how the group will make decisions, if the former role of “teacher as decision-maker at the top of the hierarchy” is to become obsolete. Collaborative learning is just that, and the “rules” of how to interact need to be derived from the students/learners themselves.

That’s not to say the Guide on the Side doesn’t have enormously useful input too. But it is a limited role nonetheless.

This paper, on how “Many to One” communication can work to arrive at mutually derived ethical rule-making, sketches out how a new form of group consensus on “how to do things” can be formed. Here the “Many” is the mass of individuals, and the “One” is the set of algorithms that aggregates the big data coming from each individual and kicks out a result: “what to do” according to “the people.”
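To make the “Many to One” idea concrete, here is a minimal, hedged sketch in Python. It is not the actual algorithm from the paper below (which uses learned preference models and a more sophisticated voting rule); it simply illustrates the general pattern of aggregating many individuals’ ranked preferences into one group decision, using a classic Borda count. The alternative names (“swerve”, “stay”) are hypothetical labels for illustration.

```python
from collections import defaultdict

def borda_winner(rankings):
    """Aggregate many individual rankings into one group choice.

    Each ranking is a list of alternatives, best first. In a ranking
    of n items, the alternative in position i earns n - 1 - i points;
    the alternative with the highest total wins.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, alternative in enumerate(ranking):
            scores[alternative] += n - 1 - i
    return max(scores, key=scores.get)

# Three hypothetical "voters" rank two trolley-style alternatives:
rankings = [
    ["swerve", "stay"],
    ["stay", "swerve"],
    ["swerve", "stay"],
]
print(borda_winner(rankings))  # prints "swerve" (2 points to 1)
```

In the real-time setting the paper envisions, the expensive step (collecting and aggregating everyone’s preferences) happens offline in advance, so that at decision time the vehicle only has to look up the aggregate answer for the situation at hand.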

This at least appears to be a stunning new way of creating polity and group decision-making that honors each individual’s input. It’s not what the Athenians did long ago when democracy was said to begin, and it’s not the hierarchical, representative form of group decision-making we have in the US today, in government and in institutions, including corporations. It’s something different, with enormous potential, as far as I can tell.

Everything of course has downsides, but at least in terms of teaching and guiding self-driving AI, this approach seems like it just might work in real time, when decisions are complex but must be made in fractions of a second. It might also work when “rules” need to be created by a group in a way that everyone “buys in” to them, without a hierarchical decision-making process, and especially when a group grows beyond a certain magic number, perhaps 8-10 people.

[gview file="https://publicservicesalliance.org/wp-content/uploads/2018/06/Trolley-Problem-and-Mass-Ethical-Decision-Trees.pdf"]


The paper below contains many mathematical examples, formulas, and equations that this writer frankly cannot follow. That points to a well-known danger: not understanding how the algorithms work, a concern often cited when machine learning creates the algorithms. But that’s a lengthy discussion for other posts.

[gview file="https://publicservicesalliance.org/wp-content/uploads/2018/06/A-Voting-Based-System-for-Ethical-Decision-Making.pdf"]