Why the Creation of A.I. Requires the Cultivation of Wisdom on Our Part
Abstract: Most discussions of the ethics of A.I. concern either the ethical issues posed by the potential threat of the machines, or the machines' ambiguous moral status and the resulting unclarity of our ethical obligations towards them. However, a cognitive scientific approach suggests an additional ethical issue. There is converging theory and empirical evidence that intelligence, while necessary, is not sufficient for rationality. Rationality requires acquiring skills for overcoming the biases and self-deception that inevitably result from any cognitive agent using optimization strategies. These heuristic strategies often reinforce each other because of the complex and recursively self-organizing nature of cognitive processing. As our A.I. moves increasingly toward Artificial General Intelligence (A.G.I.), these patterns of self-deception increasingly become possible in our machines. This vulnerability is pertinent to us because we are often unaware of our biases or of how we are building them implicitly into our simulations of intelligence. Since self-deception and foolishness are an inevitable result of intelligence, as we magnify intelligence we may also magnify the capacity for self-deception. Our lack of rational, self-correcting self-awareness could very well be built into our machines. The examination of a couple of historical examples will add plausibility to this argument. Given this argument, I will further argue that we have an ethical obligation to seriously cultivate a cognitive style of self-correcting self-awareness, i.e., wisdom, in individuals and communities of individuals who are attempting to create A.G.I.
John Vervaeke
University of Toronto
Cognitive Science
Tue, Oct 30, 2018
04:00 PM - 06:00 PM
Centre for Ethics, University of Toronto
200 Larkin