Please be sure to read this

Possible Minds

Possible Minds: Twenty-Five Ways of Looking at AI, John Brockman ed., 2019

池田光穂

"It may very well be a good thing for humanity to have the machine remove from it the need of menial and disagreeable task, or it may not. I do not know" - N. Weiner 1961[1948]:27 N. Weiner, Cybernetics. 2nd ed., MIT Press. 人間から、卑しくて人が嫌がる仕事(タスク)を奪 い去ることが人間に対してとても善いことになるのか、あるいはそうではないのか、どうも私には分からない——ノーバート・ウィナー(1947年、メキシコ 市)--Information is information, not matter or energy.- Cybernetics: Or Control and Communication in the Animal and the Machine

++++++++++++++++++++++++++++++

The Human Use of Human Beings: Cybernetics and Society, 1950, 1954 --> https://monoskop.org/images/5/51/Wiener_Norbert_The_Human_Use_of_Human_Beings.pdf

Index

"The development of these computing machines has been very rapid since the war. For a large range of computational work, they have shown themselves much faster and more accurate than the human computer. Their speed has long since reached such a level that any intermediate human intervention in their work is out of the question. Thus they offer the same need to replace human capacities by machine capacities as those which we found in the anti-aircraft computer. The parts of the machine must speak to one another through an appropriate language, without speaking to any person or listening to any person, except in the terminal and initial stages of the process. Here again we have an element which has contributed to the general acceptance of the extension to machines of the idea of communication. " (Weiner 1950: 151).

●John Brockman ed., Possible Minds: Twenty-Five Ways of Looking at AI. Penguin, 2019.

Science world luminary John Brockman assembles twenty-five of the most important scientific minds, people who have been thinking about the field of artificial intelligence for most of their careers, for an unparalleled round-table examination about mind, thinking, intelligence and what it means to be human. "Artificial intelligence is today's story--the story behind all other stories. It is the Second Coming and the Apocalypse at the same time: Good AI versus evil AI." --John Brockman More than sixty years ago, mathematician-philosopher Norbert Wiener published a book on the place of machines in society that ended with a warning: "we shall never receive the right answers to our questions unless we ask the right questions.... The hour is very late, and the choice of good and evil knocks at our door." In the wake of advances in unsupervised, self-improving machine learning, a small but influential community of thinkers is considering Wiener's words again. In Possible Minds, John Brockman gathers their disparate visions of where AI might be taking us. The fruit of the long history of Brockman's profound engagement with the most important scientific minds who have been thinking about AI--from Alison Gopnik and David Deutsch to Frank Wilczek and Stephen Wolfram--Possible Minds is an ideal introduction to the landscape of crucial issues AI presents. The collision between opposing perspectives is salutary and exhilarating; some of these figures, such as computer scientist Stuart Russell, Skype co-founder Jaan Tallinn, and physicist Max Tegmark, are deeply concerned with the threat of AI, including the existential one, while others, notably robotics entrepreneur Rodney Brooks, philosopher Daniel Dennett, and bestselling author Steven Pinker, have a very different view. Serious, searching and authoritative, Possible Minds lays out the intellectual landscape of one of the most important topics of our time.

Twenty-five essayists contributed essays related to artificial intelligence (AI) pioneer Norbert Wiener's 1950 book The Human Use of Human Beings, in which Wiener, fearing future machines built from vacuum tubes and capable of sophisticated logic, warned that "The hour is very late, and the choice of good and evil knocks at our door. We must cease to kiss the whip that lashes us."[1][2] Wiener stated that an AI "which can learn and can make decisions on the basis of its learning, will in no way be obliged to make such decisions as we should have made, or will be acceptable to us".[3] The essayists seek to address the question: What dangers might advanced AI present to humankind? Prominent essayists include Daniel Dennett, Alison Gopnik, Jaan Tallinn, and George Dyson.[4] Brockman interleaves his own intros and anecdotes between the contributors' essays.[5]

Multiple essayists state that artificial general intelligence is still two to four decades away. Most of the essayists advise proceeding with caution. Hypothetical dangers discussed include societal fragmentation, loss of human jobs, dominance of multinational corporations with powerful AI, or existential risk if superintelligent machines develop a drive for self-preservation.[1] Computer scientist W. Daniel Hillis states "Humans might be seen as minor annoyances, like ants at a picnic".[2] Some essayists argue that AI has already become an integral part of human culture; geneticist George M. Church suggests that modern humans are already "transhumans" when compared with humans in the stone age.[4] Many of the essays are influenced by past failures of AI. MIT's Neil Gershenfeld states "Discussions about artificial intelligence have been (manic-depressive): depending on how you count, we're now in the fifth boom-and-bust cycle." Brockman states "over the decades I rode with (the AI pioneers) on waves of enthusiasm, and into valleys of disappointment".[3] Many essayists emphasize the limitations of past and current AI; Church notes that 2011 Jeopardy! champion Watson required 85,000 watts of power, compared to a human brain which uses 20 watts.[5]

Kirkus stated readers who want to ponder the future impact of AI "will not find a better introduction than this book."[6] Publishers Weekly called the book "enlightening, entertaining, and exciting reading".[4] Future Perfect (Vox) noted the book[a] "makes for gripping reading, (and the book) can get perspectives from the preeminent voices of AI... but (the book) cannot make those people talk to each other."[3] Booklist stated the book includes "many rich ideas" to "savor and contemplate".[7] In Foreign Affairs, technology journalist Kenneth Cukier called the book "a fascinating map".[2]

Lloyd, Seth
Seth Lloyd (born August 2, 1960) is a professor of mechanical engineering and physics at the Massachusetts Institute of Technology. His research area is the interplay of information with complex systems, especially quantum systems. He has performed seminal work in the fields of quantum computation, quantum communication and quantum biology, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.[1]
セス・ロイド
Pearl, Judea
Judea Pearl (born September 4, 1936) is an Israeli-American computer scientist and philosopher, best known for championing the probabilistic approach to artificial intelligence and the development of Bayesian networks (see the article on belief propagation). He is also credited for developing a theory of causal and counterfactual inference based on structural models (see article on causality). In 2011, the Association for Computing Machinery (ACM) awarded Pearl with the Turing Award, the highest distinction in computer science, "for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning".[1][2][3][4]
ジューディア・パール
Russell, Stuart J.
Stuart Jonathan Russell (born 1962) is a computer scientist known for his contributions to artificial intelligence.[5][3] He is a Professor of Computer Science at the University of California, Berkeley and Adjunct Professor of Neurological Surgery at the University of California, San Francisco.[2][6] He holds the Smith-Zadeh Chair in Engineering at University of California, Berkeley.[7] He founded and leads the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley.[8] Russell is the co-author of the most popular textbook in the field of artificial intelligence: Artificial Intelligence: A Modern Approach used in more than 1,400 universities in 128 countries.[9]

Dyson, George
George Dyson (born 26 March 1953) is an American non-fiction author and historian of technology whose publications broadly cover the evolution of technology in relation to the physical environment and the direction of society.[1] He has written on a wide range of topics, including the history of computing, the development of algorithms and intelligence, communications systems, space exploration, and the design of watercraft.

Dennett, Daniel C.
Daniel Clement Dennett III (born March 28, 1942) is an American philosopher, writer, and cognitive scientist whose research centers on the philosophy of mind, philosophy of science, and philosophy of biology, particularly as those fields relate to evolutionary biology and cognitive science.[8] As of 2017, he is the co-director of the Center for Cognitive Studies and the Austin B. Fletcher Professor of Philosophy at Tufts University. Dennett is an atheist and secularist, a member of the Secular Coalition for America advisory board,[9] and a member of the Committee for Skeptical Inquiry, as well as an outspoken supporter of the Brights movement. Dennett is referred to as one of the "Four Horsemen of New Atheism", along with Richard Dawkins, Sam Harris, and the late Christopher Hitchens.[10] Dennett is a member of the editorial board for The Rutherford Journal.[11]

Brooks, Rodney
Rodney Allen Brooks (born 30 December 1954[1]) is an Australian roboticist, Fellow of the Australian Academy of Science, author, and robotics entrepreneur, most known for popularizing the actionist approach to robotics. He was a Panasonic Professor of Robotics at the Massachusetts Institute of Technology and former director of the MIT Computer Science and Artificial Intelligence Laboratory. He is a founder and former Chief Technical Officer of iRobot[2] and co-Founder, Chairman and Chief Technical Officer of Rethink Robotics (formerly Heartland Robotics). Outside the scientific community Brooks is also known for his appearance in a film featuring him and his work, Fast, Cheap & Out of Control.[3]

Wilczek, Frank
Frank Anthony Wilczek (/ˈwɪltʃɛk/;[2] born May 15, 1951) is an American theoretical physicist, mathematician and a Nobel laureate. He is currently the Herman Feshbach Professor of Physics at the Massachusetts Institute of Technology (MIT), Founding Director of T. D. Lee Institute and Chief Scientist at the Wilczek Quantum Center, Shanghai Jiao Tong University (SJTU), Distinguished Professor at Arizona State University (ASU) and full Professor at Stockholm University.[3] Wilczek, along with David Gross and H. David Politzer, was awarded the Nobel Prize in Physics in 2004 "for the discovery of asymptotic freedom in the theory of the strong interaction."[4]
フランク・ウィルチェック
Tegmark, Max
Max Erik Tegmark[1] (born 5 May 1967) is a Swedish-American physicist, cosmologist and machine learning researcher. He is a professor at the Massachusetts Institute of Technology and the scientific director of the Foundational Questions Institute. He is also a co-founder of the Future of Life Institute and a supporter of the effective altruism movement, and has received donations from Elon Musk to investigate existential risk from advanced artificial intelligence.[2][3][4][5]
マックス・テグマーク
Tallinn, Jaan
Jaan Tallinn (born 14 February 1972) is an Estonian computer programmer and investor[2] known for his participation in the development of Skype[3] in 2002 and FastTrack/Kazaa, a file-sharing application, in 2000.[4] Jaan Tallinn is partner and co-founder of the development company Bluemoon which created the game SkyRoads.[5]

Pinker, Steven
Steven Arthur Pinker (born September 18, 1954)[3][4] is a Canadian-American cognitive psychologist, linguist, and popular science author. He is an advocate of evolutionary psychology and the computational theory of mind.[5][6][7][8] Pinker's academic specializations are visual cognition and psycholinguistics. His experimental subjects include mental imagery, shape recognition, visual attention, children's language development, regular and irregular phenomena in language, the neural bases of words and grammar, and the psychology of cooperation and communication, including euphemism, innuendo, emotional expression, and common knowledge. He has written two technical books that proposed a general theory of language acquisition and applied it to children's learning of verbs. In particular, his work with Alan Prince published in 1989 critiqued the connectionist model of how children acquire the past tense of English verbs, arguing instead that children use default rules such as adding "-ed" to make regular forms, sometimes in error, but are obliged to learn irregular forms one by one.

Deutsch, David
David Elieser Deutsch FRS[6] (/dɔɪtʃ/; born 18 May 1953)[1] is a British physicist at the University of Oxford. He is a Visiting Professor in the Department of Atomic and Laser Physics at the Centre for Quantum Computation (CQC) in the Clarendon Laboratory of the University of Oxford. He pioneered the field of quantum computation by formulating a description for a quantum Turing machine, as well as specifying an algorithm designed to run on a quantum computer.[7] He has also proposed the use of entangled states and Bell's theorem for quantum key distribution[7] and is a proponent of the many-worlds interpretation of quantum mechanics.[8]

Griffiths, Tom
Tom Griffiths is the Henry R. Luce Professor of Information Technology, Consciousness, and Culture at Princeton University (→ "Manifesto for a new (computational) cognitive revolution").

Dragan, Anca
Anca Dragan is Assistant Professor in the EECS Department at UC Berkeley.

Anderson, Chris
Chris Anderson (born July 9, 1961)[2] is a British-American author and entrepreneur. He was with The Economist for seven years before joining WIRED magazine in 2001, where he was the editor-in-chief until 2012. He is known for his 2004 article entitled The Long Tail; which he later expanded into the 2006 book, The Long Tail: Why the Future of Business Is Selling Less of More.[3] He is the cofounder and current CEO of 3D Robotics, a drone manufacturing company.[4]

Kaiser, David
David I. Kaiser is an American physicist and historian of science. He is Germeshausen Professor of the History of Science at the Massachusetts Institute of Technology (MIT), head of its Science, Technology, and Society program, and a full professor in the department of physics.[1] Kaiser is the author or editor of several books on the history of science, including Drawing Theories Apart: The Dispersion of Feynman Diagrams in Postwar Physics (2005), and How the Hippies Saved Physics: Science, Counterculture, and the Quantum Revival (2011).[2] He was elected a Fellow of the American Physical Society in 2010.[1] In March 2012 he was awarded the MacVicar fellowship, a prestigious MIT undergraduate teaching award.[3]

Gershenfeld, Neil
Neil Adam Gershenfeld (born December 1, 1959) is an American professor at MIT and the director of MIT's Center for Bits and Atoms, a sister lab to the MIT Media Lab. His research is predominantly focused on interdisciplinary studies involving physics and computer science, in such fields as quantum computing, nanotechnology, and personal fabrication. Gershenfeld attended Swarthmore College, where he graduated in 1981 with a B.A. degree in physics with high honors, and Cornell University, where he earned his Ph.D. in physics in 1990.[1] He is a Fellow of the American Physical Society. Scientific American has named Gershenfeld one of their "Scientific American 50" for 2004 and has also named him Communications Research Leader of the Year.[2] Gershenfeld is also known for releasing the Great Invention Kit in 2008, a construction set that users can manipulate to create various objects.[3]

Hillis, W. Daniel
William Daniel "Danny" Hillis (born September 25, 1956) is an American inventor, entrepreneur, and scientist, who pioneered parallel computers and their use in artificial intelligence. He founded Thinking Machines Corporation, a parallel supercomputer manufacturer, and subsequently was a fellow at Walt Disney Imagineering. More recently, Hillis co-founded Applied Minds[1] and Applied Invention, an interdisciplinary group of engineers, scientists, and artists.[2] He is a visiting professor at the MIT Media Lab.[3]

Ramakrishnan, Venki
Venkatraman "Venki" Ramakrishnan (born 1952) is an Indian-born British-American structural biologist who is the current President of the Royal Society. In 2009, he shared the Nobel Prize in Chemistry with Thomas A. Steitz and Ada Yonath, "for studies of the structure and function of the ribosome".[3][8][9][10] He was elected President of the Royal Society for a term of five years starting in 2015.[11] Since 1999, he has worked as a group leader at the Medical Research Council (MRC) Laboratory of Molecular Biology (LMB) on the Cambridge Biomedical Campus, UK.[12][13][14][15][16]
ヴェンカトラマン・ラマクリシュナン
Pentland, Alex "Sandy"
Alex Paul "Sandy" Pentland (born 1951) is an American computer scientist, the Toshiba Professor at MIT, and serial entrepreneur.

Obrist, Hans Ulrich
Hans Ulrich Obrist (born 1968) is a Swiss art curator, critic and historian of art. He is artistic director at the Serpentine Galleries, London. Obrist is the author of The Interview Project, an extensive ongoing project of interviews. He is also co-editor of the Cahiers d'Art review.

Gopnik, Alison
Alison Gopnik (born June 16, 1955) is an American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley. She is known for her work in the areas of cognitive and language development, specializing in the effect of language on thought, the development of a theory of mind, and causal learning. Her writing on psychology and cognitive science has appeared in Science, Scientific American,[1] The Times Literary Supplement, The New York Review of Books, The New York Times, New Scientist, Slate and others.[2] Her body of work also includes four books and over 100 journal articles.

Galison, Peter
Peter Louis Galison (born May 17, 1955, New York) is an American historian and philosopher of science. He is the Joseph Pellegrino University Professor in history of science and physics at Harvard University.

Church, George M.
George McDonald Church (born 28 August 1954) is an American geneticist, molecular engineer, and chemist. He is the Robert Winthrop Professor of Genetics at Harvard Medical School, Professor of Health Sciences and Technology at Harvard and MIT, and a founding member of the Wyss Institute for Biologically Inspired Engineering.[3][2][7] As of March 2017, Church serves as a member of the Bulletin of the Atomic Scientists' Board of Sponsors.[8]

Jones, Caroline A.
Caroline A. Jones (born April 21, 1954) studies modern and contemporary art, with a particular focus on its technological modes of production, distribution, and reception. Trained in visual studies and art history at Harvard, she did graduate work at the Institute of Fine Arts in New York before completing her PhD at Stanford University in 1992.

Wolfram, Stephen
Stephen Wolfram (/ˈwʊlfrəm/; born 29 August 1959) is a British-American[6] computer scientist, physicist, and businessman. He is known for his work in computer science, mathematics, and in theoretical physics.[7][8] In 2012, he was named an inaugural fellow of the American Mathematical Society.[9] As a businessman, he is the founder and CEO of the software company Wolfram Research where he worked as chief designer of Mathematica and the Wolfram Alpha answer engine. His recent work has been on knowledge-based programming, expanding and refining the Wolfram Language, which is the programming language of the mathematical symbolic computation program Mathematica.

Brockman, John
John Brockman (born February 16, 1941) is a literary agent and author specializing in scientific literature. He established the Edge Foundation, an organization that brings together leading edge thinkers across a broad range of scientific and technical fields. Brockman was born to immigrants of Polish-Jewish descent in a poor Irish Catholic enclave of Boston, Massachusetts.[1] Expanding on C.P. Snow's "two cultures", he introduced the "third culture"[2] consisting of "those scientists and other thinkers in the empirical world who, through their work and expository writing, are taking the place of the traditional intellectual in rendering visible the deeper meanings of our lives, redefining who and what we are." He led a scientific salon for 20 years, asking an annual question to a host of renowned scientists and publishing their answers in book form,[3] which he decided to symbolically close down in 2018.[4] He is an editor of Edge.org.[5][6]


++++++++++++++++++++++++++++++

From Cybernetics, Chapter IX: "On Learning and Self-Reproducing Machines," p. 169 (Learning_Self-Reproducing_Machines.pdf, with password)

IX On Learning and Self-Reproducing Machines

Two of the phenomena which we consider to be characteristic of living systems are the power to learn and the power to reproduce themselves. These properties, different as they appear, are intimately related to one another. An animal that learns is one which is capable of being transformed by its past environment into a different being and is therefore adjustable to its environment within its individual lifetime. An animal that multiplies is able to create other animals in its own likeness at least approximately, although not so completely in its own likeness that they cannot vary in the course of time. If this variation is itself inheritable, we have the raw material on which natural selection can work. If the hereditary variability concerns manners of behavior, then among the varied patterns of behavior which are propagated some will be found advantageous to the continuing existence of the race and will establish themselves, while others which are detrimental to this continuing existence will be eliminated. The result is a certain sort of racial or phylogenetic learning, as contrasted with the ontogenetic learning of the individual. Both ontogenetic and phylogenetic learning are modes by which the animal can adjust itself to its environment.

Both ontogenetic and phylogenetic learning, and certainly the latter, extend themselves not only to all animals but to plants and, indeed, to all organisms which in any sense may be considered to be living. However, the degree to which these two forms of learning are found to be important in different sorts of living beings varies widely. In man, and to a lesser extent in the other mammals, ontogenetic learning and individual adaptability are raised to the highest point. Indeed, it may be said that a large part of the phylogenetic learning of man has been devoted to establishing the possibility of good ontogenetic learning.

It has been pointed out by Julian Huxley in his fundamental paper on the mind of birds[1] that birds have a small capacity for ontogenetic learning. Something similar is true in the case of insects, and in both instances it may be associated with the terrific demands made on the individual by flight and the consequential pre-emption of the capabilities of the nervous system which might otherwise be applied to ontogenetic learning. Complicated as the behavior patterns of birds are - in flying, in courtship, in the care of the young, and in nest building - they are carried out correctly the very first time without the need of any large amount of instruction from the mother.

[1] Huxley, J., Evolution: The Modern Synthesis, Harper Bros., New York, 1943.

It is altogether appropriate to devote a chapter of this book to these two related subjects. Can man-made machines learn, and can they reproduce themselves? We shall try to show in this chapter that in fact they can learn and can reproduce themselves, and we shall give an account of the technique needed for both these activities.

The simpler of these two processes is that of learning, and it is there that the technical development has gone furthest. I shall talk here particularly of the learning of game-playing machines which enables them to improve the strategy and tactics of their performance by experience.

There is an established theory of the playing of games - the von Neumann theory.[2] It concerns a policy which is best considered by working from the end of the game rather than from the beginning. In the last move of the game, a player strives to make a winning move if possible, and if not, then at least a drawing move. His opponent, at the previous stage, strives to make a move which will prevent the other player from making a winning or a drawing move. If he can himself make a winning move at that stage, he will do so, and this will not be the next to the last but the last stage of the game. The other player at the move before this will try to act in such a way that the very best resources of his opponent will not prevent him from ending with a winning move, and so on backward.

[2] von Neumann, J., and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, Princeton, N.J., 1944.
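Read in modern terms, this backward-working policy is minimax search by backward induction. A minimal Python sketch, assuming hypothetical helpers moves(state, player) and terminal_value(state) that would come from a concrete game; they are stand-ins for illustration, not part of Wiener's text.

# Sketch of von Neumann-style backward induction for a finite two-player,
# zero-sum game of perfect information. terminal_value returns +1 (player 1
# wins), 0 (draw), -1 (player 1 loses), or None if the game is not yet over.
def best_value(state, player, moves, terminal_value):
    """Value of `state` to player 1 when both sides play perfectly."""
    outcome = terminal_value(state)
    if outcome is not None:        # the game is over: the value is known
        return outcome
    values = [best_value(nxt, -player, moves, terminal_value)
              for nxt in moves(state, player)]
    # Player 1 chooses the best value for himself, player -1 the worst for player 1.
    return max(values) if player == 1 else min(values)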

There are games such as ticktacktoe where the entire strategy is known, and it is possible to start this policy from the very beginning. When this is feasible, it is manifestly the best way of playing the game. However, in many games like chess and checkers our knowledge is not sufficient to permit a complete strategy of this sort, and then we can only approximate to it. The von Neumann type of approximate theory tends to lead a player to act with the utmost caution, assuming that his opponent is the perfectly wise sort of a master.

This attitude, however, is not always justified. In war, which is a sort of game, this will in general lead to an indecisive action which will often be not much better than a defeat. Let me give two historical examples. When Napoleon fought the Austrians in Italy, it was part of his effectiveness that he knew the Austrian mode of military thought to be hidebound and traditional, so that he was quite justified in assuming that they were incapable of taking advantage of the new decision-compelling methods of war which had been developed by the soldiers of the French Revolution. When Nelson fought the combined fleets of continental Europe, he had the advantage of fighting with a naval machine which had kept the seas for years and which had developed methods of thought of which, as he was well aware, his enemies were incapable. If he had not made the fullest possible use of this advantage, instead of acting as cautiously as he would have had to act under the supposition that he was facing an enemy of equal naval experience, he might have won in the long run but could not have won so quickly and decisively as to establish the tight naval blockade which was the ultimate downfall of Napoleon. Thus, in both cases, the guiding factor was the known record of the commander and of his opponents, as exhibited statistically in the past of their actions, rather than an attempt to play the perfect game against the perfect opponent. Any direct use of the von Neumann method of game theory in these cases would have proved futile.

In a similar way, books on chess theory are not written from the von Neumann point of view. They are compendia of principles drawn from the practical experience of chess players playing against other chess players of high quality and wide knowledge; and they establish certain values or weightings to be given to the loss of each piece, to mobility, to command, to development, and to other factors which may vary with the stage of the game.

It is not very difficult to make machines which will play chess of a sort. The mere obedience to the laws of the game, so that only legal moves are made, is easily within the power of quite simple computing machines. Indeed, it is not hard to adapt an ordinary digital machine to these purposes.

Now comes the question of policy within the rules of the game. Every evaluation of pieces, command, mobility, and so forth, is intrinsically capable of being reduced to numerical terms; and when this is done, the maxims of a chess book may be used for the determination of the best moves of each stage. Such machines have been made; and they will play a very fair amateur chess, although at present not a game of master caliber.
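As a rough illustration of reducing such maxims to numerical terms, the machine's valuation can be pictured as a weighted sum of features of the position. The feature names, weights, and helper functions below are hypothetical stand-ins of mine, not anything specified in the text.

# Hypothetical static evaluation: a weighted sum of numerical features.
WEIGHTS = {"material": 1.0, "mobility": 0.10, "command": 0.08, "development": 0.05}

def evaluate(position, features, weights=WEIGHTS):
    """Score a position as a weighted sum of its features, from the machine's side."""
    return sum(weights[name] * value for name, value in features(position).items())

def choose_move(position, legal_moves, apply_move, features):
    """Play the legal move whose resulting position scores highest."""
    return max(legal_moves(position),
               key=lambda move: evaluate(apply_move(position, move), features))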

Imagine yourself in the position of playing chess against such a machine. To make the situation fair, let us suppose you are playing correspondence chess without the knowledge that it is such a machine you are playing and without the prejudices that this knowledge may excite. Naturally, as always is the case with chess, you will come to a judgment of your opponent's chess personality. You will find that when the same situation comes up twice on the chessboard, your opponent's reaction will be the same each time, and you will find that he has a very rigid personality. If any trick of yours will work, then it will always work under the same conditions. It is thus not too hard for an expert to get a line on his machine opponent and to defeat him every time.

However, there are machines that cannot be defeated so trivially. Let us suppose that every few games the machine takes time off and uses its facilities for another purpose. This time, it does not play against an opponent, but examines all the previous games which it has recorded on its memory to determine what weighting of the different evaluations of the worth of pieces, command, mobility, and the like, will conduce most to winning. In this way, it learns not only from its own failures but its opponent's successes. It now replaces its earlier valuations by the new ones and goes on playing as a new and better machine. Such a machine would no longer have as rigid a personality, and the tricks which were once successful against it will ultimately fail. More than that, it may absorb in the course of time something of the policy of its opponents.
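The machine's periodic "time off" for re-weighting can be sketched as refitting the feature weights against the outcomes of its recorded games. The least-squares fit and the NumPy call below are illustrative choices of mine, not details given in the text nor the method of the historical programs.

import numpy as np

def refit_weights(recorded_positions, game_outcomes, feature_vector):
    """Re-estimate evaluation weights from positions stored in the machine's memory.

    recorded_positions: positions that occurred in the machine's past games
    game_outcomes:      result (+1 win, 0 draw, -1 loss) of the game each position came from
    feature_vector:     function mapping a position to a fixed-length list of feature values
    """
    X = np.array([feature_vector(p) for p in recorded_positions])
    y = np.array(game_outcomes, dtype=float)
    weights, *_ = np.linalg.lstsq(X, y, rcond=None)   # weighting that best explains past results
    return weights                                    # replaces the machine's earlier valuation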

All this is very difficult to do in chess, and as a matter of fact the full development of this technique, so as to give rise to a machine that can play master chess, has not been accomplished. Checkers offers an easier problem. The homogeneity of the values of the pieces greatly reduces the number of combinations to be considered. Moreover, partly as a consequence of this homogeneity, the checker game is much less divided into distinct stages than the chess game. Even in checkers, the main problem of the end game is no longer to take pieces but to establish contact with the enemy so that one is in a position to take pieces. Similarly, the valuation of moves in the chess game must be made independently for the different stages. Not only is the end game different from the middle game in the considerations which are paramount, but the openings are much more devoted to getting the pieces into a position of free mobility for attack and defense than is the middle game. The result is that we cannot be even approximately content with a uniform evaluation of the various weighting factors for the game as a whole, but must divide the learning process into a number of separate stages. Only then can we hope to construct a learning machine which can play master chess.

The idea of a first-order programming, which may be linear in certain cases, combined with a second-order programming, which uses a much more extensive segment of the past for the determination of the policy to be carried out in the first-order programming, has been mentioned earlier in this book in connection with the problem of prediction. The predictor uses the immediate past of the flight of the airplane as a tool for the prediction of the future by means of a linear operation; but the determination of the correct linear operation is a statistical problem in which the long past of the flight and the past of many similar flights are used to give the basis of the statistics.

The statistical studies necessary to use a long past for a determination of the policy to be adopted in view of the short past are highly non-linear. As a matter of fact, in the use of the Wiener-Hopf equation for prediction,[3] the determination of the coefficients of this equation is carried out in a non-linear manner. In general, a learning machine operates by non-linear feedback. The checker-playing machine described by Samuel[4] and Watanabe[5] can learn to defeat the man that programmed it in a fairly consistent way on the basis of from 10 to 20 operating hours of programming.

[3] Wiener, N., Extrapolation, Interpolation, and Smoothing of Stationary Time Series with Engineering Applications, The Technology Press of M.I.T. and John Wiley & Sons, New York, 1949.
[4] Samuel, A. L., "Some Studies in Machine Learning, Using the Game of Checkers," IBM Journal of Research and Development, 3, 210-229 (1959).
[5] Watanabe, S., "Information Theoretical Analysis of Multivariate Correlation," IBM Journal of Research and Development, 4, 66-82 (1960).
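A discrete-time illustration of the two-level arrangement described above, under stated assumptions: the first-order operation is a fixed linear combination of the immediate past, and the second-order step chooses its coefficients statistically from a long record. The least-squares solution below merely stands in for the determination of the Wiener-Hopf coefficients; the record x and the number of lags are assumed inputs.

import numpy as np

def fit_predictor(x, n_lags):
    """Second-order step: choose linear predictor coefficients from a long past record."""
    X = np.array([x[t - n_lags:t] for t in range(n_lags, len(x))])   # windows of the past
    y = np.array(x[n_lags:])                                         # the value that followed each window
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)                   # least-squares normal equations
    return coeffs

def predict_next(recent_past, coeffs):
    """First-order step: a fixed linear operation on the immediate past (oldest value first)."""
    return float(np.dot(coeffs, recent_past))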

Watanabe's philosophical ideas on the use of programming machines are very exciting. On the one hand, he is treating a method of proving an elementary geometrical theorem which shall conform in an optimal way according to certain criteria of elegance and simplicity, as a learning game to be played not against an individual opponent but against what we may call "Colonel Bogey." A similar game which Watanabe is studying is played in logical induction, when we wish to build up a theory which is optimal in a similar quasi-aesthetic way, on the basis of an evaluation of economy, directness, and the like, by the determination of the evaluation of a finite number of parameters which are left free. This, it is true, is only a limited logical induction, but it is well worth studying.

Many forms of the activity of struggle, which we do not ordinarily consider as games, have a great deal of light thrown on them by the theory of game-playing machines. One interesting example is the fight between a mongoose and a snake. As Kipling points out in "Rikki-Tikki-Tavi," the mongoose is not immune to the poison of the cobra, although it is to some extent protected by its coat of stiff hairs which makes it difficult for the snake to bite home. As Kipling states, the fight is a dance with death, a struggle of muscular skill and agility. There is no reason to suppose that the individual motions of the mongoose are faster or more accurate than those of the cobra. Yet the mongoose almost invariably kills the cobra and comes out of the contest unscathed. How is it able to do this?

I am here giving an account which appears valid to me, from having seen such a fight, as well as motion pictures of other such fights. I do not guarantee the correctness of my observations as interpretations. The mongoose begins with a feint, which provokes the snake to strike. The mongoose dodges and makes another such feint, so that we have a rhythmical pattern of activity on the part of the two animals. However, this dance is not static but develops progressively. As it goes on, the feints of the mongoose come earlier and earlier in phase with respect to the darts of the cobra, until finally the mongoose attacks when the cobra is extended and not in a position to move rapidly. This time the mongoose's attack is not a feint but a deadly accurate bite through the cobra's brain.

In other words, the snake's pattern of action is confined to single darts, each one for itself, while the pattern of the mongoose's action involves an appreciable, if not very long, segment of the whole past of the fight. To this extent the mongoose acts like a learning machine, and the real deadliness of its attack is dependent on a much more highly organized nervous system.

As a Walt Disney movie of several years ago showed, something very similar happens when one of our western birds, the road runner, attacks a rattlesnake. While the bird fights with beak and claws, and a mongoose with its teeth, the pattern of activity is very similar. A bullfight is a very fine example of the same thing. For it must be remembered that the bullfight is not a sport but a dance with death, to exhibit the beauty and the interlaced coordinating actions of the bull and the man. Fairness to the bull has no part in it, and we can leave out from our point of view the preliminary goading and weakening of the bull, which have the purpose of bringing the contest to a level where the interaction of the patterns of the two participants is most highly developed. The skilled bullfighter has a large repertory of possible actions, such as the flaunting of the cape, various dodges and pirouettes, and the like, which are intended to bring the bull into a position in which it has completed its rush and is extended at the precise moment that the bullfighter is ready to plunge the estoque into the bull's heart.

What I have said concerning the fight between the mongoose and the cobra, or the toreador and the bull, will also apply to physical contests between man and man. Consider a duel with the smallsword. It consists of a sequence of feints, parries, and thrusts, with the intention on the part of each participant to bring his opponent's sword out of line to such an extent that he can thrust home without laying himself open to a double encounter. Again, in a championship game of tennis, it is not enough to serve or return the ball perfectly as far as each individual stroke is considered; the strategy is rather to force the opponent into a series of returns which put him progressively in a worse position until there is no way in which he can return the ball safely.

These physical contests and the sort of games which we have supposed the game-playing machine to play both have the same element of learning in terms of experience of the opponent's habits as well as one's own. What is true of games of physical encounter is also true of contests in which the intellectual element is stronger, such as war and the games which simulate war, by which our staff officers win the elements of their military experience. This is true for classical war both on land and at sea, and is equally true with the new and as yet untried war with atomic weapons. Some degree of mechanization, parallel to the mechanization of checkers by learning machines, is possible in all these.

There is nothing more dangerous to contemplate than World War III. It is worth considering whether part of the danger may not be intrinsic in the unguarded use of learning machines. Again and again I have heard the statement that learning machines cannot subject us to any new dangers, because we can turn them off when we feel like it. But can we? To turn a machine off effectively, we must be in possession of information as to whether the danger point has come. The mere fact that we have made the machine does not guarantee that we shall have the proper information to do this. This is already implicit in the statement that the checker-playing machine can defeat the man who has programmed it, and this after a very limited time of working in. Moreover, the very speed of operation of modern digital machines stands in the way of our ability to perceive and think through the indications of danger.

The idea of non-human devices of great power and great ability to carry through a policy, and of their dangers, is nothing new. All that is new is that now we possess effective devices of this kind. In the past, similar possibilities were postulated for the techniques of magic, which forms the theme for so many legends and folk tales. These tales have thoroughly explored the moral situation of the magician. I have already discussed some aspects of the legendary ethics of magic in an earlier book entitled The Human Use of Human Beings.[6] I here repeat some of the material which I have discussed there, in order to bring it out more precisely in its new context of learning machines.

[6] Wiener, N., The Human Use of Human Beings: Cybernetics and Society, Houghton Mifflin Company, Boston, 1950.

One of the best-known tales of magic is Goethe's "The Sorcerer's Apprentice." In this, the sorcerer leaves his apprentice and factotum alone with the chore of fetching water. As the boy is lazy and ingenious, he passes the work over to a broom, to which he has uttered the words of magic which he has heard from his master. The broom obligingly does the work for him and will not stop. The boy is on the verge of being drowned out. He finds that he has not learned, or has forgotten, the second incantation which is to stop the broom. In desperation, he takes the broomstick, breaks it over his knee, and finds to his consternation that each half of the broom continues to fetch water. Luckily, before he is completely destroyed, the master returns, says the Words of Power to stop the broom, and administers a good scolding to the apprentice.

Another story is the Arabian Nights tale of the fisherman and the genie. The fisherman has dredged up in his net a jug closed with the seal of Solomon. It is one of the vessels in which Solomon has imprisoned the rebellious genie. The genie emerges in a cloud of smoke, and the gigantic figure tells the fisherman that, whereas in his first years of imprisonment he had resolved to reward his rescuer with power and fortune, he has now decided to slay him out of hand. Luckily for himself, the fisherman finds a way to talk the genie back into the bottle, upon which he casts the jar to the bottom of the ocean.

More terrible than either of these two tales is the fable of the monkey's paw, written by W. W. Jacobs, an English writer of the beginning of the century. A retired English workingman is sitting at his table with his wife and a friend, a returned British sergeant-major from India. The sergeant-major shows his hosts an amulet in the form of a dried, wizened monkey's paw. This has been endowed by an Indian holy man, who has wished to show the folly of defying fate, with the power of granting three wishes to each of three people. The soldier says that he knows nothing of the first two wishes of the first owner, but the last one was for death. He himself, as he tells his friends, was the second owner but will not talk of the horror of his own experiences. He casts the paw into the fire, but his friend retrieves it and wishes to test its powers. His first wish is for £200. Shortly thereafter there is a knock at the door, and an official of the company by which his son is employed enters the room. The father learns that his son has been killed in the machinery, but that the company, without recognizing any responsibility or legal obligation, wishes to pay the father the sum of £200 as a solatium. The grief-stricken father makes his second wish - that his son may return - and when there is another knock at the door and it is opened, something appears which, we are not told in so many words, is the ghost of the son. The final wish is that this ghost should go away.

In all these stories the point is that the agencies of magic are literal-minded; and that if we ask for a boon from them, we must ask for what we really want and not for what we think we want. The new and real agencies of the learning machine are also literal-minded. If we program a machine for winning a war, we must think well what we mean by winning. A learning machine must be programmed by experience. The only experience of a nuclear war which is not immediately catastrophic is the experience of a war game. If we are to use this experience as a guide for our procedure in a real emergency, the values of winning which we have employed in the programming games must be the same values which we hold at heart in the actual outcome of a war. We can fail in this only at our immediate, utter, and irretrievable peril. We cannot expect the machine to follow us in those prejudices and emotional compromises by which we enable ourselves to call destruction by the name of victory. If we ask for victory and do not know what we mean by it, we shall find the ghost knocking at our door.

So much for learning machines. Now let me say a word or two about self-propagating machines. Here both the words machine and self-propagating are important. The machine is not only a form of matter, but an agency for accomplishing certain definite purposes. And self-propagation is not merely the creation of a tangible replica; it is the creation of a replica capable of the same functions.

Here, two different points of view come into evidence. One of these is purely combinatorial and concerns the question whether a machine can have enough parts and sufficiently complicated structure to enable self-reproduction to be among its functions. This question has been answered in the affirmative by the late John von Neumann. The other question concerns an actual operative procedure for building self-reproducing machines. Here I shall confine my attentions to a class of machines which, while it does not embrace all machines, is of great generality. I refer to the non-linear transducer.

Such machines are apparatuses which have as an input a single function of time and which have as their output another function of time. The output is completely determined by the past of the input; but in general, the adding of inputs does not add the corresponding outputs. Such pieces of apparatus are known as transducers. One property of all transducers, linear or non-linear, is an invariance with respect to a translation in time. If a machine performs a certain function, then, if the input is shifted back in time, the output is shifted back by the same amount.
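In symbols (notation of my own, not in the original), writing the transducer as an operator F acting on the past of its input:

y(t) = F\bigl[\,x(s),\; s \le t\,\bigr], \qquad F\bigl[\,x(\cdot - \tau)\,\bigr](t) = y(t - \tau) \quad \text{for every shift } \tau .

Linearity would further require F[x_1 + x_2] = F[x_1] + F[x_2], which is exactly the property a non-linear transducer need not have.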

Basic to our theory of self-reproducing machines is a canonical form of the representation of non-linear transducers. Here the notions of impedance and admittance, which are so essential in the theory of linear apparatus, are not fully appropriate. We shall have to refer to certain newer methods of carrying out this representation, methods developed partly by me[7] and partly by Professor Dennis Gabor[8] of the University of London.

[7] Wiener, N., Nonlinear Problems in Random Theory, The Technology Press of M.I.T. and John Wiley & Sons, Inc., New York, 1958.
[8] Gabor, D., "Electronic Inventions and Their Impact on Civilization," Inaugural Lecture, March 3, 1959, Imperial College of Science and Technology, University of London, England.

While both Professor Gabor's methods and my own lead to the construction of non-linear transducers, they are linear to the extent that the non-linear transducer is represented with an output which is the sum of the outputs of a set of non-linear transducers with the same input. These outputs are combined with varying linear coefficients. This allows us to employ the theory of linear developments in the design and specification of the non-linear transducer. And in particular, this method allows us to obtain coefficients of the constituent elements by a least-square process. If we join this to a method of statistically averaging over the set of all inputs to our apparatus, we have essentially a branch of the theory of orthogonal development. Such a statistical basis of the theory of non-linear transducers can be obtained from an actual study of the past statistics of the inputs used in each particular case.

This is a rough account of Professor Gabor's methods. While mine are essentially similar, the statistical basis for my work is slightly different.

It is well known that electrical currents are not conducted continuously but by a stream of electrons which must have statistical variations from uniformity. These statistical fluctuations can be represented fairly by the theory of the Brownian motion, or by the similar theory of shot effect or tube noise, about which I am going to say something in the next chapter. At any rate, apparatus can be made to generate a standardized shot effect with highly specific statistical distribution, and such apparatus is being manufactured commercially. Note that tube noise is in a sense a universal input in that its fluctuations over a sufficiently long time will sooner or later approximate to any given curve. This tube noise possesses a very simple theory of integration and averaging.

In terms of the statistics of tube noise, we can easily determine a closed set of normal and orthogonal non-linear operations. If the inputs subject to these operations have the statistical distribution appropriate to tube noise, the average product of the output of two component pieces of our apparatus, where this average is taken with respect to the statistical distribution of tube noise, will be zero. Moreover, the mean square output of each apparatus can be normalized to one. The result is that the development of the general non-linear transducer in terms of these components results from an application of the familiar theory of orthonormal functions.

To be specific, our individual pieces of apparatus give outputs which are products of Hermite polynomials in the Laguerre coefficients of the past of the input. This is presented in detail in my Nonlinear Problems in Random Theory.
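Schematically, in notation not used in the original: the past of the input is first summarized by Laguerre coefficients, and the orthonormal operations are products of Hermite polynomials in those coefficients,

u_k(t) = \int_0^{\infty} \ell_k(\tau)\, x(t-\tau)\, d\tau , \qquad y(t) \;\approx\; \sum_{\alpha} c_{\alpha} \prod_{k} H_{\alpha_k}\!\bigl(u_k(t)\bigr) ,

where the \ell_k are Laguerre functions, the H_n are Hermite polynomials, and the products are orthonormal when x is the Gaussian shot-effect input, so that each coefficient c_\alpha can be obtained by an averaging operation of the kind described below.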

It is of course difficult to average in the first instance over a set of possible inputs. What makes this difficult task realizable is that the shot-effect inputs possess the property known as metric transitivity, or the ergodic property. Any integrable function of the parameter of distribution of shot-effect inputs has in almost every instance a time average equal to its average over the ensemble. This permits us to take two pieces of apparatus with a common shot-effect input, and to determine the average of their product over the entire ensemble of the possible inputs, by taking their product and averaging it over the time. The repertory of operations needed for all these processes involves nothing more than the addition of potentials, the multiplication of potentials, and the operation of averaging over time. Devices exist for all these. As a matter of fact, the elementary devices needed in Professor Gabor's methodology are the same as those needed in mine. One of his students has invented a particularly effective and inexpensive multiplying device depending on the piezoelectric effect on a crystal of the attraction of two magnetic coils.

What this amounts to is that we can imitate any unknown non-linear transducer by a sum of linear terms, each of fixed characteristics and with an adjustable coefficient. This coefficient can be determined as the average product of the outputs of the unknown transducer and a particular known transducer, when the same shot-effect generator is connected to the input of both. What is more, instead of computing this result on the scale of an instrument and then transferring it by hand to the appropriate transducer, thus producing a piecemeal simulation of the apparatus, there is no particular problem in automatically effecting the transfer of the coefficients to the pieces of feedback apparatus. What we have succeeded in doing is to make a white box which can potentially assume the characteristics of any non-linear transducer whatever, and then to draw it into the similitude of a given black-box transducer by subjecting the two to the same random input and connecting the outputs of the structures in the proper manner, so as to arrive at the suitable combination without any intervention on our part.
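A toy discrete-time sketch of this identification scheme, under stated assumptions: drive the unknown transducer and each known component with the same white-noise record and take the time average of the product of their outputs as that component's coefficient. The function names, the Gaussian record standing in for tube noise, and the finite record length are all illustrative.

import numpy as np

def estimate_coefficients(unknown, components, n_samples=100_000, seed=0):
    """Coefficients of an unknown transducer with respect to known orthonormal components.

    unknown:    maps an input record (1-D array) to an output record of the same length
    components: known transducers, orthonormal with respect to the noise input,
                with the same calling convention
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)       # stand-in for the shot-effect (tube-noise) input
    y = unknown(x)
    # Ergodicity: the time average of the product approximates the ensemble average.
    return [float(np.mean(y * component(x))) for component in components]

def white_box(components, coeffs):
    """The adjustable 'white box' set to the measured coefficients: a fixed sum of components."""
    return lambda x: sum(c * component(x) for c, component in zip(coeffs, components))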

I ask if this is philosophically very different from what is done when a gene acts as a template to form other molecules of the same gene from an indeterminate mixture of amino and nucleic acids, or when a virus guides into its own form other molecules of the same virus out of the tissues and juices of its host. I do not in the least claim that the details of these processes are the same, but I do claim that they are philosophically very similar phenomena.





++

●Steve Joshua Heims, The Cybernetics Group. MIT Press, 1991.

+++

Links

References

Other information


Copyleft, CC, Mitzub'ixi Quq Chi'j, 1996-2099

Do not paste, but [re]think this message for all undergraduate students!!!