|Date||May 24th, 2017 (Wed) 17:50–19:30|
|Panelists||Yutaka Matsuo (University of Tokyo), Toyoaki Nishida (Kyoto University), Koichi Hori (University of Tokyo), Hideaki Takeda (NII), Takashi Hase (SF Writer), Makoto Shiono (IGPI), Hiromitsu Hattori (Ritsumeikan University), Hiroshi Yamakawa (dwango), Satoshi Kurihara (The University of Electro-Communications), Danit Gal (IEEE, Peking University, Tsinghua University, Tencent)|
|Moderators||Arisa Ema (The University of Tokyo), Katsue Nagakura (Science Writer)|
On May 24th, 2017, the Japanese Society for Artificial Intelligence (JSAI) held an open discussion. Almost half of the attendees were JSAI members, and the rest were from the general public.
The JSAI released its “Ethical Guidelines” in February 2017. Many other documents on artificial intelligence and ethics/society are being published abroad. For example, the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems released “Ethically Aligned Design version 1” in December 2016, and the Future of Life Institute released the “Asilomar AI Principles” in January 2017. Both initiatives also discuss autonomous weapons systems.
The open discussion therefore began with an overview of discussions on artificial intelligence and ethics/society in Japan and abroad. For the panel session, Danit Gal, chair of the Outreach Committee of the IEEE Global Initiative, was invited to discuss and confirm future cooperation with the JSAI.
●Talk 1 “AI Principles and Ethics in Japan and Abroad” by Arisa Ema
●Talk 2 “Introduction of the JSAI Ethical Guidelines” by Yutaka Matsuo
●Talk 3 “Introduction of ‘Ethically Aligned Design version 1’” by John C. Havens (IEEE) * Video message
●Talk 4 “Introduction of ‘Asilomar AI Principles’” by Richard Mallah (FLI) * Video message
Talk 1 “AI Principles and Ethics in Japan and Abroad” by Arisa Ema
First, I would like to introduce the basic idea that technology and society interact. Some say that AI and robots will solve our social problems, while others say that AI and robots will create new social problems. Both are true, because technology and society are mutually related. Therefore, we need to consider what kind of society we want to live in, and how to design society and technologies to that end.
I would like to roughly categorize “ethics” into three areas: research ethics, AI ethics, and ethical AI. Research ethics covers topics such as the social responsibility of researchers and scientific misconduct, which are not specific to AI research.
AI ethics can be divided into two parts. The first is AI principles and governance, which dictate how to develop AI. The second is the social, legal, economic, and other implications of AI. To consider AI ethics, many stakeholders, including engineers, policy makers, economists, sociologists, and ethicists, should engage in discussion.
Ethical AI, on the other hand, means AI that acts in an ethical or moral manner. In behaving this way, AI will reframe concepts like rights and autonomy, and help define man-machine relationships.
This categorization is analogous to neuroethics, which has two aspects: the ethics of neuroscience and the neuroscience of ethics.
Next, I will introduce documents about AI and society/ethics. This is a partial list of documents released within the last six months. Governments, academia, NPOs, and many other stakeholders are publishing these kinds of documents.
I will introduce some of them. For example, the Conference toward AI Network Society, held by the Japanese Ministry of Internal Affairs and Communications (MIC), discussed AI R&D guidelines and AI usage guidelines.
This is the final report published by the Cabinet Office, Government of Japan in March 2017. It focuses on responding to public anxiety towards AI. The issues it addresses are organized by topic and case.
Also, the IEEE Global Initiative, which today’s guest Danit Gal represents, released “Ethically Aligned Design version 1” in December 2016. It discusses eight topics across 138 pages. The Initiative accepted feedback from around the world, and an updated version will be released in the autumn of this year.
Furthermore, the Japanese Society for Artificial Intelligence (JSAI) released its “Ethical Guidelines,” consisting of nine articles, in February 2017. These have the unique principle of “Abidance of ethics guidelines by AI,” and the chair of the committee will explain its intention shortly.
Let me organize these documents according to the previously mentioned three categories.
The JSAI “Ethical Guidelines” are mostly focused on research(ers’) ethics, such as the social responsibility of researchers. They list codes of conduct to be referred to by academia and engineers. The MIC, however, differentiates between AI R&D guidelines to be considered by engineers and social implications relevant to the public. The Cabinet Office is focused on AI’s social impact in response to public anxiety. The FLI and IEEE deal broadly with AI ethics. Their expected stakeholders are not only engineers but also others such as funding agencies and lawyers. They also mention ethical AI, such as Artificial General Intelligence.
The lower you move in the table, the more stakeholders beyond engineers need to be included in the discussion.
In fact, every document emphasizes interdisciplinary and international collaboration and communication. We should therefore consider not only the present but also the past and the future; not only humans but also the relationships between machines and the environment.
In this open discussion, we would like to discuss the role of the Ethics Committee of the JSAI.
Talk 2 “Introduction of the JSAI Ethical Guidelines” by Yutaka Matsuo
I would like to talk about how the guidelines were created. The Ethics Committee was established in 2014.
We carried out discussions on “what is AI ethics,” but it was difficult to come to a uniform conclusion. Even reaching a consensus on “what is AI” was difficult. Therefore, we agreed to create a research project on ethics that would become the basis of our discussion.
We released our “Code of Ethics” at last year’s annual conference. Based upon feedback from the editorial board and the Board of Directors of the JSAI, we reformulated it into the “Ethical Guidelines” as an updated version and published it in February this year.
The “Ethical Guidelines” express what is obvious. The preamble says:
Artificial Intelligence (“AI”) research focuses on the realization of AI, which is the enabling of computers to possess intelligence and become capable of learning and acting autonomously. AI will assume a significant role in the future of mankind in a wide range of areas, such as Industry, Medicine, Education, Culture, Economics, Politics, Government, etc. However, it is undeniable that AI technologies can become detrimental to human society, or conflict with public interests due to abuse or misuse. (…) AI researchers must act ethically and in accordance with their own conscience and acumen.
The general public may be concerned with what artificial intelligence researchers would do with the technology. But we are not mad scientists. We wanted to express our good intentions in that we are genuinely considering the benefit of human beings and society.
The articles cover contribution to humanity, abidance by laws and regulations, respect for the privacy of others, fairness, security, acting with integrity, accountability and social responsibility, communication with society, and continuous self-development.
The last article, Article 9, is the abidance of AI to the ethical guidelines, which Arisa categorized as “ethical AI.” It says that “AI must abide by the policies described above in the same manner as the members of the JSAI in order to become a member or a quasi-member of society.” Such technology does not yet exist, but when we create high-level AI, it should abide by our “Ethical Guidelines.” This emphasizes the reflexive nature of the Guidelines.
Talk 3 “Introduction of ‘Ethically Aligned Design version 1’” by John C. Havens (IEEE) * Video message
John C. Havens, Executive Director, IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Talk 4 “Introduction of ‘Asilomar AI Principles’” by Richard Mallah (FLI) * Video message
Richard Mallah, Director of AI Projects, Future of Life Institute
Panel Discussion Pictures