
Theoretical Problems of Artificial Intelligence

Artificial intelligence, AI

Commentary: 池田光穂

Convolutional neural network
"In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep neural networks, most commonly applied to analyzing visual imagery.[1] They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics.[2][3] They have applications in image and video recognition, recommender systems,[4] image classification, medical image analysis, natural language processing,[5] and financial time series.[6]" - convolutional neural network (CNN, or ConvNet)
Deep learning
"Deep learning (also known as deep structured learning) is part of a broader family of machine learning methods based on artificial neural networks with representation learning. Learning can be supervised, semi-supervised or unsupervised.[1][2][3]"- https://en.wikipedia.org/wiki/Deep_learning

Recurrent neural network
"A recurrent neural network (RNN) is a class of artificial neural networks where connections between nodes form a directed graph along a temporal sequence. This allows it to exhibit temporal dynamic behavior. Derived from feedforward neural networks, RNNs can use their internal state (memory) to process variable length sequences of inputs.[1] This makes them applicable to tasks such as unsegmented, connected handwriting recognition[2] or speech recognition.[3][4] "- Recurrent neural network.
Machine learning
"Machine learning (ML) is the study of computer algorithms that can improve automatically through experience and by the use of data.[1] It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to do so.[2] Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.[3]" -Machine learning.

Pareto efficiency
"Pareto efficiency or Pareto optimality is a situation where no individual or preference criterion can be better off without making at least one individual or preference criterion worse off or without any loss thereof. The concept is named after Vilfredo Pareto (1848–1923), Italian civil engineer and economist, who used the concept in his studies of economic efficiency and income distribution. The following three concepts are closely related: ●Given an initial situation, a Pareto improvement is a new situation where some agents will gain, and no agents will lose. ●●A situation is called Pareto dominated if there exists a possible Pareto improvement. ●●●A situation is called Pareto optimal or Pareto efficient if no change could lead to improved satisfaction for some agent without some other agent losing or if there is no scope for further Pareto improvement. The Pareto frontier is the set of all Pareto efficient allocations, conventionally shown graphically. It also is variously known as the Pareto front or Pareto set.[1]"-Pareto efficiency.

Federated learning
"Federated learning (also known as collaborative learning) is a machine learning technique that trains an algorithm across multiple decentralized edge devices or servers holding local data samples, without exchanging them. This approach stands in contrast to traditional centralized machine learning techniques where all the local datasets are uploaded to one server, as well as to more classical decentralized approaches which often assume that local data samples are identically distributed. Federated learning enables multiple actors to build a common, robust machine learning model without sharing data, thus allowing to address critical issues such as data privacy, data security, data access rights and access to heterogeneous data. Its applications are spread over a number of industries including defense, telecommunications, IoT, and pharmaceutics."

Federated Learning of Cohorts
"Federated Learning of Cohorts (FLoC) is a type of web tracking through federated learning. It groups people into "cohorts" based on their browsing history for the purpose of interest-based advertising.[1][2] Google began testing the technology in the Chrome browser in March 2021 as a replacement for third-party cookies, which it plans to stop supporting in Chrome by early 2023. FLoC is being developed as a part of Google's Privacy Sandbox initiative,[3] which includes several other advertising-related technologies with bird-themed names.[1][4]:48"- Federated Learning of Cohorts.
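
Stripped of all of Google's actual machinery, the cohort idea can be caricatured as follows: the browser reduces its own history, locally, to a coarse bucket identifier shared with many other users, and only that identifier would be exposed to advertisers. The sketch below is a deliberately crude stand-in, not FLoC's real algorithm; a real system would have to place similar, not merely identical, histories into the same cohort.

# Toy sketch only (NOT FLoC itself): map a local browsing history to one of a few cohorts.
import hashlib

NUM_COHORTS = 256    # arbitrary; small enough that each cohort contains many users

def cohort_id(visited_domains):
    """Hash the set of visited domains into a coarse cohort bucket."""
    key = "|".join(sorted(set(visited_domains))).encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_COHORTS

print(cohort_id(["news.example", "recipes.example", "soccer.example"]))
print(cohort_id(["news.example", "recipes.example", "soccer.example"]))  # same history, same ID
print(cohort_id(["cars.example", "finance.example"]))                    # very likely a different cohort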

Society of Mind
"The Society of Mind is both the title of a 1986 book and the name of a theory of natural intelligence as written and developed by Marvin Minsky.[1] In his book of the same name, Minsky constructs a model of human intelligence step by step, built up from the interactions of simple parts called agents, which are themselves mindless. He describes the postulated interactions as constituting a "society of mind", hence the title.[2]" -Society of Mind.