[Translation] Sparks of Artificial General Intelligence: Early experiments with GPT-4

10.1 Definitions of intelligence, AI, and AGI
In this paper, we have used the 1994 definition of intelligence by a group of psychologists [Got97] as a guiding framework to explore GPT-4's artificial intelligence. This definition captures some important aspects of intelligence, such as reasoning, problem-solving, and abstraction, but it is also vague and incomplete. It does not specify how to measure or compare these abilities. Moreover, it may not reflect the specific challenges and opportunities of artificial systems, which may have different goals and constraints than natural ones. Therefore, we acknowledge that this definition is not the final word on intelligence, but rather a useful starting point for our investigation.


There is a rich and ongoing literature that attempts to propose more formal and comprehensive definitions of intelligence, artificial intelligence, and artificial general intelligence [Goe14, Cho19], but none of them is without problems or controversies. For instance, Legg and Hutter [Leg08] propose a goal-oriented definition of artificial general intelligence: intelligence measures an agent's ability to achieve goals in a wide range of environments. However, this definition does not necessarily capture the full spectrum of intelligence, as it excludes passive or reactive systems that can perform complex tasks or answer questions without any intrinsic motivation or goal.
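Legg and Hutter make this goal-oriented definition quantitative in their "universal intelligence" measure. Reproduced here for context (the notation follows their formulation, not anything in this paper), it scores an agent π by its expected reward across all computable environments, weighted by simplicity:

```latex
% Universal intelligence measure (Legg & Hutter), shown for context.
% E         : the set of computable reward-bearing environments
% K(\mu)    : Kolmogorov complexity of environment \mu (length of its shortest description)
% V_\mu^\pi : expected cumulative reward of agent \pi in environment \mu
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

Simpler environments (small K(μ)) dominate the sum, so an agent scores well by achieving goals across a broad range of structured environments. The critique above applies directly: a passive oracle that takes no actions accrues no reward V under this measure, however informative its answers.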


One could imagine, for example, an artificial general intelligence in the form of a brilliant oracle that has no agency or preferences, but can provide accurate and useful information on any topic or domain. Moreover, the definition around achieving goals in a wide range of environments also implies a certain degree of universality or optimality, which may not be realistic (certainly human intelligence is in no way universal or optimal).


The need to recognize the importance of priors (as opposed to universality) was emphasized in the definition put forward by Chollet in [Cho19], which centers intelligence around skill-acquisition efficiency, or in other words puts the emphasis on a single component of the 1994 definition: learning from experience (which also happens to be one of the key weaknesses of LLMs).
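Chollet's emphasis on skill-acquisition efficiency can be summarized schematically. The expression below is a simplified gloss, not the exact formula from [Cho19], which quantifies each term in algorithmic-information terms:

```latex
% Schematic gloss of Chollet's definition: intelligence is the skill a system
% attains, normalized by the priors it was given and the experience it consumed.
\text{intelligence} \;\propto\;
  \frac{\text{skill attained (generalization)}}{\text{priors} \,+\, \text{experience}}
```

Under this reading, a system that reaches a given skill level with fewer built-in priors and less training experience counts as more intelligent, which is why learning from experience is central to the definition.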
Another candidate definition of artificial general intelligence from Legg and Hutter [LH07] is: a system that can do anything a human can do.


However, this definition is also problematic, as it assumes that there is a single standard or measure of human intelligence or ability, which is clearly not the case. Humans have different skills, talents, preferences, and limitations, and there is no human that can do everything that any other human can do. Furthermore, this definition also implies a certain anthropocentric bias, which may not be appropriate or relevant for artificial systems. While we do not adopt any of those definitions in the paper, we recognize that they provide important angles on intelligence.


For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work. With this direction of work, great care would have to be taken on alignment and safety, given a system's abilities to take autonomous actions in the world and to perform autonomous self-improvement via cycles of learning.

