Can Humans Think Like Computers?

Source: eurasiareview | Date: 2019-03-28

Computers, like those that power self-driving cars, can be tricked into mistaking random scribbles for trains, fences, and even school buses. People aren't supposed to be able to see how those images trip up computers, but in a new study, Johns Hopkins University researchers show that most people actually can.

The findings suggest modern computers may not be as different from humans as we think, and demonstrate how advances in artificial intelligence continue to narrow the gap between the visual abilities of people and machines. The research appears today in the journal Nature Communications.

"Most of the time, research in our field is about getting computers to think like people," says senior author Chaz Firestone, an assistant professor in Johns Hopkins' Department of Psychological and Brain Sciences. "Our project does the opposite: we're asking whether people can think like computers."

What's easy for humans is often hard for computers. Artificial intelligence systems have long been better than people at doing math or remembering large quantities of information, but for decades humans had the edge at recognizing everyday objects such as dogs, cats, tables, or chairs. Recently, though, "neural networks" that mimic the brain have approached the human ability to identify objects, leading to technological advances that support self-driving cars and facial recognition programs, and that help physicians spot abnormalities in radiological scans.

But even with these technological advances, there's a critical blind spot: it's possible to purposely make images that neural networks cannot correctly see. These images, called "adversarial" or "fooling" images, are a big problem: not only could they be exploited by hackers and cause security risks, but they suggest that humans and machines actually see images very differently.

In some cases, all it takes for a computer to call an apple a car is reconfiguring a pixel or two. In other cases, machines see armadillos and bagels in what looks like meaningless television static.
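
This pixel-level fragility is typically demonstrated with gradient-based attacks. Below is a minimal sketch of one common technique, the Fast Gradient Sign Method (FGSM). It is an illustration only, not the procedure used in the study: the ResNet model, the random stand-in image, and the epsilon value are all assumptions made for the example.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Load a pretrained image classifier (any ImageNet model would do here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Stand-in input: random noise, so the example is self-contained.
# In practice this would be a real photo, normalized for the model.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# The model's current best guess for this image.
logits = model(image)
label = logits.argmax(dim=1)

# FGSM: nudge every pixel slightly in the direction that most
# increases the classification loss for the current label.
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.03  # a perturbation so small it looks like faint noise to a person
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

# The tiny perturbation is often enough to flip the predicted class.
print(label.item(), "->", model(adversarial).argmax(dim=1).item())
```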

"These machines seem to be misidentifying objects in ways humans never would," Firestone says. "But surprisingly, nobody has really tested this. How do we know people can't see what the computers did?"

To test this, Firestone and lead author Zhenglong Zhou, a Johns Hopkins senior majoring in cognitive science, essentially asked people to "think like a machine". Machines have only a relatively small vocabulary for naming images. So Firestone and Zhou showed people dozens of fooling images that had already tricked computers, and gave people the same kinds of labeling options that the machine had. In particular, they asked people which of two options the computer decided the object was: one being the computer's real conclusion and the other a random answer. (Was the blob pictured a bagel or a pinwheel?) It turns out, people strongly agreed with the conclusions of the computers.
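
As a rough illustration of the forced-choice design described above, here is a hypothetical sketch of one trial: the subject sees a fooling image alongside two labels, the machine's actual conclusion and a random foil. Every name here is invented for the example, and the stub subject simply guesses, standing in for a real button press.

```python
import random

LABELS = ["bagel", "pinwheel", "armadillo", "pretzel"]  # toy vocabulary

def ask_subject(image, options):
    # Stub standing in for the real subject's response (a button press).
    return random.choice(options)

def run_trial(image, machine_label):
    """One two-alternative forced-choice trial: machine's label vs. a foil."""
    foil = random.choice([l for l in LABELS if l != machine_label])
    options = [machine_label, foil]
    random.shuffle(options)            # hide which option is the machine's
    choice = ask_subject(image, options)
    return choice == machine_label     # did the subject agree with the machine?

# Agreement rate over many trials; a purely guessing subject lands near 50%,
# whereas the study's real subjects agreed 75% of the time.
results = [run_trial(image=None, machine_label="bagel") for _ in range(1000)]
print(f"agreement: {sum(results) / len(results):.0%}")
```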

People chose the same answer as computers 75 percent of the time. Perhaps even more remarkably, 98 percent of people tended to answer like the computers did.

Next, the researchers upped the ante by giving people a choice between the computer's favorite answer and its next-best guess. (Was the blob pictured a bagel or a pretzel?) People again validated the computer's choices, with 91 percent of those tested agreeing with the machine's first choice.

Even when the researchers had people guess among 48 choices for what the object was, and even when the pictures resembled television static, an overwhelming proportion of the subjects chose what the machine chose, at rates well above random chance. A total of 1,800 subjects were tested across the various experiments.
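
To see why the 48-way result is striking, compare it against the chance baseline: a subject picking uniformly at random among 48 labels would agree with the machine only about 2 percent of the time. A short sanity-check simulation (the trial count is made up for the example):

```python
import random

n_options = 48      # the study's largest label set
n_trials = 100_000  # made-up trial count for the simulation

# A subject who guesses uniformly agrees with the machine at ~1/48.
hits = sum(random.randrange(n_options) == random.randrange(n_options)
           for _ in range(n_trials))
print(f"simulated chance agreement: {hits / n_trials:.2%} "
      f"(analytic: {1 / n_options:.2%})")
```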

"We found if you put a person in the same circumstance as a computer, suddenly the humans tend to agree with the machines," Firestone says. "This is still a problem for artificial intelligence, but it's not like the computer is saying something completely unlike what a human would say."

