High&NewTech: Deep learning pioneers Yoshua Bengio, Geoffrey Hinton, and Yann LeCun win the 2018 Turing Award

Overview
ACM has named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that made deep neural networks a critical component of computing. Bengio is a professor at the University of Montreal and Scientific Director at Mila, Quebec's Artificial Intelligence Institute; Hinton is VP and Engineering Fellow at Google, Chief Scientific Adviser of the Vector Institute, and University Professor Emeritus at the University of Toronto; LeCun is a professor at New York University and VP and Chief AI Scientist at Facebook.

PS: I learned the news from Yann LeCun's Twitter feed and was very happy to congratulate him. Honestly, anyone working in deep learning knows these three well: we have read their papers over and over, and they have given us a great many ideas and much inspiration for studying artificial intelligence.
Looking across the whole history of deep learning, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun have unquestionably earned this; the award is richly deserved. The same goes, of course, for other researchers I admire, such as Andrew Ng and Fei-Fei Li.
A thought: on the long road of research, as long as you have truly put in the effort, leave the rest to time; everything that is meant to come will come.

Original: "Fathers of the Deep Learning Revolution Receive ACM A.M. Turing Award: Bengio, Hinton and LeCun Ushered in Major Breakthroughs in Artificial Intelligence"


Original text

ACM named Yoshua Bengio, Geoffrey Hinton, and Yann LeCun recipients of the 2018 ACM A.M. Turing Award for conceptual and engineering breakthroughs that have made deep neural networks a critical component of computing. Bengio is Professor at the University of Montreal and Scientific Director at Mila, Quebec's Artificial Intelligence Institute; Hinton is VP and Engineering Fellow of Google, Chief Scientific Adviser of The Vector Institute, and University Professor Emeritus at the University of Toronto; and LeCun is Professor at New York University and VP and Chief AI Scientist at Facebook.

Working independently and together, Hinton, LeCun and Bengio developed conceptual foundations for the field, identified surprising phenomena through experiments, and contributed engineering advances that demonstrated the practical advantages of deep neural networks. In recent years, deep learning methods have been responsible for astonishing breakthroughs in computer vision, speech recognition, natural language processing, and robotics, among other applications.

While the use of artificial neural networks as a tool to help computers recognize patterns and simulate human intelligence had been introduced in the 1980s, by the early 2000s, LeCun, Hinton and Bengio were among a small group who remained committed to this approach. Though their efforts to rekindle the AI community's interest in neural networks were initially met with skepticism, their ideas recently resulted in major technological advances, and their methodology is now the dominant paradigm in the field.

The ACM A.M. Turing Award, often referred to as the "Nobel Prize of Computing," carries a $1 million prize, with financial support provided by Google, Inc. It is named for Alan M. Turing, the British mathematician who articulated the mathematical foundation and limits of computing.

“Artificial intelligence is now one of the fastest-growing areas in all of science and one of the most talked-about topics in society,” said ACM President Cherri M. Pancake. “The growth of and interest in AI is due, in no small part, to the recent advances in deep learning for which Bengio, Hinton and LeCun laid the foundation. These technologies are used by billions of people. Anyone who has a smartphone in their pocket can tangibly experience advances in natural language processing and computer vision that were not possible just 10 years ago. In addition to the products we use every day, new advances in deep learning have given scientists powerful new tools—in areas ranging from medicine, to astronomy, to materials science.”

“Deep neural networks are responsible for some of the greatest advances in modern computer science, helping make substantial progress on long-standing problems in computer vision, speech recognition, and natural language understanding,” said Jeff Dean, Google Senior Fellow and SVP, Google AI. “At the heart of this progress are fundamental techniques developed starting more than 30 years ago by this year's Turing Award winners, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. By dramatically improving the ability of computers to make sense of the world, deep neural networks are changing not just the field of computing, but nearly every field of science and human endeavor.”

Machine Learning, Neural Networks and Deep Learning

In traditional computing, a computer program directs the computer with explicit step-by-step instructions. In deep learning, a subfield of AI research, the computer is not explicitly told how to solve a particular task such as object classification. Instead, it uses a learning algorithm to extract patterns in the data that relate the input data, such as the pixels of an image, to the desired output such as the label "cat." The challenge for researchers has been to develop effective learning algorithms that can modify the weights on the connections in an artificial neural network so that these weights capture the relevant patterns in the data.
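The learning loop described above, adjusting connection weights until inputs map to the desired outputs, can be sketched in a few lines of Python. This is a toy illustration, not from the article: a single linear "neuron" learns y = 2x + 1 from examples, with the task and constants chosen only for demonstration.

```python
# Toy sketch of weight learning: instead of programming the rule y = 2x + 1,
# we let a learning algorithm discover it by nudging weights against the error.

def train(samples, lr=0.1, epochs=500):
    w, b = 0.0, 0.0  # connection weight and bias, initially zero
    for _ in range(epochs):
        for x, y in samples:
            pred = w * x + b   # forward pass: current guess
            err = pred - y     # how wrong the current weights are
            w -= lr * err * x  # gradient-descent step on squared error
            b -= lr * err
    return w, b

data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
w, b = train(data)
print(w, b)  # should approach 2 and 1
```

The same principle, repeated error-driven weight adjustment, underlies the deep networks discussed below, just with vastly more weights and layers.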

Geoffrey Hinton, who has been advocating for a machine learning approach to artificial intelligence since the early 1980s, looked to how the human brain functions to suggest ways in which machine learning systems might be developed. Inspired by the brain, he and others proposed "artificial neural networks" as a cornerstone of their machine learning investigations.

In computer science, the term "neural networks" refers to systems composed of layers of relatively simple computing elements called "neurons" that are simulated in a computer. These "neurons," which only loosely resemble the neurons in the human brain, influence one another via weighted connections. By changing the weights on the connections, it is possible to change the computation performed by the neural network. Hinton, LeCun and Bengio recognized the importance of building deep networks using many layers—hence the term "deep learning."
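The layered structure described above can be made concrete with a minimal sketch (my own illustration, with made-up weights): each layer computes weighted sums of its inputs, passes them through a nonlinearity, and feeds the result to the next layer. Stacking many such layers is what makes a network "deep."

```python
import math

def layer(inputs, weights, biases):
    # Each output "neuron" is a weighted sum of the inputs plus a bias,
    # squashed by the logistic sigmoid nonlinearity.
    return [
        1.0 / (1.0 + math.exp(-(sum(w * v for w, v in zip(ws, inputs)) + b)))
        for ws, b in zip(weights, biases)
    ]

# A tiny 2-layer network: 3 inputs -> 2 hidden neurons -> 1 output neuron.
x = [0.5, -1.0, 2.0]
h = layer(x, weights=[[0.1, 0.4, -0.2], [0.3, -0.1, 0.2]], biases=[0.0, 0.1])
y = layer(h, weights=[[1.0, -1.0]], biases=[0.0])
print(h, y)
```

Changing any weight changes the computation, which is exactly the lever the learning algorithms below exploit.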

The conceptual foundations and engineering advances laid by LeCun, Bengio and Hinton over a 30-year period were significantly advanced by the prevalence of powerful graphics processing unit (GPU) computers, as well as access to massive datasets. In recent years, these and other factors led to leap-frog advances in technologies such as computer vision, speech recognition and machine translation.

Hinton, LeCun and Bengio have worked together and independently. For example, LeCun performed postdoctoral work under Hinton's supervision, and LeCun and Bengio worked together at Bell Labs beginning in the early 1990s. Even while not working together, there is a synergy and interconnectedness in their work, and they have greatly influenced each other.

Bengio, Hinton and LeCun continue to explore the intersection of machine learning with neuroscience and cognitive science, most notably through their joint participation in the Learning in Machines and Brains program, an initiative of CIFAR, formerly known as the Canadian Institute for Advanced Research.

Select Technical Accomplishments

The technical achievements of this year's Turing Laureates, which have led to significant breakthroughs in AI technologies, include, but are not limited to, the following:

Geoffrey Hinton

Backpropagation: In a 1986 paper, “Learning Internal Representations by Error Propagation,” co-authored with David Rumelhart and Ronald Williams, Hinton demonstrated that the backpropagation algorithm allowed neural nets to discover their own internal representations of data, making it possible to use neural nets to solve problems that had previously been thought to be beyond their reach. The backpropagation algorithm is standard in most neural networks today.
Boltzmann Machines: In 1983, with Terrence Sejnowski, Hinton invented Boltzmann Machines, one of the first neural networks capable of learning internal representations in neurons that were not part of the input or output.
Improvements to convolutional neural networks: In 2012, with his students, Alex Krizhevsky and Ilya Sutskever, Hinton improved convolutional neural networks using rectified linear neurons and dropout regularization. In the prominent ImageNet competition, Hinton and his students almost halved the error rate for object recognition and reshaped the computer vision field.
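The backpropagation idea from the 1986 paper described above can be sketched for a tiny network: run the network forward, then send the error backward through the chain rule to obtain a gradient for every weight. The one-hidden-unit network and the numbers here are my own illustrative assumptions, not from the paper; the finite-difference check at the end verifies the hand-derived gradients.

```python
import math

def forward(w, x):
    # 1 input -> 1 sigmoid hidden unit -> 1 linear output; w = [w1, w2]
    h = 1.0 / (1.0 + math.exp(-w[0] * x))  # hidden activation
    return h, w[1] * h                     # (hidden value, network output)

def backprop(w, x, target):
    h, y = forward(w, x)
    d_y = 2.0 * (y - target)   # d(loss)/d(output) for loss = (y - target)^2
    g2 = d_y * h               # gradient for the output weight w2
    d_h = d_y * w[1]           # error propagated back to the hidden unit
    g1 = d_h * h * (1.0 - h) * x  # sigmoid' = h*(1-h), chained down to w1
    return [g1, g2]

# Sanity check: the backpropagated gradients should match a numeric
# finite-difference estimate of the loss gradient.
w, x, t = [0.5, -0.3], 1.2, 1.0
analytic = backprop(w, x, t)
eps = 1e-6
for i in range(2):
    wp, wm = list(w), list(w)
    wp[i] += eps
    wm[i] -= eps
    numeric = ((forward(wp, x)[1] - t) ** 2 -
               (forward(wm, x)[1] - t) ** 2) / (2 * eps)
    print(abs(analytic[i] - numeric) < 1e-6)
```

Modern frameworks automate exactly this backward sweep over networks with millions of weights.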

Yoshua Bengio

Probabilistic models of sequences: In the 1990s, Bengio combined neural networks with probabilistic models of sequences, such as hidden Markov models. These ideas were incorporated into a system used by AT&T/NCR for reading handwritten checks, were considered a pinnacle of neural network research in the 1990s, and modern deep learning speech recognition systems are extending these concepts.
High-dimensional word embeddings and attention: In 2000, Bengio authored the landmark paper, "A Neural Probabilistic Language Model," that introduced high-dimensional word embeddings as a representation of word meaning. Bengio's insights had a huge and lasting impact on natural language processing tasks including language translation, question answering, and visual question answering. His group also introduced a form of attention mechanism which led to breakthroughs in machine translation and forms a key component of sequential processing with deep learning.
Generative adversarial networks: Since 2010, Bengio’s papers on generative deep learning, in particular the Generative Adversarial Networks (GANs) developed with Ian Goodfellow, have spawned a revolution in computer vision and computer graphics. In one fascinating application of this work, computers can actually create original images, reminiscent of the creativity that is considered a hallmark of human intelligence.
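The word-embedding idea described above can be sketched concretely: each word becomes a dense vector, and words used in similar ways end up with nearby vectors. The tiny hand-written vectors below are illustrative assumptions; in Bengio's model they are learned by a neural network from text.

```python
import math

# Toy 3-dimensional embedding table (values invented for illustration).
embedding = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    # Cosine similarity: 1.0 for identical directions, lower for dissimilar.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# "king" should sit closer to "queen" than to "apple" in this toy space.
print(cosine(embedding["king"], embedding["queen"]) >
      cosine(embedding["king"], embedding["apple"]))
```

Real embedding spaces work the same way, only with hundreds of dimensions and vocabularies of many thousands of words.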

Yann LeCun

Convolutional neural networks: In the 1980s, LeCun developed convolutional neural networks, a foundational principle in the field, which, among other advantages, have been essential in making deep learning more efficient. In the late 1980s, while working at the University of Toronto and Bell Labs, LeCun was the first to train a convolutional neural network system on images of handwritten digits. Today, convolutional neural networks are an industry standard in computer vision, as well as in speech recognition, speech synthesis, image synthesis, and natural language processing. They are used in a wide variety of applications, including autonomous driving, medical image analysis, voice-activated assistants, and information filtering.
Improving backpropagation algorithms: LeCun proposed an early version of the backpropagation algorithm (backprop), and gave a clean derivation of it based on variational principles. His work to speed up backpropagation algorithms included describing two simple methods to accelerate learning time.
Broadening the vision of neural networks: LeCun is also credited with developing a broader vision for neural networks as a computational model for a wide range of tasks, introducing in early work a number of concepts now fundamental in AI. For example, in the context of recognizing images, he studied how hierarchical feature representation can be learned in neural networks—a concept that is now routinely used in many recognition tasks. Together with Léon Bottou, he proposed the idea, used in every modern deep learning software, that learning systems can be built as complex networks of modules where backpropagation is performed through automatic differentiation. They also proposed deep learning architectures that can manipulate structured data, such as graphs.
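The core operation in LeCun's convolutional networks, described above, can be sketched in a few lines (toy image and kernel assumed, not from the article): slide a small weight kernel over an image and take a weighted sum at each position, so the same feature detector is reused across the whole image.

```python
def conv2d(image, kernel):
    # Valid 2D convolution (no padding): slide the kernel over the image
    # and compute a weighted sum at each position.
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            row.append(sum(
                image[i + a][j + b] * kernel[a][b]
                for a in range(kh) for b in range(kw)
            ))
        out.append(row)
    return out

# A vertical-edge detector: responds where the image changes left-to-right.
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # peaks at the 0 -> 1 boundary
```

In a real convolutional network the kernel weights are learned by backpropagation rather than hand-designed, and many kernels are stacked in layers.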

ACM will present the 2018 A.M. Turing Award at its annual Awards Banquet on June 15 in San Francisco, California.

PS: This was put together in a hurry, so there may be inaccuracies; corrections in the comments are welcome. Thank you!
