22.09.06 Lecture: Guy Van den Broeck


Guy Van den Broeck is an Associate Professor and Samueli Fellow at UCLA, in the Computer Science Department, where he directs the Statistical and Relational Artificial Intelligence (StarAI) lab. His research interests are in Machine Learning, Knowledge Representation and Reasoning, and Artificial Intelligence in general. His papers have been recognized with awards from key conferences such as AAAI, UAI, KR, OOPSLA, and ILP. Guy is the recipient of an NSF CAREER award, a Sloan Fellowship, and the IJCAI-19 Computers and Thought Award.


Artificial Intelligence can learn from data. But can it learn to reason?


Modern artificial intelligence, and deep learning in particular, is extremely capable at learning predictive models from vast amounts of data. Many expect that AI will go from powering customer service chatbots to providing mental health services. That it will go from personalized advertisement to deciding who is given bail. That it will go from speech recognition to writing laws. The expectation is that AI will solve society’s problems by simply being more intelligent than we are. Implicit in this bullish perspective is the assumption that AI technology will naturally learn to reason from data: that it can form trains of thought that “make sense”, similar to how a mental health professional, a judge, or a lawyer might reason about a case, or more formally, how a mathematician might prove a theorem. This talk will investigate whether this behavior can be learned from data, and how we can design the next generation of artificial intelligence techniques that can achieve such capabilities, focusing on neuro-symbolic learning and tractable deep generative models.

Replay (VPN access and a viewing password required)




Guy Van den Broeck is a tenured Associate Professor at UCLA. He has received the Sloan Fellowship (one of the most prestigious awards for young scientists), the IJCAI Computers and Thought Award (one of the highest honors in AI for researchers under 35), and several best paper awards at top conferences. Guy is arguably one of the scholars of his generation who best understands logical and probabilistic reasoning. Consider a simple card puzzle: in a shuffled deck, what is the probability that the first card is a heart? The second? The third? We can all quickly see that, no matter the position, the probability is 1/4. So why can't today's neural networks, which show strong performance on all kinds of complex tasks, derive such an answer just as quickly? After a decade of explosive growth, do our current AI systems still have fundamental shortcomings? What do the symbolism of first-generation AI and today's prevailing connectionism have in common, and what can they borrow from each other? As AI moves into the uncharted territory of complex cognitive reasoning, Guy's research is becoming ever more important. We hope this lecture inspires you: amid today's heads-down hyperparameter tuning, remember to look up now and then at the broad sky of AI research.
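The card puzzle above rests on a symmetry argument: in a uniformly shuffled deck, every position is equally likely to hold any given card, so the marginal probability of a heart is 13/52 = 1/4 at every position. A minimal Python sketch verifying this by exact enumeration (using a hypothetical miniature 8-card deck with 2 hearts as a stand-in for the full 52-card deck, since enumerating 52! orderings is infeasible; the heart fraction is kept at 1/4):

```python
from fractions import Fraction
from itertools import permutations

# Miniature deck: 2 hearts ('H') and 6 distinct non-hearts, so the
# heart fraction matches a real deck's 13/52 = 1/4.
deck = ['H', 'H', 'c1', 'c2', 'c3', 'c4', 'c5', 'c6']

def heart_probability(position):
    """Exact P(card at `position` is a heart) over all deck orderings."""
    orderings = list(permutations(deck))
    hits = sum(1 for o in orderings if o[position] == 'H')
    return Fraction(hits, len(orderings))

# Symmetry: every position yields the same marginal probability, 1/4.
for k in range(len(deck)):
    print(k, heart_probability(k))
```

The enumeration confirms what the symmetry argument gives instantly, which is precisely the kind of reasoning step the talk asks whether neural networks can learn from data.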

P.S. A bit of fun: does anyone know the correct pronunciation of "Guy"?