Bio: Zhang Bo is an expert in computer science and technology and a professor in the Department of Computer Science and Technology at Tsinghua University. He has served as Vice Chairman of the Tsinghua University Academic Degrees Committee, Director of the State Key Laboratory of Intelligent Technology and Systems, and Director of the Intelligent Control Committee of the Chinese Association of Automation. He was elected a member of the Chinese Academy of Sciences in 1995, received an honorary doctorate in natural sciences from the University of Hamburg, Germany, in 2011, and in 2015 received the 2014 CCF Lifetime Achievement Award from the China Computer Federation. In 2016 he received the Outstanding Collaborator Award from Microsoft Research. Zhang Bo has long been engaged in teaching and research in automatic control theory and technology and in computer science and technology. He began research on artificial intelligence in 1978, making him one of the earliest researchers in this field in China, and he was one of the founders of the State Key Laboratory of Intelligent Technology and Systems. Grounded in solid theoretical research, he has published more than 200 papers and four monographs in Chinese and English, and has directed and participated in building experimental research platforms such as an autonomous land vehicle and image and video retrieval systems. In recent years, the team led by Professor Zhang has conducted in-depth research on deep learning and large-scale probabilistic modeling and their applications to visual information processing, publishing dozens of high-quality papers. His research achievements have been recognized with honors including the ICL European Artificial Intelligence Award; the Third Prize of the State Natural Science Award; the First and Second Prizes of the State Education Commission Science and Technology Progress Award; the First Prize of the Ministry of Electronics Industry Science and Technology Progress Award; the First Prize of the COSTIND Science and Technology Progress Award; the First Prize of the CCF Natural Science Award; and the Fujian Province Wang Danping Science and Technology Award.
Abstract: Current artificial intelligence, in particular AI systems built with data-driven methods, suffers from shortcomings such as uninterpretability (incomprehensibility), brittleness, and weak generalization. These deficiencies greatly limit the application of AI. In this talk, we focus on the importance of knowledge in the development of artificial intelligence, and on approaches to overcoming the above deficiencies of AI systems. This discussion points to the direction of future AI development: combining knowledge-driven and data-driven methods, and the new generation of Artificial Intelligence with Understanding that this combination brings.
Invited Talk 2: The Semantic Web: Vision, Reality and Revision
Professor James A. Hendler (Rensselaer Polytechnic Institute)
Bio: Jim Hendler is the Director of the Institute for Data Exploration and Applications (IDEA) and the Tetherless World Professor of Computer, Web and Cognitive Sciences at Rensselaer Polytechnic Institute (RPI). He also heads the RPI-IBM Center for Health Empowerment by Analytics, Learning and Semantics (HEALS) and serves as the Chair of the Board of the UK's charitable Web Science Trust. Hendler has authored over 400 books, technical papers and articles in the area of Artificial Intelligence, including the Semantic Web, agent-based computing and high-performance AI processing. One of the originators of the Semantic Web, Hendler was the recipient of a 1995 Fulbright Foundation Fellowship, is a former member of the US Air Force Science Advisory Board, and is a Fellow of the AAAI, BCS, the IEEE, the AAAS and the ACM. Hendler was the first computer scientist to serve on the Board of Reviewing Editors for Science (2004-2016). In 2010, Hendler was named one of the 20 most innovative professors in America by Playboy magazine and was selected as an "Internet Web Expert" by the US government. In 2012, he was one of the inaugural recipients of the Strata Conference "Big Data" awards for his work on large-scale open government data, and he is an associate editor of the Big Data journal. In 2013, he was appointed as the Open Data Advisor to New York State; in 2015, he was appointed a member of the US Homeland Security Science and Technology Advisory Committee; and in 2016, he became a member of the National Academies Board on Research Data and Information. He currently also serves as a senior subject matter expert on AI and government for the US National Academy of Public Administration.
Abstract: In 2001, I joined Web inventor Tim Berners-Lee and our colleague Ora Lassila in writing an article describing a vision for the Semantic Web. The paper, which appeared in Scientific American, has been widely cited and led to much work in both academia and industry aimed at adding machine-readable data to the Web. Now, nearly 20 years later, Google reports that machine-readable metadata is found on over 40% of the pages in their crawl, and linked data is used in many applications around the world. Knowledge graph technology, which also grew from this vision, is now a big business used by major organizations around the world. However, despite this success, much of the original vision of the Semantic Web remains unrealized. In this talk, I discuss what was in the original vision, what has occurred and, most importantly, what still remains to be done if we are truly to realize the potential of the Semantic Web.
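To make the abstract's reference to machine-readable metadata concrete, the following is a minimal sketch of the kind of schema.org JSON-LD annotation that sites embed in web pages for crawlers to extract; the specific field values describe the Scientific American article mentioned above and are illustrative, not taken from any actual page markup.

```python
import json

# A minimal schema.org description of a scholarly article, expressed as
# JSON-LD -- the kind of machine-readable metadata embedded in web pages
# that search-engine crawlers extract. Field values are illustrative.
article = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "headline": "The Semantic Web",
    "author": [
        {"@type": "Person", "name": "Tim Berners-Lee"},
        {"@type": "Person", "name": "James Hendler"},
        {"@type": "Person", "name": "Ora Lassila"},
    ],
    "datePublished": "2001-05",
    "isPartOf": {"@type": "Periodical", "name": "Scientific American"},
}

# Serialize for embedding in an HTML page inside a
# <script type="application/ld+json"> element.
jsonld = json.dumps(article, indent=2)
print(jsonld)
```

Because the annotation is plain JSON with shared `@context` and `@type` vocabulary, independently published pages can be aggregated into linked data or a knowledge graph without per-site parsing rules.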
Invited Talk 4: What can you do with multilingual knowledge graphs? Experiences at Sapienza and Babelscape
Professor Roberto Navigli (Sapienza University of Rome)
Bio: Roberto Navigli is Professor of Computer Science at the Sapienza University of Rome, where he heads the multilingual Natural Language Processing group. He was awarded the 2013 Marco Somalvico AI*IA Prize for the best young researcher in AI. He is one of the few European researchers to have received two prestigious ERC grants in computer science, namely an ERC Starting Grant on multilingual word sense disambiguation (2011-2016) and an ERC Consolidator Grant on multilingual language- and syntax-independent open-text unified representations (2017-2022). He was also a co-PI of a Google Focused Research Award on NLP. In 2015 he received the META Prize for groundbreaking work in overcoming language barriers with BabelNet, a project that was also highlighted in The Guardian and Time magazine and won the Artificial Intelligence Journal Prominent Paper Award in 2017. Based on the success of BabelNet and its multilingual disambiguation technology, he co-founded Babelscape, a Sapienza startup company which enables Natural Language Processing in hundreds of languages.
Abstract: Multilinguality is a key feature of today's Web, and it is this feature that we leverage and exploit in our research work at the Sapienza University of Rome's Linguistic Computing Laboratory and at Babelscape, a Sapienza startup company focused on multilingual Natural Language Processing. In this talk I will give an overview of, and showcase, research and applications of the BabelNet and WordAtlas technologies. I will introduce BabelNet Live, the largest continuously updated multilingual encyclopedic dictionary, and then discuss a range of cutting-edge research and industrial use cases, including multilingual interpretation of terms, multilingual disambiguation and entity linking, and multilingual concept and entity extraction from text.