Neural-Symbolic Computing
Bernd Finkbeiner

News

19.05.2020

First Talk on Recurrent Neural Networks

Dear participants,

This is a friendly reminder that we meet tomorrow, May 20th, at 14:15 (in the same Zoom room as our Kick-Off meeting) for our first talk on recurrent neural networks.

See you there!
Your seminar team


Neural-Symbolic Computing

You can register for the seminar at https://seminars.cs.uni-saarland.de/seminars20 until April 17th 23:59 CET.

The way our brain forms thoughts can be classified into two categories (according to Kahneman in his book “Thinking Fast and Slow”):

System 1: fast, automatic, frequent, stereotypic, unconscious.

  • Is this a cat or a dog?
  • What does this sentence mean in English?

System 2: slow, effortful, logical, conscious.

  • 17*16 = ?
  • If a -> b, does b -> a?
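Both System 2 questions above have exact answers that symbolic computation can verify mechanically. As a minimal sketch (the `implies` helper is our own, not part of any course material), a brute-force truth-table check shows that a -> b does not entail b -> a:

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# System 2 example 1: exact arithmetic.
print(17 * 16)  # 272

# System 2 example 2: does a -> b entail b -> a?
# Enumerate all truth assignments and collect counterexamples.
counterexamples = [
    (a, b)
    for a, b in product([False, True], repeat=2)
    if implies(a, b) and not implies(b, a)
]
print(counterexamples)  # [(False, True)]: a -> b holds but b -> a fails
```

The single counterexample (a = False, b = True) is exactly why implication is not symmetric. A neural network, in contrast, has no built-in truth table and would have to learn such rules from data.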

The traditional view is that deep learning is limited to System 1 reasoning, mostly because of the perception that deep neural networks cannot reliably solve complex logical reasoning tasks. Historically, applications of machine learning were thus often restricted to sub-problems within larger logical frameworks, such as resolving heuristics in solvers.
In this seminar, we will explore new research showing that deep neural networks are, in fact, able to reason about "symbolic systems", i.e., systems built from symbols, such as programming languages or formal logics.

Example Topics:

  • What is a neural Turing machine?
  • Can a deep neural network solve complex equations better than Wolfram Alpha?
  • Why is “Attention” all we need?


Requirements

Participants should have a strong interest in machine learning and/or logical reasoning. There are, however, no formal prerequisites.


Organization

There will be no physical meetings. All talks and discussions will be held via Zoom (zoom.us).

The seminar is structured into three phases.

Phase A (Week 1-4): You will be assigned to one of the following four topics of machine learning: 1) (Recurrent) Neural Networks, 2) Message Passing and Graph Neural Networks, 3) Attention and Transformers, and 4) Reinforcement Learning. In a team of three students, you will then prepare an informal lecture on your topic and lead the subsequent discussion, presenting the basics to your fellow students. Each topic is assigned an advisor who will help you with your preparations. This informal lecture is ungraded, so you can treat it as a rehearsal. Phase A thus gives you the necessary foundations for Phase B.

Phase B (Week 5-10): Depending on your choice of topic area, you will choose a matching research paper on neural-symbolic computing. With the help of your advisor, you will prepare a research talk that presents the paper's findings to your fellow students. This talk carries the most weight in your final grade.

Phase C (Deadline at the end of September): You will be given a neural-symbolic computing task. Each team is required to solve this problem with the methods explored in this seminar, for example by applying deep neural network architectures from your topic area. We do *not* expect your own implementations of the discussed methods; you can use any available libraries. This project must be passed, but it will *not* be graded. However, there will be prizes!


Dates

Kick-Off Meeting: May 6th, 2pm
Weekly meetings: Wednesdays 2pm


