About

The Reasoning & Explainable AI Lab

The Reasoning & Explainable AI group aims to develop systems capable of complex, abstract, and flexible inference.

We operate at the interface between neural and symbolic AI methods, aiming to enable the next generation of explainable, data-efficient, and safe AI systems. Our research investigates how combining latent and explicit data representation paradigms can deliver better inference over data.

Areas of Research

  • Natural language inference
  • Abstractive inference
  • Explanation generation
  • Explainable question answering
  • Scientific inference & explanations
  • Neuro-symbolic models
  • Multi-hop reasoning
  • Semantic control
  • Semantic probing
  • Extraction & representation
  • Sentence & discourse representation
  • Open information extraction
  • Knowledge Graphs
  • Scalable knowledge-based inference
  • AI applications in cancer research