# Course descriptions

## Fundamentals of metalogic

**Content: Completeness theorems, model theory and proof theory.**

**Course description**: This course provides an introduction to the metatheory of elementary logic. Following a "refresher" on the basics of notation and the use of classical logic as a representation language, we concentrate on the twin notions of models and proof. An axiomatic system of first order logic is introduced and proved complete for the standard
semantics, and then we give a very brief overview of the basic concepts of proof theory and of formal set theory. The material in this course is presupposed by other courses in the Summer School, which is why it is presented first.

**Pre-requisites:** It is assumed that students are at least familiar with logical notation for connectives and quantifiers, can manipulate truth tables, and can use some kind of proof system, semantic tableaux or the like. Normally they will have done a first logic course at tertiary level. Some basic mathematical competence is also presupposed: students should be comfortable with the notion of a formal language, able to follow proofs by induction, and at ease with basic set-theoretic notation and the like.

**Lecturer:**
John Slaney is the founder and convenor of the Logic Summer School. He originated in England, ever so long ago, and discovered Australia in 1977. Undeterred by the fact that it had apparently been discovered before, he gradually moved here, joining the Automated Reasoning Project in 1988 on a three-year contract because he could not think of anywhere better to be a logician. He is still here and still can't. His research interests pretty much cover the waterfront, including non-classical logics, automated deduction and all sorts of logic-based AI. He was formerly the leader of the Logic and Computation group at the ANU, and of the NICTA program of the same name.

## Introduction to modal logic

**Content: Kripke models, Hilbert calculi, frame correspondences, tableaux-based decision procedures and propositional linear temporal logic**

**Course description:** We cover the syntax, Kripke semantics, correspondence theory and tableaux-style proof theory of propositional modal and temporal logics. These logics have important applications in a diverse range of fields including Artificial Intelligence, Theoretical Computer Science and Hybrid Systems.
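
To give a concrete feel for the Kripke semantics the course covers, here is a small illustrative sketch (not course material): a hand-built Kripke model as Python data, with a truth-at-a-world evaluator for the box operator (true in all accessible worlds) and the diamond operator (true in some accessible world). The worlds, relation and proposition names are invented for this sketch.

```python
# A Kripke model: a set of worlds, an accessibility relation R,
# and a valuation V mapping each world to the atoms true there.
worlds = {"w1", "w2", "w3"}
R = {("w1", "w2"), ("w1", "w3"), ("w2", "w2")}
V = {"w1": {"p"}, "w2": {"p"}, "w3": {"p", "q"}}

def holds(world, formula):
    """Evaluate a formula, given as a nested tuple, at a world.
    Atoms are strings; ('not', f), ('and', f, g), ('box', f)
    and ('dia', f) build compound formulas."""
    if isinstance(formula, str):          # atomic proposition
        return formula in V[world]
    op = formula[0]
    if op == "not":
        return not holds(world, formula[1])
    if op == "and":
        return holds(world, formula[1]) and holds(world, formula[2])
    if op == "box":                       # true in ALL accessible worlds
        return all(holds(v, formula[1]) for (u, v) in R if u == world)
    if op == "dia":                       # true in SOME accessible world
        return any(holds(v, formula[1]) for (u, v) in R if u == world)
    raise ValueError(f"unknown connective: {op}")

print(holds("w1", ("box", "p")))   # True: p holds at w2 and w3
print(holds("w1", ("box", "q")))   # False: q fails at w2
```

The standard semantic clauses for box and diamond translate directly into the `all` and `any` calls above.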

**Pre-requisites:** You should have a good grounding in classical propositional and first-order logic, including material covered in the "Fundamentals of Metalogic" course. Some very basic notions from graph theory would also be useful, but are not essential.

**Lecturer:**
Rajeev Gore is currently a professor at the ANU, and leader of the Logic and Computation research group. He was an Australian Research Council Queen Elizabeth II Fellow at the ANU from 1997 to 2001, a Research Fellow and later Senior Fellow at the ANU from 1994 to 2011, and a Research Associate at the University of Manchester from 1992 to 1993. He obtained his PhD in Modal Theorem Proving from the University of Cambridge in 1992.

## Overview of automated reasoning

**Content: Automated propositional theorem proving; automated first-order theorem proving; interactive theorem proving**

**Course description:** In many applications, we expect computers to reason logically. We might naively expect this to be what computers are good at, but in fact they find it extremely difficult. In this overview course, we look at various methods to automate logical reasoning, which are needed to support a variety of application domains.

Automated reasoning procedures are parametrised by the logic they are capable of reasoning with. We distinguish between propositional logic and first-order logic. The development and application of propositional logic procedures, also called SAT solvers, have received considerable attention in the last ten years, with applications to constraint satisfaction problems, hardware design, verification, and planning and scheduling. Regarding automated deduction in first-order logic, we discuss applications, standard deductive procedures such as resolution, and basic concepts such as unification. We also examine the dual problem of theorem proving, viz., generating models of a given theory, which has applications to finding counterexamples for non-theorems, and we provide an overview of some recent trends (instance-based methods, satisfiability modulo theories). Finally, we describe in some depth so-called quantifier elimination methods for theorem proving in decidable fragments of arithmetic over the reals and the integers.
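
As a taste of how a SAT solver works, here is a minimal sketch of the DPLL procedure (unit propagation plus case-splitting), the skeleton underlying modern SAT solvers. It is illustrative only; the clause encoding follows the usual DIMACS convention of signed integers.

```python
# A minimal DPLL-style SAT solver sketch.  Clauses are lists of
# integer literals (positive = atom, negative = its negation).
def dpll(clauses, assignment=()):
    # Simplify: drop satisfied clauses, remove false literals.
    simplified = []
    for clause in clauses:
        if any(lit in assignment for lit in clause):
            continue                       # clause already satisfied
        reduced = [lit for lit in clause if -lit not in assignment]
        if not reduced:
            return None                    # empty clause: conflict
        simplified.append(reduced)
    if not simplified:
        return assignment                  # all clauses satisfied
    # Unit propagation: a one-literal clause forces that literal.
    for clause in simplified:
        if len(clause) == 1:
            return dpll(simplified, assignment + (clause[0],))
    # Branch on the first literal of the first remaining clause.
    lit = simplified[0][0]
    return (dpll(simplified, assignment + (lit,))
            or dpll(simplified, assignment + (-lit,)))

# (p or q) and (not p or q) and (not q) is unsatisfiable:
print(dpll([[1, 2], [-1, 2], [-2]]))       # prints None
# (p or q) and (not p) is satisfiable:
print(dpll([[1, 2], [-1]]))                # prints (-1, 2): p false, q true
```

Real solvers add clause learning, watched literals and branching heuristics on top of this skeleton, but the underlying search is the same.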

**Pre-requisites:** Elementary logic, as provided by the "Fundamentals of Metalogic" course of this Summer School.

**Lecturer:**
Peter Baumgartner is a researcher in the Canberra Laboratory of Data61 (part of CSIRO). In NICTA, the precursor of Data61, he was a Principal Researcher in Software Systems and formerly the lab manager of the "Managing Complexity" theme. Previous employments were at the Max Planck Institute for Computer Science and the University of Koblenz, both in Germany. His research interest is automated deduction, particularly for first-order logic, and its applications. He holds a PhD and a Habilitation degree in Computer Science.

## Computability and incompleteness

**Contents: Computability; recursive functions and Turing machines; diagonalisation; Peano arithmetic and Gödel numbering; undecidability of first-order logic; incompleteness of Peano arithmetic.**

**Course description:** We begin with two formal accounts of the intuitive notion of computability: recursive functions and Turing machines. They turn out to be equivalent, hence Church's Thesis: the functions that can be computed by any means at all are precisely the partial recursive functions. Then we revisit the old Cretan who says that all Cretans always lie, and other forms of diagonalisation argument such as the Halting Problem. Next we look at an axiomatic theory of arithmetic, known as Peano Arithmetic (PA), and show how we can represent all recursive functions in PA. This will lead to Gödel numbering: a neat trick enabling us to effectively encode notions like "theorem", "proof" and "provability in PA" within PA itself. We spend a while discussing the Diagonalisation Lemma and the Derivability Conditions. Finally, in one fell swoop we prove the undecidability of first-order logic (Church's Theorem), the undefinability of truth (Tarski's Theorem), the incompleteness of PA given the consistency of PA (Gödel's First Theorem) and the unprovability of the consistency of PA given the consistency of PA (Gödel's Second Theorem).
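
The Gödel numbering trick mentioned above can be illustrated in a few lines of code: a finite sequence of numbers (think of it as a coded formula or proof) is packed into a single number via prime-power exponents, and unpacked again. This is just one standard encoding, shown as an illustration; the course's official details may differ.

```python
def primes():
    """Yield 2, 3, 5, 7, ... (naive trial division; fine for a demo)."""
    candidate, found = 2, []
    while True:
        if all(candidate % p for p in found):
            found.append(candidate)
            yield candidate
        candidate += 1

def godel_encode(sequence):
    """Encode [a0, a1, ...] as 2^(a0+1) * 3^(a1+1) * 5^(a2+1) * ...
    The +1 in each exponent distinguishes a trailing 0 in the
    sequence from the end of the sequence, so the coding is injective."""
    number, gen = 1, primes()
    for a in sequence:
        number *= next(gen) ** (a + 1)
    return number

def godel_decode(number):
    """Recover the sequence by reading off the prime exponents."""
    sequence = []
    for p in primes():
        if number == 1:
            return sequence
        exponent = 0
        while number % p == 0:
            number //= p
            exponent += 1
        sequence.append(exponent - 1)

print(godel_encode([2, 0, 1]))   # 2^3 * 3^1 * 5^2 = 600
print(godel_decode(600))         # [2, 0, 1]
```

Because the encoding is injective and computable, statements about sequences of symbols become statements about numbers, which is exactly what lets PA talk about its own proofs.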

**Pre-requisites:** Foundations of first-order logic

**Background reading:** G. Boolos and R. Jeffrey, *Computability and Logic*.

**Lecturer:**
Michael Norrish is a Principal Researcher in the Canberra Laboratory of Data61 (CSIRO). He is originally from Wellington, New Zealand, and is very happy to be back in the southern hemisphere after an extended stint in Cambridge, England. It was there that he did his PhD, and then spent three years as a research fellow at St. Catharine's College. His research interests include interactive theorem proving, and the application of this technology to areas of theoretical computer science, particularly the semantics of programming languages.

## Deviant Logics

**Contents:** Substructural logics as solutions to some philosophical problems; proofs in distributive substructural logics; semantics; γ and other metatheorems; substructural arithmetic

**Course description:**
We consider a range of non-classical paradigms called by the late Richard Sylvan "Relevant Logics and their Rivals". These have proof systems with restrictions on structural properties of proofs such as the ability to permute or reiterate assumptions, and semantics based on "worlds" rather like those of modal logics. Many of these logics reject the classical (Boolean) dogma that any contradiction logically entails everything; we examine that, and also the proof that preferred theories such as logic itself nonetheless admit some rules which classically depend upon that dogma. Finally, we look at the project of formulating number theory with a reduced logical base, and observe some startling effects of dropping the structural rule of weakening.

**Lecturer:**
John Slaney is the guy who spoke the truth about classical first order logic in week 1 of this summer school. In week 2 he reveals his true colours as a logical deviant who has been working on distributive substructural logics for around 40 years.

## Proof Complexity

**Contents:** propositional proof systems, lower bound techniques, hard formulas, automatisability, relations to SAT/QBF solving, proof complexity of QBF and modal logic

**Course description:**
The aim of this course is to present an up-to-date introduction to propositional proof complexity with emphasis on connections to SAT solving and game-theoretic techniques. We will start with a gentle introduction to the complexity of theorem proving and its relations to central questions in computational complexity.

The course will cover important proof systems, current knowledge about proof lengths in these systems, and implications for the performance of satisfiability algorithms. We will also provide an overview of proof complexity results for non-classical logics, such as modal and intuitionistic logics and quantified Boolean formulas (QBF), and their relations to QBF solving.

**Pre-requisites:** Elementary logic, basic knowledge of computational complexity.

**Lecturer:**
Olaf Beyersdorff is Professor of Computational Logic at the School of Computing of the University of Leeds. Before coming to Leeds in 2012, he was a visiting professor at Sapienza University of Rome and held positions at Leibniz University Hanover and Humboldt University Berlin, where he obtained his PhD in 2006. His main research interests are in logic and complexity, and specifically proof complexity.

## Modal Reasoning Through Resolution

**Contents:** Propositional modal logic, local and global satisfiability problems, resolution for modal logics

**Course description:**
Modal logics have been extensively used in Computer Science as a language for naturally describing complex systems and their properties. Several proof methods have been designed to deal with the reasoning tasks for those logics. Resolution, a proof method proposed by Robinson in the 1960s, is one of the most widely implemented and successful tools for Automated Reasoning in classical logics. In this course, we will examine resolution-based proof methods for the basic propositional modal logic K and some of its extensions. We will review the proof method proposed by Robinson for classical propositional logic as well as the basics of modal logics. We will then discuss what needs to be taken into consideration when adapting the classical method to deal with the satisfiability problem for modal languages, and look at two different resolution-based methods for families of mono-modal logics: the clausal method proposed by Mints and the non-clausal destructive procedure proposed by Fitting.
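
For readers unfamiliar with Robinson's method, the classical propositional (ground) case that the course starts from can be sketched in a few lines: repeatedly resolve pairs of clauses on complementary literals, and watch for the empty clause, which signals unsatisfiability. This naive saturation loop is purely illustrative, and the literal encoding is invented for the sketch.

```python
from itertools import combinations

# Clauses are frozensets of literals; a literal is a pair
# (atom, polarity), e.g. ("p", True) for p, ("p", False) for not-p.
def resolvents(c1, c2):
    """All clauses obtainable by resolving c1 with c2 on one literal."""
    out = set()
    for (atom, sign) in c1:
        if (atom, not sign) in c2:
            out.add((c1 - {(atom, sign)}) | (c2 - {(atom, not sign)}))
    return out

def unsatisfiable(clauses):
    """Saturate under resolution; deriving the empty clause means unsat."""
    known = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(known, 2):
            for r in resolvents(c1, c2):
                if not r:
                    return True            # derived the empty clause
                if r not in known:
                    new.add(r)
        if not new:
            return False                   # saturated without a refutation
        known |= new

p, q = ("p", True), ("q", True)
np, nq = ("p", False), ("q", False)
# {p or q, not-p or q, not-q} is unsatisfiable:
print(unsatisfiable([frozenset({p, q}), frozenset({np, q}), frozenset({nq})]))
```

Adapting this idea to modal logic is the interesting part: the box and diamond operators block the simple clausal form used here, which is exactly the difficulty the course addresses.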

**Pre-requisites:** Elementary logic, as covered in the "Fundamentals of Metalogic" course.

**Lecturer:**
Cláudia Nalon is a full lecturer at the Department of Computer Science at the University of Brasília, Brazil. She obtained her PhD from the University of Liverpool, in the UK, where she worked on proof methods for non-classical logics and their combinations. Cláudia is interested in both the theoretical foundations and implementation of reasoning tools, and has worked on proof methods for several families of normal modal logics (allowing symmetry, reflexivity, seriality, transitivity, Euclideanness, and also parametrised multi-modal confluent logics); interacting temporal logics of knowledge; Coalition Logic, the non-normal modal logic for reasoning about cooperative agency; dynamic logics indexed by Petri-net programs; and finitely many-valued logics.

## Deep Inference

**Contents:** Introducing deep inference, deep inference for classical logic, other logics in deep inference, application to modelling a process calculus.

**Course description:**
Deep inference refers to a proof-theoretic formalism for presenting logical systems in which inference rules can be applied to any subexpression of a given logical expression (sequents, structures, or formulas). In this course, I will discuss a particular deep inference formalism known as the calculus of structures (CoS), originally conceived by Alessio Guglielmi. CoS systems can be viewed as rewriting systems on logical formulas, with two mandatory rules that are the counterparts of the identity and cut rules in sequent calculus. I will highlight some unique features of CoS through a couple of examples: one is SKS, a proof system for classical logic in which contraction can be restricted to an atomic form (applicable only to atomic propositions), and the other is BV, a proof system for a non-commutative extension of multiplicative linear logic for which no sequent system is known to exist. I will present an argument that BV cannot be presented in any "sequent system". To make the argument rigorous, I will use a characterisation of sequent systems as "shallow systems", whose inference rules describe a syntactic manipulation procedure with only a limited view of the sequent; a typical inference rule in sequent calculus, for example, relates premise(s) and conclusion that differ only in a bounded number of symbols. Lastly, I will present an application of system BV to embedding a simple fragment of the process calculus CCS, and show that under this embedding logical implication corresponds to a preorder in CCS.

**Lecturer:**
Alwen Tiu is a Senior Lecturer at the Research School of Computer Science, The Australian National University. He did his Master's and PhD degrees at TU Dresden and Penn State University, respectively, specialising in proof theory. His research interests span theoretical as well as practical aspects of computer science; these include formal methods, computational logic, automated theorem proving and computer security. More specifically, he is interested in modelling aspects of computational systems (such as parts of operating systems, communication protocols, simple authentication devices, etc.) as mathematical theories, and developing tools and techniques to prove their correctness or to find potential flaws.