RBE595: Formal methods in robotics

Course description

This course introduces the fundamentals of formal methods and shows how to integrate formal methods, control theory, and artificial intelligence (AI) to develop autonomous, reactive, and trustworthy robotic systems.

  • Formal Specification: Formal methods are concerned with generating precise, unambiguous task specifications or constraints that an autonomous agent (not limited to robots) is expected to satisfy. Formal specification provides a language, with formal syntax and semantics, for communicating intended behaviors, high-level tasks, and constraints to AI agents. Topics include:

    • Automata theory.

    • Temporal logic.
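To give a flavor of these topics, the sketch below evaluates two linear temporal logic (LTL) style properties on a finite trace. The trace and the atomic propositions ("move", "goal", "collision") are invented for illustration; this is a minimal sketch, not course material.

```python
# Sketch only: checking simple LTL-style properties on a finite trace.
# Proposition names ("move", "goal", "collision") are invented examples.

def G(trace, holds):
    """'Always': the predicate holds at every step of the (finite) trace."""
    return all(holds(step) for step in trace)

def F(trace, holds):
    """'Eventually': the predicate holds at some step of the (finite) trace."""
    return any(holds(step) for step in trace)

# A trace is a list of sets of atomic propositions true at each time step.
trace = [{"move"}, {"move"}, {"move", "goal"}]

safety = G(trace, lambda s: "collision" not in s)   # G(not collision)
liveness = F(trace, lambda s: "goal" in s)          # F goal

print(safety, liveness)  # True True
```

Real LTL semantics are defined over infinite words and checked with automata-based model checkers; the finite-trace evaluation here only illustrates the syntax-to-semantics idea.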

  • Formal Verification and Synthesis (Discrete Systems): Formal synthesis is a suite of planning and control methods that ensure the behavior of an AI agent satisfies its formal specification in dynamic or stochastic environments. Topics include:

    • Probabilistic planning

    • Model-based and model-free reinforcement learning methods with formal specifications.

    • Game-theoretic planning in sequential decision making.
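As a hedged illustration of the probabilistic-planning thread, the sketch below runs value iteration on a tiny invented Markov decision process with a single goal state. The states, actions, and transition probabilities are made up for the example.

```python
# Sketch only: value iteration on a tiny invented MDP.
# transitions[s][a] = list of (next_state, probability); state 2 is the goal,
# and entering (or remaining in) it yields reward 1.
transitions = {
    0: {"a": [(0, 0.5), (1, 0.5)], "b": [(0, 1.0)]},
    1: {"a": [(2, 0.8), (0, 0.2)], "b": [(1, 1.0)]},
    2: {"a": [(2, 1.0)], "b": [(2, 1.0)]},
}
gamma = 0.9  # discount factor

def q(s, a, V):
    """Expected discounted return of taking action a in state s."""
    return sum(p * ((1.0 if s2 == 2 else 0.0) + gamma * V[s2])
               for s2, p in transitions[s][a])

# Bellman backups until (numerical) convergence.
V = {s: 0.0 for s in transitions}
for _ in range(100):
    V = {s: max(q(s, a, V) for a in transitions[s]) for s in transitions}

# Greedy policy extracted from the converged value function.
policy = {s: max(transitions[s], key=lambda a: q(s, a, V)) for s in transitions}
```

From state 1 the greedy policy picks action "a" (an 80% chance of reaching the goal) over the self-loop "b"; synthesis with formal specifications builds on exactly this kind of fixed-point computation over a product of the MDP and a specification automaton.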

  • Formal Synthesis and Verification (Continuous Systems): Most robotic systems are continuous. Formal synthesis for continuous systems integrates abstraction-based planning with control design for the resulting discrete systems. Formal verification checks whether a closed-loop system satisfies the desired properties expressed as formal specifications. Topics include:

    • Bisimulation and abstraction-based methods.

    • Reachability analysis and safety verification for continuous dynamical systems.

    • Game-theoretic control design.
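To hint at what reachability-based safety verification looks like, the sketch below propagates an interval through a scalar discrete-time system and checks containment in a safe set. The dynamics, disturbance bound, and safe set are invented for the sketch.

```python
# Sketch only: interval over-approximation of the reachable set of the
# invented scalar system x+ = a*x + u with u in [u_lo, u_hi].

def step_interval(lo, hi, a, u_lo, u_hi):
    """Propagate the interval [lo, hi] one step through x+ = a*x + u."""
    corners = [a * lo, a * hi]
    return min(corners) + u_lo, max(corners) + u_hi

# Safety check: starting from x in [0, 1], with a = 0.5 (stable dynamics)
# and bounded input |u| <= 0.1, does the state stay in the safe set [-1, 2]?
lo, hi = 0.0, 1.0
safe = True
for _ in range(10):
    lo, hi = step_interval(lo, hi, 0.5, -0.1, 0.1)
    safe = safe and (-1.0 <= lo and hi <= 2.0)

print(safe)  # True
```

Because the interval over-approximates the true reachable set, `safe == True` is a sound safety certificate for this toy model; tools covered in the course extend the same idea to higher-dimensional and nonlinear dynamics.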

To summarize, this course will explore several key questions:

  • How do we develop a computational framework for specifying a task in a high-level, human-like language?

  • Given a task in such a language, how do we automatically generate a provably correct control policy for a robotic system?

  • How do we ensure that a robot achieves its high-level mission while interacting with a dynamic and uncontrollable environment (human users or other uncontrolled robots)?

  • How do we train a reinforcement learning agent, or a team of agents, to behave as intended by the human user without reward engineering?

Prerequisites: Linear algebra; probability theory; linear control theory (RBE502: Robot Control or a nonlinear control course is desirable).

Grade categories:

  • Midterm exam 35%

  • Project 40%

  • Assignments 25%

The course project emphasizes either the development of new theory and algorithms (under the supervision of the instructor and her Ph.D. students) or the application of theory to practice: simulation and practical implementation of planning and control algorithms for robotic and cyber-physical systems, such as multi-robot teams and autonomous vehicles.

Course textbook/references:

Reading materials will be provided by the lecturer.

  • Andrew M. Pitts, “Lecture Notes on Regular Languages and Finite Automata”, Cambridge University Computer Laboratory. Part I (pp. 1-30).

  • Hopcroft, John E., Rajeev Motwani, and Jeffrey D. Ullman. “Introduction to Automata Theory, Languages, and Computation” (2006). WPI Library copy available online.

  • Baier, Christel, Joost-Pieter Katoen, and Kim Guldstrand Larsen. “Principles of Model Checking”. MIT Press, 2008. Available online. Chapters 3, 5, 7, and 10.

  • Thomas, Wolfgang, and Thomas Wilke. “Automata, Logics, and Infinite Games: A Guide to Current Research.” Vol. 2500. Springer Science & Business Media, 2002. Available at the WPI Library. Part I (Introduction through Chapter 2, Infinite Games).

Other useful references:

  • “Reinforcement Learning: An Introduction” by Richard S. Sutton and Andrew G. Barto (2017 version): link

  • “Dynamic Programming and Optimal Control” by Dimitri P. Bertsekas (two-volume set: Vol. I, 4th ed., and Vol. II, 4th ed.): link

Research papers will be provided in lectures and on Canvas.