# Robotics (H02A4A)

## Lecturer

Herman Bruyninckx
KU Leuven — University of Leuven
Dept. Werktuigkunde, Room 01.053
Celestijnenlaan 300B, B-3001 Leuven (Heverlee), Belgium
Tel: +32 (0)16 328056

## Mission statement

To help students understand why and how to realise the Trustworthy Turing Test for robotics (and other autonomous) systems. That is, society should only allow robots to be built and deployed when there is a transparent and certified guarantee that they can answer, at any time, the following questions, and motivate that answer:

• What is the robot doing?
• Why is it doing it?
• How well is it doing it?
• What else could it have been doing instead?

The official KU Leuven course description can be found here: H02A4AE.

Place and time of (first and only) lecture of the second semester: Monday 13 February 2017, 13h00–14h00 (room C300, Aud. E, ground floor of the Department of Mechanical Engineering, Celestijnenlaan 300, Heverlee).

Groups of two, preferably three students have to be formed, and their projects have to be chosen, before 24 February 2017!

This course is organized as guided self study: there is only one introductory lecture in class, and for the rest of the course the students work on a project of their own choice, with regular discussion sessions with the lecturer.

No organized examination: this course uses continuous evaluation, such that there is no need for organized examination sessions in June. Students have until the end of June to finalise their project. But note that the lecturer has lots of other students to see in that month, so be pro-active and finish this course earlier, please!

This is an optional course for all students, so students are responsible for informing their individual curriculum administration about their participation in this course.

Since the official load of this course is 4 study points, the expected course load is about 100 hours per student. I expect each student to keep track of the effort spent, and to report it during each of the interactive discussion sessions with the lecturer.

This course is different from Advanced Robot Control Systems (Master of Mechanical Engineering), and Agricultural Robotics (Master of Bio Engineering). While the contents of these courses may, at first sight, seem similar, their focus is very different: while the companion courses are geared towards engineering students interested in advanced mechanical and bio-engineering robotic systems, this Master of Artificial Intelligence course is for students from different backgrounds, who want to get acquainted with the domain of intelligent robots. For one, this means that the ELSE (Ethical, Legal and Socio-Economic) aspects around the (expected) penetration of robotics technology into our daily life are part of the course contents. But most of all, the focus of the course is about how to bring in knowledge in a scientific way; the methodology is based on Bayesian theory, as taught in the course on Uncertainty Reasoning in Knowledge Systems.

Students from the following curricula will have to study the fundamental concepts of Bayesian theory first, with just enough depth to fit with the interests of those curricula:

• Master in Artificial Intelligence: the emphasis is on “intelligent” control approaches for robot devices such as humanoids, walking robots, mobile robots, etc.
• Master Biomedische ingenieurstechnieken: the emphasis is on applying advanced kinematics and dynamics techniques from robotics to the case of the human body, for analysis as well as synthesis (e.g., rehabilitation, operation planning, gait analysis, etc.).
• Master in Space Studies: the emphasis is on applying advanced kinematics and dynamics techniques from robotics to the case of flying and orbital robots and satellites. The European Space Agency, ESA, has a large programme dedicated to robotics, Peraspera, which could serve as inspiration for projects.

Serving so many different backgrounds implies that it is next to impossible to provide the “most appropriate” level of detail in a traditional teaching approach of ex cathedra lectures. Irrespective of the students' background, a bachelor level knowledge of physics and of mathematics (linear algebra, statistics, differential equations, logic) is required. In all cases, it's impossible to follow this course if you are afraid of trying to understand mathematical algorithms!

In this context, “mathematical” can mean any combination of the following formal levels of abstraction of the real world:

• continuous. For example: each motion of a robot takes place in continuous space-time, with a dimensionality equal to the number of joints in the robot's kinematic structure.
• discrete. For example, in order to achieve a specific task, the robot will have to discretize it into a sequence of individual motions. Or, it will have to be able to recognize a discrete set of objects.
• logical. First-order logic is the simplest form of reasoning that a robot must be able to perform.
• symbolical. Knowledge is typically represented by formal relationships that link all of the above in often complex combinations, and with often “new” mathematical properties. For example, how to represent the physical interaction between robots and humans?

## Course contents

This course is an introduction to intelligent robotic systems, i.e., machines that move (i.e., they move themselves and/or move objects in their environment), sense what is going on in their (immediate) neighbourhood, interpret how this sensing information is relevant to their task at hand, decide and act in order to achieve a planned task, while having only uncertain knowledge about themselves and their environment. “Motion”, “modelling”, “perception”, “planning”, “learning”, “adaptation”, “decision making”, “uncertainty” and “control” are key concepts in every intelligent robot, and hence also in this course.

Although you might have seen a lot of videos with robots that behave intelligently, this course guides students towards an understanding of what “intelligence” could or should mean for a machine. The mission statement questions can serve as a rule of thumb to assess that intelligence: the robot itself should be able to answer them, even while trying to do something together with another robot.

Currently, the state of the art in robotics is far away from being able to build such systems, despite a public (and professional…) press that suggests otherwise. The paragraphs below introduce diagrams at a high level of abstraction, which have proven over the last decade to serve students quite well as a guide towards understanding where exactly the difficult, unsolved aspects of intelligent robotic systems lie. Especially the complexity of formalising “common knowledge” is a problem, since “action” and “perception” algorithms heavily depend on such knowledge before they can become “intelligent”.

For several years already, the European robotics community has been engaged in an extensive effort to “roadmap” the R&D&I (Research, Development and Innovation) of robotics in general, and intelligent robots in particular. Reading its Multi-Annual Roadmap document is mandatory for all students, since it shows how diverse, difficult and heterogeneous the domain of intelligent robotics is.

The big challenge for intelligent robot systems is to integrate various types of knowledge into an appropriate software architecture that combines dozens of individual algorithms, and that also matches to the hardware and software platforms that the robot is running on.

The students of the course are expected to reach the phase of understanding how the following five knowledge types are (i) formally represented, and (ii) integrated at runtime by a robot's software architecture:

• Robot platform capabilities: what are the motions that the robot can achieve? what are the constraints that it imposes on the application? what are (un)safe modes of operations? what kinds of objective functions can the platform optimize? Etc.
Students must learn to understand what motion capabilities robot systems can have, from the purely geometric level of abstraction, down to the energy consumption of the actuators.
• Object affordances: in the mainstream of robotics research, objects are just pieces of geometry, without any knowledge attached to them. This is a completely wrong starting point, since for humans all objects come with lots of so-called “affordances”: each object has a natural set of approach motions, manipulation actions, perception features, operational conditions, etc., and those should be modelled in a knowledge base that the robot controller can query at runtime.
Students must be able to explain how to represent object affordances of objects of daily use, such as tables, doors, elevators, spoons, etc.
• Task specification: a good task specification is (i) purely declarative (that is, it provides the information about what has to be achieved in the task, and not how), and (ii) described without any specific assumptions about the “robot platform” that eventually will execute the task.
Students must be able to make such task specifications, for simple operations.
• World model: one of the biggest mistakes in mainstream robot systems is that the knowledge about what the world looks like is nowhere represented explicitly, and in an integrated way. Instead, bits and pieces of the world knowledge are spread over control algorithms, perception activities, planners, etc.
Students must be able to add “robotics” world model primitives to state-of-the-art world map tools, such as OpenStreetMap.
• Common knowledge: this might look like the most obvious knowledge to represent, but surprisingly little formally described common knowledge is currently already available for robots to use.
Students must learn how to formally describe “easy” knowledge such as: after grasping an object, the chances are high that the object will move together with the robot that has grasped it; when releasing it during motion, it will follow a ballistic trajectory described by Newton's Law; when object A is “behind” object B, the perception of both objects will be “disturbed”; etc.
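As a minimal illustration of formalising one piece of such “common knowledge”, the ballistic-trajectory rule from the list above can be written down as a small prediction function. All names and numerical values below are illustrative assumptions, not part of the course material:

```python
# Hypothetical sketch: encoding one piece of "common knowledge" --
# an object released during motion follows a ballistic trajectory
# under gravity (Newton's laws). All values are illustrative.

def ballistic_trajectory(p0, v0, t, g=9.81):
    """Position at time t of an object released at p0 with velocity v0.

    p0, v0: (x, y, z) tuples in metres and m/s; z points up.
    Returns the (x, y, z) position after t seconds of free flight.
    """
    x0, y0, z0 = p0
    vx, vy, vz = v0
    return (x0 + vx * t,
            y0 + vy * t,
            z0 + vz * t - 0.5 * g * t * t)

# A robot releasing an object at 1 m height while moving at 0.5 m/s
# can predict where the object will be 0.45 s later:
p = ballistic_trajectory((0.0, 0.0, 1.0), (0.5, 0.0, 0.0), t=0.45)
```

The point of the sketch is not the physics, but that the rule is now machine-queryable: a planner or controller can call it at runtime instead of having the knowledge hidden implicitly in some control loop.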

The “intelligence” in robot systems comes, more and more, from better perception of the environment by the robots. In contrast to “pure” perception research (e.g., computer vision), the perception in a robot needs to happen on-line and in the context of the task that the robot is executing, and helped by the robot's motion. The figure below sketches the many different interactions that take place in a sensori-motor control approach based on the “Bayesian network” paradigm.
Students should learn to understand how this represents the generic template to understand and design intelligent motion and sensing in robots. The following figure must become the basis for all projects:

The major difficulty to realise the knowledge structure above in real systems is that it involves several levels of data association challenges:

• From (sensor) causes to (sensor) features. For example, which pixels in a camera image can/should be taken together to construct the most relevant visual cues for the task at hand?
• From sensor features to object features. For example, how do several visual cues help the robot to recognize a specific object?
• From object features to task features. For example, which object features are relevant for the robot in the current phase of its task?
• From task features to the purpose of the whole mission the robots are involved in.

Note that these levels of data association conform to the levels in the Knowledge Pyramid.
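The lowest of these data association levels can be made concrete with a small sketch: matching measured sensor features to predicted ones by nearest-neighbour gating. The feature names, coordinates and gate threshold below are made-up assumptions, chosen only to show the mechanism:

```python
# Illustrative sketch of low-level data association: greedily match
# each predicted feature to its nearest measured feature, rejecting
# matches beyond a "gate" distance. All values are invented.
import math

def associate(predictions, measurements, gate=1.0):
    """Greedily match each prediction to its nearest free measurement.

    predictions, measurements: dicts mapping names to (x, y) points.
    Returns {prediction_name: measurement_name or None}.
    """
    free = dict(measurements)
    result = {}
    for name, p in predictions.items():
        best, best_d = None, gate
        for mname, m in free.items():
            d = math.hypot(p[0] - m[0], p[1] - m[1])
            if d < best_d:
                best, best_d = mname, d
        result[name] = best
        if best is not None:
            free.pop(best)  # each measurement explains one feature
    return result

links = associate({"fea1": (1.0, 1.0), "fea2": (4.0, 0.0)},
                  {"m_a": (1.1, 0.9), "m_b": (6.0, 6.0)})
# "fea1" is matched to the nearby "m_a"; "fea2" has no measurement
# inside its gate, so it remains unassociated.
```

Real robot systems replace the Euclidean gate by a probabilistic (e.g., Mahalanobis) distance derived from the Bayesian network's uncertainties, but the association structure stays the same.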

This figure uses the graphical notation of a Bayesian network, and the key aspects of this representation are:

• every node is a “random variable”: a (possibly very complex) data structure that represents the value of the “state of the world”, including a probability density function on the random variable's parameter values.
• every arrow is a “probabilistic relationship” between random variables, more precisely a conditional probability of the form $P\left(a|b,M\right)$, where $a$ and $b$ are random variables, and $M$ are all the mathematical models that link the random variables together.
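A minimal, purely illustrative sketch of one such arrow, with discrete random variables and made-up probability values (the variable names and numbers are assumptions, not from the course): the conditional table $P(a|b)$ plus a prior $P(b)$ is enough to apply Bayes' rule.

```python
# Hypothetical discrete example of one Bayesian-network arrow b -> a:
# b = state of a door, a = reading of a distance sensor.
# All probability values are invented for illustration.

prior_b = {"door_open": 0.3, "door_closed": 0.7}          # P(b)
likelihood = {                                            # P(a | b)
    "door_open":   {"sensor_high": 0.8, "sensor_low": 0.2},
    "door_closed": {"sensor_high": 0.1, "sensor_low": 0.9},
}

def posterior(observed_a):
    """Return P(b | a = observed_a) via Bayes' rule."""
    unnorm = {b: prior_b[b] * likelihood[b][observed_a] for b in prior_b}
    z = sum(unnorm.values())                              # evidence P(a)
    return {b: p / z for b, p in unnorm.items()}

post = posterior("sensor_high")   # e.g. P(door_open | sensor_high)
```

Observing `sensor_high` shifts the belief strongly towards `door_open`, even though the prior favoured `door_closed`; this is the elementary operation that all the estimation algorithms in the course build on.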

For example, in the figure above: if the position of $obj1$ is known with respect to the $robot$, one can predict where several of its sensor features, $fea1$ and $fea2$, can appear in the camera $sensor1$, on the basis of knowledge about the optical parameters of the camera and its lens, a model of a “pin-hole camera” and knowledge of where the camera is placed on the robot. The power of Bayesian probability theory is then to provide algorithms “to invert the arrows”, that is, to find better (probabilistic) estimates of the position of $obj1$ with respect to the $robot$.
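The “forward” prediction step of that example can be sketched in a few lines. The camera mounting, focal length and principal point below are made-up illustrative values, and the camera is assumed to be merely translated (not rotated) with respect to the robot frame, to keep the sketch short:

```python
# Hypothetical pin-hole prediction: where should a feature of obj1
# appear in the image of sensor1, given the object position in the
# robot frame and the camera's mounting point on the robot?
# Focal length, principal point and poses are invented values.

def predict_pixel(obj_in_robot, cam_in_robot, f=500.0, cx=320.0, cy=240.0):
    """Project a 3D point (metres, robot frame) into pixel coordinates.

    Assumes a translation-only camera mounting, optical axis along +z;
    f is the focal length in pixels, (cx, cy) the principal point.
    """
    # Transform the point into the camera frame (translation only).
    x = obj_in_robot[0] - cam_in_robot[0]
    y = obj_in_robot[1] - cam_in_robot[1]
    z = obj_in_robot[2] - cam_in_robot[2]
    if z <= 0:
        raise ValueError("object is behind the camera")
    # Standard pin-hole equations: u = f*x/z + cx, v = f*y/z + cy.
    return (f * x / z + cx, f * y / z + cy)

u, v = predict_pixel(obj_in_robot=(0.2, 0.1, 2.0),
                     cam_in_robot=(0.0, 0.0, 0.5))
```

“Inverting the arrow” then means using such a measurement model inside a Bayesian estimator (e.g., a recursive filter) to update the belief about the object position from observed pixel coordinates.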

Both parts together form a formal model of the “world”, including all the knowledge that the engineer can or wants to bring in. Identifying what “robotics knowledge” can be brought in, and how that is being processed, is the major learning objective of the course. Hence, this figure will be part of every project discussion between lecturer and students.

Currently, the state of the art is still mostly scratching the surface of the lowest level, that of data reduction: a lot of data is reduced to a smaller set of data, that humans (and not the robots themselves!) can interpret as useful information.

## Evaluation

The students are given a lot of freedom in how they approach their project: it can be a study of a number of research papers; it can have a large programming flavour to it, in simulation or on a real robot; it can be (mechanical, software) design-oriented or very A.I.-oriented; … The only requirement is that students sit together with the lecturer and agree on a time schedule and learning targets. During the project, team members and the lecturer meet four or five times for discussion sessions of about one hour.

Every project starts with a search in the library; here is a list of possible starting points. Only journal papers are accepted as material for students to use in their preparation, unless explicitly agreed upon differently by the lecturer. Discussion sessions whose preparation does not follow this requirement will simply not be started, and students will be sent home to improve their preparation. Of course, the Web is another important source of information, but it should never be the only source, and certainly not the first one!

There is no final examination session: students are continuously evaluated during the face-to-face meetings. Note that it is not the end result of the project that is evaluated, but the students' progress in their attempt to reach that result. Three criteria form the core of the evaluation:

1. Critical digestion of the studied material: students not only have to understand the project material, but also be very critical about what they read and be able to ask precise questions about the topics they do not understand.

This is the major attitude that this course wants to stimulate in the students. Make sure that you don't give criticism without knowing what you are talking about! Hence, a critical attitude goes together with an independent research and study attitude.

2. Progress in the digestion of the project material and towards the agreed-on project goals.

In each interactive session, the students should show their skills of being able to put the material that they have digested so far into a wider robotics perspective, to see generic foundations of different solutions or research domains, to go from analysis to synthesis, to reflect on their own trajectory of (not) understanding the material, etc.

3. Creativity in the discussion of project material, in identifying problems, and in suggesting potential solutions.

It's the students' responsibility to prove orally and interactively their qualities in all these criteria. So, students with poor assertiveness skills and poor Dutch or English (spoken!) language skills are at risk, and are advised not to select this course.

## Projects

Students make their choice of project, and form a team of 2-3 students. (Individual student projects are possible, but not preferred, because students lose the added value of being able to interact with each other.) Recall that the lecturer wants to talk to each group interactively for four to five hours during the project, to discuss problems, to give feedback on the progress, and to explain concepts and theory (after independent preparation by the student group!). Each group is responsible for its own timing. Appointments can be made by emailing the lecturer.

Contact the lecturer before making a final decision on the project topic, in order to agree on the subject and the (approximate) goals to be achieved. These goals depend not only on the project, but also on the number of students in the team and on their background. The concrete contents of a project are always defined after discussion with the students, such that they get ample opportunity to incorporate their own ideas and interests, and to adapt the project description to their personal background. It is not uncommon for the subject of the project to shift during the semester, depending on the interests and progress of the students.

In addition, it is always possible for students to come with their own suggestions. Such project suggestions must first be discussed with the lecturer, in order to guarantee that they contain sufficient robotics content.

Most projects are open-ended, in the sense that there is no single “best” or “correct” solution. The open-endedness is a deliberately chosen feature of the projects, because (i) this situation will almost certainly show up in your future professional career too, and (ii) the intelligent robots of the future will have to survive in the “open world” anyway. So, this project is a good opportunity to test one's creativity, problem-solving skills, and team-work abilities.

Most projects can be chosen by more than one group, since this course's individual approach to the projects results most often in significant differences between the concrete contents of the different groups.

### Project suggestions — Spring 2017

The following is just a list of suggestions. Students need to come up with their own ideas, for which they feel most motivated.

1. Bayesian networks and logical rules in robotics

This project studies Bayesian algorithms and (probabilistic) logical rules that are often used in robotics, for parameter estimation, pattern recognition, map building, decision making, knowledge application, etc.

Students should have a background in Bayesian Theory, for instance from the course on Uncertainty Reasoning in Knowledge Systems.

2. Multi-resolution active sensing

This project experiments with strategies to find and recognize objects with force and/or distance sensors (“blind man sensing”), using active sensing, i.e., performing actions with a robot to get more information from the environment.

This project can be chosen by different groups, focusing on different approaches and objects. The project is a perfect choice for a software-oriented project.

3. Human(oid) motion software

This project studies the physics to describe the kinematics and dynamics of moving humans, or of humanoid robots, as well as the control algorithms to realise dynamic walking. Both instantaneous motion and timed motion trajectories are investigated, as well as the link between both.

4. Robotics in biomedical technology

The human is by far the best “robotic” machine, and, hence, the study of the 3D kinematics, dynamics, perception and control of the human is highly relevant for understanding and building robots.

In the opposite direction, a lot of robotics know-how and technology is being transferred to medical applications, not just for manipulating surgical tools, but also for sensor-based navigation and registration on and within the human body, for the analysis and simulation of the human gait, etc.

This project is well suited for medical technology students, to investigate the synergies between robotics and biomedical technologies.

5. Agricultural robotics

Although agriculture is, especially in Europe, an economic sector in which intelligent automation would be very welcome, due to the high labour costs and the intensive worldwide competition, very few robotics applications exist in this sector as yet. The major reasons are the lack of structure in a typical agricultural environment, and the high manipulation skills required to treat fruit or vegetables with appropriate care and at economically justifiable speed.

This project studies the state of the art in agricultural robotics, and identifies the key challenges and opportunities; it is well suited for bio-engineering students.

6. How to represent and learn robot skills?

This project investigates what it means for a robot to have a “skill”, or to learn it. For example, the skill to grasp a bottle, or to do a simple assembly task. You will learn how difficult it is to represent this skill in a computer-readable form, and how much task and environment knowledge needs to be encoded for such skills to be of realistic complexity. In addition, the on-line control and sensor processing required to execute skills are also impressively involved.

7. Add robotics content to Wikipedia

This project critically discusses the quality of the robotics items that are already described in the on-line Wikipedia encyclopedia, and adds improvements and new items.

This project can be chosen by different groups, each focusing on a different set of robotics domains. Suggested domains are: kinematics and dynamics, control theory, sensor processing, planning, Bayesian information processing, humanoid robots, mobile robots. (This project can be combined with the previous ones.)

The reporting connected to this project consists of actual contributions to the Wikipedia.