Distinguished Lectures


  • Prof Christoph Schommer
  • Prof Leon van der Torre
  • Prof Raymond Bisdorff
  • Prof Martin Theobald
  • Prof Ulrich Sorger
  • Prof Pascal Bouvry
  • Prof Peter Ryan
  • Prof Christoph Benzmüller, FU Berlin




Dr. Kary Främling, Umeå University, Sweden

Contextual Importance and Utility: How Could It Change Explainable AI?
21 March 2022, 16h00 via Webex

Meeting link: https://unilu.webex.com/unilu/j.php?MTID=m513f8ada798af729205020f4669e74bf
Meeting number: 2730 181 3468  --- Password: akRzARHp222

Abstract: Explaining the results of AI systems in ways that are understandable to different categories of end-users has been a challenge for AI since its beginning. This challenge has grown even greater in recent years with increasingly complex machine learning models, leading to the development of numerous so-called explainable AI (XAI) methods. In practice, a majority of XAI research seems to focus on explaining image classification, e.g. "I believe there's a cat in this image, and I show you where it is". Such explanations are visually attractive and might help developers of deep neural networks assess how well their network has learned what it is supposed to. However, such XAI methods add no value for the citizen whose loan application was refused, or who wasn't called to a job interview, as decided by an AI system. It is doubtful whether current XAI methods are capable of providing such insight, and whether they actually make it possible to detect bias or discrimination in AI systems. Some solutions are presented, such as the Contextual Importance and Utility (CIU) method.

Host: Dr. Aleks Knoks, ICR Research Group, University of Luxembourg




Prof Toby Walsh

Professor of artificial intelligence at the University of New South Wales

The AI Future

Time and Place: 27 June 2019, 16h00 - 17h00. NEW: Room MSA 3.370

Abstract: Artificial intelligence offers considerable promise - the robots can perhaps take the sweat, and do all the dirty, dull and difficult jobs - but how do we ensure the future is bright? How long even do we have before machines are as capable as us? And when they are, how can we be sure they'll behave ethically and to our benefit?


Bio: Toby Walsh is a leading researcher in Artificial Intelligence. He was named by The Australian newspaper as a "rock star" of Australia's digital revolution. He is Scientia Professor of Artificial Intelligence at UNSW, leads the Algorithmic Decision Theory group at Data61, Australia's Centre of Excellence for ICT Research, and is Guest Professor at TU Berlin. He has been elected a fellow of the Australian Academy of Science, and has won the prestigious Humboldt Research Award as well as the NSW Premier's Prize for Excellence in Engineering and ICT. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland and Sweden. He regularly appears in the media talking about the impact of AI and robotics, and is passionate that limits be placed on AI to ensure the public good. In the last two years, he has appeared on TV and radio on the ABC, BBC, Channel 7, Channel 9, Channel 10, CCTV, CNN, DW, NPR, RT, SBS, and VOA, as well as on numerous radio stations. He also writes frequently for print and online media. His work has appeared in New Scientist, American Scientist, Le Scienze, Cosmos, The Conversation and "The Best Writing on Mathematics". His Twitter account has been voted one of the top ten to follow to keep abreast of developments in AI. He often gives talks at public and trade events such as CeBIT, the World Knowledge Forum, TEDx, and Writers' Festivals in Melbourne, Sydney and elsewhere. He has played a leading role at the UN and elsewhere in the campaign to ban lethal autonomous weapons (aka "killer robots").



Prof Dr Marco Aldinucci, Associate Professor at the Computer Science Department, University of Torino; Principal Investigator of the Parallel Computing group

The evolution of high-performance systems: from HPC to Big Data to Deep Learning

Time and Place: 5 June 2019, 15h00 - 16h00. Room MSA 4.510

Abstract: Computer science evolves through successive abstractions. Today, after 30 years of lethargy, high-performance computing (HPC) is extending beyond its traditional fields of application. For years HPC systems have been fed with differential equations; the ability to calculate many mathematical operations per second (FLOPS) was the key to solving ever larger problems and finding ever more precise solutions. The explosion of data resulting from digital transformation has shifted the demand for high performance from traditional applications (equations, simulations, etc.) to methods for the analysis of large amounts of data (Big Data, Deep Learning, etc.). Under this impulse, the programming and usage models of HPC systems are evolving towards much more abstract models, able to satisfy different application needs and to simplify the development of new applications. The challenges for designers are renewed: from FLOPS to the efficient management of data in memory; from mathematics in double precision to small but efficient precision for deep neural networks. A breath of life for high-performance systems researchers: experimenting with new workloads, platforms, programming models and provisioning models. To meet these challenges, the University of Turin and the Polytechnic University of Turin have joined forces in the HPC4AI centre to create a federated competence centre on High-Performance Computing (HPC), Artificial Intelligence (AI) and Big Data Analytics (BDA): a centre capable of collaborating with entrepreneurs to boost their ability to innovate on data-driven technologies and applications. In the talk, some recent results on distributed training at scale will be presented.


Bio: Marco Aldinucci has been an associate professor at the Computer Science Department of the University of Torino (UNITO) since 2014. Previously, he was a postdoc at the University of Pisa and a researcher at the Italian National Research Agency (ISTI-CNR) and the University of Torino. He is the author of over a hundred papers in international journals and conference proceedings. He has participated in over 20 national and international research projects concerning parallel and autonomic computing. He is the recipient of the HPC Advisory Council University Award 2011, the NVIDIA Research Award 2013, and the IBM Faculty Award 2015. He is the P.I. of the parallel computing group alpha@UNITO, the director of the "data-centric computing" laboratory at the ICxT@UNITO innovation centre, vice-president of the C3S@UNITO competency centre, and the coordinator of HPC4AI. Since Nov. 2018 he has been a member of the Governing Board of the EuroHPC Joint Undertaking. He co-designed, together with Massimo Torquati, the FastFlow programming framework and several other programming frameworks and libraries for parallel computing. His research is focused on parallel and distributed computing.



Prof. El-Ghazali Talbi, University of Lille & INRIA (France), and Invited Professor at the University of Luxembourg

How Machine Learning can help Optimization

Time and Place: 2 May 2019, 16h00, Room: MSA 3.540


Abstract: In recent years, research on using machine learning (ML) to design efficient and effective optimization algorithms such as metaheuristics has become increasingly popular. Many of these hybrid approaches have generated high-quality results and represent state-of-the-art optimization algorithms. Although various hybrid approaches have been used, a comprehensive survey of this research topic is lacking. In this talk we will investigate the different opportunities for using ML in metaheuristics. We define the various ways synergies may be achieved. A new detailed taxonomy is proposed according to the search component concerned: the target optimization problem, and the low-level and high-level components of metaheuristics. We also identify some open research issues in this topic which need further in-depth investigation.

Bio: Prof. El-Ghazali Talbi received his Master's and Ph.D. degrees in Computer Science, both from the Institut National Polytechnique de Grenoble in France. He then became an Associate Professor in Computer Science at the University of Lille (France). Since 2001 he has been a full Professor at the University of Lille and the head of the optimization group of the Computer Science laboratory (CRISTAL). He is an invited Professor at the University of Luxembourg (2019-2022). His current research interests are in the fields of optimization, parallel algorithms, metaheuristics, high-performance computing, optimization and machine learning, and applications to energy, logistics/transportation, biomedicine and networks. Professor Talbi has to his credit more than 300 publications in journals, book chapters and conferences. He is the co-editor of ten books, and has been a guest editor of more than 17 special issues of different journals (Journal of Heuristics, Journal of Parallel and Distributed Computing, European Journal of Operational Research, Theoretical Computer Science, Journal of Global Optimization). He was the head of the INRIA Dolphin project and of the bioinformatics platform of the Genopole of Lille. He has many collaborative national, European and international projects. He is the co-founder and coordinator of the research group dedicated to Metaheuristics: Theory and Applications (META). He has served as conference chair of more than 20 international conferences (e.g. EA'2005, ROADEF'2006, META'2008, IEEE AICCSA'2010, META'2014, MIC'2015, OLA'2018, MOPGP'2019).



Prof Dr Philipp Slusallek, University Saarbrücken and DFKI Saarbrücken

CLAIRE: A European Initiative for "Excellence in All of AI, for All of Europe, with a Human-Centered Focus"

(Understanding the World with AI: Training and Validating Smart Machines Using Synthetic Data)

Time and Place: 7 March 2019, 16h00, Room: MSA 2.240

Abstract: The world around us is highly complex, but Autonomous Systems must be able to reliably make accurate decisions that in many cases may even affect human lives. With Digital Reality we propose an approach that, instead of relying only on real data, learns models of the real world and uses synthetic sensor data generated via simulations for the training and -- even more importantly -- the validation of Autonomous Systems. This is extended by a continuous process of validating the models against the real world in order to improve them and adapt them to a changing environment. A highly relevant application of this approach is in intelligent sensor systems. Using a model of the object to be measured and of the measuring process, these systems are aware of what and how they are measuring and can adapt the measuring strategy and parameters accordingly, e.g. to obtain accurate measurements or to target high throughput.

Bio: Philipp Slusallek is Scientific Director at the German Research Center for Artificial Intelligence (DFKI), where he heads the research area on Agents and Simulated Reality. At Saarland University he has been a professor for Computer Graphics since 1999, a principal investigator at the German Excellence Cluster on "Multimodal Computing and Interaction" since 2007, and Director for Research at the Intel Visual Computing Institute since 2009. Before coming to Saarland University, he was a Visiting Assistant Professor at Stanford University. He originally studied physics in Frankfurt and Tübingen (Diploma/M.Sc.) and got his PhD in Computer Science from Erlangen University. He is associate editor of Computer Graphics Forum, a fellow of Eurographics, a member of acatech (German National Academy of Science and Engineering), and a member of the European High-Level Expert Group on Artificial Intelligence. His research covers a wide range of topics including artificial intelligence, simulated/digital reality, computational sciences, real-time realistic graphics, high-performance computing, motion modeling & synthesis, novel programming models, 3D-Internet technology, and others.






Dr Smadar Bustan, Université Paris Diderot

How Can We Measure Human Suffering for Medical Purposes?

Time and Place: 19 December 2018, 16h00, Room: MSA 2.400

Abstract: Recent years have witnessed growing demands that physicians become more attentive in evaluating their patients' suffering, introduced by Eric Cassell (1982) as one of the fundamental goals of medicine. These concerns have not fallen on deaf ears, but were difficult to act on in the absence of appropriate methods simple enough for healthcare professionals to use. To fill this gap, I propose an integrative approach that relies on philosophy as well as experimental and clinical research, seeking to provide a measure accompanied by an interpretation of the patient's suffering. My goal is to provide the medical profession with a simple and effective tool for diagnosing suffering in patients.
To present this ten-year research effort, the talk will start with philosophical concepts of pain and suffering embedded in experimental and clinical studies assessing pain-related suffering, first in healthy volunteers and then in chronic pain patients. During our five-year collaborative investigation in Germany and Luxembourg (PASCOM project), we aimed first to induce suffering in a pain laboratory setting and second to manipulate it by means of stimulation length, stimulation frequency, and various stimulation methods. Our results confirm the possibility of measuring pain-related suffering in experimental pain and show which psychological variables indicate who is more prone to suffer under pain. The next, completing stage will be to study the mechanisms underlying the interplay between pain and suffering in brain processes, using artificial intelligence and applying current approaches in the cognitive sciences.

Bio: Smadar Bustan is a philosopher and a scientist. She first conducted experimental studies at the Pain Laboratory of the University of Luxembourg (Prof. F. Anton) and then clinical studies on pain and suffering at the French National Institute of Health and Medical Research unit situated at the Pain Clinic of Hospital Ambroise Paré, Paris. She is the co-founder (at Harvard) and head of the Interdisciplinary and International Program on Suffering and Pain (www.suffering-pain.com). She also teaches Modern and Contemporary Philosophy, Philosophy of Cognitive Sciences and Philosophy of Medicine at the Catholic Institute of Paris and the University of Paris Diderot.




Prof. Roland Bouffanais, Director of Graduate Studies at the Singapore University of Technology and Design (SUTD)

Swarming Systems: From Networked Control Systems Theory to Swarm Robotics Experiment


Time and Place: Wednesday, 19 September 2018; 16h00; Room: 4.340, Maison du Savoir

Abstract: Consensus and cooperation in networked multi-agent systems is a topic that is starting to receive significant attention in control theory and distributed computing owing to numerous possible engineering applications. For instance, the power grid, urban traffic, arrays of distributed sensors, multi-robot systems, and social networks are various examples of collective systems requiring an effective response to local perturbations. Using a number of theoretical models, we present some unexpected features arising in the collective dynamics of networked multi-agent systems. With the help of two swarm robotics testbeds, we experimentally analyze these features, and discuss their far-reaching practical implications for the design and understanding of distributed systems.
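The consensus dynamics the abstract refers to can be illustrated with a minimal sketch: each agent repeatedly replaces its state with the average of its own state and its neighbours' states, and all agents converge to a common value. The network, weights and initial values below are assumptions made up for the example, not taken from the talk.

```python
# Minimal sketch of linear consensus on a fixed, connected, undirected graph.
# Each agent i updates to the average of itself and its neighbours.

neighbors = {
    0: [1, 2],
    1: [0, 2],
    2: [0, 1, 3],
    3: [2],
}
x = {0: 1.0, 1: 3.0, 2: 5.0, 3: 7.0}  # initial agent states

for _ in range(200):  # iterate the local averaging update
    x = {i: (x[i] + sum(x[j] for j in neighbors[i])) / (1 + len(neighbors[i]))
         for i in x}

# Because the update matrix is row-stochastic with positive diagonal and the
# graph is connected, all states converge to a single consensus value.
print(x)
```

Note that with unequal node degrees the common limit is a weighted, not the plain, average of the initial states; this sensitivity of the outcome to network structure is one of the "unexpected features" such models exhibit.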

Bio: Prof Bouffanais and his group focus on both fundamental and applied interdisciplinary problems rooted in the field of Complexity. Depending on the nature of the problem, experiments, analytical theory and computation are considered and practiced. Specifically, we investigate complexity in engineering, biological and physical systems with a particular emphasis on the study of dynamical processes in relation with emergent collective behaviors: cell aggregation, fish schooling, cellular sensory networks, quantum emergent behaviors, turbulence in fluids, etc. We aim at devising some commonalities in those awe-inspiring collective behaviors by studying the system dynamics of a set of parameters borrowed from information theory, control theory and network science.




Dr Hagen Fürstenau, Amazon Development Center Germany GmbH

Neural Machine Translation

Time and Place: Wednesday, 20 June 2018; 15h30; Room: MSA 2.230

Abstract: Machine translation is the process of automatically translating text from one language into another. Research on designing and improving machine translation algorithms has been active for decades, but recent advances in machine learning with deep neural networks have led to revolutionary improvements in translation quality. Compared to the best technology of only a few years ago, these “neural machine translation” systems produce amazingly fluent and correct translations. In this talk, I will give a broad overview of how these systems work and address some of the remaining challenges and open questions. I will also speak about how machine translation is used at Amazon to help our customers cross language boundaries.

Bio: Hagen Fuerstenau is a research manager at Amazon, leading the machine translation team in Berlin. Before joining Amazon, his research focus was on computational semantics, learning representations of the meanings of words and sentences from data. His PhD work at Saarland University and University of Edinburgh was on semi-supervised methods for semantic role labeling, trying to automatically identify events and their participants from text data and some human annotations. During a postdoc stay at Columbia University, he then investigated if such information can also be extracted from text alone in an unsupervised setting. He later became interested in machine translation as a challenging application, since translating well is not possible without capturing the meaning of texts.




Prof Mateja Jamnik, University of Cambridge, Department of Computer Science and Technology.


Capturing Intuition - the Case of Diagrammatic Reasoning

Time and Place: Wednesday, 6 June 2018; 15h30; Room: 4.390

Abstract: In this talk we examine the role that diagrams play in human reasoning from the computational, algorithmic and implementational point of view. We argue that diagrams can provide part of human reasoning that carries the intuition of the solution to a problem. Our interest does not lie in machine-oriented reasoning, which symbolic logic is typically used for. Instead, we study diagrammatic systems as one plausible way to model human-oriented approaches to reasoning. Machine-oriented approaches to reasoning are typically motivated by a categorical answer to the question of whether a conjecture is a theorem, by the speed of finding a proof, or by the number of proofs that the system is able to find. In contrast, human-oriented approaches to reasoning seek explanations to solutions that are understandable to humans, often relying on the right level of abstraction in the solution to a problem. This approach to modelling reasoning is motivated by the goals of artificial intelligence: the hope is that such diagrammatic reasoning systems will give us insights into human reasoning. Indeed, we aim to demonstrate that human-like reasoning with diagrams can be computationally modelled in logically formal ways on machines.


Bio: Dr Mateja Jamnik is a Reader in Artificial Intelligence at the Department of Computer Science and Technology of the University of Cambridge, UK. She has recently served as a Specialist Adviser to the House of Lords Select Committee on Artificial Intelligence, helping the UK government in policy direction, priority and focus in relation to the impact of AI on society. Previously, she held an EPSRC Advanced Research Fellowship.
Mateja's research focusses on how people solve problems using informal techniques like diagrams, and she then computationally models this type of reasoning on computers to enable machines to reason in a similar way to humans. She is essentially trying to humanise computer thinking. Her PhD work at the University of Edinburgh focussed on particular forms of mathematical reasoning and was published by Stanford University's CSLI Press. At the start of the millennium, she was one of the founders of a new interdisciplinary research area and conference series "Diagrams" on the theory and application of diagrams. Mateja's research bridges theoretical computer science (such as automated reasoning) and artificial intelligence, and has been supported by the UK Engineering & Physical Sciences Research Council (EPSRC), the Leverhulme Trust, and the European Research Council. Mateja is passionate about bringing science closer to the general public and engages frequently with the media and in public science events. She is an active supporter of women scientists and in 2003 founded a national network, women@CL, for women in computing research. In recognition of these contributions, Mateja was awarded the Athena Prize in 2016 by the Royal Society.



Prof David Naccache, ENS Information Security Group.


Artificial Intelligence & Security: Research Directions & Scientific Challenges

Time and Place: Thursday, 31 May 2018; 15h30; Room: 3.380, Maison du Savoir.

Abstract: This Distinguished Lecture concerns the meeting of Artificial Intelligence with security aspects and challenges. Some research directions and some scientific challenges will be discussed.

Bio: I head the ENS' Information Security Group (ISG). My research areas are code security, forensics, and the automated and manual detection of vulnerabilities. Before joining ENS Paris (PSL) I was a professor for 10 years at UP2 (Sorbonne Universités). I previously worked for 15 years for Gemplus (now Gemalto), Philips (now Oberthur) and Thomson (now Technicolor). I studied at UP13 (BSc), UP6 (MSc), IMAC (Eng), TPT (PhD), UP7 (HDR), IHEDN and ICP (STB underway). I am a forensic expert appointed by several courts, a member of the OSCP, and the incumbent of the Law and IT Forensics chair at EOGN.





Prof Jan Broersen, Professor of Philosophy at Utrecht University.


Using stit-theory to formalize responsibility for actions

Time and Place: Wednesday, 2 May 2018; 15h30; Room: 3.230

Abstract: In this talk I will review the use of stit-logic (stit = "seeing to it that") as a paradigm for formalizing responsibility for actions and outcomes. I will go through one of my favorite examples and discuss issues such as the relation to causal modeling, the role of interventions, and the role of knowledge and intention.

Bio: Jan Broersen is associate professor at the Department of Philosophy and Religious Studies at Utrecht University, The Netherlands. Before moving to theoretical philosophy he worked in computer science (intelligent systems) for 12 years. His doctoral thesis was on logics for normative system verification. He studied mathematics at the Delft University of Technology. His main research interests are in logics for agency, norms and AI. In 2013 he was awarded an ERC project on the topic of Responsible Intelligent Systems (REINS). The project investigates how to endow AI with a sense of responsibility, and how to actually verify whether AI has behaved or will behave responsibly. A second project he is involved in is the NWO project "The Digital Turn in Epistemology", which investigates new learning methods that adapt to our modern-day embedded, embodied and extended stance towards knowledge in digital environments.




Prof Susan Anderson, University of Connecticut;

Prof Michael Anderson (University of Hartford)

Machine Ethics

Time and Place: Wednesday, 28 March 2018; 16h45; Room: 2.400 (Maison du Savoir)

Abstract: We argue that ethically significant behavior of autonomous systems should be guided by explicit ethical principles determined through a consensus of ethicists. Such a consensus is likely to emerge in many areas in which autonomous systems are apt to be deployed and for the actions they are liable to undertake, as we are more likely to agree on how machines ought to treat us than on how human beings ought to treat one another. Given such a consensus, particular cases of ethical dilemmas where ethicists agree on the ethically relevant features and the right course of action can be used to help discover principles needed for ethical guidance of the behavior of autonomous systems. Such principles help ensure the ethical behavior of complex and dynamic systems and further serve as a basis for justification of their actions as well as a control abstraction for managing unanticipated behavior.

Bio: Michael Anderson, professor emeritus of computer science at the University of Hartford, earned his Ph.D. in computer science and engineering at the University of Connecticut. Susan Leigh Anderson, professor emerita of philosophy at the University of Connecticut, earned her Ph.D. in philosophy at the University of California, Los Angeles. They have been instrumental in establishing machine ethics as a bona fide field of study, co-chairing/authoring the AAAI Fall 2005 Symposium on Machine Ethics, an IEEE Intelligent Systems special issue on machine ethics, and an invited article for AI Magazine on the topic. Further, their research in machine ethics was selected for Innovative Applications of Artificial Intelligence as an emerging application in 2006. Scientific American (Oct. 2010) featured an invited article on their research, in which the first robot whose behavior is guided by an ethical principle debuted. They published "Machine Ethics" with Cambridge University Press (2011).



Prof Dr Beishui Liao, Zhejiang University, China.

Argumentation Theory and Its Applications

Time and Place: Wednesday, 28 March 2018; 15h30; Room: 2.400 (Maison du Savoir)

Abstract: Generally speaking, argumentation is a mechanism by which people exchange arguments in the context of disagreement, with the aim of persuasion or conflict resolution. There are basically two research lines in argumentation: informal argumentation and formal argumentation. The former, also known as informal logic, studies everyday argumentation from the perspectives of linguistics, rhetoric and ordinary reasoning, while the latter is more about building nonmonotonic reasoning formalisms in the field of artificial intelligence. In this talk, I'll first introduce some basic notions, theories and methods in the field of argumentation, and then give some examples to show possible applications of argumentation.
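The formal research line the abstract mentions can be made concrete with a small sketch. The model below is the standard Dung-style abstract argumentation framework (arguments plus an attack relation), and the code computes its grounded extension as the least fixed point of the characteristic function; the example arguments and attacks are made up for illustration and are not from the talk.

```python
# Minimal sketch of a Dung-style abstract argumentation framework and its
# grounded extension (the example arguments and attacks are assumptions).

def grounded_extension(arguments, attacks):
    """Least fixed point of the characteristic function F(S)."""
    attackers = {a: {b for (b, c) in attacks if c == a} for a in arguments}

    def defended(a, s):
        # a is acceptable w.r.t. s if every attacker of a is attacked by s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s = set()
    while True:
        nxt = {a for a in arguments if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# Example: a attacks b, b attacks c.  Argument a is unattacked, so it is
# accepted; a defends c against b, so c is accepted too; b is rejected.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # → ['a', 'c']
```

The grounded extension is the most skeptical of Dung's semantics: with a mutual attack between two arguments and no independent support, it accepts neither.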

Bio: Beishui Liao graduated from the College of Computer Science and Technology, Zhejiang University, China, in March 2006. He is now a full professor of logic and artificial intelligence (since 2013), and a Distinguished Young Scholar, of Zhejiang University. He has been a visiting professor of the University of Luxembourg since 2014. He serves as the vice-director of the Institute of Logic and Cognition of Zhejiang University (from 2014), and is a standing member of the National Committee of Logic of Science of China (from 2014). He was/is an organizer of the Chinese Conference on Logic and Argumentation (CLAR), the Dagstuhl Perspectives Workshop on Present and Future of Formal Argumentation, and the International Workshop on Big Data, Reasoning and Decision-Making (BRaD). He visited the University of Texas at Austin from Jul 2009 to Jul 2011, and the Università degli Studi di Brescia from Jul to Oct 2014. As principal investigator, he has conducted three nationally funded projects, two provincially funded projects and an INTER MOBILITY project supported by the National Research Fund Luxembourg (FNR) (in cooperation with Professor Leon van der Torre). He is participating in a HORIZON 2020 Research and Innovation Staff Exchange (RISE) project (MIREL - MIning and REasoning with Legal texts). He has published 3 monographs and a number of papers in leading journals such as AIJ, JLC, AMAI, Information Sciences, etc.



Please feel free to download here the Proceedings of the ILIAS Distinguished Lectures 2017.

We are honored to host the following speakers:



Prof Jean-Yves Béziau, Federal University of Rio de Janeiro & École Normale Supérieure, Paris

Is the Principle of Contradiction a Consequence of xx = x?

Time and Place: Monday, 18 December 2017, 16h00, Room 1.040, Maison du Nombre;



Abstract: In his famous book The Laws of Thought (1854), George Boole presents a surprising result: a proof that the principle of contradiction is a consequence of xx = x (Proposition IV of Chapter III), an equality he calls the fundamental law of thought and a symbolic feature of what is nowadays called a Boolean algebra. In this talk I will discuss this proposition, examining its various interpretations: in set theory, algebra, first-order logic and propositional logic.
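Boole's derivation of the principle from the fundamental law is only two algebraic steps; written out in modern notation:

```latex
x^2 = x
\;\Longrightarrow\; x - x^2 = 0
\;\Longrightarrow\; x(1 - x) = 0 .
```

Reading $x$ as a class and $1 - x$ as its complement, the last equation says that nothing belongs to both a class and its complement, which is precisely the principle of contradiction.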


Bio: Jean-Yves Béziau works in the field of logic - in particular, paraconsistent logic, the square of opposition and universal logic. He holds a Maîtrise and a DEA in Philosophy from Panthéon-Sorbonne University, a PhD in Philosophy from the University of São Paulo, and an MSc and a PhD in Logic and Foundations of Computer Science from Denis Diderot University. Prof Béziau is the editor-in-chief of the journal Logica Universalis and of the South American Journal of Logic - an online, open-access journal - as well as of the Springer book series Studies in Universal Logic. He also co-edits College Publications' Portuguese-language book series Cadernos de Lógica e Filosofia. (Source: en.wikipedia.org)




Prof Julia Stoyanovich, Professor of Computer Science at Drexel University, US.

Data, Responsibly

Time and Place: Thursday, 7 December 2017; 13h30; Room: 3.390, Maison du Savoir


Abstract: Data-driven algorithmic decision making promises to improve people's lives, accelerate scientific discovery and innovation, and bring about positive societal change. Yet, if not used responsibly, this same technology can reinforce inequity, limit accountability and infringe on the privacy of individuals: irreproducible results can influence global economic policy; algorithmic changes in search engines can sway elections and incite violence; models based on biased data can legitimize and amplify discrimination in the criminal justice system; algorithmic hiring practices can silently reinforce diversity issues and potentially violate the law; privacy and security violations can erode the trust of users and expose companies to legal and financial consequences.

In this talk I will discuss our recent work on establishing a foundational new role for database technology, in which managing data in accordance with ethical and moral norms, and legal and policy considerations, becomes a core system requirement. I will define properties of responsible data management, which include fairness, transparency, and data protection. I will highlight some of our recent technical advances, and will discuss the overall framework in which these responsibility properties are managed and enforced through all stages of the data lifecycle. The broader goal of our project is to help usher in a new phase of data science, in which the technology considers not only the accuracy of the model but also ensures that the data on which it depends respect the relevant laws, societal norms, and impacts on humans. Additional information about our project is available at DataResponsibly.com.


Bio: Julia Stoyanovich is an Assistant Professor of Computer Science at Drexel University in Philadelphia, USA, and an affiliated faculty member at the Center for Information Technology Policy (CITP) at Princeton University. She holds M.S. and Ph.D. degrees in Computer Science from Columbia University. Julia's research focuses on responsible data management and analysis practices: on operationalizing fairness, diversity, transparency, and data protection in all stages of the data acquisition and processing lifecycle. She established the Data, Responsibly consortium, co-organized a Dagstuhl seminar by the same name, and serves on the ACM task force to revise the Code of Ethics and Professional Conduct. Julia's research has been supported by the US National Science Foundation (NSF), the US-Israel Binational Science Foundation (BSF), and Google.



 Joanna Bryson, Professor of Computer Science at the University of Bath, UK, and Princeton University.

AI Ethics: Artificial Intelligence, Robots, and Society

Time and Place: Tuesday, 7 November 2017, 15h00; *** NEW ROOM: 3.510 *** Maison du Savoir, Campus Belval

Artificial intelligence (AI) is often described as an existential threat to humanity, but is this threat the extinction of our species or only the challenge to our sense of self worth? Can AI be designed for safety? Is an intelligent system necessarily one worthy of moral consideration? Do intelligent systems necessarily compete for resources? I will review some basic theoretical dynamics of intelligence, cooperation and sociality as revealed both in evolved animal and planned human societies, then use these to discuss the most likely outcomes for adding artificial intelligence into our societies. I will then make regulatory and engineering recommendations for how we present as well as develop AI so that we can incorporate it into our society with minimal disruption and to maximal positive effect.

Joanna Bryson is a Reader (tenured Associate Professor) at the University of Bath, and an affiliate of Princeton's Center for Information Technology Policy (CITP). She has broad academic interests in the structure and utility of intelligence, both natural and artificial. Venues for her research range from reddit to Science. She is best known for her work in systems AI and AI ethics, both of which she began during her PhD in the 1990s, but she and her colleagues publish broadly, in biology, anthropology, sociology, philosophy, cognitive science, and politics. Current projects include "Public Goods and Artificial Intelligence", with Alin Coman of Princeton Psychology and Mark Riedl of Georgia Tech, funded by Princeton's University Center for Human Values. This project includes both basic research in human sociality and experiments in technological interventions. Other current research includes understanding the causality behind the correlation between wealth inequality and political polarization, generating transparency for AI systems, and research on machine prejudice deriving from human semantics. She holds degrees in Psychology from Chicago and Edinburgh, and in Artificial Intelligence from Edinburgh and MIT. At Bath she founded the Intelligent Systems research group (one of four in the Department of Computer Science) and heads their Artificial Models of Natural Intelligence.




 Kittichai Lavangnananda, Professor of Computer Science at King Mongkut's University of Technology Thonburi (KMUTT), Bangkok, Thailand.

Application of Genetic Algorithm in Spatial Economics: Emergence of Cities

Time and Place: Wednesday, 25 October 2017, 15h00; Room 2.240, Maison du Savoir, Campus Belval

Emergent behavior is crucial to the study and planning of cities. Scientists and city planners have proposed different theories of how cities emerge from trading among individuals, and several factors can affect the final outcome. These theories have been transformed into different algorithms for the emergence of cities. Changes in the location of individuals within a region are usually governed by constraints based on the consumer-producer relationships that exist among the population. However, these algorithms are described superficially, at a high level of abstraction. Little attention has been paid to the effect that different initial setups, and different evaluation methods within the algorithm, may have on the emergence. Assessing the efficiency of the final emergence from these algorithms is best performed where an ideal or optimal emergence is available for comparison. However, without performing an exhaustive search, determining optimal emergences from an arbitrary setup is almost impossible. From computational and optimization perspectives, there are many aspects to the simple trading of individuals in this type of spatial economics. This talk will focus on just three. First, it illustrates whether different types of initial setups and computations may have an effect on the final emergence. Second, it applies a Genetic Algorithm in an attempt to determine an optimal emergence from a given setup. Finally, it examines whether the global transaction cost at an optimal emergence is likely to subsume the individual transaction costs for all individuals too.
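To make the optimization angle concrete, the following is a minimal, hypothetical sketch of how a Genetic Algorithm might search for a low-cost spatial arrangement of traders. The grid size, the chromosome encoding (a location per individual), and the nearest-producer cost function are illustrative assumptions, not the model used in the talk.

```python
import random

# Hypothetical encoding: a chromosome assigns each of N individuals
# (the first half producers, the second half consumers) to a grid cell.
GRID, N = 8, 10

def transaction_cost(chrom):
    """Total cost: each consumer trades with its nearest producer
    (Manhattan distance stands in for transport cost)."""
    producers, consumers = chrom[:N // 2], chrom[N // 2:]
    return sum(min(abs(cx - px) + abs(cy - py) for px, py in producers)
               for cx, cy in consumers)

def random_chrom():
    return [(random.randrange(GRID), random.randrange(GRID)) for _ in range(N)]

def mutate(chrom, rate=0.1):
    # Relocate each individual with a small probability.
    return [(random.randrange(GRID), random.randrange(GRID))
            if random.random() < rate else g for g in chrom]

def crossover(a, b):
    cut = random.randrange(1, N)       # one-point crossover
    return a[:cut] + b[cut:]

def evolve(generations=200, pop_size=30):
    pop = [random_chrom() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=transaction_cost)          # lower cost = fitter
        elite = pop[:pop_size // 3]             # truncation selection
        pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    return min(pop, key=transaction_cost)

best = evolve()
```

Under this cost function, low-cost arrangements cluster consumers around producers, which is a crude analogue of a "city" emerging from trade.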

Prof Lavangnananda received his B.Sc. in Computational Science from Hull University in 1985 and his M.Sc. in Computing from Cardiff University, U.K., in 1987. He completed his Ph.D. in Artificial Intelligence at the Mechanical Engineering Centre (MEC) at Cardiff University, U.K., in 1995. He has been a faculty member at the School of Information Technology (SIT), King Mongkut's University of Technology Thonburi (KMUTT), Thailand, since 1996. At present, he is an Associate Professor and an Assistant Dean for Special Activities. His administrative experience includes serving as Associate Dean for Research and International Affairs from 2000 to 2009. He was the Project Leader of the Thailand partner in two EU-Asia IT&C projects (APoST - ASI/B7-301/97/0126-15 and EAPSTRA - ASI B7-301/71 548(3152-094)) from 2000 to 2004. Prof Lavangnananda is an active research member of the Data and Knowledge Engineering Laboratory (D-Lab) at SIT. His research interests lie in Computational Intelligence (Data Mining, Evolutionary Computation, Machine Learning, Neural Networks) and its applications. His latest research and development was a prototype signature-verification program for the Department of Special Investigation (DSI), Thailand. He is a Senior Member of the IEEE and a member of the Editorial Board of the journal Cogent Engineering.



 Luis Moniz Pereira, Professor at Universidade Nova de Lisboa.

Programming Machine Ethics



Time and Place: 14 September 2017; 11h00; Campus Belval. Maison du Savoir, Room 04.020


Abstract:
We stand at the crossroads of Artificial Intelligence, Machine Ethics and their impact on society. Recently, I co-authored a monograph titled "Programming Machine Ethics," in which we explore inroads into the terra incognita of machine ethics by employing Logic Programming and Evolutionary Game Theory. We consider both the cognitive realm of the individual and the realm of population morality. My lecture will overview the book's ethics background, scientific and philosophical motivation, theoretical and experimental results, and ongoing research. Beyond that, the roles and impacts of machine ethics for society will be discussed, namely with regard to moral philosophy, jurisprudence and regulations; the design of moral autonomous agents; the teaching and learning of morality; ethically safe software; and applications.


Luís Moniz Pereira, born in Lisbon in 1947, is a Professor of Computer Science (retired) at Universidade Nova de Lisboa, and founder and director of CENTRIA (1993-2008), the Centre for Artificial Intelligence at the same University. He was elected a Fellow of the European Coordinating Committee for Artificial Intelligence (ECCAI) in 2001, received a doctor honoris causa degree from the TU Dresden in 2006 and has been part of the Board of Trustees and the Scientific Advisory Board of IMDEA, the Madrid Advanced Studies Software Institute, since 2006. He is the founding president of the Portuguese Artificial Intelligence Association (APPIA), and serves on the editorial boards of various scientific journals. His research, which he has presented in hundreds of publications, focuses on knowledge representation and reasoning, logic programming, and the cognitive sciences. Currently, he is affiliated with the NOVA Laboratory for Computer Science and Informatics (NOVALINCS). More detailed information, including his awards, teaching and publications, can be found at http://centria.di.fct.unl.pt/~lmp/



 Manolis KOUBARAKIS, Professor of the Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens.

Big Linked Geospatial Data and its Application to Earth Observation

Time and Place: Wednesday, 6 September 2017, 15h00 s.t.; Maison du Savoir, Room 4.020, Campus Belval

Abstract: Terabytes of geospatial data have recently been made freely available on the Web. Examples include data from gazetteers such as Geonames, maps from geospatial search engines like Google Maps and OpenStreetMap, and user-contributed content from social networks such as Foursquare.

Some particularly important and rich sources of open and free geospatial data are the satellite programs of various countries, such as the Landsat program of the US and the Copernicus programme of the European Union. Satellite images can be utilized in many applications with financial and environmental impact, in areas such as emergency management, climate change, agriculture and security. This potential has not been fully realized up to now, because satellite data "is hidden" in various archives operated by NASA, ESA and national space agencies. Therefore, a user who would like to develop an application needs to search these archives, discover the needed data, and integrate it into their application. In this talk we show how to "break these silos open" by publishing the data as RDF, interlinking it with other relevant data, and making it freely available on the Web to enable the easy development of geospatial applications. This work has been funded by the EU projects Copernicus App Lab, Melodies, Optique, LEO and TELEIOS.
More information can be found in the following magazine article  http://cgi.di.uoa.gr/~koubarak/publications/2016/grsm.pdf
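As a minimal illustration of the "publish as RDF" step, the sketch below serializes one hypothetical satellite-image record as N-Triples. The image URI and the sensor property are made up for illustration; dcterms:created and the GeoSPARQL asWKT / wktLiteral terms are standard vocabulary.

```python
# Hypothetical metadata record for one satellite image (stdlib only).
record = {
    "id": "http://example.org/image/42",
    "sensor": "Sentinel-2",
    "acquired": "2017-06-01T10:30:00Z",
    "footprint": "POLYGON((5.9 49.4, 6.5 49.4, 6.5 49.9, 5.9 49.9, 5.9 49.4))",
}

GEO = "http://www.opengis.net/ont/geosparql#"

def to_ntriples(r):
    """Emit one N-Triples line per metadata field of the record."""
    s = f"<{r['id']}>"
    return "\n".join([
        f'{s} <http://purl.org/dc/terms/created> "{r["acquired"]}" .',
        f'{s} <http://example.org/ontology#sensor> "{r["sensor"]}" .',
        # GeoSPARQL: the footprint as a typed WKT literal, so that
        # geospatial query engines can interlink and filter on geometry.
        f'{s} <{GEO}asWKT> "{r["footprint"]}"^^<{GEO}wktLiteral> .',
    ])

print(to_ntriples(record))
```

Once serialized like this, the record can be loaded into any RDF store and interlinked (e.g. with Geonames places whose geometries intersect the footprint).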

Manolis Koubarakis is a Professor in the Dept. of Informatics and Telecommunications, National and Kapodistrian University of Athens. He is also an Adjunct Researcher at the Institute of the Management of Information Systems (IMIS) of the “Athena” Research and Innovation Center. He is a Fellow of EurAI (European Association for Artificial Intelligence). He has published more than 170 papers that have been widely cited in the areas of Artificial Intelligence (especially Knowledge Representation), Databases, Semantic Web and Linked Data. His research has been financially supported by the European Commission (projects CHOROCHRONOS, DIET, BRIDGEMAP, Evergrow, OntoGrid, SemsorGrid4Env, TELEIOS, Optique, LEO, MELODIES, WDAqua and BigDataEurope), the Greek General Secretariat for Research and Technology (more recently through the Research Excellence Grant SCARE), the European Space Agency (project Prod-Trees) and industry sources (Microsoft Research and British Telecommunications). He recently co-chaired the European Data Forum 2014 (http://2014.data-forum.eu/), the top European event aiming towards the development of a strong data economy in Europe. He has co-ordinated the well-known projects TELEIOS (http://www.earthobservatory.eu/) and LEO (http://www.linkedeodata.eu/) which developed tools for linked Earth Observation data and linked geospatial data, and applied them to the development of environmental and commercial applications.

Manolis' team has also developed the linked data infrastructure of the project MELODIES (http://www.melodiesproject.eu/), which studied how to exploit linked open data in a variety of environmental applications. Manolis currently participates in the project BigDataEurope (http://www.big-data-europe.eu/), a European effort in the area of Big Data with application scenarios across all societal challenges of Horizon 2020. He also participates in the project WDAqua (http://wdaqua.eu/, Answering Questions using Web Data), a Marie Skłodowska-Curie Innovative Training Network. Finally, he participates in the project Copernicus App Lab (http://www.app-lab.eu/), which will make Copernicus services data available as linked data to aid the development of applications by mobile developers.



 Daniel KUDENKO, Professor of the Dept of Computer Science, University of York, UK.

Abstract Markov Decision Processes for Reinforcement Learning

Time and Place: Wednesday, 5 July 2017, 15h00; Maison du Savoir, Room 04-4.020, Campus Belval


While reinforcement learning (RL) has had great recent successes in game AI and other decision-making tasks, two major challenges remain:

  • Scaling up RL to complex tasks
  • Assuring properties such as safety of the learning process and the learning result.

In this talk, I will show how abstract Markov Decision Processes, reward shaping, and quantitative verification can be used to tackle these challenges.
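One of these ingredients, potential-based reward shaping, adds gamma * Phi(s') - Phi(s) to the environment reward and is known to leave the optimal policy unchanged. The toy Q-learning sketch below, on a made-up 10-state chain task, only illustrates the idea; the task, potential function, and parameters are invented, not taken from the speaker's work.

```python
import random
from collections import defaultdict

# Toy chain MDP: states 0..9, actions move left (-1) or right (+1),
# reward 1 on reaching the goal state.
N_STATES, GOAL = 10, 9

def step(s, a):
    s2 = max(0, min(GOAL, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

def potential(s):
    # Phi(s): heuristic "closeness to goal" used for shaping.
    return s / GOAL

def q_learning(shaped, episodes=500, alpha=0.5, gamma=0.95, eps=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy action selection
            a = random.choice([-1, 1]) if random.random() < eps else \
                max([-1, 1], key=lambda act: Q[s, act])
            s2, r, done = step(s, a)
            if shaped:
                # Potential-based shaping: preserves the optimal policy.
                r += gamma * potential(s2) - potential(s)
            target = r + (0.0 if done else gamma * max(Q[s2, -1], Q[s2, 1]))
            Q[s, a] += alpha * (target - Q[s, a])
            s = s2
    return Q

Q = q_learning(shaped=True)
```

With shaping, every rightward move earns a small immediate bonus, so the sparse goal reward propagates back much faster than in the unshaped run.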





Daniel Kudenko is a member of the Computer Science faculty at the University of York. He received a Ph.D. from Rutgers University and a Master's degree from Saarland University, Germany. His research interests include machine (reinforcement) learning, multi-agent systems, user modeling, and artificial intelligence for games and interactive entertainment. Within these areas, he has published more than 80 peer-reviewed papers and has been a member of numerous program committees. Dr. Kudenko currently heads the Reinforcement Learning Group, carries out work in Games, Interactive Entertainment and Drama, and is a member of the Artificial Intelligence Research Group.



 Dov Gabbay, Emeritus Professor, Augustus de Morgan Professor of Logic, King's College London, UK, and Professor, Bar-Ilan University, Israel.

Weaponizing the Fallacies: Human Effective Argumentation - The Next Step in the Evolution of Logic

Time and Place: Tuesday, 9 May, 16h00 s.t.; Maison du Savoir, Room 04-4.310, Campus Belval


This lecture is about weaponizing the fallacies and offering them as the new evolutionary logic for the 21st century. Logicians since Aristotle have considered the fallacies to be wrong arguments which look correct but are not. They classified them into groups, discussed them, and left them on the sidelines of logic as failures. Modern society, with the rise of the internet, Twitter, Facebook and YouTube, has shown the fallacies to be among the most used and most effective devices in argumentation and debate. If this is the way humans reason and think, then we need to develop the logical theory of the fallacies and legitimise them. This manifesto outlines our approach to the new logic of the 21st century: the systematic use of the fallacies in argumentation and debate as practised by people in the mass media.





 Raymond BISDORFF, Full Professor of Computer Science, Dept of Computer Science, University of Luxembourg.

Algorithmic decision theory for solving complex decision problems

Time and Place: Wednesday, 3 May 2017, 16h00 s.t.; Maison du Savoir, Room 04-4.020, Campus Belval



The objective of Algorithmic Decision Theory (ADT) is to improve the ability of decision makers to perform well when facing new challenges and problems, through the use of methods from theoretical computer science, in particular algorithmic methods. The primary goal of ADT is hence to explore and develop algorithmic approaches for solving decision problems arising in a variety of application areas. Examples include, but are not limited to:

  • Computational tractability/intractability of social consensus and multiple criteria compromise functions;
  • Improvement of decision support and recommender systems;
  • Development of automatic decision devices including on-line decision procedures;
  • Robust decision making;
  • Learning for multi-agent systems and other on-line decision devices.

This presentation will focus more specifically on multiple criteria decision aiding methodology, the author's current field of research.
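For a flavour of the outranking approach used in multiple criteria decision aiding, the sketch below computes a simple weighted concordance relation over three hypothetical alternatives. The criteria, weights, scores, and 60% majority threshold are invented for illustration and do not reproduce the author's algorithms.

```python
# Hypothetical alternatives scored on three criteria with importance weights.
weights = {"cost": 0.5, "quality": 0.3, "delay": 0.2}
# Convention here: higher is better on every criterion.
scores = {
    "A": {"cost": 7, "quality": 9, "delay": 5},
    "B": {"cost": 8, "quality": 6, "delay": 6},
    "C": {"cost": 4, "quality": 8, "delay": 9},
}

def concordance(x, y):
    """Weighted share of criteria on which x performs at least as well as y."""
    return sum(w for c, w in weights.items() if scores[x][c] >= scores[y][c])

def outranks(x, y, majority=0.6):
    """x outranks y when a weighted majority of criteria supports 'x at least
    as good as y' (a crude concordance-only outranking test)."""
    return x != y and concordance(x, y) >= majority

relation = {(x, y): outranks(x, y) for x in scores for y in scores if x != y}
```

Note that the relation need not be complete or transitive, which is precisely why selecting, ranking, or clustering from an outranking relation is an algorithmic problem in its own right.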



Raymond Bisdorff holds an LBA degree in Business Administration from the University of Liège, Belgium (1975), an MScBA (NATO Graduate Degree Apprenticeship in Systems Sciences grant, 1975-1978), and a PhD in Operations Research (OR) from the University Paris-Dauphine (1981), supervised by Prof. B. Roy. He furthermore holds a PhD in Management Sciences from the University of Liège, Belgium (2002). Since 2003 he has been full professor of Applied Mathematics and Computer Science at the University of Luxembourg, where he teaches courses on algorithmic decision theory, multiple criteria decision aiding, computational statistics and discrete mathematics.
He served the international OR community as president of ORBEL - the Belgian OR society (2008-2010), vice-president of EURO - the Association of European OR Societies (1997-2000), and vice-president of IFORS - the International Federation of OR Societies (2005-2007). He was an honorary collaborator of the Institute of Mathematics at the University of Liège (1996-2005) and of the Polytechnical Faculty of Mons (2010-2017). In 2004, he received an honour diploma from HELLORS - the Hellenic Operational Research Society - for chairing the international Programme Committee of the XXth EURO'2004 Conference on the Island of Rhodes. His main research interest is outranking-based decision aiding algorithms for selecting, ranking, sorting or rating, and clustering with multiple incommensurable performance criteria of uncertain significance and/or missing data (see http://sma.uni.lu/bisdorff/research.html). His major articles have appeared in EJOR, Computers & OR, 4OR and JMCDA (see http://sma.uni.lu/bisdorff/publications.html).



 CHRISTOPH BENZMÜLLER, PD, Dr. habil., Dept of Mathematics and Computer Science, FU Berlin, Germany

Computational Metaphysics: The Virtues of Formal Computer Proofs Beyond Maths

Time and Place: Monday, 27 March 2017, 16h00 s.t.; Maison du Savoir, Room 04-4.020, Campus Belval


Formal computer proofs - whether developed interactively with modern proof assistants, fully automatically by automated theorem provers, or in a combination of both - are still rather unpopular amongst many mathematicians. Benzmüller will challenge this stance and point to recent success stories of computer-assisted proofs in maths and beyond. In particular, he will demonstrate how the rigorous assessment of rational arguments in philosophy can be fruitfully supported by modern theorem-proving technology. A prominent example is the "Ontological Argument for the Existence of God", for which automated theorem provers recently revealed relevant new insights. These research activities have inspired the conception of a new, award-winning lecture course on "Computational Metaphysics" at Freie Universität Berlin which brings together students from computer science, maths and philosophy.
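For a flavour of what gets formalised, the core of the Scott variant of Gödel's ontological argument, which Benzmüller and Woltzenlogel Paleo encoded in higher-order modal logic and checked with automated provers, can be sketched as follows (P is the positivity predicate over properties, G is Godlikeness; the labelling is a simplified summary, not the full axiom set):

```latex
\begin{align*}
\text{A1: } & P(\neg\varphi) \leftrightarrow \neg P(\varphi)\\
\text{A2: } & \big(P(\varphi) \wedge \Box\,\forall x\,(\varphi(x) \rightarrow \psi(x))\big) \rightarrow P(\psi)\\
\text{D1: } & G(x) \leftrightarrow \forall \varphi\,(P(\varphi) \rightarrow \varphi(x))\\
\text{A3: } & P(G)\\
\text{T: }  & \Box\,\exists x\, G(x)
\end{align*}
```

Encoding such axioms in higher-order logic lets theorem provers not only derive the theorem T but also probe the axioms themselves, which is how the provers surfaced new insights such as inconsistencies in related variants.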


Christoph Benzmüller is affiliated as Privatdozent for Computer Science and Mathematics with Freie Universität Berlin and Saarland University, Saarbrücken. Currently, he is visiting the University of Luxembourg. Christoph's previous research positions include Stanford University, USA (visiting scholar), Articulate Software, USA (senior researcher), the International University in Germany, Bruchsal (full professor), the University of Cambridge, UK (senior researcher), Saarland University (associate professor), and the University of Birmingham and the University of Edinburgh, UK (postdoc). Christoph received his PhD (1999) and his Habilitation (2007) in computer science from Saarland University. His PhD was partly conducted at Carnegie Mellon University, USA. In 2012, Christoph was awarded a Heisenberg Research Fellowship of the German National Research Foundation (DFG). Christoph is an expert in higher-order automated and interactive theorem proving, which he utilises as a basis for an approach towards universal logical reasoning. His broader interests concern all aspects of knowledge representation and reasoning. Moreover, Christoph is well known for his interdisciplinary applications. Most recently, for example, he has pioneered, together with colleagues, the area of computational metaphysics. Christoph is a trustee and vice-president of CADE (Conference on Automated Deduction), a board member of the AAR (Association for Automated Reasoning), and spokesman of the section Deduction Systems of the Gesellschaft für Informatik. He serves in various further functions (chair, editorial board, steering committee, trustee, etc.) for various conferences and organisations.



 MADY DELVAUX-STEHRES, European Parliament

Civil Law Rules on Robotics

Time and Place: Monday, 20 March 2017, 10h00 s.t.; Campus Belval, Maison du Savoir, Room MSA-E04-4.020


Social assistive robots, cyber-physical systems in industry, and Artificial Intelligence (AI) are no longer concepts confined to science fiction. We are living through a new Industrial Revolution, and it is changing our society. In this context, these new agents outpace current laws, and it is now necessary to adapt civil rules and to create robot regulations to face the new scenario.



Mady Delvaux-Stehres has been a Luxembourgish Member of the European Parliament since May 2014. A member of the Group of Socialists & Democrats in the European Parliament, she was elected Vice-Chair of the Committee on Legal Affairs and is a substitute member of the Committee on Economic and Monetary Affairs. She also sits on the Advisory Committee on the Code of Conduct for Members of the European Parliament. Between 1989 and 2014, Mady held several ministerial posts. From 2004 to 2014, she was Minister for National Education and Vocational Training. As Minister for Social Security, Transport and Communications from 1994 to 1999, she oversaw, among other things, the introduction of the new long-term care insurance, the reorganisation of the railways in line with European directives, and the liberalisation of telephone services. Appointed State Secretary for Health, Social Security, Youth, Physical Education and Sports in 1989, she notably launched the reform of health insurance. From 1999 to 2004, Mady was a member of the Chamber of Deputies. In October 1987, she was elected for the first time as a municipal councillor of the City of Luxembourg, a mandate the voters renewed in 1999. Throughout these years, Mady was active in the Socialist Women's movement. After studying classics at the Sorbonne, Mady taught at the Lycée Michel Rodange in Luxembourg City until 1989, when she was appointed to the government. Mady was born on 11 October 1950 in Luxembourg City.



 James Cochran, Professor of Statistics, Department of Information Systems, Statistics and Management Science, University of Alabama, USA

The Importance of Collective Science

Time and Place: Monday, 30 January 2017, 15h00 s.t.; Room E.112 (Campus Kirchberg)


The scientific method, which is embodied by statistical inference, is the heart of who we are and what we do as an academic community (which is why some, including me, argue that statistics is the purest of the sciences). Unfortunately, although we may individually think and act scientifically, collectively we often fail to do so. This has led in numerous instances to nonscientific collective behavior. In addition to dramatically retarding the progress of science, this collective behavior has led to misunderstanding of science by the non-scientific community, co-opting of science by the business community, and poor decision making by policy makers who have relied on the results of our research efforts. In this talk I will address the lack of collective scientific behavior in research on the relative age effect and the ramifications that have resulted. Specifically, I will discuss i) the lack of appreciation for results that are not statistically significant, ii) the disregard for conditions that are necessary for establishing a relationship between an antecedent event and a consequent event, and iii) the failure to consider, establish, and test appropriate null and alternative hypotheses. I will also present what I believe are appropriate corrective measures.
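To make point (iii) concrete, here is a minimal, stdlib-only sketch of stating an explicit null and two-sided alternative and testing against them. The data are invented; for a sample this small, a t-test would be the more careful choice than the normal approximation used here.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Made-up sample of effect scores.
# H0: mu = 0 (no effect); H1: mu != 0.
# A two-sided test matches the alternative actually of interest.
sample = [0.4, -0.1, 0.3, 0.2, 0.5, 0.0, 0.1, 0.3, -0.2, 0.4]
m, s, n = mean(sample), stdev(sample), len(sample)

z = m / (s / sqrt(n))                    # test statistic under H0
p = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided p-value

# Note: a non-significant p does NOT establish H0 -- it only fails to
# reject it, which is exactly the distinction the talk stresses.
reject = p < 0.05
```

Making H0 and H1 explicit before computing p forces the analyst to commit to what would count as evidence, rather than fitting a hypothesis to whichever result turns out significant.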


James J. Cochran is Professor of Statistics, Rogers-Spivey Faculty Fellow, and Head of the Department of Information Systems, Statistics and Management Science at The University of Alabama. He earned a PhD in Statistics from the University of Cincinnati in 1997, and he has been a Visiting Scholar with Stanford University, the University of South Africa, the Universidad de Talca, and Pôle Universitaire Léonard De Vinci. Professor Cochran was a founding co-chair of Statistics Without Borders and a member of the founding committee for the INFORMS Pro Bono Analytics initiative. He established INFORMS' Teaching Effectiveness Colloquium series and annual case competition. In 2005 Professor Cochran established the International Education Workshop series; through this series colloquia have been held in Uruguay, South Africa, Colombia, Tanzania, Argentina, Kenya, India, Fiji, Croatia, Nepal, Cuba, and Mongolia (with plans for upcoming colloquia in Moldova, Madagascar, and Romania). In 2008 he organized the ORPA Conference on Using Operations Research to Address Urban Transport and Water Resource Management Issues in Africa. Professor Cochran is founding Editor-in-Chief of the Wiley Encyclopedia of Operations Research and the Management Sciences and the Wiley Series in Operations Research and Management Science, as well as the forthcoming Guide to the Analytics Body of Knowledge. He has published over forty research articles and a dozen book chapters, and he is coauthor of seven textbooks in statistics, operations research, and analytics. He has served as a consultant to a wide variety of corporations, government agencies, and not-for-profit organizations around the world. He served as Editor-in-Chief of INFORMS Transactions on Education and serves on the boards of several journals.
In 2006 Professor Cochran was elected to the International Statistics Institute, in 2008 he received the INFORMS Prize for the Teaching of OR/MS Practice, in 2010 he received the Mu Sigma Rho Statistical Education Award, and in 2011 he was named a Fellow of the American Statistical Association. In 2014 he became the 86th recipient of the American Statistical Association’s Founders Award, and in 2015 he received the Karl E. Peace Award for outstanding statistical contributions for the betterment of society.