The 16th Learning and Intelligent Optimization Conference

June 5-10, 2022 (Milos Island, Cyclades, Greece)

LION 16.2 (June 11th - June 15th, 2022)

LION16.2 is the extension program of LION16, aimed at fostering research-oriented dialogue, building networks, and opening discussions among all interested participants. It will take place after the main LION16 conference, from the 11th to the 15th of June 2022. It offers a venue where scholars from various parts of the world can discuss research findings and possible new methodologies, compare results, and establish new collaborations. More details regarding LION16.2 will be announced during the conference.

LION 16 Conference Poster

The LION16 conference poster is available here.


LION16 is proud to announce a plenary talk given by one of the world's leading computer science theorists, Christos Papadimitriou, best known for his work in computational complexity, a field whose methodology and reach he helped expand.

How does our mind emerge from our brain?

Abstract: Fruit flies learn through optimization! A powerful inhibitory neuron called APL selects, out of 2000 neurons called Kenyon cells, the roughly 100 that are most excited by what is happening in the environment at that moment; this sparse 2000-bit vector is the new memory. In our brains a more complicated neural circuit creates, in a similar manner, populations of spiking neurons called assemblies or ensembles, which many believe mediate higher cognitive functions. I will present the Assembly Calculus, a computational framework based on this phenomenon, and present evidence that such a system may be implicated in language, planning, learning, and reasoning.
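The selection step the abstract describes can be sketched as a k-winners-take-all operation; this is our illustrative code, not material from the talk:

```python
# Illustrative sketch: the APL neuron's selection over the Kenyon cells,
# modeled as k-winners-take-all over an excitation vector.
import numpy as np

def k_winners_take_all(excitation, k=100):
    """Return a sparse binary vector with 1s at the k most-excited cells."""
    winners = np.argsort(excitation)[-k:]          # indices of the top-k cells
    memory = np.zeros_like(excitation, dtype=int)
    memory[winners] = 1
    return memory

rng = np.random.default_rng(0)
excitation = rng.random(2000)                      # 2000 Kenyon cells
memory = k_winners_take_all(excitation, k=100)
print(memory.sum())                                # 100 active cells
```

The result is the sparse 2000-bit vector the abstract calls the new memory.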


Short-bio: Christos Papadimitriou has also explored other fields through what he calls the algorithmic lens, having contributed to biology and the theory of evolution, economics, and game theory (where he helped found the field of algorithmic game theory), artificial intelligence, robotics, networks and the Internet, and more recently the study of the brain.

He authored the widely used textbook Computational Complexity, as well as four others, and has written three novels, including the best-selling Logicomix and his latest, Independence. He considers himself fundamentally a teacher, having taught at UC Berkeley for the past 20 years, and before that at Harvard, MIT, the National Technical University of Athens, Stanford, and UC San Diego.

Papadimitriou has been awarded the Knuth Prize, IEEE’s John von Neumann Medal, the EATCS Award, the IEEE Computer Society Charles Babbage Award, and the Gödel Prize. He is a fellow of the Association for Computing Machinery and the National Academy of Engineering, and a member of the National Academy of Sciences.


Tutorial 1: A Random Generator of Hypergraphs Ensembles

Mario Rosario Guarracino 1, Amor Messaoud 2, Yassine Msakni 2, Giovanni Camillo Porzio 1

1Department of Economics and Law, University of Cassino and Southern Lazio, Italy

2Ecole Supérieure de Commerce de Tunis, Tunisia


A graph is a pair G = (V, E), where V is a set whose elements are called vertices and E is a set of paired vertices whose elements are called edges. When the paired vertices are ordered, the graph is said to be directed. Hypergraphs generalize graphs in that each edge can contain any number of nodes: a hypergraph H = (S, E) consists of a set of vertices S and a set E of subsets of vertices, called hyperedges. In a directed hypergraph, hyperedges are not sets but ordered pairs of subsets of S, constituting the tail and head of the hyperedge.
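These definitions can be made concrete with a minimal data structure; the names below are ours, not from the tutorial:

```python
# Minimal sketch of a directed hypergraph: a vertex set plus
# hyperedges stored as (tail, head) pairs of vertex subsets.
from dataclasses import dataclass, field

@dataclass
class DirectedHypergraph:
    vertices: set = field(default_factory=set)
    hyperedges: list = field(default_factory=list)  # (tail, head) frozenset pairs

    def add_hyperedge(self, tail, head):
        tail, head = frozenset(tail), frozenset(head)
        assert tail <= self.vertices and head <= self.vertices
        self.hyperedges.append((tail, head))

H = DirectedHypergraph(vertices={1, 2, 3, 4, 5})
H.add_hyperedge(tail={1, 2}, head={3, 4, 5})   # a hyperedge with many nodes
H.add_hyperedge(tail={3}, head={5})            # an ordinary directed edge is a special case
print(len(H.hyperedges))  # 2
```

Note how an ordinary directed edge is just the special case where both tail and head are singletons.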

Hypergraphs have a variety of applications: they are used to represent functional dependence in databases, Horn formulas in propositional logic, and context-free grammars, to name a few. In this work, we provide a brief historical introduction to the notion of directed hypergraphs and some relevant applications. We introduce HyperGen, a random generator of directed weighted hypergraphs based on random models. HyperGen generates ensembles of hypergraphs with varying numbers of hypergraphs, classes, nodes, hyperedges, and nodes per hyperedge.

We will demonstrate that HyperGen can generate weighted directed hypergraphs of different sizes in a scalable manner and in a predictable time. Thus, our model can be used as a proxy to test data analytics techniques.
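The basic sampling idea can be sketched as follows; this is a toy illustration only, not the actual HyperGen implementation:

```python
# Toy sketch: sample a random directed weighted hypergraph by drawing,
# for each hyperedge, a random vertex subset split into a tail and a head,
# plus a random weight.
import random

def random_directed_hypergraph(n_nodes, n_hyperedges, max_size=4, seed=0):
    rng = random.Random(seed)
    nodes = list(range(n_nodes))
    hyperedges = []
    for _ in range(n_hyperedges):
        size = rng.randint(2, max_size)        # nodes in this hyperedge
        chosen = rng.sample(nodes, size)
        split = rng.randint(1, size - 1)       # where tail ends and head begins
        tail, head = frozenset(chosen[:split]), frozenset(chosen[split:])
        hyperedges.append((tail, head, rng.random()))
    return hyperedges

H = random_directed_hypergraph(n_nodes=50, n_hyperedges=10)
print(len(H))  # 10 hyperedges
```

Because each hyperedge is drawn independently, generation time grows linearly with the number of hyperedges, which is the kind of predictable scaling the abstract refers to.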

Keywords: Hypergraphs, Hypergraphs random models, Directed weighted hypergraph

Tutorial 2: Crops, Tuples and Disasters

Bernhard Garn, Klaus Kieseberg, and Dimitris E. Simos

MATRIS Research Group, SBA Research


In this tutorial, we focus on the beginnings, development, and rise of Design of Experiments (DoE) in the second half of the 20th century.

We especially highlight the recent successes of applying DoE methods and their way of thinking to software systems, cyber security, and AI/ML. We emphasize a newly emerging area of application for applied design combinatorics combined with a DoE mentality, for which we see great potential in the future: the combinatorics of disasters.
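As a hedged illustration of the combinatorial-design flavour behind DoE (our example, not from the tutorial), the following checks whether a small test suite covers every pair of values for three binary factors, i.e. whether it is a 2-way covering array:

```python
# Check 2-way (pairwise) coverage: every pair of factors must exhibit
# every combination of values across the rows of the suite.
from itertools import combinations, product

def covers_all_pairs(rows, n_factors, values=(0, 1)):
    for i, j in combinations(range(n_factors), 2):
        needed = set(product(values, values))
        seen = {(r[i], r[j]) for r in rows}
        if not needed <= seen:
            return False
    return True

# Four rows cover all pairs of three binary factors,
# versus 2**3 = 8 rows for exhaustive testing.
suite = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
print(covers_all_pairs(suite, n_factors=3))  # True
```

The savings over exhaustive enumeration grow rapidly with the number of factors, which is what makes this DoE-style thinking attractive for software and security testing.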

Keywords: Design of Experiments, Cyber security, DoE, Disasters, Combinatorics, Combinatorics of disasters

Tutorial 3: Numerical Infinities and Infinitesimals in Optimization

Yaroslav D. Sergeyev

University of Calabria, Rende, Italy

Lobachevsky State University, Nizhni Novgorod, Russia



In this talk, a recent computational methodology is described. It was introduced to allow one to work with infinities and infinitesimals numerically in a single computational framework. It is based on the principle ‘The part is less than the whole’ applied to all quantities (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). The methodology uses as a computational device the Infinity Computer (a new kind of supercomputer patented in several countries), which works numerically with infinite and infinitesimal numbers that can be written in a positional system with an infinite radix. On a number of examples (numerical differentiation, ordinary differential equations, etc.) it is shown that the new approach can be useful from both theoretical and computational points of view. The main attention is dedicated to applications in optimization (local, global, and multi-objective). The accuracy of the obtained results is continuously compared with results obtained by traditional tools used to work with mathematical objects involving infinity. The Infinity Calculator, working with infinities and infinitesimals numerically, is shown during the lecture.
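As a hedged illustration of the numerical-differentiation example mentioned in the abstract (our worked example; we write \gross for the infinite radix, Sergeyev's grossone symbol ①): evaluating a polynomial at a point shifted by the infinitesimal \gross^{-1} exposes its derivatives in the infinitesimal part of the result.

```latex
% For f(x) = x^2, evaluation at the infinitesimal step \gross^{-1} gives
f\!\left(x + \gross^{-1}\right) = x^2 + 2x\,\gross^{-1} + \gross^{-2},
% so the coefficient of \gross^{-1} is exactly f'(x) = 2x,
% and the coefficient of \gross^{-2} is f''(x)/2 = 1.
```

For polynomials this recovery of derivatives is exact, with no truncation or rounding error of the kind incurred by finite-difference schemes.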

For more information see the dedicated web page and this survey: Sergeyev Ya.D. Numerical infinities and infinitesimals: Methodology, applications, and repercussions on two Hilbert problems, EMS Surveys in Mathematical Sciences , 2017, 4(2), 219–320. The web page developed at the University of East Anglia, UK is dedicated to teaching the methodology:

Tutorial 4: Tourism and Hospitality: Relevant Problems for Machine Learning and Intelligent Optimization

Roberto Battiti

University of Trento, Italy and Ciaomanager SrL Italy



The tourism and hospitality sector is undergoing disruptive changes made possible by the connection between abundant data (searches, reservations, reviews, …), models based on machine learning, and optimization tools that identify optimal or improving decisions. Through the cloud, this wave of innovation is percolating from international chains to individual hotels of medium and small size. The complexity and relevance of the sector are worth further investigation by competent researchers.

In the talk we review the main applications of learning and intelligent optimization techniques to tourism and hospitality, dealing with issues related to:

  • Revenue and total profit management (optimization of dynamic pricing schemes)
  • Simulation-Based Optimization (SBO) for hotel management
  • Prediction of demand
  • Optimized property management, optimal allocation of rooms (RoomTetris)
  • Intelligent customer-relationship management
  • Collaborative recommendation systems
  • Machine learning and clustering in marketing research
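To make the dynamic-pricing item above concrete, here is a toy illustration (our example, not from the talk): revenue maximization over a linear demand model d(p) = a - b*p, whose parameters might be learned from past reservation data.

```python
# Toy revenue optimization: maximize p * d(p) with d(p) = a - b*p.
# Setting the derivative a - 2*b*p to zero gives p* = a / (2*b).
def optimal_price(a, b):
    """Price maximizing revenue p * (a - b*p)."""
    return a / (2 * b)

a, b = 200.0, 2.0             # hypothetical demand parameters
p_star = optimal_price(a, b)
revenue = p_star * (a - b * p_star)
print(p_star, revenue)        # 50.0 5000.0
```

Real revenue management replaces the closed-form linear model with learned demand predictors and constrained optimizers, but the learn-then-optimize structure is the same.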

Tutorial 5: Distributed Adaptive Gradient Methods for Online Optimization

George Michailidis

University of Florida, Department of Informatics


Adaptive gradient-based optimization methods (Adam, Adagrad, RMSProp) are widely used in solving large-scale machine learning problems, including the training of deep neural networks. A number of schemes have been proposed in the literature aiming at parallelizing them, based on communications of peripheral nodes with a central node, or amongst themselves. In this presentation, we briefly review centralized adaptive gradient-based algorithms and then discuss distributed variants. We discuss their convergence properties in both stochastic and deterministic settings. The algorithms are illustrated on applications, including the training of deep neural networks.
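The centralized building block the abstract refers to can be sketched as a single Adam update step; this is our code following the standard Adam formulas, not material from the talk:

```python
# One Adam step: exponential moving averages of the gradient (m) and its
# square (v), bias-corrected, then a per-coordinate scaled update.
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = [b1 * mi + (1 - b1) * g for mi, g in zip(m, grad)]
    v = [b2 * vi + (1 - b2) * g * g for vi, g in zip(v, grad)]
    m_hat = [mi / (1 - b1 ** t) for mi in m]     # bias correction
    v_hat = [vi / (1 - b2 ** t) for vi in v]
    theta = [p - lr * mh / (math.sqrt(vh) + eps)
             for p, mh, vh in zip(theta, m_hat, v_hat)]
    return theta, m, v

theta, m, v = [1.0, -2.0], [0.0, 0.0], [0.0, 0.0]
theta, m, v = adam_step(theta, grad=[0.5, -0.5], m=m, v=v, t=1)
print(theta)  # each coordinate nudged by ~lr against its gradient
```

Distributed variants of the kind the talk discusses would, for example, average gradients or moment estimates across nodes before (or instead of) the centralized update above.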


The 16th Learning and Intelligent Optimization (LION) conference is planned to take place as a physical conference on June 5-10, 2022, on Milos Island, Cyclades, Greece.

The 2022 LION Organization is closely monitoring the ongoing COVID-19 situation. The safety and well-being of all conference participants is our top priority. After studying and evaluating the announcements, guidance, and news released by relevant national departments, we are prepared to convert LION16 into a hybrid or virtual conference experience. The dates of the conference will remain the same.

Milos emerged as the best island in the world for 2021

Milos emerged as the best island in the world for 2021 in a vote by the readers of the American travel magazine Travel + Leisure. More information can be found here.


The large variety of heuristic algorithms for hard optimization problems raises numerous interesting and challenging issues. Practitioners using heuristic algorithms for hard optimization problems are confronted with the burden of selecting the most appropriate method, in many cases through expensive algorithm configuration and parameter tuning. Scientists seek theoretical insights and demand a sound experimental methodology for evaluating algorithms and assessing strengths and weaknesses. This effort requires a clear separation between the algorithm and the experimenter, who, in too many cases, is "in the loop" as a motivated intelligent learning component. LION deals with designing and engineering ways of "learning" about the performance of different techniques, and ways of using past experience about the algorithm behavior to improve performance in the future. Intelligent learning schemes for mining the knowledge obtained online or offline can improve the algorithm design process and simplify the applications of high-performance optimization methods. Combinations of different algorithms can further improve the robustness and performance of the individual components.

This meeting explores the intersections and uncharted territories between machine learning, artificial intelligence, energy, mathematical programming, and algorithms for hard optimization problems. The main purpose of the event is to bring together experts from these areas to discuss new ideas and methods, challenges and opportunities in various application areas, general trends, and specific developments. We are excited to be bringing the LION conference to Greece for the fourth time.


All papers must be submitted using EasyChair.

Conference Program

The LION16 conference program is now available via this link.


LION16 accepted papers will be presented at the conference and published in the LNCS Springer Series as Post-Proceedings. Selected papers will also be invited to a special issue of the Annals of Mathematics and Artificial Intelligence (AMAI) Springer journal.

Conference Center

LION16 will take place at the fabulous Milos Conference Center – George Eliopoulos, which is located at Adamas, the main port of Milos Island, on a 10,000-square-meter plot.

List of Available Hotels (to be updated)

As the high season starts in Greece, we suggest you book your accommodation as soon as possible. Here are some recommendations:

Important Dates

All deadlines are Anywhere on Earth (AoE = UTC-12h).

Submission deadline: March 15, 2022 (extended from February 15 and February 28, 2022)
Author notification: April 15, 2022 (extended from March 31, 2022)
Registration opens: May 20, 2022
Conference: June 5-10, 2022
Camera-ready papers (source files, formatted according to Springer's LNCS guidelines): TBA

Conference Organization

Co-General Chair Francesco Archetti Università degli Studi di Milano-Bicocca, Italy
Co-General Chair Panos Pardalos University of Florida & Laboratory of Algorithms and Technologies for Networks Analysis (LATNA), Higher School of Economics (HSE)
Program Committee Co-Chair Dimitris E. Simos SBA Research, Austria & Graz University of Technology, Austria & NIST, USA
Program Committee Co-Chair Varvara A. Rasskazova Moscow Aviation Institute (National Research University), Moscow, Russia
Local Organizing Committee Chair Ilias Kotsireas CARGO Lab, Wilfrid Laurier University, Canada
Publicity Chair Izem Chaloupka MATRIS Research Group, SBA Research, Austria

The LION16 conference organization is a joint partnership of the Università degli Studi di Milano-Bicocca, Italy, University of Florida & the Laboratory of Algorithms and Technologies for Networks Analysis (LATNA) of the Higher School of Economics (HSE), the MATRIS research group of SBA Research, Austria, the Moscow Aviation Institute, Russia and the CARGO lab of Wilfrid Laurier University, Canada.

Program Committee


Interested in participating in or sponsoring LION16?

If you would like to be alerted about the call for papers, the calls for contests and special sessions, and additional organization details, please contact the Conference Chairs. You can find their email addresses on their respective websites.

Photo Credit

Paraskevas Karvouniaris.