17 January 2019

This talk describes the combination of machine learning with microscopy techniques for investigating the mechanisms of the immune system. After an overview of the applications of machine learning to in-vivo imaging, the capabilities of a graph-based, semi-supervised clustering algorithm will be presented. More specifically, the immune system involves a complex network of cellular interactions. This network can be described as a system whose output can be either protective (e.g. against pathogens and tumors) or pathogenic (e.g. leading to autoimmune diseases). In-vivo video microscopy (IVM) is a recently developed method for investigating the behavior of the immune system in living animals. IVM acquires 4D videos capturing the migration of cells, which correlates with their spatiotemporal interaction patterns. However, classical automatic analysis methods for this type of data require cell segmentation and tracking, which are challenging due to the high plasticity, lack of texture, and frequent contacts between cells. To this end, we present a semi-supervised clustering algorithm for segmentation and tracking that groups voxels according to a trainable grouping criterion. Moreover, we present novel analysis methods that require neither segmentation nor tracking.
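As a rough illustration of the graph-based, semi-supervised idea (and only that: the talk's trainable grouping criterion is far more sophisticated), the following toy sketch propagates a few seed labels over a voxel-adjacency graph by neighbour majority vote. The graph, seeds, and function names are invented for this example.

```python
# Minimal graph-based semi-supervised label propagation (illustrative only).
# Nodes stand in for voxels; edges link spatially adjacent voxels.

def propagate_labels(adjacency, seeds, iterations=10):
    """Spread seed labels over a graph by majority vote of labelled neighbours."""
    labels = dict(seeds)  # node -> label, initially only the seeded nodes
    for _ in range(iterations):
        updated = dict(labels)
        for node, neighbours in adjacency.items():
            if node in seeds:
                continue  # seed labels stay fixed
            votes = {}
            for n in neighbours:
                if n in labels:
                    votes[labels[n]] = votes.get(labels[n], 0) + 1
            if votes:
                updated[node] = max(votes, key=votes.get)
        labels = updated
    return labels

# Two chains of "voxels" (two cells); one seed label at each end.
adjacency = {
    0: [1], 1: [0, 2], 2: [1],   # cell A
    3: [4], 4: [3, 5], 5: [4],   # cell B
}
seeds = {0: "A", 5: "B"}
print(propagate_labels(adjacency, seeds))
```

In a real IVM setting the grouping criterion would be learned rather than a plain majority vote, and the graph would come from 4D voxel adjacency.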
IDSIA, Galleria 1

31 January 2019

Oscillations are a fundamental property of life, and oscillatory activity is observed throughout the central nervous system at all levels of organization. Oscillations occur across vastly different time scales, ranging from year-long cycles to milliseconds, and interact across these scales in a complex manner; one emerging principle is that lower-level (faster) oscillations are embedded in higher-level (slower) ones. Sleep is a prototype of such a multilevel oscillation. On the one hand, sleep is part of the slower oscillation of the circadian cycle, which interacts with the homeostatic oscillator to regulate the timing and intra-sleep dynamics of sleep. Sleep itself is made up of ultradian cycles, that is, the 90-to-120-minute alternation between non-rapid-eye-movement (NREM) sleep and REM sleep. NREM sleep in turn consists of several sleep stages, currently labeled N1 to N3, which denote the succession from lighter to deeper, slow-wave sleep. Each sleep stage is characterized by the predominance of brain-activity oscillations in particular EEG frequency bands. Bridging the gap between the EEG frequency bands and the ultradian cycle, recent research has begun to characterize a further oscillation of brain activity that is even slower than the slow waves: multi-second oscillations with periodicities between 10 and 100 s. These are called infra-slow oscillations (ISO) and include a synchronous oscillation of motor (periodic leg movements), autonomic, and cortical activity during sleep. The physiological as well as the pathological meaning of the motor component of ISO has become a hot topic in sleep research.
Manno, Galleria 1, 2nd floor, room G1-201

12 February 2019

Non-stationarity in data can arise from changes in various unobserved influencing factors. One way to account for non-stationarity is to employ models with time-varying parameters. Such models can be parametric or non-parametric, depending on the underlying assumptions they impose. The presented non-stationary approach identifies the optimal number of hidden regimes in the data and the (a priori unknown) regime-switching dynamics without imposing restrictive parametric assumptions about the data-generating process. Within each regime, the data are modelled using a Maximum Entropy density, where the optimal number of density parameters is inferred via the Lasso regularization technique. The resulting non-parametric methodology simultaneously provides the simplest and the least biased description of the data.
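To give a concrete feel for a moment-constrained Maximum Entropy density (within a single regime; the Lasso-based selection of the number of moments and the regime-switching machinery are omitted), the toy sketch below fits a discrete density p(x) proportional to exp(l1*x + l2*x^2) on a grid so that its first two moments match given targets, by gradient ascent on the dual. All numbers and names are invented for this illustration.

```python
import math

# Toy Maximum Entropy fit: choose Lagrange multipliers (l1, l2) so that the
# density p(x) ~ exp(l1*x + l2*x^2) on a grid matches target moments.

def fit_maxent(target_mean, target_m2, lr=0.02, steps=3000):
    xs = [-5 + 0.05 * i for i in range(201)]  # grid on [-5, 5]
    l1, l2 = 0.0, -0.1
    for _ in range(steps):
        w = [math.exp(l1 * x + l2 * x * x) for x in xs]
        z = sum(w)
        p = [wi / z for wi in w]
        m1 = sum(pi * x for pi, x in zip(p, xs))
        m2 = sum(pi * x * x for pi, x in zip(p, xs))
        # Dual gradient ascent: push model moments toward the targets.
        l1 += lr * (target_mean - m1)
        l2 += lr * (target_m2 - m2)
    return l1, l2, m1, m2

# Match mean 0 and second moment 1 (the MaxEnt solution is Gaussian-like).
l1, l2, m1, m2 = fit_maxent(0.0, 1.0)
```

The dual problem is concave, so this simple update converges; with more moment constraints, a Lasso penalty on the multipliers would select which constraints to keep, in the spirit of the talk.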
IDSIA meeting room @10:00

15 February 2019

Many complex systems are characterized by multi-level properties that make the study of their dynamics and emerging phenomena a daunting task. The huge amount of data available in the modern sciences can be expected to support great progress in these studies, even though the nature of the data varies. It is therefore crucial to extract as many features as possible from the data, including qualitative (topological) ones. The goal of the TOPDRIM project has been to provide methods, driven by the topology of data, for describing the dynamics of multi-level complex systems. To this end, the project has developed new mathematical and computational formalisms accounting for topological effects. To pursue these objectives, the project brought together scientists from many diverse fields, including topology and geometry, statistical physics and information theory, computer science and biology. The proposed methods, obtained through concerted efforts, covered different aspects of the science of complexity, ranging from foundations to modelling, analysis, and simulation, and constituted the building blocks for a new generalized theory of complexity. This seminar introduces the fundamentals of topological data analysis and, through applications developed in the biomedical and financial fields, presents the TOPDRIM methodology for going beyond the concept of networks by considering simplicial complexes instead.
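A tiny taste of "beyond networks": the sketch below (invented for this listing, not TOPDRIM code) builds the clique (flag) complex of a small graph, where every clique becomes a filled simplex, and computes its Euler characteristic, one of the simplest topological invariants used in topological data analysis.

```python
from itertools import combinations

# Build a clique (flag) simplicial complex from a graph: every k-clique
# becomes a filled (k-1)-simplex, not just a set of edges.

def clique_complex(vertices, edges, max_dim=2):
    edge_set = {frozenset(e) for e in edges}
    simplices = [frozenset([v]) for v in vertices]
    for k in range(2, max_dim + 2):  # simplices with k vertices
        for combo in combinations(vertices, k):
            if all(frozenset(p) in edge_set for p in combinations(combo, 2)):
                simplices.append(frozenset(combo))
    return simplices

def euler_characteristic(simplices):
    # chi = sum over simplices of (-1)^dim, with dim = |simplex| - 1
    return sum((-1) ** (len(s) - 1) for s in simplices)

# A filled triangle: 3 vertices - 3 edges + 1 triangle -> chi = 1 (a disk).
tri = clique_complex([0, 1, 2], [(0, 1), (1, 2), (0, 2)])
print(euler_characteristic(tri))  # 1
```

A hollow 4-cycle, by contrast, has chi = 4 - 4 = 0 (a circle): the invariant distinguishes the filled from the hollow shape, which a plain edge list cannot.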
Manno, Galleria 1, 2nd floor, room G1-201 @12:00

21 March 2019

Suppose that I give you a square and a collection of rectangles of different shapes. How many rectangles can you pack into the square (so that they do not overlap)? This and related problems are NP-hard. In this talk I will present approximation algorithms to efficiently pack a number of rectangles close to the optimum. The talk is meant to be accessible to non-experts.
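For readers who want a baseline to compare against, here is a simple greedy "shelf" heuristic for packing rectangles into a square (an illustrative sketch only; the approximation algorithms in the talk achieve far stronger guarantees). All names and the example instance are invented.

```python
# Greedy shelf packing: place rectangles left to right on horizontal shelves,
# opening a new shelf when the current row is full. Rectangles are (w, h).

def shelf_pack(rectangles, side):
    """Return a dict index -> (x, y) for the rectangles that fit."""
    placed = {}
    # Favour many placements: try small-area rectangles first.
    order = sorted(range(len(rectangles)),
                   key=lambda i: rectangles[i][0] * rectangles[i][1])
    shelf_y, shelf_h, x = 0.0, 0.0, 0.0
    for i in order:
        w, h = rectangles[i]
        if x + w > side:               # current shelf row is full
            shelf_y += shelf_h         # open a new shelf above it
            shelf_h, x = 0.0, 0.0
        if x + w > side or shelf_y + h > side:
            continue                   # rectangle does not fit at all
        placed[i] = (x, shelf_y)
        x += w
        shelf_h = max(shelf_h, h)
    return placed

rects = [(0.5, 0.5), (0.5, 0.5), (0.5, 0.5), (0.5, 0.5), (2.0, 2.0)]
print(shelf_pack(rects, 1.0))  # the four small squares fit; the big one cannot
```

Such shelf heuristics can be far from optimal on adversarial inputs, which is precisely the gap the approximation algorithms in the talk address.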
Manno, Galleria 1, 2nd floor, room G1-204 @12:00

3 April 2019

A Groebner basis is a set of multivariate polynomials that has desirable algorithmic properties. Every set of polynomials can be transformed into a Groebner basis. This process generalizes three familiar techniques (and more): 1) Gaussian elimination for solving linear systems of equations, 2) the Euclidean algorithm for computing the greatest common divisor of two univariate polynomials, and 3) the Simplex Algorithm for linear programming. In this talk I'll give a gentle introduction to Groebner bases. No prior knowledge is required.
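The univariate special case mentioned in point 2) can be made concrete: in one variable, the Groebner basis of the ideal generated by two polynomials is simply their greatest common divisor, computed by the Euclidean algorithm. The sketch below (written for this listing; polynomials are coefficient lists, lowest degree first, with rational coefficients) shows that computation.

```python
from fractions import Fraction

# Euclidean algorithm for univariate polynomials over the rationals:
# the one-variable case of a Groebner basis computation.

def poly_divmod(a, b):
    """Divide a by b; return (quotient, remainder). Coefficients: Fractions."""
    a = list(a)
    q = [Fraction(0)] * max(1, len(a) - len(b) + 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        factor = a[-1] / b[-1]
        q[shift] = factor
        for i, c in enumerate(b):
            a[shift + i] -= factor * c
        while a and a[-1] == 0:
            a.pop()                    # drop the cancelled leading term
    return q, a

def poly_gcd(a, b):
    while any(b):
        _, r = poly_divmod(a, b)
        a, b = b, r
    lead = a[-1]
    return [c / lead for c in a]       # normalize: make the gcd monic

# gcd(x^2 - 1, x^2 + 2x + 1) = x + 1
f = [Fraction(-1), Fraction(0), Fraction(1)]   # x^2 - 1
g = [Fraction(1), Fraction(2), Fraction(1)]    # x^2 + 2x + 1
print(poly_gcd(f, g))  # [Fraction(1, 1), Fraction(1, 1)], i.e. 1 + x
```

In several variables, polynomial division alone no longer suffices, and Buchberger's algorithm, the subject of the talk, supplies the missing ingredient (S-polynomials).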
Manno, Galleria 1, 2nd floor, room G1-201 @12:00

11 April 2019

We are once again experiencing a period of enthusiasm in the AI research field, fired above all by the successes of deep neural networks, or deep machine learning. In this talk we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then present an alternative approach to language-centric AI and illustrate it with a specific example from the field of claims management.
Manno, Galleria 1, 2nd floor, room G1-201 @12:00

30 April 2019

Learning technologies are becoming increasingly important in today's education. They include game-based learning and simulations, which produce high-volume output, and MOOCs (massive open online courses), which reach a broad and diverse audience at scale. The users of such systems often come from very different backgrounds, for example in terms of age, prior knowledge, and learning speed. Adaptation to the specific needs of the individual user is therefore essential. In this talk, I will present two of my contributions to modeling and predicting student learning in computer-based environments, with the goal of enabling individualization. The first contribution introduces a new model and algorithm for representing and predicting student knowledge. The new approach is efficient and has been demonstrated to outperform previous work in prediction accuracy. The second contribution introduces models that take into account not only the accuracy of the user but also the user's inquiry strategies, improving the prediction of future learning. Furthermore, students can be clustered into groups with different strategies, and targeted interventions can be designed based on these strategies.
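For context on what "representing and predicting student knowledge" means computationally, here is the classic baseline in this area, Bayesian Knowledge Tracing (BKT); note this is the standard reference model, not the new model presented in the talk, and the parameter values below are invented for illustration.

```python
# Bayesian Knowledge Tracing (BKT): track the probability that a student has
# mastered a skill, updating after each observed answer.
# p_slip: mastered but answers wrong; p_guess: not mastered but answers right.

def bkt_update(p_mastery, correct, p_learn=0.1, p_slip=0.1, p_guess=0.2):
    """One BKT step: condition on the observed answer, then allow learning."""
    if correct:
        evidence = p_mastery * (1 - p_slip)
        posterior = evidence / (evidence + (1 - p_mastery) * p_guess)
    else:
        evidence = p_mastery * p_slip
        posterior = evidence / (evidence + (1 - p_mastery) * (1 - p_guess))
    # After the update, the student may transition to mastery.
    return posterior + (1 - posterior) * p_learn

p = 0.3  # prior probability that the student has mastered the skill
for answer in [True, True, False, True]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

Extensions of this kind of model, such as those in the talk, add efficiency, accuracy, and behavioral signals like inquiry strategies on top of the basic accuracy-driven update.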
IDSIA Meeting Room, Galleria 1, Manno

28 May 2019

InferPy is an open-source library for deep probabilistic modeling, written in Python and running on top of Edward 2 and Tensorflow. Other existing probabilistic programming languages have the drawback of being difficult to use, especially when defining deep neural networks and probability distributions over multidimensional tensors. As a consequence, their ultimate goal of broadening the range of people able to code a machine learning application may not be fulfilled. InferPy addresses these issues by defining a user-friendly API that trades off model complexity against ease of use. In particular, the library allows users to: prototype hierarchical probabilistic models with a simple and user-friendly API inspired by Keras; define probabilistic models with complex constructs containing deep neural networks; create computationally efficient batched models without having to deal with complex tensor operations; and run seamlessly on CPUs and GPUs by relying on Tensorflow.
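To see what "batched models without complex tensor operations" refers to, the sketch below writes out by hand the kind of bookkeeping such a library abstracts away: the log-likelihood of a batch of observations under a Normal model. This is plain illustrative Python, not InferPy's actual API.

```python
import math

# Hand-written batched log-likelihood under a Normal(mu, sigma) model --
# the kind of per-datapoint tensor bookkeeping a probabilistic programming
# library hides behind its model definition.

def normal_logpdf(x, mu, sigma):
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

def batch_log_likelihood(batch, mu, sigma):
    # One log-density per observation, summed over the batch.
    return sum(normal_logpdf(x, mu, sigma) for x in batch)

data = [0.1, -0.3, 0.2, 0.0]
print(batch_log_likelihood(data, mu=0.0, sigma=1.0))
```

In a library such as InferPy, the user declares the model once and the replication over the batch dimension (and the corresponding GPU-friendly tensor operations) is handled automatically.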
Manno, Galleria 1, 2nd floor, room G1-201 @12:00