Institutt for informatikk

Department Seminar 2013

Spring and autumn 2013


October 17

Title: Ragnar Frisch, Sir Isaac Newton, and others

Speaker: Sjur Didrik Flåm, Dept. of Economics, UiB.

Abstract: The distinguished econometrician and Nobel Prize laureate Ragnar Frisch (1895-1973) also played an important role in optimization theory. In fact, he was a pioneer of interior point methods. I reconsider his contribution, relating it to history and modern developments.
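
To make the interior-point connection concrete, the following is a minimal sketch (not taken from the talk) of the logarithmic-barrier idea on a small invented linear program; the problem data, the Nelder-Mead solver, and the use of scipy are illustrative assumptions.

# Logarithmic-barrier sketch: minimize c @ x subject to A @ x <= b by repeatedly
# minimizing t*(c @ x) - sum(log(b - A @ x)) for increasing t (the central path).
import numpy as np
from scipy.optimize import minimize

c = np.array([-1.0, -2.0])                              # maximize x + 2y (illustrative)
A = np.array([[1.0, 1.0], [1.0, 0.0], [0.0, 1.0],
              [-1.0, 0.0], [0.0, -1.0]])
b = np.array([4.0, 3.0, 3.0, 0.0, 0.0])                 # x + y <= 4, x <= 3, y <= 3, x,y >= 0

def barrier(x, t):
    slack = b - A @ x
    if np.any(slack <= 0):                              # outside the interior: reject
        return np.inf
    return t * (c @ x) - np.sum(np.log(slack))

x = np.array([0.5, 0.5])                                # strictly feasible start
for t in (1.0, 10.0, 100.0, 1000.0, 10000.0):
    x = minimize(barrier, x, args=(t,), method="Nelder-Mead").x

print("approximate optimum:", x)                        # approaches (1, 3) on this toy LP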

 

June 24

Title: Algorithmic Trading from a Technologist's Perspective

Speaker: Knut Arild Erstad and Karl Trygve Kalleberg from KolibriFX (MSc in Algorithms 2002 and PhD in PUT 2007, respectively)

Abstract: The highly interwoven and complex international financial system is a fundamental requirement for our ever-growing welfare. Exponential population growth and increased means of communication are but a few drivers of the recent increase in financial transactions; with them follows an insatiable demand for low-latency, high-speed processing. Furthermore, new financial instruments are continually introduced into the markets, increasing the variety and complexity of the individual transactions. In this talk, we will give a short primer on the electronic financial markets, explain the concept of algorithmic trading, and discuss the various information processing challenges present in a domain where the technologist's job is to build systems which process massive amounts of data with low latency, high throughput, and no failures (pick all three).
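
As a heavily simplified illustration of what an algorithmic trading rule can look like (a textbook toy, not a description of KolibriFX's systems), the sketch below applies a moving-average crossover signal to synthetic prices; the data, window sizes, and use of numpy are assumptions made for illustration.

# Toy moving-average crossover: hold the asset while a short-term average of the
# price is above a long-term average, otherwise stay flat, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0.0, 1.0, size=1000))   # random-walk prices

def moving_average(x, window):
    return np.convolve(x, np.ones(window) / window, mode="valid")

short = moving_average(prices, 10)
long_ = moving_average(prices, 50)
n = min(len(short), len(long_))
signal = (short[-n:] > long_[-n:]).astype(int)              # 1 = long, 0 = flat

pnl = np.sum(np.diff(prices[-n:]) * signal[:-1])            # P&L of following the signal
print("toy strategy P&L:", round(pnl, 2))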

 

May 23

Title: Medical image computing from image-guided surgery to physics-based image reconstruction: applications, trends and challenges

Speaker: Dr. Wolfgang Wein, CEO of ImFusion GmbH, München, Germany and Adjunct Prof. at TU München, Germany

Abstract: The talk covers various aspects of this exciting interdisciplinary field based on the presenter's own work. The fundamental requirement to precisely model the underlying imaging physics in order to create powerful computer algorithms is first introduced. Some particular technologies based on this approach are then highlighted. Applications are shown both from the realm of image-guided medical interventions and from image reconstruction of ultrasonic and X-ray CT data. The interdisciplinary aspects, at the intersection of computer science, physics and medicine, are emphasized. The talk concludes with a discussion of the major trends in the field, as well as the challenges faced when creating product implementations of new technologies.
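
As a small illustration of the physics-based reconstruction theme (a generic textbook example, not the speaker's software), the sketch below simulates X-ray CT measurements with the Radon transform and inverts them by filtered back-projection; the Shepp-Logan phantom and the use of scikit-image are assumptions made for illustration.

# Simulate CT projections (forward physics model) and reconstruct the image.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)              # standard 2D test phantom
theta = np.linspace(0.0, 180.0, 180, endpoint=False)     # projection angles in degrees

sinogram = radon(image, theta=theta)                     # simulated measurements
reconstruction = iradon(sinogram, theta=theta)           # filtered back-projection

rms_error = np.sqrt(np.mean((reconstruction - image) ** 2))
print("RMS reconstruction error:", rms_error)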

 

April 25

Title: Stochastic Network Design

Speaker: Stein W. Wallace, Department of Business and Management Science, NHH

Abstract: This talk focuses on whether it is important to include operational uncertainty in network design models, or whether the handling of uncertainty can be postponed to the operational stage of decision-making. The focus will be on understanding how optimal solutions from stochastic network design models differ from their deterministic counterparts (where random variables are replaced by their means), and to what extent these differences can teach us something about operational flexibility. Finally we ask: if a deterministic solution is really bad in a random environment, can it still be useful?
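
A tiny invented two-scenario example (not from the talk) shows the kind of difference the abstract refers to: capacity must be chosen before demand is known, unmet demand is penalized, and the deterministic model that plans for the mean demand installs noticeably less capacity than the stochastic model that minimizes expected cost.

# Toy capacity-design example: deterministic (mean-value) vs. stochastic solution.
import numpy as np

scenarios = np.array([40.0, 120.0])          # equally likely demand outcomes
probs = np.array([0.5, 0.5])
cap_cost = 1.0                               # cost per unit of installed capacity
penalty = 3.0                                # cost per unit of unmet demand

def expected_cost(capacity):
    shortfall = np.maximum(scenarios - capacity, 0.0)
    return cap_cost * capacity + penalty * probs @ shortfall

det_capacity = probs @ scenarios             # plan for the mean demand (= 80)

grid = np.linspace(0.0, 150.0, 1501)         # stochastic design: minimize expected cost
sto_capacity = grid[np.argmin([expected_cost(cap) for cap in grid])]

print("deterministic capacity:", det_capacity, "expected cost:", expected_cost(det_capacity))
print("stochastic capacity:   ", sto_capacity, "expected cost:", expected_cost(sto_capacity))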

April 11

Title: Chapel: Parallel Programmability from Desktops to Supercomputers

Speaker: Brad Chamberlain, Cray Inc.

Abstract: Chapel is an emerging programming language whose design and development is being led by Cray Inc. with collaboration from academia, government, and industry. Chapel grew out of the DARPA-led High Productivity Computing Systems program (HPCS), with the goal of significantly improving programmer productivity on HPC systems.

 

Chapel's central philosophy is to support a multiresolution feature set in which both higher- and lower-level features are supported, permitting users to trade off abstraction against control as required within a given context. Moreover, Chapel's highest-level features can be defined within the language itself by an end user, providing a means of defining new parallel loop schedules or data distributions. Chapel is also designed for generality and portability; in short, its overarching goal is to support any parallel algorithm on any parallel hardware.
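
Chapel code itself is out of scope here, but as a rough Python analogy of the multiresolution idea (an invented sketch, not Chapel and not Cray's implementation), a high-level data-parallel operation can be built on lower-level explicit chunking and worker control that users are free to replace with their own policies.

# High-level view: parallel_map applies a function to every element in parallel.
# Lower-level view: explicit chunking and scheduling, which a caller could swap
# for a custom policy (cf. user-defined loop schedules and data distributions).
from concurrent.futures import ProcessPoolExecutor
import math

def _apply_chunk(arg):
    fn, chunk = arg
    return [fn(x) for x in chunk]

def parallel_map(fn, data, workers=4, chunk_size=None):
    chunk_size = chunk_size or max(1, len(data) // workers)
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(_apply_chunk, [(fn, c) for c in chunks])
    return [x for chunk in results for x in chunk]

if __name__ == "__main__":
    print(parallel_map(math.sqrt, list(range(16)), workers=4))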

 

In this talk, I will provide an overview of Chapel, from context and motivating themes to a high-level walkthrough of its primary features. I'll also mention collaborative efforts and areas for future work.

About the speaker: Bradford Chamberlain is a Principal Engineer at Cray Inc., where he works on parallel programming models, focusing primarily on the design and implementation of the Chapel language in his role as technical lead for that project.

Chamberlain is visiting BLDL Wednesday-Friday, April 10-12. He will give a tutorial on Chapel on Wednesday and a follow-up presentation on Friday. For more information on the visit, and for registration for the tutorial or workshop, see

http://bldl.ii.uib.no/2013/13v-chapel_parallel_programming_week.html

 

February 21

Title: Interplanetary Mission Design: Modelling and a Mixed-Integer Solution Method

Speaker: Jan-J Ruckmann, University of Birmingham.

Abstract: In this lecture we present results on some mixed-integer nonlinear programming (MINLP) space applications and discuss their optimization with newly developed general-purpose software. These aerospace engineering results on modelling and optimization are related to a recent research grant with the European Space Agency (ESA).
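
To fix ideas about what a mixed-integer nonlinear program is (a tiny invented toy, unrelated to the ESA application and the speaker's software), the sketch below enumerates the integer variable and solves each continuous subproblem numerically with scipy.

# Tiny MINLP: minimize (x - 2.5)^2 + 3*y over x in [0, 4] and binary y,
# subject to x <= 1 + 3*y; brute-force the integer part, solve the rest with scipy.
from scipy.optimize import minimize_scalar

def solve_continuous(y):
    upper = min(4.0, 1.0 + 3.0 * y)                      # x <= 4 and x <= 1 + 3y
    res = minimize_scalar(lambda x: (x - 2.5) ** 2 + 3.0 * y,
                          bounds=(0.0, upper), method="bounded")
    return res.fun, res.x

best = min((solve_continuous(y) + (y,) for y in (0, 1)), key=lambda t: t[0])
print("optimal value %.3f at x = %.3f, y = %d" % best)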

 

February 14

Title: Approximation algorithms for a scheduling and related knapsack problem

Speaker: Klaus Jansen, University of Kiel.

Abstract: In the first part of the talk we consider a parallel machine scheduling problem where some jobs are already fixed in the system or intervals of non-availability of some machines must be taken into account. An instance consists of m machines and n jobs with processing times p_1,...,p_n. The first k jobs are fixed via a list (m_1,s_1), ..., (m_k,s_k) giving a machine and a starting time for each fixed job. The objective is to assign the other n-k jobs non-preemptively and without overlapping to the machines while minimizing the maximum completion time (the makespan) among all jobs. Scharbrodt, Steger, and Weisser proved that there is no approximation algorithm for this scheduling problem with ratio strictly better than 3/2, unless P=NP. Furthermore, they proposed an approximation algorithm with ratio 2+epsilon for this problem. In this talk we present an approximation algorithm for the scheduling problem with ratio 3/2, closing the gap between the best known upper and lower bound. The running time of our algorithm can be bounded by O(n log n + log((n/m) max_j p_j)(n + T_{MKP}(n,1/epsilon))), where T_{MKP}(n,1/epsilon) is the running time of an approximation algorithm for the multiple knapsack problem (MKP) with fixed accuracy epsilon = 1/8.

In the second part we study the multiple knapsack problem (MKP), a well-known generalization of the classical knapsack problem. Given a set A of n items and a set B of m bins with possibly different capacities, the goal is to find a subset S of A of maximum total profit that can be packed into B without exceeding the capacities of the bins. The decision version of MKP is strongly NP-complete, since it is a generalization of the bin packing problem. Furthermore, MKP does not admit a fully polynomial time approximation scheme (FPTAS) even if the number m of bins is two. Kellerer gave a polynomial time approximation scheme (PTAS) for MKP with identical capacities, and Chekuri and Khanna presented a PTAS for MKP with general capacities with running time n^{O(1/epsilon^8 log(1/epsilon))}. In this talk we propose an efficient PTAS (EPTAS) with improved running time T_{MKP}(n,epsilon) = 2^{O(1/epsilon log^4(1/epsilon))} + poly(n) for MKP. This also solves an open question by Chekuri and Khanna. If the modified round-up property for bin packing with different bin sizes is true, the running time can be further improved to T_{MKP}(n,epsilon) = 2^{O(1/epsilon log^2(1/epsilon))} + poly(n).
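
To illustrate the scheduling model from the first part (a naive greedy heuristic for intuition only, not the 3/2-approximation algorithm of the talk), the sketch below treats the fixed jobs as blocked intervals on their machines and places the remaining jobs, longest first, at the earliest gap in which they fit.

# Greedy placement of free jobs around fixed jobs; reports the resulting makespan.
def earliest_fit(busy, p):
    """Earliest start on one machine (busy = sorted, disjoint intervals) for length p."""
    t = 0.0
    for s, e in busy:
        if t + p <= s:                  # the job fits in the gap before this interval
            return t
        t = max(t, e)                   # otherwise continue past it
    return t                            # after the last busy interval

def greedy_schedule(m, fixed, lengths):
    busy = [[] for _ in range(m)]
    for machine, start, p in fixed:     # fixed jobs: (machine index, start time, length)
        busy[machine].append((start, start + p))
    for intervals in busy:
        intervals.sort()
    for p in sorted(lengths, reverse=True):                  # longest processing time first
        starts = [earliest_fit(busy[i], p) for i in range(m)]
        i = min(range(m), key=lambda k: starts[k] + p)       # machine finishing earliest
        busy[i].append((starts[i], starts[i] + p))
        busy[i].sort()
    return max(end for intervals in busy for _, end in intervals)

# Toy instance: 2 machines, machine 0 blocked during [2, 5), free jobs of lengths 3, 2, 2, 1.
print("makespan:", greedy_schedule(2, [(0, 2.0, 3.0)], [3.0, 2.0, 2.0, 1.0]))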

 

February 7

Title: Learning from multi-modal data: integration, fusion, and data translation

Speaker: Samuel Kaski, Helsinki Institute for Information Technology HIIT, Aalto University and University of Helsinki.

Abstract: In data analysis tasks, across fields from genomics to multimodal interfaces, one of the most needed operations is data integration or data fusion. When making sense of the data, the different very high-dimensional data sources give different but complementary information. In a case study in genomics, the sources include gene expression in different diseases and under different treatments, metabolite concentrations, DNA copy number variation, etc. Given the large number of data sources with mostly unknown connections, it may be more appropriate to talk about data translation than integration, the goal being to find, characterize, and utilize the unknown connections between data sources. In machine learning this task has been called unsupervised multi-view machine learning, for which we have introduced Bayesian canonical correlation analysis-based methods and, recently, Group Factor Analysis (GFA), which generalizes factor analysis from analysing relationships between univariate variables to the analysis of multiple data sources, each consisting of multivariate observations. I will discuss the methods and present case studies in metabolomics and in analysing genome-wide effects of drugs.
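
As a minimal two-view illustration of the shared-structure idea (using classical, non-Bayesian CCA from scikit-learn on synthetic data, which is an assumption of this sketch and not the speaker's Bayesian CCA or GFA implementation), one can check that paired canonical components recover factors shared between two data sources.

# Two synthetic "views" driven by common latent factors, analysed with classical CCA.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 500
shared = rng.normal(size=(n, 2))                                          # latent factors common to both views
X = shared @ rng.normal(size=(2, 10)) + 0.5 * rng.normal(size=(n, 10))    # view 1 (e.g. expression-like)
Y = shared @ rng.normal(size=(2, 6)) + 0.5 * rng.normal(size=(n, 6))      # view 2 (e.g. metabolite-like)

X_c, Y_c = CCA(n_components=2).fit_transform(X, Y)

for k in range(2):                                     # high correlation = shared structure recovered
    print("canonical pair", k, "correlation:",
          round(float(np.corrcoef(X_c[:, k], Y_c[:, k])[0, 1]), 3))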