Ongoing master's theses
(Near) Optimal Cable Layout in Offshore Windfarms
For offshore windfarms given by
- GPS coordinates of the wind turbines,
- GPS coordinates of the target point(s) of the supply cable(s),
- GPS coordinates of areas prohibited for cables,
- number of substations and/or GPS coordinates of the substations,
- maximum number of wind turbines per cable string or loop,
- whether the cables should loop to provide redundancy,
- whether substations should be connected by export cable to provide redundancy,
- whether the wind turbines can have more than two neighbours and if so, the cost induced by additional neighbours,
we look for the cable layout minimizing the total cable length.
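Dropping the capacity, redundancy and forbidden-area requirements listed above leaves a Euclidean minimum spanning tree over the turbine coordinates, which gives a quick lower bound on the total cable length and a natural starting layout. A minimal sketch (plain Euclidean distances on illustrative coordinates, not a real GPS projection):

```python
import math

def mst_length(points):
    """Total length of a Euclidean minimum spanning tree (Prim's algorithm).

    Ignores cable capacities, redundancy loops and prohibited areas, so it
    is only a lower bound / starting layout for the real routing problem.
    """
    n = len(points)
    if n == 0:
        return 0.0
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    in_tree = [False] * n
    best = [math.inf] * n  # cheapest connection of each node to the tree
    best[0] = 0.0
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v]:
                best[v] = min(best[v], dist(points[u], points[v]))
    return total

# Four turbines on a unit square: the MST uses three edges of length 1.
print(mst_length([(0, 0), (1, 0), (0, 1), (1, 1)]))  # 3.0
```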
Advisor: Joanna Bauer
Student: Thomas Amland
Enrolled semester: Spring 2012
Fast methods for solving the pooling problem
In pipeline transportation of natural gas, simple network flow problems are replaced by hard ones when bounds on the flow quality are imposed. The sources, typically represented by gas wells, provide flow of unequal compositions. For instance, some sources may be richer in undesired contaminants, such as CO2, than others. At the terminals, constraints on the relative content of the contaminant may be imposed. Flow streams are blended at junction points in the network, where the relative CO2 content becomes a weighted average of the relative CO2 content in the entering streams. To account for the quality bounds at the terminals, the quality must therefore be traced from the sources via junction points to the terminals.
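The blending rule described above can be stated in a few lines; a sketch of the flow-weighted average that defines the quality of a junction's outflow (function name and units are illustrative):

```python
def pool_quality(streams):
    """Quality (e.g. relative CO2 content) of the blended outflow of a pool.

    streams: list of (flow, quality) pairs for the entering streams.
    The outflow quality is the flow-weighted average of the entering qualities.
    """
    total_flow = sum(f for f, _ in streams)
    if total_flow == 0:
        return 0.0
    return sum(f * q for f, q in streams) / total_flow

# 10 units at 2% CO2 blended with 30 units at 6% CO2 -> 5% CO2
print(pool_quality([(10, 0.02), (30, 0.06)]))  # 0.05
```

Note that this weighted average makes the flow model bilinear (flow times quality), which is the source of the problem's hardness.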
The problem of allocating flow at minimum cost is referred to as the pooling problem when the above-mentioned quality bounds are imposed. The pooling problem is known to be NP-hard, which means that it is very unlikely that exact solutions can be found for large-scale instances. Some exact methods, based on strong mathematical formulations and intended for instances of small and medium size, have recently been developed. However, the literature offers few approximation algorithms and other inexact methods dedicated to large pooling problem instances.
In this master project, the candidate will develop and implement fast methods for the pooling problem. The methods are not generally required to provide the optimal solution, but verification of the solution quality is essential. Such verification will be made experimentally, by comparison to the exact solution in instances where it is known, and by comparison to solution bounds in instances where it is not. Theoretical verification, in terms of e.g. error guarantees, may turn out to be difficult for some of the methods. For the simpler methods, however, the candidate will be encouraged to also elaborate on this.
Because of the experimental nature of this project, a substantial part of the work will be computer implementations, preferably in C++ or Java, of the suggested methods.
Advisor: Dag Haugland
Student: Anisha Kejriwal
Enrolled semester: Fall 2012
The tracking error problem with a cardinality constraint on the number of assets
When a cardinality constraint restricting the number of assets is introduced in a portfolio, the problem of optimizing the selection of assets becomes NP-hard. Several heuristics and solution approaches have been suggested. Woodside, Lucas & Beasley (2011) present a heuristic algorithm for the cardinality-constrained efficient frontier, considering the mean-variance model of Markowitz with buy-in thresholds and cardinality constraints. Coleman (2006) presents a graduated non-convexity method to minimize portfolio tracking error with the total number of assets no greater than a specified integer. These are but two of the many proposals for solving this problem.
There exist different portfolio management strategies to choose from when making investment decisions, and for each of these strategies, managers face a myriad of factors to consider. Active managers focus on maximizing returns, while passive managers focus on minimizing risk relative to a benchmark. Many investment decisions are based on predictions or calculations from historical data, in the hope that these aid the choice of a good portfolio.
Passive fund managers focus on index tracking, and they are faced with the challenge of the tracking error (TE). The TE is the difference between the return of the manager's portfolio and that of the benchmark. Several factors contribute to the TE, including trading costs and differences in the composition of the portfolio, to mention but a few. Minimizing the TE involves selecting a portfolio that tracks the index at low cost. Full replication increases trading costs, which in turn increase the TE. It is therefore highly significant to select a portfolio that results in a minimum TE.
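One common ex-post definition of the TE is the standard deviation of the active returns, i.e. the per-period differences between portfolio and benchmark returns; a minimal sketch (the thesis may use a different variant, e.g. the mean squared deviation):

```python
import math

def tracking_error(portfolio_returns, benchmark_returns):
    """Ex-post tracking error: standard deviation of the active returns
    (portfolio return minus benchmark return in each period).
    """
    active = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
    mean = sum(active) / len(active)
    var = sum((a - mean) ** 2 for a in active) / len(active)
    return math.sqrt(var)

# A portfolio that replicates the benchmark exactly has zero TE.
print(tracking_error([0.01, 0.02], [0.01, 0.02]))  # 0.0
print(tracking_error([0.02, 0.01, 0.03], [0.02, 0.02, 0.02]))
```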
While examining other approaches in this area of research, I will attempt to solve the TE problem with a cardinality constraint on the number of assets selected for the portfolio, using combinatorial optimization techniques and quadratic programming. NP-hardness means that exact solutions become computationally infeasible as the amount of data grows, so only inexact methods are practical. This thesis will seek a balance between computational time and the quality of the results.
Advisor: Dag Haugland
Student: Purity Mutunge
Enrolled semester: Spring 2012
Fast solution of Quadratic Programming (QP) problems
Goodtech and MathConsult have developed a tool for computing the supply reliability of a meshed network in which each branch has a failure rate. The method uses reliability theory to select relevant failure scenarios for further analysis, and optimization theory to compute the optimal supply through the network in each individual failure scenario. The optimization problems are formulated as QP problems.
Since the number of possible failure scenarios grows very quickly with the size of the network, the program depends on each individual failure scenario being analysed very quickly. Through a study of available QP solvers, the development team has concluded that CPLEX is the most efficient solver for this problem. However, since CPLEX is expensive and not specialized for the problem at hand, Goodtech is interested in exploring the development of its own QP solver. The hope is that a custom-tailored solver will be even faster than CPLEX.
The student will develop a QP solver for efficient solution of problems with the following characteristics:
- On the order of 100–2000 variables, with the possibility of reducing the problem size through variable elimination.
- Sparse matrices.
- A positive semidefinite quadratic term in the objective function.
- The QP problem is to be solved many times (thousands of times) with small variations in the constraints.
Goodtech does not prescribe which methods should be used, but can contribute experience from earlier studies of QP solvers. Goodtech can also provide a description of the problem, input data for testing, an in-company supervisor, and a workplace.
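For the equality-constrained core of such a QP, the KKT optimality conditions reduce to a single symmetric linear system. A minimal dense sketch of that step (a production solver would handle inequality constraints, exploit sparsity, and reuse factorizations across the thousands of nearly identical solves):

```python
def solve_linear(M, rhs):
    """Solve M x = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for k in range(col, n + 1):
                A[r][k] -= f * A[col][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def eq_qp(Q, c, A, b):
    """Solve min 1/2 x^T Q x + c^T x  s.t.  A x = b  via the KKT system

        [Q  A^T] [x  ]   [-c]
        [A   0 ] [lam] = [ b]

    Assumes the KKT matrix is nonsingular (Q positive definite on the
    null space of A, A of full row rank).
    """
    n, m = len(c), len(b)
    K = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        for j in range(n):
            K[i][j] = Q[i][j]
        for j in range(m):
            K[i][n + j] = A[j][i]
            K[n + j][i] = A[j][i]
    rhs = [-ci for ci in c] + list(b)
    return solve_linear(K, rhs)[:n]

# min x1^2 + x2^2 subject to x1 + x2 = 1  ->  x = (0.5, 0.5)
print(eq_qp([[2.0, 0.0], [0.0, 2.0]], [0.0, 0.0], [[1.0, 1.0]], [1.0]))
```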
Advisor: Dag Haugland
Student: Andreas Halle
Enrolled semester: Fall 2011
Interior-point methods for network flow problems
The first practical method for solving Linear Optimization (LO) problems was the simplex method, proposed by Dantzig in 1947. This algorithm explicitly explores the combinatorial structure of the feasible region to locate a solution by moving from a vertex of the feasible set to an adjacent vertex while improving the value of the objective function. Since then, the method has been routinely used to solve problems in business, logistics, economics, and engineering. Klee and Minty showed in 1972 that the simplex method may visit 2^n − 1 vertices on an n-dimensional instance, which proves that its worst-case behaviour is exponential.
In 1984, Karmarkar proposed a new polynomial-time method for solving linear programs. This method and the variants that were developed subsequently are now called interior-point methods (IPMs). After an initial controversy it has been established that for very large, sparse problems, variants of Karmarkar's method often outperform the simplex method. The study of interior-point methods is currently one of the most active research areas in optimization.
The name "interior-point methods" originates from the fact that the points generated by these algorithms all lie in the interior of the feasible region. This is in contrast with the famous and well-established simplex method, whose iterates move along the boundary of the feasible region from one extreme point to another.
Nowadays, IPMs provide a powerful tool for solving LO problems. They have become an alternative to simplex-based methods and are particularly useful in applications where the size of the problem is very large. Moreover, it turned out that the IPM approach to LO has a natural generalization to the related field of convex nonlinear optimization, which resulted in a new stream of research and an excellent monograph by Nesterov and Nemirovski (1993). This monograph opened the way into other new subfields of optimization, such as semidefinite optimization and second-order cone optimization, with important applications in system theory, discrete optimization, and many other areas.
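The interior-vs-boundary contrast can be seen on a toy problem: minimize x subject to 0 ≤ x ≤ 1, with logarithmic barrier −ln x − ln(1 − x). A sketch tracing the central path, i.e. the barrier minimizers as the parameter t grows (the optimum is the vertex x* = 0, which the iterates approach from strictly inside the feasible interval):

```python
def central_path_point(t):
    """Minimizer of t*x - ln(x) - ln(1 - x) on (0, 1), found by bisection
    on the derivative, which is strictly increasing on the interval."""
    def dphi(x):
        return t - 1.0 / x + 1.0 / (1.0 - x)
    lo, hi = 1e-12, 1.0 - 1e-12
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if dphi(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# As t grows the central-path points stay strictly interior
# but converge to the optimal vertex x* = 0.
for t in (1.0, 10.0, 100.0, 1000.0):
    print(t, central_path_point(t))
```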
The aim of this project is to design and explore the use of IPM algorithms (path-following, predictor-corrector) or infeasible interior-point algorithms for network flow problems in practice. The project is a combination of mathematical analysis, numerical optimization and computer science.
The project is organized as follows. Part 1) Design IPMs for special network flow problems (network flow problems that can be formulated as linear optimization problems). Part 2) Develop a practical variant of an IPM solver for network flow problems that are not linear. Of special interest and practical relevance is the pooling problem, a network flow problem that frequently occurs in industrial applications such as oil refining and pipeline gas transportation.