Reliable process modelling and optimisation using interval analysis
Abstract
Continuing advances in computing technology provide the power not only to solve
increasingly large and complex process modeling and optimization problems, but also
to address issues concerning the reliability with which such problems can be solved.
For example, in solving process optimization problems, a persistent issue
concerning reliability is whether or not a global, as opposed to local,
optimum has been achieved. In modeling problems, especially those
involving complex nonlinear models, there is concern over whether a
solution is unique and, if no solution is found numerically, whether a
solution to the posed problem actually exists. This presentation
focuses on an approach, based on interval mathematics,
that is capable of dealing with these issues, and which
can provide mathematical and computational guarantees of reliability.
That is, the technique is guaranteed to find all solutions to nonlinear
equation solving problems and to find the global optimum in nonlinear
optimization problems. The methodology is demonstrated using several
examples, drawn primarily from the modeling of phase behavior, the
estimation of parameters in models, and the modeling, using lattice
density-functional theory, of phase transitions in nanoporous materials.
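A minimal sketch of the branch-and-prune idea behind such guarantees, for a single equation in one variable, is given below. The test function $f(x) = x^2 - 2$ and the tolerance are illustrative choices only, and real interval codes also use directed (outward) rounding and interval-Newton contraction, which this sketch omits:

    def f_interval(lo, hi):
        """Interval extension of f(x) = x**2 - 2: returns an interval
        guaranteed to contain f(x) for every x in [lo, hi]."""
        sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
        sq_hi = max(lo * lo, hi * hi)
        return sq_lo - 2.0, sq_hi - 2.0

    def find_roots(lo, hi, tol=1e-10):
        """Discard boxes whose interval image excludes zero and bisect
        the rest; every root in [lo, hi] survives in some reported box."""
        roots, stack = [], [(lo, hi)]
        while stack:
            a, b = stack.pop()
            fa, fb = f_interval(a, b)
            if fa > 0.0 or fb < 0.0:      # zero not in f([a, b]): no root
                continue
            if b - a < tol:               # small enough: report enclosure
                roots.append((a, b))
            else:
                m = 0.5 * (a + b)
                stack.extend([(a, m), (m, b)])
        return roots

    # Encloses both roots +/- sqrt(2); adjacent tiny boxes may cluster
    # around each root, which a Newton contraction step would sharpen.
    print(find_roots(-10.0, 10.0))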
Support Vector Machines and related kernel methods
Abstract
Support Vector Machines are a new and very promising approach to
machine learning. They can be applied to a wide range of tasks such as
classification, regression, novelty detection, density estimation,
etc. The approach is motivated by statistical learning theory and the
algorithms have performed well in practice on important applications
such as handwritten character recognition (where they currently give
state-of-the-art performance), bioinformatics and machine vision. The
learning task typically involves optimisation theory (linear, quadratic
and general nonlinear programming, depending on the algorithm used).
In fact, the approach has stimulated new questions in optimisation
theory, principally concerned with the issue of how to handle problems
with a large number of variables. In the first part of the talk I will
give an overview of this subject; in the second part I will describe
some of my own contributions (principally novelty detection, query
learning and new algorithms); and in the third part I will outline
future directions and new questions stimulated by this research.
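As a concrete illustration of the classification task (an assumed example, not taken from the talk), the following trains a kernel SVM with scikit-learn, whose SVC routine solves the dual quadratic programme referred to above; the dataset and hyperparameters are arbitrary:

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # RBF kernel k(x, x') = exp(-gamma ||x - x'||^2); C trades margin
    # width against training error in the underlying quadratic programme.
    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))
    print("support vectors:", clf.n_support_.sum())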
Some properties of thin plate spline interpolation
Abstract
Let the thin plate spline radial basis function method be applied to
interpolate values of a smooth function $f(x)$, $x \!\in\! {\cal R}^d$.
It is known that, if the data are the values $f(jh)$, $j \in {\cal Z}^d$,
where $h$ is the spacing between data points and ${\cal Z}^d$ is the
set of points in $d$ dimensions with integer coordinates, then the
accuracy of the interpolant is of magnitude $h^{d+2}$. This beautiful
result, due to Buhmann, will be explained briefly. We will also survey
some recent findings of Bejancu on Lagrange functions in two dimensions
when interpolating at the integer points of the half-plane ${\cal Z}^2
\cap \{ x : x_2 \!\geq\! 0 \}$. Most of our attention, however, will
be given to the current research of the author on interpolation in one
dimension at the points $h {\cal Z} \cap [0,1]$, the purpose of the work
being to establish theoretically the apparent deterioration in accuracy
at the ends of the range from ${\cal O} ( h^3 )$ to ${\cal O} ( h^{3/2}
)$ that has been observed in practice. The analysis includes a study of
the Lagrange functions of the semi-infinite grid ${\cal Z} \cap \{ x :
x \!\geq\! 0 \}$ in one dimension.
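The boundary effect described above can be probed numerically. The following sketch (an illustration under assumed choices of test function and grid, not the author's analysis) interpolates a smooth $f$ on $h {\cal Z} \cap [0,1]$ with SciPy's thin plate spline interpolator and compares interior and end errors:

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def errors(h, f=np.sin):
        x = np.arange(0.0, 1.0 + 1e-12, h)[:, None]   # nodes h*Z in [0, 1]
        s = RBFInterpolator(x, f(x[:, 0]), kernel="thin_plate_spline")
        t_mid = np.linspace(0.4, 0.6, 201)[:, None]   # interior test points
        t_end = np.linspace(0.0, 0.1, 201)[:, None]   # near the end x = 0
        return (np.abs(s(t_mid) - f(t_mid[:, 0])).max(),
                np.abs(s(t_end) - f(t_end[:, 0])).max())

    # The interior error should decay roughly like h^3, but the end
    # error only like h^(3/2), as described in the abstract.
    for h in (1/16, 1/32, 1/64):
        mid, end = errors(h)
        print(f"h={h:.5f}  interior={mid:.2e}  end={end:.2e}")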
On the robust solution of process simulation problems
Abstract
In this talk we review our experience of using the Harwell Subroutine
Library and other numerical software codes in implementing large scale
solvers for commercial industrial process simulation packages. Such
packages are required to solve problems in an efficient and robust
manner. A core requirement is the solution of sparse systems of linear
equations; various HSL routines have been used and are compared.
Additionally, the requirement for fast small dense matrix solvers is
examined.
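As a generic illustration of this core task (SciPy's SuperLU interface here stands in for the HSL routines compared in the talk, and the matrix is an arbitrary example):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import splu

    n = 1000
    # A sparse tridiagonal system, standing in for the Jacobian systems
    # that arise at each iteration of a process simulation.
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    b = np.ones(n)

    lu = splu(A)                  # sparse LU factorisation with pivoting
    x = lu.solve(b)
    print("residual norm:", np.linalg.norm(A @ x - b))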
A new preconditioning technique for the solution of the biharmonic problem
Abstract
In this presentation we examine the convergence characteristics of a
Krylov subspace solver preconditioned by a new indefinite
constraint-type preconditioner, when applied to discrete systems
arising from low-order mixed finite element approximation of the
classical biharmonic problem. The preconditioning operator leads to
preconditioned systems having an eigenvalue distribution consisting of
a tightly clustered set together with a small number of outliers. We
compare the convergence characteristics of the new approach with those
of a standard block-diagonal Schur complement preconditioner that has
proved extremely effective in the context of mixed approximation
methods.
\\
\\
In the second part of the presentation we are concerned with the
efficient parallel implementation of the proposed algorithm on modern
shared memory architectures. We consider the use of efficient parallel
"black-box" solvers for the Dirichlet Laplacian problems, based on
sparse Cholesky factorisation and multigrid; for this purpose we use
publicly available codes from the HSL library and the MGNet collection.
We compare the performance of our algorithm with sparse direct solvers
from the HSL library and discuss some implementation-related issues.
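A generic sketch of such a solver setup follows (small assumed blocks stand in for the mixed finite element matrices, and the exact Schur complement is used where practice would substitute a cheap spectrally equivalent approximation such as a multigrid cycle):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, minres, splu

    n, m = 200, 50
    A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")
    B = (sp.eye(m, n) + 0.1 * sp.random(m, n, density=0.05,
                                        random_state=0)).tocsc()

    K = sp.bmat([[A, B.T], [B, None]], format="csc")  # saddle point system
    b = np.ones(n + m)

    A_lu = splu(A)
    S = B @ A_lu.solve(B.T.toarray())                 # Schur complement

    def apply_P(v):
        # Apply the block-diagonal preconditioner diag(A, S)^{-1}. With
        # the exact Schur complement the preconditioned matrix has only
        # three distinct eigenvalues, so MINRES converges very quickly.
        return np.concatenate([A_lu.solve(v[:n]), np.linalg.solve(S, v[n:])])

    P = LinearOperator((n + m, n + m), matvec=apply_P)
    x, info = minres(K, b, M=P)
    print("minres info:", info, " residual:", np.linalg.norm(K @ x - b))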
Algebraic modeling systems and mathematical programming
Abstract
Algebra-based modeling systems are becoming essential elements in the
application of large and complex mathematical programs. These systems
enable the abstraction, expression and translation of practical
problems into reliable and effective operational systems. They provide
the bridge between algorithms and real-world problems by automating the
analysis and translation of a problem into specific data structures and
by providing the computational services required by different solvers. The
modeling system GAMS will be used to illustrate the design goals and
main features of such systems. Applications in use and under
development will be used to provide the context for discussing the
changes in user focus and future requirements. These changes present
new opportunities and challenges to the suppliers and implementers of
mathematical programming solvers and modeling systems.
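The flavour of such systems can be illustrated in miniature (Pyomo is used here purely as a stand-in for GAMS, and the toy model and the availability of the GLPK solver are assumptions):

    import pyomo.environ as pyo

    model = pyo.ConcreteModel()
    model.x = pyo.Var(within=pyo.NonNegativeReals)
    model.y = pyo.Var(within=pyo.NonNegativeReals)

    # The algebraic statement of the problem, independent of any solver.
    model.profit = pyo.Objective(expr=3 * model.x + 2 * model.y,
                                 sense=pyo.maximize)
    model.capacity = pyo.Constraint(expr=model.x + model.y <= 4)
    model.labour = pyo.Constraint(expr=model.x + 3 * model.y <= 6)

    # The modelling layer translates the model into the data structures
    # the chosen solver expects, calls it, and maps the results back.
    pyo.SolverFactory("glpk").solve(model)
    print(pyo.value(model.x), pyo.value(model.y), pyo.value(model.profit))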
Iterative methods for PDE eigenvalue problems
Abstract
Some complexity considerations in sparse LU factorization
Abstract
The talk will discuss unsymmetric sparse LU factorization based on
the Markowitz pivot selection criterion. The key question for the
author is the following: is it possible to implement a sparse
factorization whose overhead is limited to a constant times
the actual numerical work? In other words, can the work be bounded
by $O(\sum_k M(k))$, where $M(k)$ is the Markowitz count of the $k$-th
pivot? The answer is probably NO, but how close can we get? We will
give several bad examples for traditional methods and suggest
alternative methods and data structures, both for pivot selection and
for the sparse update operations.
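For concreteness, a naive sketch of the Markowitz rule follows (an illustration, not the author's code); notably, the full scan it performs already costs far more than the arithmetic it saves, which is precisely the overhead question posed above:

    import numpy as np

    def markowitz_pivot(A, rows, cols, tol=1e-8):
        """Return (i, j, count) minimising the Markowitz count
        (r_i - 1) * (c_j - 1) over acceptable pivots |a_ij| > tol, where
        r_i, c_j count the nonzeros of row i and column j in the active
        submatrix defined by the index sets rows and cols."""
        r = {i: np.count_nonzero(A[i, sorted(cols)]) for i in rows}
        c = {j: np.count_nonzero(A[sorted(rows), j]) for j in cols}
        best = None
        for i in rows:
            for j in cols:
                if abs(A[i, j]) > tol:
                    count = (r[i] - 1) * (c[j] - 1)
                    if best is None or count < best[2]:
                        best = (i, j, count)
        return best

    # Production codes avoid this O(n^2) scan by keeping rows and columns
    # bucketed by nonzero count and updating the buckets as elimination
    # proceeds.
    rng = np.random.default_rng(0)
    A = rng.random((6, 6)) * (rng.random((6, 6)) < 0.4)
    print(markowitz_pivot(A, set(range(6)), set(range(6))))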
SMP parallelism: Current achievements, future challenges
Abstract
SMP (Symmetric Multi-Processors) hardware technologies are very popular
with vendors and end-users alike for a number of reasons. However, true
shared memory parallelism has been somewhat slower to take off
amongst the scientific-programming community. NAG has been at the
forefront of SMP technology for a number of years, and the NAG SMP
Library has shown the potential of SMP systems.
\\
\\
At the very high end, SMP hardware technologies are used as building
blocks of modern supercomputers, which are in effect clusters of SMP
systems, an architecture for which no dedicated model of parallelism
yet exists.
\\
\\
The aim of this talk is to introduce SMP systems and their potential.
Results from our work at NAG will also be presented to show how SMP
parallelism, based on a shared memory paradigm, can be used to very
good effect and can produce high-performance, scalable software. The
talk also aims to discuss some aspects of the apparently slow take-up of
shared memory parallelism and the potential competition from PC (i.e.
Intel)-based cluster technology. The talk then aims to explore the
potential of SMP technology within "hybrid parallelism", i.e. mixed
distributed and shared memory modes, illustrating the point with some
preliminary work carried out by the author and others. Finally, a
number of potential future challenges to numerical analysts will be
discussed.
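A schematic sketch of such a hybrid mode is given below (it assumes mpi4py and an MPI launcher, e.g. mpiexec -n 4 python hybrid.py; the dot-product workload is an arbitrary example):

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Distributed-memory level: each MPI rank holds one slice of the data.
    n = 1_000_000
    x = np.full(n // size, 1.0)
    y = np.full(n // size, 2.0)

    # Shared-memory level: threads work on chunks of the local slice
    # (NumPy releases the GIL inside np.dot, so the threads can overlap).
    def chunk_dot(bounds):
        a, b = bounds
        return np.dot(x[a:b], y[a:b])

    edges = np.linspace(0, len(x), 5, dtype=int)   # 4 threads per rank
    with ThreadPoolExecutor(max_workers=4) as pool:
        local = sum(pool.map(chunk_dot, zip(edges[:-1], edges[1:])))

    total = comm.allreduce(local, op=MPI.SUM)      # combine across nodes
    if rank == 0:
        print("dot product:", total)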
\\
\\
The talk is aimed at all who are interested in SMP technologies for
numerical computing, irrespective of any previous experience in the
field. The talk aims to stimulate discussion by presenting some ideas,
backed up with data, not to stifle it in an ocean of detail!