Midwest Numerical Analysis Day 2011

Department of Mathematics, Purdue University, May 7-8, 2011

Abstracts (As of 5/4/2011)

 


 

Plenary Talks

 


 

Optimization-Based Computational Modeling, or How to

Achieve Better Predictiveness with Less Complexity

Pavel Bochev, Applied Mathematics and Applications, Sandia National Lab

 

Discretization converts infinite dimensional mathematical models into finite dimensional algebraic equations that can be solved on a computer. This process is accompanied by unavoidable information losses which can degrade the predictiveness of the discrete equations. Compatible and regularized discretizations control these losses directly by using suitable field representations and/or by modifications of the variational forms. Such methods excel in controlling “structural” information losses responsible for the stability and well-posedness of the discrete equations. However, direct approaches become increasingly complex and restrictive for multi-physics problems comprising fundamentally different mathematical models, and when used to control losses of “qualitative” properties such as maximum principles, positivity, monotonicity and local bounds preservation.


In this talk we show how optimization ideas can be used to control externally, and with greater flexibility, information losses which are difficult (or impractical) to manage directly in the discretization process. This allows us to improve the predictiveness of computational models, increase the robustness and accuracy of solvers, and enable efficient reuse of code. Two examples will be presented: an optimization-based framework for multi-physics coupling, and an optimization-based algorithm for constrained interpolation (remap). In the first case, our approach allows us to synthesize a robust and efficient solver for a coupled multi-physics problem from simpler solvers for its constituent components. To illustrate the scope of the approach we derive such a solver for nearly hyperbolic PDEs from standard, off-the-shelf algebraic multigrid solvers, which by themselves cannot solve the original equations. The second example demonstrates how optimization ideas enable the design of high-order conservative, monotone, bounds-preserving remap and transport schemes which are linearity preserving on arbitrary unstructured grids, including grids with polyhedral and polygonal cells. This is joint work with D. Ridzal, G. Scovazzi (SNL), and M. Shashkov (LANL).
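
As a rough illustration of the constrained-interpolation viewpoint (our schematic notation, not necessarily the formulation used in the talk), the remapped cell values $y$ may be obtained by projecting a high-order target $y^{T}$ onto the constraint set,
\[
  \min_{y}\ \tfrac{1}{2}\,\|y - y^{T}\|^2
  \quad\text{subject to}\quad
  \sum_i m_i\, y_i = \sum_i m_i\, y_i^{\mathrm{old}},
  \qquad
  y_i^{\min} \le y_i \le y_i^{\max},
\]
so that conservation and local bounds are imposed as optimization constraints rather than being hard-wired into the discretization itself.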


 

Gravitational Wave Physics: Challenges and Opportunities

for the Numerical Analyst

Jan Hesthaven, Division of Applied Mathematics, Brown University

 

During the last decade there has been a surge of activity aimed at modeling strong gravitational wave sources such as binary black hole systems or black hole-neutron star dynamics. A central driver for this can be found in the international efforts to develop large experimental facilities aimed at detecting gravitational waves. The detection of gravitational waves is considered the final confirmation of the validity of Einstein's general theory of relativity in the nonlinear regime and, as such, is one of the places where Nobel prizes grow.


However, even with the use of state-of-the-art technology, the detection of gravitational waves is at the very limit of what is physically possible, and the use of advanced computational models is seen as a critical component to enable successful detection. While this has led to the development of reliable and efficient modeling tools, many challenges of a theoretical, computational, and practical nature remain.

In this presentation we will, after giving a very brief introduction to gravitational wave physics, discuss some of the numerical and computational challenges in this area. By discussing some of the dominant models, we identify some of the many challenges and illustrate how the use of contemporary computational techniques, in this case discontinuous Galerkin methods and reduced basis methods, may offer interesting opportunities and improvements over the existing methods that dominate this area.

Throughout the talk we will try to identify open questions and challenges, illustrating the richness of problems in this application area - problems that span the entire range from fundamental theoretical questions of interest to the analyst to the need for very large scale computational science to accurately model large astrophysical phenomena and the generation of gravitational waves.


 

FASP Methods for Solving Large Scale Discretized PDEs

Jinchao Xu, Department of Mathematics, Pennsylvania State University

 

In many practical applications, the large scale linear algebraic systems resulting from the discretization of various partial differential equations (PDEs) are still often solved by traditional methods such as Gaussian elimination (and variants). Mathematically optimal methods, such as multigrid methods, have been developed for decades, but they are still not widely used in practice. In this talk, I will report some recent advances in the development of optimal iterative methods that can be applied to various practical problems in a user-friendly fashion. Starting from some basic ideas and theories on multiscale methods such as multigrid and domain decomposition methods, I will give a description of a general framework known as the Fast Auxiliary Space Preconditioning (FASP) method and report some applications to various problems including Newtonian and non-Newtonian models, Maxwell equations, magnetohydrodynamics, and reservoir (porous media) simulations.


 

Contributed Talks

 


 

Stability and Error Estimates for an Equation Error Method

for Elliptic Equations

Mohammad Al-Jamal, Department of Mathematical Sciences, Michigan Technological University

 

There is a growing number of applications which call for estimating a coefficient (or parameter) in an elliptic boundary value problem from measurements of the forward solution. In this talk, I will present the equation error approach for estimating a spatially varying parameter in an elliptic PDE with a Neumann boundary condition. I will show stability and convergence results and derive error estimates. I will also present some numerical examples illustrating the method.


 

Self Similar Growth of Single Precipitate in

Inhomogeneous Elastic Medium

Amlan Barua, Department of Applied Mathematics, Illinois Institute of Technology

 

In this talk, results from linear analysis will be used to present a new formulation for the self-similar diffusional evolution of a single precipitate growing in the presence of elastic fields. The precipitate is surrounded by a matrix of infinite extent. The elasticity problem is isotropic but inhomogeneous. Numerical support for the formulation will be given by performing nonlinear simulations using boundary integral methods. The nonlinear method is spectrally accurate in space and second-order accurate in time.


 

A Superconvergent Local Discontinuous Galerkin Method

for Elliptic Problems

Mahboub Baccouch, Department of Mathematics, University of Nebraska at Omaha
Slimane Adjerid, Department of Mathematics, Virginia Tech

 

In this talk, we develop, analyze and test a superconvergent local discontinuous Galerkin (LDG) method for two-dimensional diffusion and convection-diffusion problems and investigate its convergence properties. Numerical computations suggest that the proposed method yields $O(h^{p+1})$ optimal $\mathcal{L}^2$ convergence rates and $O(h^{p+2})$ superconvergent solutions at Radau points. More precisely, a local error analysis reveals that the leading term of the LDG error for a $p$-degree discontinuous finite element solution is spanned by two $(p+1)$-degree right-Radau polynomials in the $x$ and $y$ directions. Thus, $p$-degree LDG solutions are superconvergent at right-Radau points obtained as a tensor product of the shifted roots of the $(p+1)$-degree right-Radau polynomial. For $p=1$, we discover that the first component of the solution's gradient is $O(h^3)$ superconvergent at the tensor product of the roots of the quadratic left-Radau polynomial in $x$ and the right-Radau polynomial in $y$, while the second component is superconvergent at the tensor product of the roots of the quadratic right-Radau polynomial in $x$ and the left-Radau polynomial in $y$. We use these results to construct simple, efficient, and asymptotically correct {\it a posteriori} error estimates and present several computational results to validate the theory.
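
Schematically (our shorthand for the statement above, not a formula from the talk), the local error expansion on an element with reference coordinates $(\xi,\eta)$ has the form
\[
  u - u_h = h^{p+1}\bigl(c_1\, R_{p+1}(\xi) + c_2\, R_{p+1}(\eta)\bigr) + O(h^{p+2}),
\]
where $R_{p+1}$ is the $(p+1)$-degree right-Radau polynomial; both leading terms vanish at the tensor product of its shifted roots, which is the source of the superconvergence at those points.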


 

Hybridizable Discontinuous Galerkin Methods for

Elliptic Problems

Fatih Celiker, Mathematics, Wayne State University

 

We introduce a family of discontinuous Galerkin methods called {\em hybridizable} discontinuous Galerkin (HDG) methods. The distinctive feature of the methods in this framework is that the only globally coupled degrees of freedom are those of an approximation of the solution defined only on the boundaries of the elements. Since the associated matrix is sparse, symmetric, and positive definite, these methods can be efficiently implemented. We begin by introducing these methods in the simple framework of the second order elliptic problem $\Delta u = f$. We then show how to generalize these methods to fourth order elliptic problems, in particular to the biharmonic equation $\Delta^2 u = f$. We rewrite the biharmonic problem as a first order system for separate unknowns $u$, $\nabla u$, $\Delta u$, and $\nabla\Delta u$, and then introduce the HDG method for which the only globally coupled degrees of freedom are those of the approximations to $u$ and $\Delta u$ on the faces of the elements. We show that a suitable choice of the numerical traces results in optimal convergence for all the unknowns except for the approximation to $\nabla\Delta u$, which converges with order $k+1/2$ when polynomials of degree at most $k$ are used. This is joint work with Bernardo Cockburn and Ke Shi at the University of Minnesota.
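
For concreteness, a first order system of the kind described above (our rendering, with $\mathbf{q}$, $w$, $\mathbf{p}$ as placeholder names for the auxiliary unknowns) reads
\[
  \mathbf{q} = \nabla u, \qquad w = \nabla\cdot\mathbf{q} = \Delta u, \qquad \mathbf{p} = \nabla w, \qquad \nabla\cdot\mathbf{p} = f,
\]
and the HDG discretization couples globally only the traces of $u$ and $w$ on the element faces.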


 

Efficient Spectral Methods for Systems of Coupled Elliptic

Equations with Applications to Cahn-Hilliard Type Equations

Feng Chen, Department of Mathematics, Purdue University

 

I will talk about our newly developed spectral method for systems of an arbitrary number of coupled elliptic equations in both two and three dimensions. The method is suboptimal in the sense that its complexity is $O(N^{d+1})$, with $N$ the cutoff in each direction and $d$ the dimension of the problem. It can be applied to solve highly nonlinear and high-order evolution equations in various situations, including the isotropic Cahn-Hilliard equation from microstructure evolution, the anisotropic Cahn-Hilliard equation from crystal growth, and some other Cahn-Hilliard type equations from PEM fuel cell modelling. Furthermore, it serves as a potential solver for phase-field-crystal equations, rotating Navier-Stokes equations, and Schrodinger equations.


 

Numerical Linear Algebra Challenges in

Very Large Scale Data Analysis

Jie Chen, Mathematics and Computer Science Division, Argonne National Laboratory
 

With the ever increasing computing power of supercomputers, computational sciences and engineering nowadays demand numerical solutions of problems at larger and larger scales. One important problem, which finds a wide range of applications in areas such as physical simulations and machine learning, is the generation of random data in a stochastic process from a prescribed covariance rule, and the inverse question of fitting the covariance rule given experimental data. This problem gives rise to a number of numerical linear algebra challenges, where one needs to deal with dense and irregularly structured covariance matrices of mega-, giga-, or even much larger sizes. In this talk, I will illustrate specific challenges encountered, including computing the square root of the matrix, estimating the diagonal, solving linear systems, and preconditioning the matrix. For some of these challenges, we have developed efficient and scalable methods that are capable of dealing with matrices of size at least in the mega-scale on a single desktop machine. As a natural extension, high performance codes to run on supercomputers are being developed; however, there remain other unsolved challenging tasks along this line, which call for innovative algorithms as well as theory.
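
As a small-scale, hypothetical illustration of the first of these tasks (the covariance rule, sizes, and code below are our own, not the speaker's): drawing samples with a prescribed covariance $K$ amounts to applying a square root of $K$, here a Cholesky factor, and it is exactly this dense factorization that becomes infeasible at mega-scale and beyond.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    x = np.linspace(0.0, 1.0, n)
    K = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.1)   # assumed exponential covariance rule
    L = np.linalg.cholesky(K + 1e-10 * np.eye(n))        # K = L L^T (small jitter for stability)
    samples = L @ rng.standard_normal((n, 2000))         # each column is a draw from N(0, K)
    print(np.abs(np.cov(samples) - K).max())             # max deviation of sample covariance from K

The challenge addressed in the talk is to carry out operations of this kind at scales where the dense factorization above is out of reach.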


 

Parallel Computation of the Mixed Volume

Tianran Chen, Tsung-Lin Lee, and Tien-Yien Li, Department of Mathematics, Michigan State University
 

Calculating the mixed cells, which produces the mixed volume as a by-product, is the vital step in solving systems of polynomial equations by polyhedral homotopy methods. Our original algorithm for this purpose, implemented in MixedVol-2.0, is highly serial. In this talk, we propose a reformulation of our algorithm, making it much more fine-grained and scalable. It can be readily adapted to both distributed and shared memory computing systems. Remarkably, very high speed-ups were achieved in our numerical results, and we are now able to compute mixed cells of polynomial systems of very large scale, such as the VortexAC6 system with mixed volume 27,298,952 and total degree $2^{30}$ (around 1 billion).
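
As a brief reminder of the quantity involved (standard background, not material from the talk): for two polynomials in two variables with Newton polytopes $P_1$ and $P_2$, the mixed volume is
\[
  \mathrm{MV}(P_1,P_2) = \mathrm{vol}(P_1+P_2) - \mathrm{vol}(P_1) - \mathrm{vol}(P_2),
\]
which by Bernstein's theorem bounds (and generically equals) the number of isolated solutions in $(\mathbb{C}^*)^2$; the mixed cells of a fine mixed subdivision decompose this volume and provide the start systems for the polyhedral homotopy paths.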


 

Towards Lightweight Projection Methods for Systems

with Multiple Right-hand Sides

Efstratios Gallopoulos, Department of Computer Engineering & Informatics, University of Patras, Greece

 


 

An Efficient and Stable Spectral Method for

Electromagnetic Scattering from a Layered

Periodic Structure

Ying He, Department of Mathematics, Purdue University


The scattering of acoustic and electromagnetic waves by periodic structures plays an important role in a wide range of problems of scientific and technological interest. This contribution focuses upon the stable and high--order numerical simulation of the interaction of time--harmonic electromagnetic waves incident upon a periodic doubly layered dielectric medium with a sharp, irregular interface. We describe a Boundary Perturbation Method for this problem which avoids not only the need for specialized quadrature rules but also the dense linear systems characteristic of Boundary Integral/Element Methods. Additionally, it is a provably stable algorithm, as opposed to other Boundary Perturbation approaches such as Bruno \& Reitich's ``Method of Field Expansions'' or Milder's ``Method of Operator Expansions.'' Our spectrally accurate approach is a natural extension of the ``Method of Transformed Field Expansions'' originally described by Nicholls \& Reitich (and later refined to other geometries by the authors) in the single--layer case.


 

A Finite Element Solver for the Kohn-Sham Equation with

a Mesh Redistribution Technique

Guanghui Hu, Department of Mathematics, Michigan State University


A finite element method is presented with an adaptive mesh redistribution technique for solving the Kohn-Sham equation. The solver consists of two independent iterations. The first one is a self-consistent field (SCF) iteration which generates the self-consistent electron density, while the second one is an iteration which optimizes the distribution of mesh grids in terms of the self-consistent electron density. In the SCF iteration, the Kohn-Sham equation is discretized by the standard finite element method. The electrostatic potential is obtained by solving the Poisson equation, with the algebraic multigrid (AMG) method as the Poisson solver. The local density approximation (LDA) is adopted to approximate the exchange-correlation potential. Both the all-electron potential and the evanescent core pseudopotential are considered for the external potential. To stabilize the SCF iteration, a linear mixing scheme is introduced for updating the electron density. After the SCF iteration, the distribution of mesh grids is optimized by an adaptive technique based on harmonic mapping. A monitor function which depends on the gradient of the electron density is proposed to partially control the movement of mesh grids. To further improve the mesh quality, a smoothing strategy derived from a diffusive mechanism is presented. In the numerical experiments, it can be observed that important regions in the domain, such as the vicinity of a nucleus and the regions between atoms along chemical bonds, are resolved successfully with our mesh redistribution technique. The higher numerical accuracy and efficiency of our solver are also demonstrated.
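
For readers outside electronic structure, the SCF loop above solves, schematically and in atomic units (our notation, with occupations suppressed), the coupled equations
\[
  \Bigl(-\tfrac{1}{2}\Delta + V_{\mathrm{ext}} + V_{H}[\rho] + V_{xc}[\rho]\Bigr)\psi_i = \varepsilon_i \psi_i,
  \qquad -\Delta V_{H} = 4\pi\rho,
  \qquad \rho = \sum_i |\psi_i|^2,
\]
iterating on $\rho$ (here with linear mixing) until self-consistency; the Poisson equation for the Hartree potential $V_H$ is the step handled by the AMG solver.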


 

Instant System Availability

Kai Huang, Department of Mathematics, Florida International University


In this talk, I will present our recent work on the instant availability A(t) of a repairable system through an integral equation. We will prove some properties of the instant system availability, and a numerical algorithm for computing A(t) is proposed. Examples show the high accuracy and efficiency of this algorithm. This is joint work with J. Mi.
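
One standard renewal-type formulation of this kind (stated here for orientation; the precise equation used in the talk may differ) is, for an up-time distribution $F$ and a repair-time distribution $G$,
\[
  A(t) = 1 - F(t) + \int_0^t A(t-x)\, d(F*G)(x),
\]
a Volterra integral equation of the second kind, which is the natural setting for a numerical algorithm of the sort mentioned above.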


 

Efficient Computation of Failure Probability

Jing Li, Department of Mathematics, Purdue University


Evaluation of the failure probability of a given system requires sampling of the system response and can be computationally expensive. It is therefore desirable to construct an accurate surrogate model for the system response and subsequently to sample the surrogate model. In this talk we demonstrate that the straightforward sampling of a surrogate model can lead to erroneous results, no matter how accurate the surrogate model is. We then propose a hybrid algorithm that combines the surrogate and sampling approaches and addresses the robustness problem described above. The resulting algorithm is significantly more efficient than the traditional sampling method, and is more accurate and robust than the straightforward surrogate model approach. Numerical examples will be presented.
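
A toy sketch of one way such a hybrid estimator can operate (the limit-state functions, the margin parameter gamma, and all numbers below are illustrative stand-ins, not the algorithm of the talk): the surrogate is trusted away from the limit state $g=0$, and the expensive model is re-evaluated only for samples whose surrogate value is ambiguous.

    import numpy as np

    def g_true(x):                 # expensive limit-state function (failure when g < 0)
        return 2.5 - np.abs(x).sum(axis=1)

    def g_surrogate(x):            # cheap, slightly inaccurate approximation of g_true
        return 2.5 - np.abs(x).sum(axis=1) + 0.05 * np.sin(10.0 * x[:, 0])

    rng = np.random.default_rng(0)
    x = rng.standard_normal((100000, 2))
    gs = g_surrogate(x)
    gamma = 0.2                    # re-evaluation margin around the limit state
    near = np.abs(gs) < gamma
    g = gs.copy()
    g[near] = g_true(x[near])      # correct only the ambiguous samples
    print("failure probability estimate:", np.mean(g < 0))

Only the samples inside the margin incur calls to the expensive model, which is what makes a hybrid approach cheaper than brute-force sampling while avoiding the bias of sampling the surrogate alone.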


 

The Chebyshev Integral Formulation for Performing

High Spatial Resolution Collocation Simulations

Benson Muite, Department of Mathematics, University of Michigan


We describe an efficient implementation of the Chebyshev integration formulation. The implementation allows for spatially accurate simulations with $O(n\log n)$ computational cost and for the accurate recovery of derivatives. High spatial resolution simulations using the implementation will be demonstrated, and factors limiting even higher resolution simulations will be discussed.
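
The formulation rests on the classical antiderivative recurrence for Chebyshev polynomials (standard background, stated here for context),
\[
  \int T_n(x)\,dx = \frac{T_{n+1}(x)}{2(n+1)} - \frac{T_{n-1}(x)}{2(n-1)} + C, \qquad n \ge 2,
\]
so that integration acts on Chebyshev coefficients as a banded, well-conditioned operator, in contrast to the dense and ill-conditioned differentiation matrices of the standard collocation formulation; the $O(n\log n)$ cost comes from moving between values and coefficients with fast transforms.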


 

An Algebraic Multigrid Preconditioner with

Guaranteed Condition Number

Artem Napov, Lawrence Berkeley National Lab
Yvan Notay, Université Libre de Bruxelles, Belgium

 

We present an algebraic multigrid preconditioner that has a guaranteed condition number for the class of nonsingular symmetric M-matrices with nonnegative row sum. Our main ingredient is a new algorithm for algebraic grid generation (coarsening) by aggregation of unknowns which ensures that the two-grid condition number remains below a prescribed (user defined) parameter. This is achieved by using a bound based on the worst aggregate's ``quality''. For a sensible choice of this parameter, it is shown that the recursive use of the two-grid procedure yields a condition number independent of the number of levels, provided that one uses a proper AMLI-cycle. On the other hand, the cost of the preconditioner is of optimal order if the mean aggregate size is large enough. This point is addressed analytically for the model Poisson problem and, further, numerically through a wide range of numerical experiments demonstrating the robustness of the method. The experiments are performed on low order discretizations of second order elliptic PDEs in two and three dimensions, with both structured and unstructured grids, some of them with local refinement and/or reentrant corners, and possible jumps or anisotropies in the PDE coefficients. This is joint work with Yvan Notay (Université Libre de Bruxelles, Belgium).


 

Polynomial Interpolation on Arbitrary Nodal Sets

in High Dimensions

Akil Narayan, Department of Mathematics, Purdue University


Motivated by the problem of stochastic collocation in high-dimensional spaces, we present a generalized algorithm for the `least interpolant' method of Carl de Boor and Amos Ron for polynomial interpolation on arbitrary data nodes in multiple dimensions. Our variation on the least interpolant produces an interpolant that can be tailored to various probability distributions. We present properties of this polynomial interpolant and empirically analyze the conditioning of the associated Vandermonde-like matrices. We also present a few examples illustrating the utility of the method for interpolation on arbitrary nodal sets in high-dimensional spaces.
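
A hypothetical illustration of the kind of empirical conditioning study mentioned above (this builds a plain total-degree Vandermonde matrix on random nodes; it is not the least-interpolant construction itself, and all sizes are arbitrary):

    import numpy as np
    from itertools import combinations_with_replacement

    def vandermonde(nodes, degree):
        # columns are all monomials of total degree <= degree evaluated at the nodes
        d = nodes.shape[1]
        cols = [np.ones(len(nodes))]
        for k in range(1, degree + 1):
            for idx in combinations_with_replacement(range(d), k):
                col = np.ones(len(nodes))
                for j in idx:
                    col = col * nodes[:, j]
                cols.append(col)
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    nodes = rng.uniform(-1.0, 1.0, size=(21, 2))   # 21 arbitrary nodes in 2D
    V = vandermonde(nodes, degree=5)               # 21 monomials of total degree <= 5 in 2D
    print(V.shape, np.linalg.cond(V))

On arbitrary nodes such matrices can be extremely ill-conditioned, which is part of what makes interpolation on arbitrary nodal sets delicate.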


 

Galerkin-type Approximation for Stochastic PDE of

Nonlinear Beam with Additive Noise

Henri Schurz, Department of Mathematics, Southern Illinois University

 

The Galerkin-type approximation of strong solutions of some quasi-linear stochastic PDEs with cubic nonlinearity and Q-regular random space-time perturbations is discussed. The SPDE relates to the nonlinear beam problem in engineering and physics. The existence of unique solutions with homogeneous boundary and square integrable initial conditions is shown by truncated Galerkin techniques. For our analysis, we exploit the techniques of Fourier series solutions, Lyapunov functions, and monotone operators. The related Fourier coefficients are computed by nonstandard numerical methods to control the qualitative behavior of the associated mean energy functional in a consistent manner. If time permits, we sketch how we prove consistency rates, convergence, and stability in terms of the total mean energy functional. This presentation is connected to joint work with Boris Belinskiy (UTC).


 

A Parallel Implementation of an ODE Solver for

a River Basin Model

Scott Small, Department of Mathematics, University of Iowa
Laurent O. Jay, Department of Mathematics, University of Iowa

 

Many factors influence the flow of rivers, including rainfall, properties of the soil, vegetation, and melting snow. We consider a model for the discharge of water from an entire river basin. The model consists of a large-scale system of ODEs defined on a sparse tree structure. We consider using standard Runge-Kutta methods. However, solving the entire system of ODEs with the same stepsize for all equations creates inefficiencies. As a remedy, we propose a parallel asynchronous integration approach to improve efficiency. Numerical results on basins in Iowa will be presented.


 

Nonnegative Sparse Blind Source Separation for NMR

Spectroscopy by Data Clustering, Model Reduction,

and $l_1$ Minimization

Yuanchang Sun, Department of Mathematics, University of California at Irvine


A novel blind source separation (BSS) approach is introduced to deal with the nonnegative and correlated signals arising in NMR spectroscopy of bio-fluids (urine and blood serum). The BSS problem arises when one attempts to recover a set of source signals from a set of mixture signals without knowing the mixing process. Various approaches have been developed to solve BSS problems relying on the assumption of statistical independence of the source signals. However, signal independence is not guaranteed in much real-world data, such as the NMR spectra of chemical compounds. To work with the nonnegative and correlated data, we replace statistical independence by a constraint which requires dominant interval(s) of each source signal over some of the other source signals in a hierarchical manner. This condition is applicable to many real-world signals such as NMR spectra of urine and blood serum for metabolic fingerprinting and disease diagnosis. Exploiting the hierarchically dominant intervals of the source signals, the method reduces the BSS problem into a series of sub-BSS problems by a combination of data clustering, linear programming, and successive elimination of variables. Then, in each sub-BSS problem, an $l_1$ minimization problem is formulated for recovering the source signals in a sparse transformed domain. The method is substantiated by examples from NMR spectroscopy data and is promising for separation and detection in complex chemical spectra without the expensive multi-dimensional NMR data. This is joint work with Prof. Jack Xin.
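
In schematic form (our notation; the talk's exact formulation may differ): the mixtures are modeled as $X = AS$ with a nonnegative mixing matrix $A$ and nonnegative source spectra $S$, and after the clustering/elimination stage each sub-problem recovers a source by solving something of the form
\[
  \min_{s \ge 0}\ \|\Psi s\|_1 \quad\text{subject to}\quad \|x - A s\| \le \epsilon,
\]
where $\Psi$ is a sparsifying transform of the spectra and $x$ a reduced mixture; nonnegativity and sparsity take the place of the usual independence assumption.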


 

Error Bound for Numerical Methods for the

Rudin-Osher-Fatemi Image Smoothing Model

Jingyue Wang, Math, University of Georgia


The Rudin-Osher-Fatemi (ROF) variational model has been extensively studied and used in image analysis, and several very successful numerical algorithms have been developed to compute its solutions. We study the convergence of the numerical solutions of various finite-difference approximations to this model. We bound the difference between the solution of the continuous ROF model and the numerical solutions. These bounds apply to ``typical'' images, i.e., images with edges or with fractal structure.
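
For reference, the continuous ROF model referred to here is the total-variation denoising problem
\[
  \min_{u}\ \int_\Omega |\nabla u|\,dx + \frac{\lambda}{2}\int_\Omega (u-f)^2\,dx,
\]
where $f$ is the observed image and $\lambda>0$ a fidelity parameter; the finite-difference schemes in question differ in how they discretize the total-variation term.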


 

Phase Field Modeling for Mesoscale Materials by

Differential Variational Inequality

Lei Wang, Argonne National Laboratory


The phase field method has recently emerged as a powerful computational approach to modeling defect and microstructure dynamics in mesoscale materials. We employ coupled Cahn-Hilliard and Allen-Cahn systems with a double-obstacle free energy potential to simulate the physics. A differential variational inequality (DVI) formulation is employed in order to guarantee that the discrete solutions satisfy the appropriate constraints. We reformulate the DVIs as a complementarity problem, which allows us to use parallel matrix-free solvers from PETSc and TAO. Several numerical test cases will be shown.
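
To indicate where the inequality constraints come from (a standard form with interface-width scalings suppressed; not necessarily the exact system in the talk): the double-obstacle potential $\psi(u)=\tfrac12(1-u^2)$ for $|u|\le 1$ (and $+\infty$ otherwise) turns, e.g., the Allen-Cahn gradient flow into the variational inequality
\[
  u(t)\in K=\{v: |v|\le 1\},\qquad
  \bigl(\partial_t u - \Delta u - u,\ v-u\bigr) \ \ge\ 0 \quad \text{for all } v\in K,
\]
whose discretization is naturally posed as a DVI / complementarity problem.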

 


 

A Superfast and Stable Solver for Toeplitz Linear Systems

via Randomized Sampling

Yuanzhe Xi, Department of Mathematics, Purdue University
Jianlin Xia, Department of Mathematics, Purdue University


Toeplitz linear systems are widely used in scientific computing and engineering. Many stable and fast Toeplitz solvers (roughly $O(n^2)$ cost) have been developed, and there are also many superfast (roughly $O(n)$ cost) solvers which, however, are usually not stable. Here, we propose a new stable and superfast Toeplitz solver. Using a displacement structure, a Toeplitz matrix is transformed into a Cauchy-like matrix by FFT. This Cauchy-like matrix has a low-rank property: its off-diagonal blocks have small numerical ranks. By exploiting this property, the Cauchy-like matrix can be further approximated by a rank structured form called a hierarchically semiseparable (HSS) matrix. The HSS Cauchy-like system can be quickly solved in $O(n)$ complexity with $O(n)$ storage. In order to construct such an HSS approximation quickly, we use randomized sampling techniques together with fast Toeplitz-vector multiplications. In this way, we only need to compress some much smaller matrices after the multiplications of the Cauchy-like matrix with some Gaussian random matrices. Further structured operations are also used during the HSS matrix construction and factorization. All the steps are conducted quickly and stably. Numerical results demonstrate the efficiency of our solver and show that, when $n=2\times 10^4$, it is already about $10$ times faster than an existing stable and superfast structured solver by Chandrasekaran, Gu, Xia, et al. [SIAM J. Matrix Anal. Appl., 2007].
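
A hypothetical small-scale illustration of the randomized-sampling step (our code; in the actual solver the products with random vectors come from FFT-based Toeplitz multiplication rather than an explicit matrix): the column range of a numerically low-rank off-diagonal block is captured by multiplying it with a thin Gaussian matrix and orthogonalizing.

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 2000, 12
    A = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # a block with numerical rank r
    A += 1e-10 * rng.standard_normal((n, n))                        # small full-rank noise
    Omega = rng.standard_normal((n, r + 10))                        # r plus oversampling random vectors
    Q, _ = np.linalg.qr(A @ Omega)                                  # orthonormal basis for the sampled range
    A_approx = Q @ (Q.T @ A)                                        # low-rank approximation of the block
    print(np.linalg.norm(A - A_approx) / np.linalg.norm(A))         # small relative error

Only products of the block with a few random vectors are needed, which is what allows the HSS generators to be compressed from much smaller matrices.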


 

Fast Finite Element Solver Development for

a Nonlocal Dielectric Continuum Model

Dexuan Xie, Department of Mathematical Sciences, University of Wisconsin-Milwaukee


The nonlocal continuum dielectric model is an important extension of the classical Poisson dielectric model, but it is much more expensive to solve numerically. In this talk, I will introduce one commonly-used nonlocal dielectric model and demonstrate its great promise in the calculation of free energies with a much higher accuracy than the Poisson model. I will then report our recent work on the development of fast finite element solvers for this model. This project is joint work with Prof. Ridgeway Scott at the University of Chicago, and is partially supported by NSF grant \#DMS-0921004.

 


WENO Divergence-Free Reconstruction-Based Finite

Volume Scheme for Solving Ideal MHD Equations

on Triangular Meshes

Zhiliang Xu, Applied and Computational Mathematics and Statistics Department, University of Notre Dame


In this talk, I will present our recent work on developing third-order accurate finite volume schemes for solving the MHD equations on triangular meshes. We advance the magnetic field with a constrained transport scheme to preserve the divergence-free condition of the magnetic field. A high order divergence-free reconstruction method is proposed for the magnetic field that uses the cell edge values. This reconstruction, as well as the reconstruction for the flow variables, is based on the WENO finite volume method. Numerical examples are given to demonstrate the efficacy of the new schemes.
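
The constrained transport idea referred to above (standard background, not a formula from the talk) rests on the induction equation
\[
  \partial_t \mathbf{B} + \nabla\times\mathbf{E} = 0, \qquad \mathbf{E} = -\mathbf{u}\times\mathbf{B},
\]
updating edge- or face-averaged magnetic field components with a discrete curl of the electric field, so that a discrete divergence of $\mathbf{B}$ is preserved exactly at every time step.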


 

Fast Fourier--Jacobi Methods for the Fokker--Planck

Equation of  FENE Dumbbell Fluids

Haijun Yu, Department of Mathematics, Purdue University


The FENE dumbbell model is one of the simplest mathematical models that predict basic properties of non-Newtonian fluids. However, the dynamics of FENE dumbbell fluids are described by a high-dimensional Fokker--Planck equation, which is very expensive to simulate. Most of the existing numerical algorithms involve factorization of a non-sparse matrix and thus are not suitable for discretizations with a large number of degrees of freedom. In this talk, we will present a fast spectral Galerkin method using real Fourier series and Jacobi polynomials as bases. This new algorithm has several virtues: 1. The Galerkin approximation is based on a properly weighted weak formulation, in which the numerical moments have spectral accuracy; 2. The numerical approximation leads to a sparse (indeed banded) linear system, which can thus be solved with linear computational cost; 3. The numerical algorithm can be easily extended to solve the Fokker--Planck equation arising in non-homogeneous systems, or the Navier--Stokes Fokker--Planck coupled system.


 

Sequential Monte Carlo Sampling in Hidden

Markov Models of Nonlinear Dynamical Systems

Xiaoyan Zeng, Mathematics and Computer Science Division,  Argonne National Laboratory


We investigate the issue of which state functionals can have their uncertainty estimated efficiently in dynamical systems with uncertainty. Because of the high dimensionality and complexity of the problem, sequential Monte Carlo (SMC) methods are used. We prove that the variance of the SMC method is bounded linearly in the number of time steps when the proposal distribution is a truncated normal distribution. We also show that, for a moderately large number of steps, the error produced by the approximation of the dynamical system accumulates linearly, under the condition that the logarithm of the density function of the noise is Lipschitz continuous. This finding is significant because the uncertainty in many dynamical systems, in particular chemical engineering systems, can be assumed to have this nature.
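
A toy sequential Monte Carlo (particle filter) sketch for a one-dimensional hidden Markov model with a truncated-normal proposal, in the spirit of the setting above (the model, the truncation bounds, and all parameters are illustrative stand-ins, not the system analyzed in the talk):

    import numpy as np
    from scipy.stats import norm, truncnorm

    rng = np.random.default_rng(0)
    T, N, sig_x, sig_y = 40, 2000, 0.3, 0.2
    x_true, y = np.zeros(T), np.zeros(T)
    for t in range(1, T):                                  # simulate hidden states and observations
        x_true[t] = 0.9 * x_true[t - 1] + sig_x * rng.standard_normal()
        y[t] = x_true[t] + sig_y * rng.standard_normal()

    lo, hi = -3.0, 3.0                                     # truncation bounds of the proposal
    particles = np.zeros(N)
    for t in range(1, T):
        mean = 0.9 * particles
        a, b = (lo - mean) / sig_x, (hi - mean) / sig_x
        prop = truncnorm.rvs(a, b, loc=mean, scale=sig_x, random_state=rng)
        # importance weights: likelihood * transition density / proposal density
        w = norm.pdf(y[t], loc=prop, scale=sig_y) \
            * norm.pdf(prop, loc=mean, scale=sig_x) \
            / truncnorm.pdf(prop, a, b, loc=mean, scale=sig_x)
        w /= w.sum()
        particles = prop[rng.choice(N, size=N, p=w)]       # multinomial resampling
    print("final estimate:", particles.mean(), " truth:", x_true[-1])

The variance result quoted above concerns how the error of estimators built from such weighted particle systems grows with the number of time steps.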

 


 

A Lowest Order Divergence-free Finite Element

on Rectangular Grids

Shangyou Zhang, University of Delaware
Yunqing Huang, Xiangtan University


It is shown that the conforming $Q_{2,1;1,2}$-$Q_1'$ mixed element is stable, and provides optimal order of approximation for the Stokes equations on rectangular grids. Here $Q_{2,1;1,2}=Q_{2,1}\times Q_{1,2}$ and $Q_{2,1}$ denotes the space of continuous piecewise-polynomials of degree $2$ or less in the $x$ direction but of degree $1$ in the $y$ direction. $Q_1'$ is the space of discontinuous bilinear polynomials, with spurious modes filtered. To be precise, $Q_1'$ is the divergence of the discrete velocity space $Q_{2,1;1,2}$. Therefore, the resulting finite element solution for the velocity is divergence-free pointwise, when solving the Stokes equations.


 

A Piecewise Constant Enriched Continuous Galerkin

 Method for Problems with Discontinuous Solutions

Shun Zhang, Division of Applied Mathematics, Brown University


For problems with discontinuous solutions, the solutions are usually very smooth in most regions except at locations with shocks/contact discontinuities. We propose a new approximation space, a space of "continuous (which can be high-order/spectral) elements and piecewise constants", to approximate such solutions. The continuous part of the space can approximate the smooth part of the solution well, and the piecewise constant enrichment can be used to approximate the discontinuous phenomena. We hope the method can combine the good properties of high-order methods and discontinuous Galerkin methods.


 

Eigen-based High Order Basis for Spectral Elements

Xiaoning Zheng, Department of Mathematics, Purdue University
Steven Dong, Department of Mathematics, Purdue University


We present an efficient high-order expansion basis for the spectral element approach. This basis belongs to the category of modal bases, but it is not hierarchical. The interior modes are constructed by solving a small generalized eigenvalue problem, while the boundary modes are constructed based on such eigenfunctions in lower dimensions. We compare this expansion basis with the commonly-used Jacobi polynomial-based expansion basis, and demonstrate the considerably superior numerical efficiency of the new basis in terms of conditioning and the number of iterations to convergence for iterative solvers.


 

Contacts

Please contact the conference organizers for more information:

Jie Shen (Chair), Peijun Li, and Jianlin Xia at mwna@math.purdue.edu.