# Math department colloquium archive

This is an archive of some of the abstracts of the Math department colloquium.

## Schedule for 2019–2020

#### Fall 2019

##### September 10 – Aykut Satici, Boise State University, Mechanical Engineering

Title: Exploiting sum-of-squares programming for the analysis and design of robotic manipulators

Abstract: The main theme of this work is the use of a certain type of convex optimization to analyze and design robotic manipulators. On the analysis front, we provide a general framework to determine inner and outer approximations to the singularity-free workspace of robotic manipulators. A similar framework is utilized for optimal dimensional synthesis of robotic manipulators with respect to their kinematic and dynamic properties. This framework utilizes the sum-of-squares optimization technique, which is numerically implemented by semidefinite programming. In order to apply the sum-of-squares optimization technique, we convert the trigonometric functions in the kinematics of the manipulator to polynomial functions with an additional constraint. Sum-of-squares programming promises advantages: it can provide globally optimal results up to machine precision, and it scales better with the number of design variables than other methods that can obtain globally optimal solutions.
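
The trigonometric-to-polynomial conversion mentioned above can be made concrete: each angle is replaced by a point on the unit circle, which turns the kinematic functions into polynomials with one extra equality constraint (a standard substitution, sketched here with generic symbols rather than the speaker's manipulator models):

```latex
% Substitute polynomial variables for the trigonometric terms:
\cos\theta_i \mapsto c_i, \qquad \sin\theta_i \mapsto s_i,
\qquad \text{subject to } c_i^2 + s_i^2 = 1.
% A workspace condition g(\theta) > 0 then becomes the search for a
% polynomial certificate
g(c,s) \;=\; \sigma(c,s) \;+\; \lambda(c,s)\,\bigl(c^2 + s^2 - 1\bigr),
% with \sigma a sum of squares and \lambda a free polynomial multiplier,
% which is exactly the form a semidefinite program can search over.
```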

##### September 17 – Nicholas J. Horton, Amherst College

Title: Multivariate thinking and the introductory statistics and data science course: preparing students to make sense of a world of observational data

Abstract: We live in a world of ever-expanding “found” (or observational) data. To make decisions and disentangle complex relationships, students need a solid background in design and confounding. The revised Guidelines for Assessment and Instruction in Statistical Education (GAISE) College Report enunciated the importance of multivariate thinking as a way to move beyond bivariate thinking. But how do such learning outcomes compete with other aspects of statistics knowledge (e.g., inference and p-values) in introductory courses that are already overfull? In this talk I will offer some reflections and guidance about how we might move forward, with specific implications for introductory statistics and data science courses.

##### September 24 – Donna Calhoun, Boise State University, Mathematics

Title: The Serre-Green-Naghdi equations for modeling shallow, dispersive geophysical flows

Abstract: The depth-averaged shallow water wave equations are commonly used to model flows arising from natural hazards. The GeoClaw code, developed by D. George, R. J. LeVeque, M. J. Berger, K. Mandli and others, is one example of a depth-averaged flow solver now widely used for modeling tsunamis, overland flooding, debris flows, storm surges and so on. Generally, depth-averaged flow models show excellent large-scale agreement with observations and can thus be reliably used to predict whether tsunamis will reach distant coastlines and, if so, can give vital information about arrival times. However, for other types of flows, dispersive effects missing from the SWE model can play an important role in determining localized effects such as whether waves will overtop seawalls, or whether a landslide entering a lake will trigger tsunami-like behavior on the opposite shore. Because of the importance of these dispersive effects, several depth-averaged codes include dispersive corrections to the SWE. One set of equations commonly used to model these dispersive effects are the Serre-Green-Naghdi (SGN) equations.

We will present our work to include dispersive correction terms into the GeoClaw extension of ForestClaw, a parallel adaptive library for Cartesian grid methods. One formulation of the SGN equations stabilizes higher order derivatives by treating them implicitly. As a result, a key component of an SGN solver is a variable coefficient Poisson solver. We will discuss our current work in developing both an iterative solver, based on multi-grid preconditioned BiCG-STAB solver (Scott Aiton, Boise State), and a direct solver based on the Hierarchical-Poincaré-Steklov (HPS) method developed by Gillman and Martinsson (2014). We will describe the SGN equations and provide an overview of their derivation, and then show preliminary results on uniform Cartesian meshes. Comparisons with the SGN solver in Basilisk (S. Popinet) and BoussClaw (J. Kim et al) will also be shown to verify our model.
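
The variable-coefficient Poisson solve at the heart of the implicit SGN step can be illustrated in one dimension (a minimal finite-difference sketch, not the GeoClaw/ForestClaw implementation; the coefficient `a(x)` and right-hand side below are made up for testing):

```python
import numpy as np

def solve_poisson_1d(a, f, n):
    """Solve -(a(x) u')' = f(x) on (0,1) with u(0)=u(1)=0,
    using a second-order conservative finite-difference scheme."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    am = a(0.5 * (x[:-1] + x[1:]))   # coefficient at cell midpoints
    A = np.zeros((n - 1, n - 1))
    for i in range(n - 1):
        A[i, i] = (am[i] + am[i + 1]) / h**2
        if i > 0:
            A[i, i - 1] = -am[i] / h**2
        if i < n - 2:
            A[i, i + 1] = -am[i + 1] / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f(x[1:-1]))
    return x, u
```

A production solver would replace the dense direct solve with an iterative method (e.g. multigrid-preconditioned BiCG-STAB, as in the talk) or a direct hierarchical solver.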

##### October 1 – Katherine E. Stange, University of Colorado, Boulder

Title: Cryptography in the face of quantum computers

Abstract: When quantum computers are engineered to scale, quantum algorithms will be able to break our current cryptographic standards. So what do we replace them with? I’ll discuss two of the front-runners: ring-learning-with-errors (based on lattices in number fields) and isogeny-based cryptography (based on elliptic curves). I’ll describe the fundamental “hard problems” (without assuming much background in number theory) which we believe quantum (or classical) computers cannot solve efficiently. I’ll explain a little about why we might believe that, and what we can do if we do.

##### October 8 – James P. Keener, University of Utah

Title: The mathematics of life: making diffusion your friend

Abstract: Diffusion is the enemy of life. This is because diffusion is a ubiquitous feature of molecular motion that is constantly spreading things out, destroying molecular aggregates. However, all living organisms, whether single-celled or multicellular, have ways to use the reality of molecular diffusion to their advantage. That is, they expend energy to concentrate molecules and then use the fact that molecules move down their concentration gradient to do useful things.

In this talk, I will show some of the ways that cells use diffusion to their advantage, to signal, to form structures and aggregates, and to make measurements of length and size of populations. Among the examples I will describe are signalling by nerves, cell polarization, bacterial quorum sensing, and regulation of flagellar molecular motors. In this way, I hope to convince you that living organisms have made diffusion their friend, not their enemy.

##### October 15 – William A. Bogley, Oregon State University

Title: Combinatorial Group Theory Treasures from the Non-aspherical Realm

Abstract: In the 1950s it was shown that there are significant limits to what can be discovered about groups using algorithmic means alone. For families of groups described in terms of generators and relations, it can be difficult to work out answers to elementary questions such as which members of the family are finite, or which are trivial. I will discuss the role of planar and spherical diagrams along with the concept of “asphericity,” which provides a practical filter that can be used to isolate interesting cases. I will also report on recent work by Matthias Merzenich, who has employed computer-based recursive methods to construct previously unseen objects.

##### October 22 – Frank Giraldo, Naval Postgraduate School

Title: Efficient Time-Integration Strategies for Non-hydrostatic Atmospheric Models

Abstract: The Non-hydrostatic Unified Model of the Atmosphere (NUMA) is a compressible Navier-Stokes solver, based on high-order continuous and discontinuous Galerkin (CG/DG) methods, that sits inside the U.S. Navy’s NEPTUNE weather model. Therefore, it is imperative that NUMA runs as efficiently as possible without sacrificing accuracy and conservation. One of the last places to squeeze out more performance is the time-integration strategy used in the model. In this talk, I will review the various time-integration strategies currently available in NUMA: fully explicit methods with large stability regions, fully implicit methods, implicit-explicit (IMEX) methods, and multirate methods. With multirate methods, the idea is to partition the processes with different speeds (or stiffness) in some hierarchical way in order to use time steps commensurate with the wave speed of each process. However, gaining the full benefit of this approach requires embracing the idea at the code level, which means that the time-integrators and spatial discretization methods have to be fully aware of each other (complicating the code). I will discuss our recent results on time-integrators and possible preconditioning strategies for the implicit solvers; our experience shows that IMEX time-integrators are competitive with explicit time-integrators only when the Schur complement is used.
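
The implicit-explicit idea can be shown on a scalar model problem (a generic first-order IMEX Euler sketch, not NUMA's actual integrators; the stiffness `lam` and the forcing term are illustrative):

```python
import numpy as np

def imex_euler(lam, dt, t_end, u0=1.0):
    """First-order IMEX Euler for u' = -lam*u + cos(t):
    the stiff decay term is treated implicitly, the slow forcing
    term explicitly, so dt need not resolve the fast scale 1/lam."""
    u, t = u0, 0.0
    for _ in range(int(round(t_end / dt))):
        u = (u + dt * np.cos(t)) / (1.0 + dt * lam)  # implicit in -lam*u
        t += dt
    return u
```

With `lam = 1000` and `dt = 0.01` the fully explicit Euler step has amplification factor `|1 - dt*lam| = 9` and diverges, while the IMEX step remains stable at the same step size.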

##### October 29 – Peter Schmid, Imperial College London

Title: Koopman analysis and dynamic modes

Abstract: Koopman analysis is a mathematical technique that embeds nonlinear dynamical systems into a linear framework based on a sequence of observables of the state vector. Computing the proper embeddings that result in a closed linear system requires the extraction of the eigenfunctions of the Koopman operator from data. Dynamic modes approximate these eigenfunctions via a tailored data-matrix decomposition. The associated spectrum of this decomposition is given by a convex optimization problem that balances data-conformity with sparsity of the spectrum. The Koopman-dynamic-mode process will be discussed and illustrated on physical examples.
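
A minimal version of the data-matrix decomposition (plain exact DMD, sketched here without the sparsity-promoting spectrum selection mentioned in the abstract) recovers the spectrum of a linear system from snapshot data:

```python
import numpy as np

def dmd_eigs(X, r):
    """Exact DMD: estimate Koopman/DMD eigenvalues from a snapshot
    matrix X whose columns are successive states, using a rank-r SVD."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vh = np.linalg.svd(X1, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
    return np.linalg.eigvals(A_tilde)
```

For data generated by a linear map, the DMD eigenvalues coincide with the map's eigenvalues; for nonlinear data they approximate Koopman eigenvalues of the dominant observables.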

##### November 5 – Lynn Schreyer, Washington State University

Title: Developing a Deterministic Model for Refugee Migration (or Mammal Migration) based on Physics of Fluid Flow through Porous Media

Abstract: Here we discuss how we developed a model for the movement of refugees by adopting a model for fluid flow through porous media. The result is a model that can also be applied for mammal (or more specifically, ungulates – hooved mammals) migration. This model accounts for characteristics that have not yet been captured by current deterministic (or stochastic) differential equation models such as terrain characteristics and that people (and herds) prefer to stay at an “equilibrium density” – not too far nor too close to each other. To develop such a model, we started with a second-order parabolic differential equation, then modified the governing equation to a forward-backward parabolic differential equation, and currently we are using a fourth-order differential equation known as the Cahn-Hilliard equation. In this talk we go through the evolution of our model, and will discuss some of the trials and tribulations of obtaining our current model.

##### November 12 – Chris Herald, University of Nevada, Reno

Title: Reduced Khovanov homology and the Fukaya category of the pillowcase

Abstract: In this talk, I’ll discuss the pillowcase, which is a 2-sphere with 4 “corners,” and an algebraic construction known as its Fukaya category. This is related to curves in the surface, and polygons formed by the curves.

I will describe some recent work with Matthew Hogancamp, Matthew Hedden and Paul Kirk relating this Fukaya category to a knot invariant called reduced Khovanov homology. Given a knot or link K in R^3 or S^3, separated into two 2-tangles by a 2-sphere, I will describe an algebraic construction in the Fukaya category of the pillowcase, from which one can obtain the reduced Khovanov homology of K.

##### November 19 – Daniel Appelö, University of Colorado, Boulder

Title: WaveHoltz: Parallel and Scalable Solution of the Helmholtz Equation via Wave Equation Iteration

Abstract: We introduce a novel idea, the WaveHoltz iteration, for solving the Helmholtz equation, inspired by recent work on exact controllability (EC) methods. As in EC methods, our method makes use of time-domain methods for wave equations to design frequency-domain Helmholtz solvers, but unlike EC methods we do not require adjoint solves. We show that the WaveHoltz iteration we propose is symmetric and positive definite in the continuous setting. We also present numerical examples, using various discretization techniques, that show that our method can be used to solve problems with rather high wave numbers.

This is joint work with Fortino Garcia (University of Colorado, Boulder, USA) and Olof Runborg (Royal Institute of Technology, Stockholm, Sweden).

#### Spring 2020

##### February 25 – Matthew Ferguson, Boise State University, Physics

Title: The Living Genome: Quantifying Gene Expression and Regulation in Living Cells by Fluorescence Fluctuation Imaging

Abstract: Mechanisms of transcription and translation take the information encoded in the genome and make it “work” in cells, through the production of proteins defined by nucleic acid coding regions. This involves the coordination of many multi-subunit complexes about which most knowledge is inferred from ensemble and/or in vitro assays, giving a detailed but static picture. How these macromolecular machines coordinate in living cells remains unknown, but recent advances in the application of fluctuation analysis to time-resolved multi-color fluorescence imaging can now give an unprecedented level of dynamic in vivo information. My talk will describe recent results on a study of RNA transcription and splicing in human cancer cell lines. By two-color, in vivo RNA fluorescent labeling, we visualize the rise and fall of the intron and exon during transcription of a single gene in human cells. By cross-correlation analysis, we determine the speed of transcription, co-transcriptional splicing, and termination of the RNA transcript, discerning correlations between elongation, splicing, and cleavage and their relation to the chromatin environment.

##### March 3 – Elina Robeva, University of British Columbia

Title: Estimation of Totally Positive Densities

Abstract: Nonparametric density estimation is a challenging problem in theoretical statistics — in general a maximum likelihood estimate (MLE) does not even exist! Introducing shape constraints allows a path forward. In this talk I will discuss non-parametric density estimation under total positivity (i.e. log-supermodularity) and log-concavity. I will first show that though they possess very special structure, totally positive random variables are quite common in real world data and possess appealing mathematical properties. Given i.i.d. samples from a totally positive distribution, we prove that the maximum likelihood estimator exists with probability one assuming there are at least 3 samples. We characterize the domain of the MLE and show that it is in general larger than the convex hull of the observations. If the observations are 2-dimensional or binary, we show that the logarithm of the MLE is a tent function (i.e. a piecewise linear function) with “poles” at the observations, and we show that a certain convex program can find it. Instead of using a maximum likelihood estimator, we discuss the possibility of using kernel density estimation. This new estimator raises an abundance of theoretical questions.

##### March 10 – Natasha Dobrinen, University of Denver

Title: Ramsey Theory for Infinite Structures and Set Theoretic Methods

Abstract: Ramsey’s Theorem for the natural numbers says that given any positive integer k and any coloring of the k-sized sets of natural numbers into finitely many colors, there is an infinite subset on which all the k-sized subsets have the same color. Extensions of this theorem to infinite structures have posed extra challenges. In this talk, we will provide background on previous analogues of Ramsey’s theorem to infinite structures. Then we will introduce the method of “strong coding trees” which the speaker invented to develop Ramsey theory for Henson graphs, analogues of the Rado graph which omit k-cliques. The method is turning out to have broader implications for other types of homogeneous structures. Central to these results is the use of forcing methods to do unbounded searches for finite objects, building on previous work of Harrington.
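
A finite analogue of the coloring statement, the classical fact that R(3,3) = 6, can be checked exhaustively (this is standard background, not part of the talk's infinite setting):

```python
from itertools import combinations

def all_colorings_force_mono_triangle(n):
    """True iff every 2-coloring of the edges of K_n contains a
    monochromatic triangle (brute force over all 2^C(n,2) colorings)."""
    edges = list(combinations(range(n), 2))
    triangles = list(combinations(range(n), 3))
    for mask in range(1 << len(edges)):
        color = {e: (mask >> k) & 1 for k, e in enumerate(edges)}
        if not any(color[(a, b)] == color[(a, c)] == color[(b, c)]
                   for a, b, c in triangles):
            return False
    return True
```

On K_5 a triangle-free 2-coloring exists (color the pentagon one color and the pentagram the other), so 6 is the smallest such n.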

## Schedule for 2018–2019

**Doug Bullock**, Department of Mathematics, Boise State University

Title: The Boise State Calculus Project

Date: September 06 (Thursday), 2018

Time: 3:00–4:00 PM

Room: ILC 303

Abstract: I will describe the origins, motivation, implementation, results and impacts of a multi-year project to reshape the Calculus sequence at Boise State. The talk will be structured as a narrative, with research results presented at the points where they naturally emerged from the evolving project. Research results will fall into two categories:

- Institutional transformation and change management issues addressed during initial implementation and across subsequent project evolution.
- Natural experiments emerging from the project, measuring the project’s impact on success in Calculus, longitudinal performance in later courses, and retention at BSU or in major.

Restructuring of the traditional Calculus content and curriculum was a major part of the project. I will describe general themes of the curriculum changes, illustrated with specific examples. Although the project is effectively concluded, sustainability and future evolution will be discussed, time permitting.

**Joy Wright Whitenack**, Virginia Commonwealth University

Title: Coaching Middle School Mathematics Teachers: A Case for Emergent Professional Learning Communities

Date: October 18 (Thursday), 2018

Time: 3:00–4:00 PM

Room: MB 139

Abstract: In this presentation, I use mixed research methods to highlight the important role that coaches can play in supporting teacher learning, and how these supports, in turn, correlated with student learning as measured by state achievement assessments. I will report statistical results addressing how teachers’ beliefs and students’ learning were positively affected by their work with mathematics specialists. Against this backdrop, I will present findings from a case study of one specialist’s work with teachers, reconstructing the communities of practice that emerged as she and the teachers engaged in their daily work. I then use these findings to argue that particular practices sustained and contributed to a community that both supports and engenders teachers’ professional learning.

**Caroline Uhler**, Massachusetts Institute of Technology

Title: From Causal Inference to Gene Regulation

Date: January 31, 2019 (Thursday)

Time: 4:00–5:00 PM

Room: MB 139 (Remote broadcast)

Abstract: A recent breakthrough in genomics makes it possible to perform perturbation experiments at a very large scale. The availability of such data motivates the development of a causal inference framework that is based on observational and interventional data. We first characterize the causal relationships that are identifiable from interventional data. In particular, we show that imperfect interventions, which only modify (i.e., without necessarily eliminating) the dependencies between targeted variables and their causes, provide the same causal information as perfect interventions, despite being less invasive. Second, we present the first provably consistent algorithm for learning a causal network from a mix of observational and interventional data. This requires us to develop new results in geometric combinatorics. In particular, we introduce DAG associahedra, a family of polytopes that extend the prominent graph associahedra to the directed setting. We end by discussing applications of this causal inference framework to the estimation of gene regulatory networks.

**Carl Pomerance**, Dartmouth College

Title: Primality Testing: Then and Now

Date: February 20, 2019 (Wednesday)

Time: 3:00–4:00pm

Room: MPCB 108

Abstract: The task is simply stated. Given a large integer, decide if it is prime or composite. Gauss wrote of this algorithmic problem (and the twin task of factoring composites) in 1801: “the dignity of science itself seems to require that every possible means be explored for the solution of a problem so elegant and so celebrated.” Though progress with factoring composites has been steady and substantial, I think Gauss would be especially pleased with the enormous progress in primality testing, both in practice and in theory. In fact, one of the latest developments strangely and aptly employs a construct Gauss used to deal with ruler and compass constructions of regular polygons! This talk will present a survey of some of the principal ideas used in the prime recognition problem, from the 19th century work of Lucas to the 21st century work of Agrawal, Kayal, and Saxena, and beyond.

**Elisabeth Larsson**, Uppsala University

Title: Exploring the radial basis function partition of unity method

Date: March 04, 2019 (Monday)

Time: 3:00–4:00pm

Room: ILC 202

Abstract: Radial basis function (RBF) methods are a relatively new class in the field of numerical methods for partial differential equations (PDEs). Researchers working with more established methods often ask if there really is a need to develop a new method. We cannot give a definite answer to this question until we have tried and succeeded. In either case, the process of developing a method is a tool for gaining new knowledge, and this in itself is valuable. RBF methods provide a number of properties that make them attractive for PDEs, such as meshlessness, high-order convergence rates for smooth problems, and ease of implementation. In this talk, we will follow the process of developing the radial basis function partition of unity method (RBF-PUM) with the objective of securing these benefits. We demonstrate the success of the method through theory and experiments. Finally, we show recent and preliminary results from a project aimed at simulating the human respiratory system.

**Stephan Rosebrock**, Paedagogische Hochschule Karlsruhe

Title: Finite topological spaces and simplicial complexes

Date: March 07, 2019 (Thursday)

Time: 3:00–4:00pm

Room: MB 135

Abstract: Each partially ordered set X may be equipped with a topology induced by the partial order. It is possible to assign a finite simplicial complex K(X) to such an X, such that X and K(X) are weak homotopy equivalent. Conversely, one can assign to each finite simplicial complex K a partially ordered set X such that X and K are weak homotopy equivalent. The talk starts with an introduction to finite topological spaces as developed by McCord, Stong, Barmak and others. We analyze when finite spaces are homotopy equivalent and introduce a notion of expansions and collapses for finite spaces. In the second part of the talk it will be shown how finite topological spaces may be used to prove results about simplicial complexes. I will show how long-standing open questions in low-dimensional topology and homotopy theory can be approached via finite topological spaces. The Andrews-Curtis conjecture and the Whitehead conjecture are reformulated for finite spaces, and for both conjectures classes of examples are given that are not counterexamples.

**Kim Laine**, Microsoft Research

Title: Homomorphic Encryption today: Schemes, Implementations, Applications

Date: March 13, 2019 (Wednesday)

Time: 3:00–4:00pm

Room: MPCB 118

Abstract: Homomorphic encryption is a powerful cryptographic technique that allows computation to be done directly on encrypted data. Since the first fully homomorphic encryption scheme was invented in 2009, the field has come a long way in terms of theory, implementations, and applications. Today there are multiple open-source implementations of fully homomorphic encryption available, including Microsoft SEAL, PALISADE by NJIT/Duality Cloud, and HElib by IBM Research. In this talk I will give a thorough overview of the state of homomorphic encryption: the audience will learn what homomorphic encryption is, why it is interesting, what works well, and what the main challenges are today.
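
The idea of computing directly on encrypted data can be made concrete with a toy example (a classical Paillier scheme, which is only additively homomorphic and uses deliberately insecure parameters here; this is not one of the lattice-based FHE schemes implemented in SEAL, PALISADE, or HElib):

```python
import math
import random

# Toy Paillier: adding plaintexts corresponds to multiplying ciphertexts.
p, q = 17, 19                      # insecure demo primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
g = n + 1
mu = pow(lam, -1, n)               # valid shortcut because g = n + 1

def encrypt(m, rng=random.Random(0)):
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be a unit mod n
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n
```

Multiplying two ciphertexts yields an encryption of the sum of the plaintexts, without ever decrypting; fully homomorphic schemes additionally support multiplication of plaintexts, which is what enables arbitrary computation on encrypted data.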

**Steven Bleiler**, Portland State University

Title: Probability, Stochasticity, and Repeated Games from the Quantum Viewpoint

Date: April 18, 2019 (Thursday)

Time: 3:00–4:00pm

Room: ILC 403

Abstract: The analysis of games played in the coming quantum computation environment is an exciting new area of research that spans the traditional areas of mathematical game theory, quantum mechanical physics and computer science. Fundamental to the study is the analysis of the possible advantage to players provided by the higher order of randomization in quantum mechanical systems as compared to that of classical systems. In this talk we will examine this phenomenon in both finite and infinite stage repeated games, first in the context of history dependence, where we’ll present a quantized version of the classical Parrondo effect, i.e. where two losing games are combined via randomization to form a winning one. This will be followed by a brief overview of current research on the notion of quantum probability and the proper quantization of certain classical state-dependent stochastic games, including the corresponding Markovian dynamics.

This talk is self-contained; no previous knowledge of quantum mechanics or of the game theory of history- or state-dependent games will be assumed on the part of the audience.

**David Gleich**, Purdue University

Title: Higher-order clustering of complex networks

Date: April 25, 2019 (Thursday)

Time: 5:00–6:00 PM

Room: MB 139 (Remote broadcast)

Abstract: Spectral clustering is a well-known way to partition a graph or network into clusters or communities with provable guarantees on the quality of the clusters. This guarantee is known as the Cheeger inequality and it holds for undirected graphs. We’ll discuss a new generalization of the Cheeger inequality to higher-order structures in networks including network motifs. This is easy to implement and seamlessly generalizes spectral clustering to directed, signed, and many other types of complex networks. In particular, our generalization allows us to re-use the large history of existing ideas in spectral clustering including local methods, overlapping methods, and relationships with kernel k-means. We will illustrate the types of clusters or communities found by our new method in biological, neuroscience, ecological, transportation, and social networks.
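
The classical spectral partition that the talk generalizes can be sketched in a few lines (a standard Fiedler-vector split on an undirected graph; the example graph below is made up):

```python
import numpy as np

def fiedler_partition(A):
    """Split an undirected graph (adjacency matrix A) by the sign of the
    Fiedler vector: the eigenvector for the second-smallest eigenvalue
    of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    _, V = np.linalg.eigh(L)
    return V[:, 1] >= 0
```

On a graph made of two dense groups joined by a sparse cut, the sign pattern of the Fiedler vector recovers the two groups, and the Cheeger inequality bounds how far this cut can be from optimal.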

## Schedule for 2017–2018

**Rama Mishra**, IISER Pune

Title: Some interesting spaces associated to polynomial knots

Date: May 8 (Tuesday), 2018

Time: 3:00–4:00pm

Room: MB 126

Abstract: In this talk we discuss the topology of three different kinds of spaces associated to polynomial knots of degree at most d, for d ≥ 2. We denote these spaces by O_d, P_d and Q_d. For d ≥ 3, we show that the spaces O_d and P_d are path connected and the space O_d has the same homotopy type as S^2. Considering the space P = ∪ O_d of all polynomial knots with the inductive limit topology, we prove that it too has the same homotopy type as S^2. For the space Q_d we show that if two polynomial knots are path equivalent in Q_d, then they are topologically equivalent. Furthermore, the number of path components of Q_d is a multiple of eight.

**Brian Harbourne**, University of Nebraska

Title: Rational amusements for a winter afternoon

Date: March 13 (Tuesday), 2018

Time: 3:00–4:00pm

Room: ILC 403

Abstract: In 1821 John Jackson published “Rational amusement for winter evenings, or, A collection of above 200 curious and interesting puzzles and paradoxes”. At least one of these puzzles has led to open problems in combinatorics about line arrangements, open problems which have recently become relevant to a growing body of work in commutative algebra and algebraic geometry. I will describe some of this work and its history and discuss how it relates to an old open problem in algebraic geometry, called the Bounded Negativity Conjecture, and to a newer problem in commutative algebra called the Containment Problem. No background in algebraic geometry, commutative algebra or combinatorics will be assumed.

**Yuanzhe Xi**, University of Minnesota

Title: Fast and stable algorithms for large-scale computation

Date: February 21 (Wednesday), 2018

Time: 3:00–4:00pm

Room: ILC 201

Abstract: Scientific computing and data analytics have become the third and fourth pillars of scientific discovery. Their success is tightly linked to a rapid increase in the size and complexity of problems and datasets of interest. In this talk, I will discuss our recent efforts in the development of novel numerical algorithms for tackling these challenges. In the first part, I will present a stochastic Lanczos algorithm for estimating the spectrum of Hermitian matrix pencils. The proposed algorithm only accesses the matrices through matrix-vector products and is suitable for large-scale computations. This algorithm is one of the key ingredients in the new breed of “spectrum slicing”-type eigensolvers for electronic structure calculations. In the second part, I will present our newly developed fast structured direct solvers for kernel systems and their applications in accelerating the learning process. By exploiting the intrinsic low-rank property associated with the coefficient matrix, these structured solvers could overcome the cubic solution cost and quadratic storage cost of standard dense direct solvers and provide a new framework for performing various matrix operations in linear complexity.
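
The "access the matrix only through matrix-vector products" idea can be illustrated with the Hutchinson trace estimator, a basic building block of stochastic spectrum methods (a generic sketch, not the speaker's Lanczos algorithm):

```python
import numpy as np

def hutchinson_trace(matvec, n, num_samples=2000, seed=0):
    """Estimate tr(A) using only products v -> A @ v, by averaging
    z^T A z over random Rademacher (+/-1) probe vectors z."""
    rng = np.random.default_rng(seed)
    est = 0.0
    for _ in range(num_samples):
        z = rng.choice([-1.0, 1.0], size=n)
        est += z @ matvec(z)
    return est / num_samples
```

Since E[z^T A z] = tr(A) for Rademacher probes, the estimate converges at the usual Monte Carlo rate, and the matrix is never formed or factored.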

**Jonah Reeger**, Air Force Institute of Technology

Title: Numerical quadrature over bounded smooth surfaces

Date: February 16 (Friday), 2018

Time: 3:00–4:00pm

Room: ILC 201

Abstract: This talk describes a high-order accurate method to calculate integrals over smooth surfaces with boundaries. Given data locations that are arbitrarily distributed over the surface, together with some functional description of the surface and its boundary, the algorithm produces matching quadrature weights. This extends our earlier methods for integrating over the surface of a sphere and over arbitrarily shaped smooth closed surfaces by now considering domain boundaries. The core approach consists of combining RBF-FD (radial basis function-generated finite difference) approximations for curved surface triangles, which together make up the full surface. A discussion of proposed applications is included.

**Michal Kopera**, UC Santa Cruz and Naval Postgraduate School

Title: Adaptive high-order continuous/discontinuous Galerkin model of the ocean with application to Greenland fjords

Date: February 13 (Tuesday), 2018

Time: 3:00–4:00pm

Room: ILC 403

Abstract: The interaction of the ocean with glaciers inside Greenland’s fjords is one of the key outstanding challenges in modeling studies of climate change. A several-fold increase in the mass discharge through marine-terminating glaciers has been observed; however, the underlying physical mechanisms are yet to be fully understood. The leading hypothesis is that those changes are driven mainly by increased incursions of warm seawater into the fjords. However, due to the orders-of-magnitude difference in spatial scales between the open ocean (~1000 km) and a fjord (<1 km), as well as complex bathymetry and coastlines, present-day ocean models are not able to resolve fine-scale processes in the fjords and at the ice/ocean interface.

In this talk, I will present a new computational approach to this problem using high-order continuous/discontinuous Galerkin methods on flexible, adaptive meshes, developed in the NUMO project (Nonhydrostatic Unified Model of the Ocean). To address the problem of icesheet/ocean interactions, circulation within the fjord, and exchanges with the open ocean, I have designed a non-hydrostatic ocean model based on the three-dimensional incompressible Navier-Stokes equations. An unstructured mesh is used to realistically represent the geometry of the fjord, while in the areas of particular importance (i.e., glacier front) the resolution is increased by local non-conforming mesh refinement. NUMO aims to mitigate the cost of non-hydrostatic ocean simulations by leveraging modern high-performance computing systems.

I will present model validation results, and discuss outstanding challenges in realistic simulations of the Sermilik Fjord. The long-term goal is to simulate all Greenland fjords and adjacent coastal ocean and couple this simulation to regional or global Earth System Models.

**Varun Shankar**, University of Utah

Title: A high-order meshfree framework for solving PDEs on irregular domains and surfaces

Date: February 9 (Friday), 2018

Time: 3:00–4:00pm

Room: ILC 201

Abstract: We present meshfree methods based on Radial Basis Function (RBF) interpolation for solving partial differential equations (PDEs) on irregular domains and surfaces; such domains are of great importance in mathematical models of biological processes. First, we present a generalized high-order RBF-Finite Difference (RBF-FD) method that exploits certain approximation properties of RBF interpolants to achieve significantly improved computational complexity, both in serial and in parallel. Like all RBF-FD methods, our method requires stabilization when applied to solving PDEs. Consequently, we present a robust and automatic hyperviscosity-based stabilization technique to rectify the spectra of RBF-FD differentiation matrices. The amount of hyperviscosity is determined quasi-analytically in two stages: first, we develop a novel mathematical model of spurious solution growth, and second, we use a simple 1D von Neumann analysis to analytically cancel out these spurious growth terms. The resulting expressions for hyperviscosity are a generalization of formulas from both RBF-FD and classical spectral methods. The resulting stabilized RBF-FD method serves as a high-order meshfree framework for solving PDEs on irregular domains. Finally, we present a powerful new RBF-FD technique that allows for the solution of PDEs on surfaces using scattered nodes and Cartesian coordinate systems. In all cases, our methods achieve O(N) complexity for N nodes.
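
A minimal sketch of the scattered-node RBF interpolation step that underlies RBF-FD stencils (an illustration only, not the speaker's implementation; the Gaussian kernel, the shape parameter value, and the 1D test function are assumptions):

```python
import numpy as np

# Scattered-node RBF interpolation in 1D: the local building block of RBF-FD.
rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0.0, 1.0, 20))             # scattered nodes
f = np.sin(2 * np.pi * x)                          # data to interpolate
eps = 5.0                                          # shape parameter (assumed)
phi = lambda r: np.exp(-(eps * r) ** 2)            # Gaussian radial kernel

A = phi(np.abs(x[:, None] - x[None, :]))           # symmetric interpolation matrix
# Least squares guards against the ill-conditioning typical of nearly flat kernels.
c = np.linalg.lstsq(A, f, rcond=None)[0]           # expansion coefficients

xe = np.linspace(0.05, 0.95, 50)                   # evaluation points
s = phi(np.abs(xe[:, None] - x[None, :])) @ c      # evaluate the interpolant
err = np.max(np.abs(s - np.sin(2 * np.pi * xe)))
```

RBF-FD differentiates such local interpolants to obtain sparse differentiation matrices; the hyperviscosity stabilization described in the abstract is not shown here.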

**Tomáš Tichý**, Technical University Ostrava

Title: Recent innovations in portfolio selection strategies

Date: November 9 (Thursday), 2017

Time: 3:00–4:00pm

Room: MB 124

Abstract: This work presents some recent innovations in portfolio strategies developed by the authors. It is well known that the returns of financial assets generally do not follow the Gaussian law, which also implies that the Pearson measure of linear correlation is not suitable to correctly describe dependencies among random variables. We first focus on the possible use of different correlation measures in portfolio problems. In particular, we characterize positive semidefinite correlation measures consistent with the choices of risk-averse investors. Moreover, we propose a new approach to the portfolio selection problem, which optimizes the correlation between the portfolio and one or two market benchmarks. We also discuss why one should use correlation measures to reduce the dimensionality of large-scale portfolio problems. Next, a so-called stochastic alarm, which should allow us to predict market periods of systemic risk and price drawdowns, is utilized. Moreover, we extend the analysis by considering options written on stocks included in the portfolio and present so-called timing and hedging strategies. Finally, through an empirical analysis using US data, we show the impact of different correlation measures on portfolio selection problems and on dimensionality reduction problems.

**Aaron Bertram**, University of Utah

Title: Complex vs Tropical Algebraic Geometry

Date: October 12 (Thursday), 2017

Time: 3:00–4:00pm

Room: MB 124

Abstract: The tropical numbers form an arithmetic in which “addition” and “multiplication” are the maximum and (ordinary) addition, respectively. Just as a polynomial has the right number of complex roots, a tropical polynomial also has the right number of tropical roots, once “root” is correctly interpreted. There are a lot of parallels between complex and tropical geometry that are still poorly understood, though tropical geometry seems to be the right setting for some very deep conjectures at the interface between algebraic geometry and quantum physics. In this talk, we will explore some of these parallels and attempt to explain how tropical geometry figures in mirror symmetry.
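
To make the tropical operations concrete, here is a tiny illustrative sketch (not from the talk): a tropical polynomial is a maximum of affine terms, and a tropical root is a point where that maximum is attained by at least two terms at once.

```python
# Tropical (max-plus) arithmetic: "addition" is max, "multiplication" is ordinary +.
def trop_add(a, b):
    return max(a, b)

def trop_mul(a, b):
    return a + b

# A tropical polynomial p(x) = max_i (c_i + i*x); a tropical "root" is a point
# where the maximum is attained by at least two terms simultaneously.
def trop_poly(coeffs, x):
    return max(c + i * x for i, c in enumerate(coeffs))

# Example: p(x) = max(3, 1 + x) has its tropical root at x = 2, where both
# terms equal 3 (the graph of p has a corner there).
assert trop_poly([3, 1], 2) == 3 == trop_mul(1, 2)
```

The corner locus of the piecewise-linear graph plays the role that the zero set plays in classical algebraic geometry.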

**Bengt Fornberg**, University of Colorado

Title: Numerical Solutions of the Painlevé Equations

Date: September 15 (Friday), 2017

Time: 1:30–2:30pm

Room: ILC 401

Abstract: The six Painlevé equations P I to P VI have been the subject of extensive investigations for about a century. During the last few decades, their range of applications has increased to the point that their solutions are now ranked among the key special functions of applied mathematics. The solutions to these nonlinear equations typically feature extensive pole fields in the complex plane, posing challenges for both analytical and numerical methods. Unusually for important special functions, big gaps remained in our knowledge about them. This situation changed just over five years ago, with the development of a numerical Pole Field Solver (PFS), permitting fast and accurate numerical solutions even across dense pole fields. In the particularly important case of solutions that are real-valued along the real axis, the complete solution spaces of the P I, P II and P IV equations have now been exhaustively surveyed. Recently, the PFS has also been applied to the remaining three Painlevé equations, for which the solutions in general are no longer single-valued. These calculations show that a variety of phenomena can occur on their different Riemann sheets.

The present work has been carried out in collaboration with André Weideman (University of Stellenbosch), Jonah Reeger (US Air Force Institute of Technology), and Marco Fasondini (University of the Free State).

**Laurie Cavey**, Boise State University

Title: Supporting Prospective Secondary Mathematics Teacher Learning About Student Reasoning: Rationale, Goals, and Strategies

Date: September 14 (Thursday), 2017

Time: 1:00–2:00pm

Room: ILC 404

Abstract: Recent work associated with developing video-based online learning modules to support prospective secondary mathematics teacher (PSMT) learning about secondary students’ mathematical reasoning will be shared. The current project is a continuation of work conducted over the last two years and involves developing and piloting additional video-based online modules at Boise State, then revising and implementing the modules at nine other institutions. Each module incorporates short video clips of middle or high school students working on a mathematical task and is designed for use in undergraduate mathematics courses. Video-based modules make it possible to purposefully select episodes of students expressing their thinking (i.e., case studies) around important ideas, something that cannot be controlled in a typical field experience. But why incorporate the learning of secondary students’ mathematical ideas into math classes for future teachers? And what is the goal of doing so? These questions will be addressed using samples from the modules and an illustration of the module development process. Ultimately, the aim is to iteratively improve the design of the modules, and in turn PSMT learning, through a cycle of design experiments. Conjectures associated with PSMTs’ completion of the module series will be shared, along with preliminary results of analysis.

This work is an ongoing collaboration with the following individuals: Patrick Lowenthal (Educational Technology), Michele Carney (Curriculum & Instruction), Tatia Totorica (IDoTeach), and Jason Libberton (Regional Math Specialist).

**Jonathan Woody**, Mississippi State University

Title: A Statistical Analysis of Daily Snow Depth Trends in North America

Date: September 8 (Friday), 2017

Time: 1:30–2:30pm

Room: ILC 201

Abstract: Several attempts to assess regional snow depth trends across various portions of North America have been previously made. These studies estimated trends by applying various statistical methods to snow depths, new snowfalls, or their climatological proxies such as water equivalents. In most of these studies, inhomogeneities (changepoints) were not taken into account. This paper presents a detailed statistical methodology to estimate trends of a time series of daily snow depths from a given data set that accounts for changepoint features.

The methods are demonstrated on a scientifically accepted 1°×1° gridded data set covering parts of the United States and Canada. Average daily snow depth trends across all stations are increasing at 0.321 cm/century when changepoints are ignored; this figure drops to 0.1519 cm/century when changepoints are taken into account. While the average is increasing, more than half the grids report declining snow depth trends, both with and without changepoint effects.

## Schedule for 2016–2017

**Rama Mishra**, IISER Pune

Title: Real Rational Knots

Date: May 9 (Tuesday), 2017

Time: 3:00–4:00pm

Room: MB 139

Abstract: A real rational knot of degree d is an embedding of RP^1 into RP^3 defined by $[t,s]\to[p_0(t,s), p_1(t,s), p_2(t,s), p_3(t,s)]$, where the p_i(t,s) are homogeneous polynomials of the same degree d that do not vanish simultaneously. It is easy to see that every knot in RP^3 is isotopic to some real rational knot. Real rational knots fall into two groups: those that lie completely in R^3 and those that intersect the plane at infinity. We call the former affine knots and the latter projective knots. Real rational affine knots are the same as our classical knots; they are projective closures of maps from R to R^3 given by $t\to(r_1(t),r_2(t),r_3(t))$, where the r_i(t) are rational functions. This talk will present a technique to construct a real rational knot of reasonably low degree which is ambient isotopic to a given affine knot. We will generalize it to obtain real rational knots isotopic to any projective knot.

**Klaus Volpert**, Villanova

Title: On the Mathematics of Income Inequality

Date: April 20 (Thursday), 2017

Time: 3:00–4:00pm

Room: MB 135

Abstract: The Gini index, based on Lorenz curves of income distributions, has long been used to measure income inequality in societies. This single-valued index has the advantage of allowing comparisons among countries, and within one country over time. However, being a summary measure, it does not distinguish between intersecting Lorenz curves, and may not detect certain sociological and economic trends over time. We will discuss a new two-parameter model for the Lorenz curve, essentially the product of two Pareto distributions. This allows us to split the Gini index in two: one for the upper end and one for the lower end of the income ladder. This in turn allows us to observe phenomena in American history that are obscured when viewing the Gini index alone. The talk will be accessible to anyone with just a basic knowledge of calculus.
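
As a hedged numerical sketch of the Lorenz-curve/Gini relationship (the one-parameter curve L(p) = p^k below is an illustration, not the speaker's two-parameter model):

```python
# Gini index from a Lorenz curve L: G = 1 - 2 * integral_0^1 L(p) dp,
# computed here with the trapezoidal rule.
def gini(lorenz, n=100_000):
    h = 1.0 / n
    area = sum((lorenz(i * h) + lorenz((i + 1) * h)) * h / 2 for i in range(n))
    return 1.0 - 2.0 * area

# For the one-parameter curve L(p) = p^k the Gini index is (k - 1)/(k + 1):
# k = 1 is the line of perfect equality (G = 0), and G rises toward 1 as k grows.
g = gini(lambda p: p ** 2)
assert abs(g - 1.0 / 3.0) < 1e-6
```

A two-parameter Lorenz model, as in the talk, would allow the two tails of the income distribution to be summarized separately in the same way.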

**Joe Champion**, Boise State University

Title: Do We Really Need More Standards? Considering the New Standards for Preparing Teachers of Mathematics

Date: April 7 (Friday), 2017

Time: 4:00–5:00pm

Room: MB 107

Abstract: Somewhat surprisingly, the new Standards for Preparing Teachers of Mathematics (Association of Mathematics Teacher Educators, 2017) is the first publication to present a comprehensive vision for both (1) the knowledge, skills, and dispositions well-prepared beginning teachers should have and (2) how teacher education programs can ensure their students meet those standards. We will discuss how these new standards might inform courses, programs, and policy at Boise State University. Students, faculty, and stakeholders are all welcome to join the conversation.

**Nick Trefethen**, University of Oxford

Title: Cubature, Approximation, and Isotropy in the Hypercube

Date: March 16 (Thursday), 2017

Time: 3:00–4:00pm

Room: ILC 402

Abstract: The hypercube is the standard domain for computation in higher dimensions. We explore two respects in which the anisotropy of this domain has practical consequences. The first is the matter of axis-alignment in low-rank compression of multivariate functions. Rotating a function by a few degrees in two or more dimensions may change its numerical rank completely. The second concerns algorithms based on approximation by multivariate polynomials, an idea introduced by James Clerk Maxwell. Polynomials defined by the usual notion of total degree are isotropic, but in high dimensions, the hypercube is exponentially far from isotropic. Instead one should work with polynomials of a given “Euclidean degree.” The talk will include numerical illustrations, a theorem based on several complex variables, and a discussion of “Padua points”.
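
The gap between these notions of degree is easy to quantify. The following sketch (an illustration, not from the talk) counts bivariate monomials x^i y^j of degree at most n under the three standard notions:

```python
# Count monomials x^i y^j with "degree" at most n for three notions of degree:
# total degree i + j, Euclidean degree sqrt(i^2 + j^2), and max degree max(i, j).
def count(n, deg):
    return sum(1 for i in range(n + 1) for j in range(n + 1) if deg(i, j) <= n)

n = 10
total = count(n, lambda i, j: i + j)                    # (n+1)(n+2)/2 = 66
euclid = count(n, lambda i, j: (i * i + j * j) ** 0.5)  # quarter-disk lattice count
maxdeg = count(n, lambda i, j: max(i, j))               # (n+1)^2 = 121
assert total == 66 and maxdeg == 121 and total < euclid < maxdeg
```

Euclidean degree sits between the isotropic total-degree set and the anisotropic max-degree set; in d dimensions the ratio of the max-degree count to the total-degree count grows exponentially with d.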

**Lilian Calderón-Garcidueñas**, University of Montana (Joint colloquium with the School of Nursing)

Title: Air pollution and your brain: the bad, the ugly and the expected!

Date: March 2 (Thursday), 2017

Time: 3:00–3:50pm

Room: MB 135

Abstract: The emerging picture for Mexico City children with high exposures to both fine particulate matter (PM2.5) and ozone shows systemic inflammation, immunodysregulation at both systemic and brain levels, oxidative stress, neuroinflammation, small blood vessel pathology, and an intrathecal inflammatory process, along with the early neuropathological hallmarks of Alzheimer’s and Parkinson’s diseases. Exposed brains are briskly responding to their harmful environment and setting the bases for structural and volumetric changes, cognitive, olfactory, auditory and vestibular deficits and long term neurodegenerative consequences. Multidisciplinary research is needed to improve our understanding of the short- and long-term CNS impact of PM on children. Public health benefit can be achieved by integrating interventions that reduce fine PM levels and pediatric exposures and establishing preventative screening programs targeting pediatric populations that are most at risk. We have a 50-year window of opportunity between the pediatric brain changes associated with air pollution exposures and the time when the patient with mild cognitive impairment, dementia or tremor will show up at the neurologist’s door. Facing the current pediatric clinical and pathology evidence is imperative if we aim to identify and mitigate environmental factors that influence pediatric brain damage and AD/PD pathogenesis.

This Seminar will review neurological injuries caused by exposure to air pollution including structural brain abnormalities, neurocognitive deficits, early neurodegenerative changes similar to those seen in Parkinson’s and Alzheimer’s disease present in children and young adults with high exposures to PM2.5 and ozone. It will provide information about strategies for educating and counseling parents, expectant mothers, and exposed populations of all ages to reduce their exposure to air pollutants.

**Bob Palais**, Utah Valley University

Title: Math and DNA-based Medicine

Date: October 27 (Thursday), 2016

Time: 3:00–3:50pm

Room: MB 124

Abstract: DNA is the medium on which the operating system of the cells of all living organisms is written in the base 4 genetic code. Developing methods to analyze variations in DNA sequence, quantity, and activity, and their medical consequences, leads to interesting mathematical problems, whose solutions positively impact diagnostics and therapeutics. Some examples we will discuss include rapid accurate economical thermodynamic tests for Ebola and other pathogens, and identifying and quantifying mutations, using methods from calculus, linear algebra, probability, and differential equations; and mining big data for genes associated with aggressive tumors that suggest therapies to suppress them.

**Jaechoul Lee**, Boise State University

Title: Trend Estimation for Climatological Extremes

Date: September 14 (Wednesday), 2016

Time: 3:00–3:50pm

Room: MB 126

Abstract: Extreme climatological events have profound societal, ecological, and economic impacts. Studying trends in climatological extreme data is therefore crucial. This talk presents trend estimation methods for climatological extremes, focusing on trend estimation for monthly maximum and minimum temperature time series observed in the conterminous United States. Previous authors have suggested that minimum temperatures are warming faster than maximum temperatures in the United States; such an aspect can be rigorously investigated via the methods discussed in this study. Here, statistical models with extreme value and changepoint features are used to estimate trends and their standard errors. The results show that monthly maximum temperatures are often not changing greatly — perhaps surprisingly, there are many stations that show some cooling. In contrast, the minimum temperatures show significant warming. Our methods are also applied to extreme precipitation data products: MERRA, CPC, and USHCN. New findings will be discussed.

**Liljana Babinkostova**, Boise State University

Title: On games, latin squares and anomalous numbers

Date: September 7 (Wednesday), 2016

Time: 3:00–3:50pm

Room: ILC 303

Abstract: This talk will survey some of our recent results in the areas of games, combinatorics and algebra. In the area of selection principles, we survey some of our results on game-theoretic properties of selection principles related to weaker forms of the Menger and Rothberger properties, as well as of screenability and strong screenability. Some of these selection principles are, in appropriate spaces, characterized in terms of a corresponding game. But for other selection principles, even though the principles are equivalent in nice spaces like metric spaces, their game-theoretic versions are not equivalent.

In the area of combinatorics we survey some combinatorial problems involving latin squares and transversals that emerged from our research on hash functions. The question about existence of transversals in latin squares over finite groups is far from being resolved and is an area of active investigation. Among other results we present the notion of k-near transversal and give a closed formula for the number of (p^r−2)-transversals over GF(p^r).
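
For small cases, transversal counts can be verified by brute force; a toy sketch (illustrative only — the talk's GF(p^r) setting and k-near transversals are not reproduced here):

```python
from itertools import permutations

# Count transversals of the Cayley table of Z_n under addition: a transversal
# selects one cell per row and per column such that all n symbols are distinct.
def transversals(n):
    square = [[(i + j) % n for j in range(n)] for i in range(n)]
    return sum(1 for p in permutations(range(n))
               if len({square[i][p[i]] for i in range(n)}) == n)

assert transversals(3) == 3   # odd-order cyclic groups have transversals
assert transversals(2) == 0   # even-order cyclic groups have none
assert transversals(4) == 0
```

The factorial-time enumeration is only feasible for tiny n, which is exactly why the existence question for general latin squares remains an area of active investigation.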

In the area of algebra we survey some of our recent results regarding Type I elliptic Korselt numbers and their connection to anomalous primes, generalizing some previous results. We also found that under the Tijdeman-Zagier conjecture there are infinitely many such numbers.

## Schedule for 2015–2016

**Samuel Coskey**, Boise State University

Title: The complexity of classification problems

Date: Wednesday, 27 April 2016

Time: 3:00–3:50pm

Room: MB 124

Abstract: Much of mathematics is devoted to classifying the objects we study: groups or graphs up to isomorphism, metric spaces up to isometry, symmetries up to conjugacy, and so on. But of course, some classifications are harder than others. Borel complexity theory is an area of set theory that helps us rigorously make such comparisons. In this talk we will survey some recent applications of this theory to problems from group theory, model theory, graph theory, and functional analysis.

**Bruce Reznick**, University of Illinois

Title: The secret lives of polynomial identities

Date: Thursday, 21 April 2016

Time: 1:30–2:20pm

Room: RFH 312

Abstract: Polynomial identities can reflect deeper mathematical phenomena. In this talk, I will discuss some of the stories behind four identities (and their relatives). The stories involve algebra, analysis, number theory, combinatorics, geometry and numerical analysis. The identities, which don’t fit well in plain text, involve polynomials in two, three and four variables being taken to powers ranging from the third to the fourteenth. The earliest is due to Viète, and dates to the 1590s. Felix Klein and Srinivasa Ramanujan show up.

**Ellen Veomett**, Saint Mary’s College of California

Title: Coloring Geometrically Defined Graphs

Date: Thursday, 14 April 2016

Time: 4:00–4:50pm

Room: ILC 302

Abstract: This talk will take us through a journey of graph coloring. We’ll start with some basic definitions and the well-known four and five color theorems. We’ll also discuss the fascinating question of the chromatic number of the plane. Finally, we’ll talk about new results on box graphs, which are graphs defined using blocks and their intersections. This talk will be extremely accessible, while at the same time including some modern research topics.

**Joe Champion**, Boise State University

Title: Factors Affecting High School Calculus Completion Rates

Date: Friday, 8 April 2016

Time: 3:00–3:50pm

Room: ILC 403

Abstract: Numerous reports, including the recent National Study of Calculus I, have highlighted calculus as a major stumbling block for college students who aspire to earn a STEM degree, especially for those from historically underrepresented groups. This presentation will focus on the large and increasing role of high school calculus in preparing students for post-secondary mathematics-intensive degree programs. The presentation will include an original analysis of the High School Longitudinal Study (HSLS:09) data set, including proportional flow diagrams of course taking patterns and logistic regression analysis of the likelihood of students earning credit for calculus in high school. The statistical results will highlight differences in calculus completion associated with non-malleable student characteristics such as race, sex, and socioeconomic status (SES), as well as malleable student characteristics, such as knowledge of mathematics in 9th grade, the level of mathematics course they take in 9th grade, and self-efficacy. Implications for higher education will also be considered.

**Liang Peng**, Georgia State University

Title: Statistical Inference for the Lee-Carter Mortality Model

Date: Friday, 18 March 2016

Time: 3:00–3:50pm

Room: MB 139

Abstract: Longevity risk concerns pension funds and insurance companies as people live longer. For hedging longevity risk, modeling mortality rates plays an important role. A widely employed mortality model is the so-called Lee-Carter model. In this talk we will revisit the two-step inference procedure proposed by Lee and Carter (1992) and further propose a new unit root test for the Lee-Carter model. Simulation and data analyses will be presented too.
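
A minimal sketch of the classical two-step Lee-Carter fit on synthetic data (the dimensions, seed, and parameter values below are illustrative assumptions; the talk's new unit root test is not shown): step one estimates the age profile a_x by row means of the log mortality rates, and step two extracts b_x and k_t from the leading singular vectors of the centered matrix.

```python
import numpy as np

# Lee-Carter model: log m(x, t) = a_x + b_x * k_t + noise.
rng = np.random.default_rng(0)
ages, years = 5, 40
a = np.linspace(-6.0, -2.0, ages)                 # true age profile
b = np.full(ages, 1.0 / ages)                     # true age sensitivities (sum to 1)
k = np.linspace(10.0, -10.0, years)               # true declining mortality index
logm = a[:, None] + b[:, None] * k[None, :] + 0.01 * rng.standard_normal((ages, years))

# Step 1: a_x from row means.  Step 2: rank-1 SVD of the centered matrix.
a_hat = logm.mean(axis=1)
U, s, Vt = np.linalg.svd(logm - a_hat[:, None], full_matrices=False)
b_hat = U[:, 0] / U[:, 0].sum()                   # identifiability: sum(b) = 1
k_hat = s[0] * Vt[0] * U[:, 0].sum()
```

Forecasting then typically models the estimated k_t as a random walk with drift, which is where the unit root question discussed in the talk enters.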

**Varun Shankar**, University of Utah

Title: Radial Basis Function Methods for Meshfree Transport on the Sphere and Other Surfaces

Date: Thursday, 10 March 2016

Time: 3:00–3:50pm

Room: RFH 102-B

Abstract: Advection is a mechanism by which a substance is transported from one location to another by the bulk motion of a fluid. The transport of proteins and other molecules on cell membranes and other surfaces is of increasing interest in the biological and material sciences. In this talk, we present several novel numerical methods for simulating transport on surfaces. These methods are built on Radial Basis Function (RBF) interpolation, a powerful tool for scattered data interpolation on irregular domains and surfaces. As is typical of RBF methods, our methods work purely with Cartesian coordinates, avoiding any coordinate singularities associated with intrinsic coordinate systems on manifolds. However, while current RBF methods for transport require careful tuning for stability, our new methods are self-stabilizing. We present results showing high orders of spatial convergence for transport on the sphere, and demonstrate the ability of our methods to handle transport on other surfaces.

**Partha Sarathi Mukherjee**, Boise State University

Title: Image denoising using local pixel clustering

Date: Thursday, 3 March 2016

Time: 3:00–3:50pm

Room: ILC 302

Abstract: With the rapid growth of imaging applications in many disciplines, preserving the details of image objects while removing noise has become an important research area. Images often contain noise due to imperfections of the image acquisition techniques. Noise in images should be removed so that the details of the image objects, e.g., blood vessels or tumors in the human brain, are clearly seen and the subsequent image analyses are reliable. Most image denoising techniques in the literature are based on certain assumptions about the image intensity function which are often not reasonable when the image resolution is low. If there are many complicated edge structures, these denoising methods blur them. I will present an image denoising method based on a local pixel clustering framework. The challenging task of preserving image details, including complicated edge structures, is accomplished by performing local clustering and adaptive smoothing. Numerical studies show that it works well in many applications. I will not assume any background knowledge on imaging from the audience. I will provide a brief introduction to gray scale images, and the audience should be able to follow the big picture with little or no advanced background in statistics.

**Ludger Overbeck**, Gießen University

Title: Multivariate Markov Families of Copulas

Date: Thursday, 18 February 2016

Time: 3:00–3:50pm

Room: MB 139

Abstract: For the Markov property of a multivariate process, a necessary and sufficient condition on the multi-dimensional copula of the finite-dimensional distributions is given. This establishes that the Markov property is solely a property of the copula, i.e., of the dependence structure. This extends results of Darsow, Nguyen, and Olsen (1992) from dimension one to the multivariate case, where, in addition to the one-dimensional copulas, the spatial copula between the different dimensions has to be taken into account. Examples are also given.

**Hirotachi Abo**, University of Idaho

Title: Jordan canonical forms: from a commutative algebra point of view

Date: Thursday, 11 February 2016

Time: 3:00–3:50pm

Room: ILC 302

Abstract: Every square matrix is similar to the so-called Jordan canonical form, which is an upper triangular matrix of a particular form. The purpose of this talk is to provide a commutative algebra viewpoint on Jordan canonical forms. More precisely, I discuss how the conditions for a non-zero vector to be an eigenvector of a matrix can be expressed by homogeneous polynomials and show how the numeric data of the Jordan canonical form of the matrix are encoded in the ideal generated by these polynomials.
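
The eigenvector condition mentioned above can be made concrete in a small illustration (not the speaker's development): a nonzero v is an eigenvector of A exactly when Av is proportional to v, i.e. when all 2×2 minors of the matrix with rows v and Av vanish, and each such minor is a homogeneous quadratic polynomial in the entries of v.

```python
import numpy as np
from itertools import combinations

# Each minor v_i (Av)_j - v_j (Av)_i is a homogeneous quadratic in v;
# v is an eigenvector of A precisely when all of them vanish.
def eigen_minors(A, v):
    w = A @ v
    return [v[i] * w[j] - v[j] * w[i] for i, j in combinations(range(len(v)), 2)]

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
assert all(abs(m) < 1e-12 for m in eigen_minors(A, np.array([1.0, 0.0])))  # eigenvalue 2
assert all(abs(m) < 1e-12 for m in eigen_minors(A, np.array([1.0, 1.0])))  # eigenvalue 3
assert any(abs(m) > 1e-12 for m in eigen_minors(A, np.array([0.0, 1.0])))  # not an eigenvector
```

The ideal generated by these quadratics is the commutative-algebra object in which, per the abstract, the numeric data of the Jordan canonical form is encoded.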

**Alex Townsend**, MIT

Title: Continuous analogues of matrix factorizations

Date: Thursday, 4 February 2016

Time: 3:00–3:50pm

Room: ILC 302

Abstract: A fundamental idea in matrix linear algebra is the factorization of a matrix into simpler matrices, such as orthogonal, tridiagonal, and triangular. In this talk we extend this idea to a continuous setting, asking: “What are the continuous analogues of matrix factorizations?” The answer we develop involves functions of two variables, an iterative variant of Gaussian elimination, and sufficient conditions for convergence. This leads to a test for non-negative definite kernels, a continuous definition of a triangular quasimatrix (a matrix whose columns are functions), and a fresh perspective on a classic subject. This is joint work with Nick Trefethen.

**Donna Calhoun**, Boise State University

Title: Parallel, adaptive finite volume methods for mapped, multi-block domains

Date: Tuesday, 15 September 2015

Time: 3:00–3:50pm

Room: ILC 401

Abstract: Finite volume methods are widely used in applications for which the numerical discretization of the underlying model equations should exactly conserve mass, momentum, energy and other physical quantities whose evolution is governed by conservation laws. Discretizing finite volume schemes on smooth, logically Cartesian grids is straightforward, and such grids can have superior accuracy properties over more general unstructured meshes. However, uniform Cartesian meshes can be prohibitively expensive, especially in situations where the interesting dynamics occurs only in a fraction of the domain. In these cases, we wish to locally adapt the uniform grids to regions of interest.

In this talk, I will describe the ForestClaw project, a parallel adaptive code for finite volume methods in mapped, multiblock domains. Unlike traditional patch-based algorithms for mesh adaptation, ForestClaw uses adaptive quadtrees and can therefore take advantage of the regularity of the quadtree layout, making applications much simpler to develop. We can also take advantage of the high performance capabilities of our underlying quadtree code, making it possible to run on thousands of processors. I will describe the underlying parallel, multi-rate time stepping algorithm developed for this quadtree approach and provide examples from areas of natural hazards modeling, including the transport of volcanic ash, and tsunami modeling using the shallow water wave equations. Comparisons with an existing adaptive code, AMRClaw (LeVeque et al.), based on the traditional patch-based algorithm will also be shown.
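
The quadtree idea can be sketched in a few lines (a toy illustration, not ForestClaw's API or algorithm): recursively split any cell whose region an indicator function flags, up to a maximum refinement level.

```python
# Toy quadtree refinement on the unit square: a cell is (x, y, size, level);
# a cell is split into four children wherever the indicator flags its center.
def refine(cell, flag, max_level):
    x, y, size, level = cell
    if level >= max_level or not flag(x + size / 2, y + size / 2):
        return [cell]                      # keep as a leaf
    half = size / 2
    children = [(x, y, half, level + 1), (x + half, y, half, level + 1),
                (x, y + half, half, level + 1), (x + half, y + half, half, level + 1)]
    return [leaf for child in children for leaf in refine(child, flag, max_level)]

# Refine near the center of the domain, up to 3 levels.
near_center = lambda px, py: abs(px - 0.5) + abs(py - 0.5) < 0.6
leaves = refine((0.0, 0.0, 1.0, 0), near_center, 3)
assert abs(sum(s * s for _, _, s, _ in leaves) - 1.0) < 1e-12   # leaves tile the square
```

In a production code like ForestClaw each leaf would carry a grid patch of solution values, with refinement driven by error indicators rather than a fixed geometric flag.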

**Grady Wright**, Boise State University

Title: Numerically solving partial differential equations on surfaces using kernels

Date: Friday, 11 September 2015

Time: 12:00–12:50pm

Room: ILC 204

Abstract: Kernel approximation methods, such as radial basis functions, are advantageous for a wide range of applications that involve analyzing/synthesizing scattered data, or numerically solving partial differential equations (PDEs) on geometrically complex domains. Although originally considered for Euclidean domains, there has been much interest in extending kernel methods to problems defined on more general mathematical objects, including, among many others, surfaces in R^d (e.g., the unit two-sphere). In the case of d = 3, these domains arise in many applications from the geophysical and biological sciences, as well as computer graphics.

Initial work in this area focused on global approximations using kernels, and promising theoretical results have been proven. However, the computational cost of these methods can be quite high due to their global nature, which limits their use in large-scale applications. Hence, many current efforts focus on reducing this cost with local kernel methods such as radial basis function-generated finite differences.

In this talk, we survey some recent results of both theoretical and practical interest for kernel-based methods for numerically solving PDEs on surfaces and discuss applications to biological pattern formation.

**Sasha Wang**, Boise State University

Title: Identifying Similar Polygons: Comparing Pre-service Teachers’ Geometric Discourse with a Mathematician’s

Date: Thursday, 10 September 2015

Time: 3:00–3:50pm

Room: ILC 401

Abstract: In this presentation, I will share results on pre-service teachers’ and a mathematician’s ways of identifying similar triangles and hexagons, obtained by analyzing their geometric discourse. For example, my findings suggest that visual recognition is a common approach for both the mathematician and the pre-service teachers. However, when asked for justification, the mathematician’s and the pre-service teachers’ approaches to identifying similar polygons, and their ways of communicating their geometric thinking, diverged. I will also discuss the implications my findings have for the use of classroom discourse practices to enhance pre-service teachers’ communication and reasoning skills while learning geometric concepts such as similarity.

## Schedule for 2014–2015

**Stefano De Marchi**, University of Padua

Title: Multivariate Christoffel functions and hyperinterpolation

Date: Thursday, 19 March 2015

Time: 3:00–3:50pm

Room: ILC 404

Abstract: We obtain upper bounds for Lebesgue constants (uniform norms) of hyperinterpolation operators via estimates for (the reciprocal of) Christoffel functions, with different measures on the disk and ball, and on the square and cube. As an application, we show that the Lebesgue constant of total-degree polynomial interpolation at the Morrow-Patterson minimal cubature points in the square has an O(n^3) upper bound, explicitly given by the square root of a sextic polynomial. We will also present the extension of hyperinterpolation to kernel-based spaces.

This work was done in collaboration with M. Vianello, A. Sommariva, and G. Santin (University of Padova) and R. Schaback (University of Goettingen).

**Carsten Burstedde**, University of Bonn

Title: Recent Developments in Forest-of-octrees AMR

Date: Thursday, 12 March 2015

Time: 3:00–3:50pm

Room: ILC 404

Abstract: Forest-of-octrees AMR [adaptive mesh refinement] offers both geometric flexibility and parallel scalability and has been used in various finite element and finite volume codes for the numerical solution of partial differential equations. Low and high order discretizations alike are enabled by parallel node numbering algorithms that encapsulate the semantics of sharing node values between processors. More general applications, such as semi-Lagrangian and patch-based methods, require additional AMR functionalities. In this talk, we present algorithmic concepts essential for recently developed adaptive simulations.

**Michael Dorff**, Brigham Young University

Title: Analytic functions, harmonic functions, and minimal surfaces

Date: Monday, 23 February 2015

Time: 3:00–3:50pm

Room: ILC 303

Abstract: Complex-valued harmonic mappings can be regarded as generalizations of analytic functions and are related to minimal surfaces which are beautiful geometric shapes with intriguing properties. In this talk we will provide background material about these harmonic mappings, discuss the relationship between them and minimal surfaces, present some new results, and pose a few open problems.

**Jarosław Buczyński**, Institute of Mathematics of Polish Academy of Sciences

Title: Constructions of k-regular maps using finite local schemes

Date: Thursday, 6 November 2014

Time: 3:00–3:50pm

Room: ILC 304

Abstract: A continuous map R^m→R^N or C^m→C^N is said to be k-regular, if the image of any k distinct points is linearly independent. The problem we address in this talk is: given m and k, what is the minimal value of N such that a k-regular map as above exists. Topological methods provide lower bounds on such N. In this talk we present an upper bound on such N by explicitly constructing k-regular maps. We use parameter spaces of local Gorenstein schemes to construct such maps. (Joint work with Tadeusz Januszkiewicz, Joachim Jelisiejew, and Mateusz Michalek.)
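A classical example behind the upper-bound question (not from the talk itself) is the moment curve t ↦ (1, t, …, t^(k-1)), which is a k-regular map R → R^k: the images of any k distinct points form a Vandermonde matrix, which is nonsingular. A quick numerical spot check:

```python
import numpy as np

# The moment curve t -> (1, t, t^2, ..., t^(k-1)) is k-regular: images of
# k distinct points form a Vandermonde matrix with nonzero determinant.
def images_independent(points, k):
    V = np.vander(np.asarray(points, dtype=float), k, increasing=True)
    return np.linalg.matrix_rank(V) == k

k = 5
rng = np.random.default_rng(0)
distinct = rng.uniform(-10.0, 10.0, size=k)    # distinct almost surely
ok_distinct = images_independent(distinct, k)  # independent images
ok_repeated = images_independent([1.0, 1.0, 2.0, 3.0, 4.0], k)  # repeated point
```

Repeating a point duplicates a row of the Vandermonde matrix, so independence fails, which is why k-regularity is only demanded for distinct points.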

**Jennifer Halfpap Kacmarcik**, University of Montana

Title: What is a Singular Integral Operator

Date: Thursday, 30 October 2014

Time: 3:00–3:50pm

Room: ILC 304

Abstract: Recall that a function of two real variables is harmonic if its Laplacian is zero. Dirichlet's problem for the unit disc in R^2 asks: Given a continuous function f on the unit circle, does there exist a function u harmonic on the open unit disc and continuous on the closed disc such that u = f on the unit circle? The answer is yes, and one can obtain u from f by taking the convolution of f with the Poisson kernel P_r(t) = (1-r^2)/(1-2r cos(t)+r^2).

Motivated by this example, we study other operators T on function spaces for which Tf = f * K for some kernel K or, more generally, operators T for which Tf(x) = Integral[ f(y)K(x,y)dy ].

The goal is always to connect properties of the operator to properties of the kernel K, and we allow ourselves to consider kernels K that have singularities so that the resulting integrals are improper. In this talk, we discuss some of the basic theorems about such singular integral operators and discuss ongoing research in the area.
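The Poisson-kernel solution above is easy to check numerically. For boundary data f(t) = cos t the harmonic extension is u(r, θ) = r cos θ, so the convolution should return 0.5 at r = 0.5, θ = 0; this sketch verifies that with a periodic trapezoid rule.

```python
import numpy as np

# Numerical check of the Poisson-kernel solution of the Dirichlet problem:
# for boundary data f(t) = cos t, the harmonic extension is u(r, th) = r cos th.
def poisson_extend(f, r, theta, n=4096):
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    P = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(theta - t) + r**2)  # Poisson kernel
    # (1/2pi) * Integral[P_r(theta - t) f(t) dt] via the periodic trapezoid rule
    return float(np.mean(P * f(t)))

u = poisson_extend(np.cos, r=0.5, theta=0.0)   # exact value: 0.5
```

The periodic trapezoid rule converges spectrally for this analytic integrand, so the check is accurate to near machine precision.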

**Gregory G. Smith**, Queen’s University

Title: Nonnegative sections and sums of squares

Date: Thursday, 23 October 2014

Time: 3:00–3:50pm

Room: ILC 304

Abstract: A polynomial with real coefficients is nonnegative if it takes on only nonnegative values. For example, any sum of squares is obviously nonnegative. For a homogeneous polynomial with respect to the standard grading, Hilbert famously characterized when the converse holds, that is when every nonnegative homogeneous polynomial is a sum of squares. After reviewing some history of this problem, we will examine this converse in more general settings. This line of inquiry has unexpected connections to classical algebraic geometry and leads to new examples in which every nonnegative homogeneous polynomial is a sum of squares. This talk is based on joint work with Grigoriy Blekherman and Mauricio Velasco.
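The standard explicit witness for the failure of the converse (often cited in this area, though not mentioned in the abstract) is the Motzkin polynomial: nonnegative by the AM-GM inequality, yet not a sum of squares. A numerical spot check of its nonnegativity:

```python
import itertools

# The Motzkin polynomial m(x, y) = x^4 y^2 + x^2 y^4 - 3 x^2 y^2 + 1 is the
# classical example of a nonnegative polynomial that is NOT a sum of squares;
# nonnegativity follows from AM-GM applied to x^4 y^2, x^2 y^4, and 1.
def motzkin(x, y):
    return x**4 * y**2 + x**2 * y**4 - 3 * x**2 * y**2 + 1

grid = [i / 10.0 for i in range(-30, 31)]
m_min = min(motzkin(x, y) for x, y in itertools.product(grid, repeat=2))
# The minimum value 0 is attained at (x, y) = (+-1, +-1).
```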

**Jennifer Halfpap Kacmarcik**, University of Montana

Title: Sums of Squares Problems in Several Complex Variables

Date: Thursday, 25 September 2014

Time: 3:00–3:50pm

Room: ILC 301

Abstract: Hilbert’s Seventeenth Problem asks: Given a non-negative polynomial in several real variables, can it be written as a sum of squares of rational functions? Although this question was answered in the affirmative by Artin in the 1920’s, there is still considerable interest in variations on this question. In several complex variables, for example, researchers like Catlin and D’Angelo have considered what they call the Hermitian analogues of this question in which one studies non-negative polynomials in several complex variables and asks whether they can be written as a quotient of sums of squared moduli of holomorphic polynomials. In this talk, we discuss sums of squares problems in general, how they arise in the theory of functions of several complex variables, and how techniques from commutative algebra can be brought to bear on these questions.

## Schedule for 2013–2014

**Martino Lupini**, York University

Title: An invitation to sofic groups

Date: Tuesday, 22 April 2014

Time: 2:00–2:50pm

Room: ILC 204

Abstract: The class of countable discrete groups known as sofic groups has drawn in the last ten years the attention of an increasing number of mathematicians in different areas of mathematics. Many long-standing conjectures about countable discrete groups have been settled for sofic groups. Despite the amount of research on this subject, several fundamental questions remain open, such as: Is there any group which is not sofic? In my talk I will give an overview of the theory of sofic groups and its applications.

**Rama Mishra**, IISER Pune

Title: Some numerical knot invariants through polynomial knots

Date: Wednesday, 25 September 2013

Time: 12:00–12:50pm

Room: B 207

Abstract: Polynomial knots were introduced to represent knots in 3-space by simple polynomial equations. In this talk we will discuss how these equations and their degrees can be used to derive information about some important numerical knot invariants.

**Randall Holmes**, Boise State University

Title: The consistency problem for New Foundations

Date: Tuesday, 10 September 2013

Time: 3:00–3:50pm

Room: ILC 204

Abstract: I will explain the nature of the long-standing problem of the consistency of the set theory New Foundations, proposed by the philosopher W. V. O. Quine in 1937, both in the prior context of the development of set theory, its usefulness in mathematics, and the problem of the “paradoxes” of set theory, and in the posterior context of partial solutions to the consistency problem and related results. I do claim to have solved this problem (this is not generally agreed yet), but I am not going to talk about that on this occasion. The talk should be accessible to a general audience of mathematicians; I hope that a graduate student or mature undergraduate would get something out of it too.

**Zach Teitler**, Boise State University

Title: Recent advances in Waring rank and apolarity

Date: Thursday, 5 September 2013

Time: 3:00–3:50pm

Room: ILC 303

Abstract: Waring rank is a measure of complexity of polynomials related to sums-of-powers expressions and to a number of applications such as interpolation problems, blind source separation problems in signal processing, mixture models in statistics, and more. We review some recent advances having in common the use of apolarity, a sort of reversed version of differential equations in which one considers the set of differential equations that have a given function as a solution.

1. Apolarity is applied to describe criteria for a polynomial to be expressible as a sum of functions in separate sets of variables, possibly after a change of coordinates. This is related to separation-of-variables techniques in differential equations and to topology (criteria for a manifold to be decomposable as a connected sum). This is joint work with Buczynska, Buczynski, and Kleppe.

2. The set of sum-of-powers decompositions of a monomial is described. A corollary is a necessary and sufficient condition for a monomial to have a unique such decomposition, up to scaling the variables. This is joint work with Buczynska and Buczynski.

3. One generalization of monomials is the family of polynomials that completely factor as products of linear factors, geometrically defining a union of hyperplanes. Waring ranks of hyperplane arrangements are determined in the case of mirror arrangements of finite reflection groups satisfying a technical hypothesis which includes many cases of interest. This is joint work with Woo.

If time permits, ongoing work will be described, including geometric lower bounds for generalized Waring rank, apolarity of general hyperplane arrangements, and a number of other open questions.

## Schedule for 2012–2013

**Robert Floden**, Michigan State University

Title: Improving the Preparation of STEM Teachers: Improving Both Content Preparation and Teaching Practice

Date: Wednesday, 8 May 2013

Time: 4:00–5:30pm

Room: Student Union Building, Bergquist Lounge

Abstract: One key to improving STEM education in the US is strengthening programs for initial teacher preparation. Improvements are called for both in prospective teachers’ opportunities to deepen their content knowledge and in prospective teachers’ opportunities for gaining proficiency in classroom practice. Recent developments in policy and research have implications for changes in current practices in STEM departments and in schools of education. Among these developments are the recent IEA international comparative study of mathematics teacher preparation (TEDS-M), the widespread adoption of the Common Core State Standards for Mathematics (and associated forthcoming assessments), and the newly released Next Generation Standards for Science Education. Insights can also be gained from recent initiatives, such as Teachers for a New Era, that bring together STEM faculty with educators in higher education and in K-12 schools.

**Rodrigo Platte**, Arizona State University

Title: Algorithms for recovering smooth functions from equispaced data and the impossibility theorem

Date: Thursday, 2 May 2013

Time: 12:00–12:50pm

Room: ILC 404

Abstract: The recovery of a function from a finite set of its values is a common problem in scientific computing. It is required, for instance, in the reconstruction of surfaces from data collected by 3D scanners, and is one of the main underlying problems in the numerical solution of partial differential equations. This talk focuses on the special case of approximating functions from values sampled at equidistant points.

It is known that polynomial interpolants of smooth functions at equally spaced points do not necessarily converge, even if the function is analytic. Instead one may see wild oscillations near the endpoints, an effect known as the Runge phenomenon. Associated with this phenomenon is the exponential growth of the condition number of the approximation process. Several other methods have been proposed for recovering smooth functions from uniform data, such as polynomial least-squares, rational interpolation, and radial basis functions, to name but a few. It is now known that these methods cannot both converge at geometric (exponential) rates and remain stable for large data sets. In practice, however, some methods perform remarkably well. In this talk we compare many of these schemes using detailed numerical experiments. Moreover, we use a Hermite-type contour integral in the complex plane and potential theory to analyze how the convergence rates of different methods depend on the singularity locations of the function being recovered.
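The Runge phenomenon described above is easy to reproduce. This sketch interpolates Runge's classic example 1/(1+25x^2) at equispaced points using the barycentric Lagrange formula (for equispaced nodes the weights are w_j = (-1)^j C(n, j)) and shows the maximum error growing with the degree:

```python
import numpy as np
from math import comb

# Barycentric Lagrange interpolation at n+1 equispaced nodes on [-1, 1].
def max_equispaced_interp_error(f, n, x_eval):
    xj = np.linspace(-1.0, 1.0, n + 1)
    fj = f(xj)
    w = np.array([(-1) ** j * comb(n, j) for j in range(n + 1)], dtype=float)
    p = np.empty_like(x_eval)
    for i, x in enumerate(x_eval):
        d = x - xj
        hit = np.isclose(d, 0.0, atol=1e-14)
        if hit.any():
            p[i] = fj[np.argmax(hit)]         # x is (numerically) a node
        else:
            c = w / d
            p[i] = np.dot(c, fj) / np.sum(c)  # second barycentric form
    return float(np.max(np.abs(p - f(x_eval))))

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
xe = np.linspace(-1.0, 1.0, 1001)
err10 = max_equispaced_interp_error(runge, 10, xe)
err40 = max_equispaced_interp_error(runge, 40, xe)  # wild growth near +-1
```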

**Samiran Sinha**, Texas A&M University

Title: Conditional logistic regression analysis when a covariate is measured with errors

Date: Thursday, 14 March 2013

Time: 3:00–3:50pm

Room: ILC 302

Abstract: In many clinical and/or epidemiological studies the predictors/covariates are not measured accurately. If we ignore these measurement errors, we may end up with biased parameter estimates. In the first half of my talk I will discuss this issue and some possible remedies. In the second half, I will describe a method for handling this issue in conditional logistic regression analysis. Finally, I will discuss results from a simulation study and illustrate the method using data from the NIH-AARP Diet and Health Study.
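The bias from ignoring measurement error can be seen even in the simplest setting. This simulation (ordinary linear regression rather than the talk's conditional logistic setting; all parameters illustrative) shows the classic attenuation effect: when the noise variance equals the covariate variance, the naive slope estimate shrinks toward half the true value.

```python
import numpy as np

# Attenuation bias from covariate measurement error, linear-regression toy case.
# True slope is 1; the naive estimate converges to var(x)/(var(x)+var(u)) = 0.5.
rng = np.random.default_rng(42)
n = 100_000
x = rng.normal(size=n)                # true covariate, variance 1
y = 1.0 * x + rng.normal(size=n)      # outcome, true slope = 1
w = x + rng.normal(size=n)            # mismeasured covariate, noise variance 1

naive_slope = np.polyfit(w, y, 1)[0]  # biased toward 0.5
true_slope = np.polyfit(x, y, 1)[0]   # close to 1.0
```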

**Derrick Stolee**, University of Illinois

Title: Uniquely Kr-Saturated Graphs—Infinite Families Using Cayley Graphs

Date: Tuesday, 22 January 2013

Time: 3:00–3:50pm

Room: ILC 203

Abstract: Given a set of constraints on a graph, it is not always obvious that graphs exist with those properties. Frequently, algebraic constructions are used to build infinite families. We discuss a relatively new family of uniquely saturated graphs. A graph G is uniquely Kr-saturated if it contains no clique with r vertices and if, for all edges e in the complement, G + e has a unique clique with r vertices. Previously, few examples of uniquely Kr-saturated graphs were known, and little was known about their properties. After finding new examples using a new algorithm, we found a pattern for two new infinite families using Cayley graphs over Zn. I will highlight some features of our proof, which uses a new form of discharging argument. This is joint work with Stephen G. Hartke.
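The defining property is easy to verify by brute force on a tiny example (my own illustration, not one of the talk's families): the 5-cycle C5 is uniquely K3-saturated, since it is triangle-free and adding any missing edge creates exactly one triangle.

```python
import itertools

# List all triangles (K_3 subgraphs) of a graph given as an edge list.
def triangles(vertices, edges):
    e = set(frozenset(p) for p in edges)
    return [t for t in itertools.combinations(vertices, 3)
            if all(frozenset(p) in e for p in itertools.combinations(t, 2))]

V = range(5)
C5 = [(i, (i + 1) % 5) for i in V]                       # the 5-cycle
is_k3_free = len(triangles(V, C5)) == 0                  # no triangle yet
non_edges = [p for p in itertools.combinations(V, 2)
             if frozenset(p) not in set(map(frozenset, C5))]
# Adding any of the 5 chords must create exactly one triangle.
unique_after_add = all(len(triangles(V, C5 + [e])) == 1 for e in non_edges)
```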

**Holly Swisher**, Oregon State University

Title: Ramanujan type supercongruences

Date: Thursday, 13 December 2012

Time: 12:00–12:50pm

Room: ILC 304

Abstract: In the early 1900s Ramanujan listed several infinite series representations of 1/π. These were later used to calculate decimal expansions for π to great precision. Surprisingly, van Hamme discovered analogues of several of these formulas modulo primes and formulated 13 conjectures. Some of these have been proved and others remain open questions. There are a number of interesting topics that come into play, including elliptic curves and hypergeometric series.
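One of the series Ramanujan listed (from his 1914 paper on modular equations and approximations to π) illustrates why these formulas compute π so efficiently: each term adds roughly eight correct digits.

```python
from math import factorial, pi, sqrt

# Ramanujan's 1914 series:
#   1/pi = (2*sqrt(2)/9801) * sum_{k>=0} (4k)! (1103 + 26390 k) / ((k!)^4 396^(4k))
def ramanujan_one_over_pi(terms=3):
    s = sum(factorial(4 * k) * (1103 + 26390 * k)
            / (factorial(k) ** 4 * 396 ** (4 * k))
            for k in range(terms))
    return 2 * sqrt(2) / 9801 * s

approx = ramanujan_one_over_pi()   # already at double precision with 3 terms
```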

**Alex Woo**, University of Idaho

Title: Some local properties of Schubert varieties

Date: Thursday, 8 November 2012

Time: 1:30–2:20pm

Room: B101

Abstract: Nineteenth century geometers were interested in problems such as counting the number of lines which met four specified lines in space. Schubert varieties are geometric objects devised to help answer such questions. I will discuss some geometric properties and questions about which Schubert varieties satisfy these properties.

Algebra enters the picture in two important ways. First, because Schubert varieties can be defined by polynomial equations, these geometric properties can be formulated as properties of finitely generated commutative rings. Secondly, many answers to such questions are in terms of the combinatorics of the symmetric group or other Coxeter groups.

**Colleen Robles**, Texas A&M University

Title: Schubert varieties as variations of Hodge structure

Date: Thursday, 4 October 2012

Time: 1:30–2:20pm

Room: MG 120

Abstract: I will introduce variations of Hodge structure (VHS), and discuss the setting they form for a remarkable confluence of geometry, representation theory, and number theory. Especially distinguished by this geometric structure is the class of Schubert varieties, and we will see that representation theory, together with the system of PDE characterizing VHS, provides the machinery to describe (infinitesimally) all VHS in terms of the Schubert varieties.

**Laurie Cavey**, Boise State University

Title: Understanding Mathematical Definitions: What we are Learning from Students and Mathematicians

Date: Friday, 14 September 2012

Time: 12:00–12:50pm

Room: MG 120

Abstract: The purpose of this talk is to share connections between the results of two research studies focused on the ways in which individuals think about and work with mathematical definitions. In the first study, undergraduate students were interviewed as they worked in pairs to evaluate a list of statements about prime numbers. The analysis of these data focused on the types of evidence students used to decide whether or not a statement could be used as a definition for prime number. The results from this study revealed several categories of evidence used, including properties, examples, and the structure of the statement. In the second study, mathematicians were interviewed to gain insight into their perspectives on developing understanding of new definitions (either for themselves or for their students). The analysis of these data focused on the types of examples mathematicians referenced and their descriptions of how these examples were used. Findings indicate that mathematicians use a range of example types, serving a variety of purposes, to support their own as well as their students' understanding of definitions. Examining the results of both studies suggests particular strategies that might be employed to support student understanding of mathematical definitions.

## Schedule for 2011–2012

**Frank Stenger**, University of Utah

Title: Approximating indefinite convolutions

Date: Friday, 4 May 2012

Time: 1:40–2:30pm

Room: ILC 402

Abstract: The two integrals defined on (a,b) ⊆ R

(1) p(x) = Integral[f(x-t)g(t),a,x,dt], q(x) = Integral[f(t-x)g(t),x,b,dt]

arise in many areas of analysis and applications, such as control theory, Volterra integral equations, fractional integrals, integro–differential equations, etc. The speaker has obtained another expression for these integrals using the operator notations

(2) J^+g(x) = Integral[g(t),a,x,dt], J^-g(x) = Integral[g(t),x,b,dt]

and the “Laplace transform” formula

(3) F(s) = Integral[exp(-t/s)f(t),0,c,dt], Re(s)>0, c≥b-a

namely,

(4) p = F(J^+)g, q = F(J^-)g .

This talk presents some new identities made possible via (4), such as Laplace transform inversion, f = (J^+)^(-1) F(J^+)1, the Hilbert transform, Hg = (log J^- - log J^+)g, and formulas for solving Wiener-Hopf equations, f(x) + Integral[k(x-t)f(t),0,infty,dt] = g(x), x∈(0,∞).

These identities and the formulas (4) can be accurately approximated using methods for approximating the integrals (2). The talk also presents a new way of evaluating (4) and shows how the identities (4), together with methods for approximating indefinite integrals, enable approximation of the above processes and the solution of differential and convolution-type integral equations in one or more dimensions.
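The indefinite convolution (1) is easy to spot-check numerically (a plain trapezoid-rule sketch, not the speaker's operator method): for f(t) = exp(-t) and g(t) = 1 one has p(x) = 1 - exp(-x) in closed form.

```python
import numpy as np

# Trapezoid-rule check of p(x) = Integral[f(x-t) g(t), 0, x, dt]
# for f(t) = exp(-t), g(t) = 1, where p(x) = 1 - exp(-x) exactly.
def conv_p(f, g, x, n=2001):
    t = np.linspace(0.0, x, n)
    y = f(x - t) * g(t)
    h = t[1] - t[0]
    return float(h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2))

x = 1.7
p_approx = conv_p(lambda t: np.exp(-t), lambda t: np.ones_like(t), x)
p_exact = 1.0 - np.exp(-x)
```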

**Hirotachi Abo**, University of Idaho

Title: Counting the number of complete subgraphs in the Paley graph

Date: Thursday, 8 March 2012

Time: 1:40–2:30pm

Room: ILC 204

Abstract: Let p be a prime number congruent to 1 modulo 4 and let F_p be a finite field with p elements. The Paley graph with p vertices is defined as the graph having the vertex set F_p, where two vertices are adjacent if and only if their difference is a square in F_p \ {0}.

In this talk, I will discuss the problem of counting the number of complete subgraphs in the Paley graph. The goal of this talk is to show that determining the number of such subgraphs can be reduced to counting the number of points on the geometric object defined by a certain system of equations over F_p.

This talk will be accessible for students familiar with modular arithmetic (all other needed concepts will be explained in the talk).
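A small computational warm-up (my own illustration): for p = 13 the Paley graph is strongly regular with parameters (13, 6, 2, 3), so it contains exactly 13 · 6 · 2 / 6 = 26 triangles, which a brute-force count confirms.

```python
import itertools

# Paley graph on p = 13 vertices: i ~ j iff (i - j) is a nonzero square mod p.
p = 13
squares = {(x * x) % p for x in range(1, p)}   # nonzero quadratic residues
adj = lambda i, j: (i - j) % p in squares      # symmetric since -1 is a QR
triangle_count = sum(
    1 for a, b, c in itertools.combinations(range(p), 3)
    if adj(a, b) and adj(b, c) and adj(a, c))  # expected: 26
```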

**Jacek Kierzenka**, MathWorks

Title: Developing BVP solvers for MATLAB

Date: Monday, 5 March 2012

Time: 2:40–3:30pm

Room: ILC 401

Abstract: Problem Solving Environments (PSEs) have become important tools for numerical computing, but they often rely on codes that were developed for general scientific computing. In this talk I will describe a line of codes for solving Boundary Value Problems (BVPs) for ordinary differential equations. The codes were designed specifically for use within the MATLAB environment, and I will explain how working in that PSE affected the choice of algorithms and implementation decisions. I will also discuss some theoretical results about the underlying numerical methods and demonstrate how they were used to devise robust schemes for error estimation and mesh selection. Taking advantage of features offered by MATLAB resulted in computationally efficient codes that are strikingly easy to use and capable of solving a large class of problems.
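To give a feel for the problem class such solvers handle, here is a minimal fixed-mesh finite-difference sketch of a linear two-point BVP (my own illustration in Python; the MATLAB codes discussed in the talk use collocation with adaptive meshes and error control, not this scheme).

```python
import numpy as np

# Finite-difference solution of the two-point BVP
#   y'' = -y,  y(0) = 0,  y(pi/2) = 1,   exact solution y = sin(x).
n = 200
x = np.linspace(0.0, np.pi / 2, n)
h = x[1] - x[0]
A = np.zeros((n, n)); b = np.zeros(n)
A[0, 0] = 1.0; b[0] = 0.0            # left boundary condition
A[-1, -1] = 1.0; b[-1] = 1.0         # right boundary condition
for i in range(1, n - 1):            # centered difference for y'' + y = 0
    A[i, i - 1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 + 1.0
    A[i, i + 1] = 1.0 / h**2
y = np.linalg.solve(A, b)
max_err = float(np.max(np.abs(y - np.sin(x))))   # O(h^2) accuracy
```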

**Lawrence Washington**, University of Maryland

Title: Manipulating Encrypted Data

Date: Thursday, 1 March 2012

Time: 1:40–2:30pm

Room: ILC 301

Abstract: How can we determine the fair selling price of a commodity, where the buyers and sellers submit their bidding strategies in encrypted form? How can Alice find out if her friend Bob is in the neighborhood without revealing her location to him or to the central server? Solutions to these problems belong to the developing field of performing mathematical operations on encrypted data. I’ll discuss these examples along with possible future developments.

**Michael Starbird**, University of Texas

Title: Geometric Gems: Appreciating the Timeless Beauty of Mathematics

Date: Thursday, 15 September 2011

Time: 4:00–5:00pm

Room: SUB Lookout Room

Abstract: Plain plane (and solid) geometry contains some of the most beautiful proofs ever, some dating from ancient times and some created by living mathematicians. This talk will include some of my favorites, from an incredibly clever way to see that a plane intersects a cone in an ellipse to a method for computing areas under challenging curves developed by a living mathematician, Mamikon Mnatsakanian, and many more. Geometry provides many treats!

**Andres Caicedo**, Boise State University

Title: Sets and Games

Date: Thursday, 1 September 2011

Time: 1:40–2:30pm

Room: MG 120

Abstract: A game between two players is determined if one of the players has a winning strategy. The study of determined games of infinite length has provided one of the key directions of research in set theory in the last half century. Among other features, the study of determinacy is closely connected to the analysis of regularity properties of sets of reals (such as Lebesgue measurability) and to the theory of so-called large cardinals.

In this talk I present several recent results on the structure of models of set theory where determinacy plays a key role.

## Schedule for 2010–2011

**Christine Escher**, Oregon State University

Title: Classifying families of manifolds

Date: Thursday, 14 April 2011

Time: 2:40–3:30pm

Room: ILC 302

Abstract: A fundamental and often deep problem in mathematics is the classification of the objects of a given type up to some equivalence. The tools for the classification of this talk come from algebraic topology, but the interest and motivation come from differential geometry and theoretical physics. Certain objects, namely homogeneous spaces, became very important to geometers and physicists but their classification was not known. In this talk I will give a complete classification up to homeomorphism and diffeomorphism of a specific family of seven dimensional manifolds, the so-called generalized Einstein-Witten manifolds.

**Thomas Foster**, University of Cambridge

Title: Church-Oswald Models for Set Theory

Date: Thursday, 7 April 2011

Time: 2:40–3:30pm

Room: MG 108

Abstract: Church’s back-burner interest in set theory with a universal set resulted in a late paper in 1974. His motivation seems to have been to make the point that the universe (unlike the Russell class, whose nonexistence can be proved in first-order logic with no set-theoretic assumptions) is not a paradoxical object.

Church’s construction is simple (it was discovered at about the same time by Oswald—hence the title), elegant and effective, but it has never attracted much attention (three Ph.D. theses in 57 years), so we don’t yet really know what limits there are on what can be done with it.

This will be an introductory talk intended for an audience that knows its way around ZF.

**Uwe Harlander**, Brandenburg University of Technology Cottbus

Title: The differentially heated rotating annulus as a laboratory analogue of the atmospheric circulation

Date: Thursday, 17 March 2011

Time: 2:40–3:30pm

Room: MG 108

Abstract: The priority program “MetStröm” is an initiative that brings together engineers, mathematicians, and meteorologists to unify (theoretical and numerical) concepts of technical and meteorological fluid dynamics. The MetStröm projects fall under one of the following three topics: 1. Large-scale dynamics, 2. Turbulence/LES, and 3. Multiphase flows. Each of these topics has at least one reference experiment that gathers benchmark data for numerical modelling. The reference experiment for the topic “large-scale dynamics” is the thermally driven rotating annulus.

The thermally driven rotating annulus can be seen as a simple laboratory experiment of atmospheric baroclinic instability. This instability generates a highly complex and nonlinear flow that shows many similarities with irregular atmospheric flows. Besides our aim of providing benchmark data for MetStröm, several not well understood wave phenomena can be investigated by conducting the experiment: for example, wave instability, chaotic behaviour, wave-vortex interactions, and multiple-scale flows, to name a few.

In my talk I will give an overview of the project and address issues of data assessment and data post-processing that are important for present and future work.

**Lyudmyla Barannyk**, University of Idaho

Title: Spatially averaged dynamics, closure method and dimension reduction for discrete models of heterogeneous continua

Date: Monday, 7 February 2011

Time: 2:40–3:30pm

Room: MG 124

Abstract: We introduce a dimension reduction strategy for discrete microstructural models. Unlike many existing methods, our approach is geared towards developing fast simulations of hydrodynamic averages at intermediate spatial scales. The first step is spatial averaging in the spirit of Noll and Murdoch–Bedeaux. This yields exact equations of balance for average density, momentum and energy. To ensure computational savings we propose a closure method based on regularized deconvolution, specifically on iterative algorithms for solving integral equations of the first kind. The simplest zero-order closure, applied to a chain of nonlinear oscillators modeling granular acoustics, produces a coarse scale approximation of the average stress that is in good agreement with the exact stress produced by direct microscopic simulations. Finally, we discuss restrictions on dynamics that permit the use of zero-order closure as opposed to more accurate higher-order regularizations.

**Dik Dummar**, Idaho National Laboratory

Title: Uncertainty Quantification of a chemical kinetic process: an internship at Idaho National Laboratory

Date: Thursday, 27 January 2011

Time: 4:40–5:30pm

Room: MG 120

Abstract: I will give an overview of my project on uncertainty quantification of a chemical kinetic process and of my overall experience at Idaho National Laboratory.

**Michael Pernice**, Idaho National Laboratory

Title: Modeling and Simulation at Idaho National Laboratory

Date: Thursday, 27 January 2011

Time: 3:40–4:30pm

Room: MG 120

Abstract: An overview of modeling and simulation topics at Idaho National Laboratory will be given.

**Jim Wolper**, Idaho State University

Title: Computational Complexity of Quadrature Rules

Date: Thursday, 27 January 2011

Time: 2:40–3:30pm

Room: MG 124

Abstract: How hard is it to compute Integral[f(x),0,1,dx]? This is really three questions: (1) What do we mean by f? (2) What do we mean by hard? (3) What do we mean by compute? More precisely, how hard is it to compute an estimate for the integral given only a table of values for f? How good are nondeterministic (random) methods? Looking at these questions in terms of information theory and computational complexity leads to surprising characterizations of the standard Newton–Cotes rules (e.g., Simpson's rule) as the simplest possible. This perspective leads to new estimates for these integrals, as well as new methods for estimating integrals over higher dimensional domains.
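As a concrete instance of "estimate the integral from a table of values," this sketch applies two Newton–Cotes rules to the same 11-point table for Integral[exp(x),0,1,dx] = e - 1 and compares their errors (my own illustration, not the talk's complexity analysis):

```python
import numpy as np

# Composite trapezoid (O(h^2)) vs composite Simpson (O(h^4)) on one table.
def trapezoid(y, h):
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

def simpson(y, h):   # requires an even number of subintervals
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

x = np.linspace(0.0, 1.0, 11)      # 10 subintervals, h = 0.1
y = np.exp(x)
exact = np.e - 1.0
err_trap = abs(trapezoid(y, 0.1) - exact)
err_simp = abs(simpson(y, 0.1) - exact)   # far smaller from the same data
```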

**Timothy Barth**, NASA Ames

Title: A Survey of Techniques for Uncertainty Propagation with Application to Computational Fluid Dynamics

Date: Wednesday, 17 November 2010

Time: 2:40–3:30pm

Room: ILC 213

Abstract: Numerical simulations of complex physical systems are often rife with sources of uncertainty. Some example sources include uncertain parameters associated with initial and boundary data specification, geometry shape specification, turbulence models, chemistry models, radiation models, catalysis models, and many others. Given these sources of uncertainty, a major task at hand is to propagate this statistical uncertainty throughout simulations, thus quantifying the statistical behavior of derived outputs. This task becomes computationally intractable as the number of simultaneous sources of uncertainty is increased.

In this presentation, we briefly survey both statistical sampling techniques (e.g. Monte Carlo and Quasi-Monte Carlo) and deterministic techniques (e.g. numerical stochastic PDEs) for the propagation of uncertain model parameters arising in numerical approximations of nonlinear systems of conservation laws. Each technique offers certain advantages and disadvantages depending on the number of input sources of uncertainty, the number of computed outputs, and the solution/output behavior. Of keen interest are nonlinear conservation laws that admit discontinuities in both physical and random variable dimensions. The presence of discontinuities in random variable dimensions makes the use of standard high-order polynomial chaos approximation spaces problematic and motivates the use of adaptive piecewise polynomial approximation spaces.

Model problem calculations will be utilized throughout the presentation to compare various methods. In addition, more realistic calculations of compressible Navier-Stokes flow over 2-D and 3-D aerodynamic bodies will be presented to illustrate the propagation of parameter uncertainty arising from (1) PDE turbulence models and (2) gas mixtures with finite-rate chemical kinetics.
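The simplest propagation technique mentioned above, plain Monte Carlo, can be sketched on a toy model (entirely my own illustration: one uniform input pushed through y = u^2, whose output mean and standard deviation are known in closed form). Its sampling error decays like 1/sqrt(N) regardless of the number of uncertain inputs, which is both its weakness and its strength.

```python
import numpy as np

# Monte Carlo propagation of one uncertain input through y = u^2,
# u ~ Uniform(0, 1). Exact output statistics: mean 1/3, std sqrt(4/45).
rng = np.random.default_rng(7)
u = rng.uniform(0.0, 1.0, size=200_000)   # sample the uncertain input
y = u ** 2                                # push through the model
mean_y = y.mean()                         # estimates 1/3
std_y = y.std()                           # estimates sqrt(4/45) ~ 0.298
```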

**J. Scott Carter**, University of South Alabama

Title: An Introduction to Quandles

Date: Thursday, 11 November 2010

Time: 2:40–3:30pm

Room: MG 124

Abstract: A quandle is a set with a binary operation for which each element is idempotent and the set acts upon itself by automorphisms. The axioms were described independently in the early 1980s by David Joyce and Sergei Matveev. Quandles algebraically model the Reidemeister moves for knot diagrams. More recently (since 1998), applications have been developed using quandle cocycles.

In this talk, I will give the definitions and basic examples, describe the knot quandle, and sketch the definition of the cocycle invariants. Towards the end of the talk, I plan to give a list of a number of applications. I won’t assume much intimacy with topology or knot theory. The talk should be accessible to advanced undergraduates and beginning graduate students.
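A standard basic example is the dihedral quandle on Z_n with x * y = 2y - x (mod n); the three quandle axioms can be checked by brute force for small n:

```python
import itertools

# Dihedral quandle on Z_5: x * y = 2y - x (mod 5). Verify the axioms:
#   (1) idempotence: x * x = x,
#   (2) each right translation x -> x * y is a bijection,
#   (3) self-distributivity: (x * y) * z = (x * z) * (y * z).
n = 5
op = lambda x, y: (2 * y - x) % n
R = range(n)
idempotent = all(op(x, x) == x for x in R)
bijective = all(len({op(x, y) for x in R}) == n for y in R)
distributive = all(op(op(x, y), z) == op(op(x, z), op(y, z))
                   for x, y, z in itertools.product(R, repeat=3))
```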

**Piotr Kokoszka**, Utah State University

Title: Two sample inference in functional linear models

Date: Thursday, 14 October 2010

Time: 2:40–3:30pm

Room: MG 124

Abstract: After discussing the basic concepts of functional data analysis, a new and growing field of statistics, I will focus on a method of comparing two functional linear models in which explanatory variables are functions (curves) and responses can be either scalars or functions. The objective is to test the null hypothesis that the regression kernels are the same in the two samples. The test will be illustrated by application to egg-laying curves of Mediterranean flies and to data from terrestrial magnetic observatories. I will attempt to make the talk accessible to non–statisticians.

**Grady Wright**, Boise State University

Title: Geophysical Modeling on the Sphere with Radial Basis Functions

Date: Thursday, 09 September 2010

Time: 2:40–3:30pm

Room: MG 124

Abstract: Modeling data on the sphere is fundamental to many problems in the geosciences. Classical approaches to these problems are based on expansions in spherical harmonics and/or approximations on latitude/longitude grids. The former are quite algorithmically complex, while the latter suffer from the notorious pole problem. Additionally, neither method can be easily generalized to other manifolds. Radial basis functions (RBFs), on the other hand, are algorithmically simple, suffer from no coordinate singularities, and generalize to arbitrary geometries. Since RBFs do not depend on any grid and require no meshing, they can be used naturally in concert with so-called optimal node configurations. We discuss three recent and non-trivial geophysical applications of RBFs on spherical domains with optimal node sets. The first is the approximation and decomposition of tangent vector fields on the sphere (e.g. horizontal winds in the atmosphere). The second is the simulation of unsteady nonlinear flows on the sphere described by the shallow water equations. The third and final application is the simulation of thermal convection in a 3-D spherical shell, a situation of interest in modeling the Earth’s mantle.
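To illustrate the algorithmic simplicity claimed above, here is a minimal RBF interpolation sketch on the unit sphere. Everything in it is our illustrative choice, not the talk's methods: a Gaussian kernel, a quasi-uniform Fibonacci spiral node set standing in for an optimal node configuration, and an arbitrary smooth test field.

```python
import numpy as np

def fibonacci_nodes(n):
    """Quasi-uniform spiral nodes on the unit sphere (a simple stand-in
    for an optimal node configuration)."""
    i = np.arange(n)
    lon = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle longitudes
    z = 1.0 - (2.0 * i + 1.0) / n                 # equal-area heights
    r = np.sqrt(1.0 - z * z)
    return np.column_stack([r * np.cos(lon), r * np.sin(lon), z])

def rbf_matrix(xi, xj, eps=3.0):
    """Gaussian RBF matrix exp(-(eps*r)^2); r is the Euclidean distance
    in R^3, so no surface coordinates (and no pole problem) appear."""
    r = np.linalg.norm(xi[:, None, :] - xj[None, :, :], axis=2)
    return np.exp(-(eps * r) ** 2)

nodes = fibonacci_nodes(100)
f = lambda p: np.cos(3.0 * p[:, 2]) + p[:, 0] * p[:, 1]   # smooth test field
coeffs = np.linalg.solve(rbf_matrix(nodes, nodes), f(nodes))

# Evaluate the interpolant at points not in the node set.
test = fibonacci_nodes(37)
approx = rbf_matrix(test, nodes) @ coeffs
print(np.max(np.abs(approx - f(test))))   # max error at the test points
```

The whole scheme is a dense linear solve plus a matrix-vector product; no grid, mesh, or coordinate system on the sphere is needed.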

**Liljana Babinkostova**, Boise State University

Title: Games and Dimension

Date: Thursday, 02 September 2010

Time: 2:40–3:30pm

Room: MG 124

Abstract: In 1911 Lebesgue introduced covering dimension for topological spaces. For finite dimensional real vector spaces this dimension is equal to the algebraic dimension. There are several ways of extending Lebesgue covering dimension to infinite dimensional spaces (i.e. spaces that are not finite dimensional). Until recently none of these provided a covering dimension value for a class of spaces such as the infinite dimensional separable Banach spaces.

In this talk I will introduce, by means of a game, a covering dimension function that assigns an ordinal dimension to each topological space. In metric spaces, finite Lebesgue covering dimension coincides with the game dimension. Moreover, the Continuum Hypothesis is equivalent to the statement that the algebraic dimension of R^N is equal to its game dimension.

## Schedule for 2009–2010

**Diarmuid Crowley**, Hausdorff Institute

Title: The Mapping Class Groups of (n-1)-connected 2n-Manifolds, and an Introduction to the Manifold Atlas Project

Date: Tuesday, 11 May 2010

Time: 3:30–4:45pm

Room: MG 118

Abstract: Part I. Based on results of Wall and Cerf, Kreck gave a pair of exact sequences to compute the mapping class group of certain (n − 1)-connected 2n-manifolds. This leaves an extension problem and the main result of this talk is to present a partial solution of the extension problem for the manifolds M_r = ♯_r(S^p × S^p), p = 3,7. As time permits I will discuss the mapping class groups of other similar manifolds.

Part II. The Manifold Atlas is a scientific Wiki about manifolds with plans to become a sort of on-line journal. In this short talk I will introduce the Atlas by outlining its goals, its structure, and some aspects of using the Atlas.

**Gary Hagerty**, Boise State University

Title: Using Technology to Reach Out to the Individual Student Creating Success in Mathematics for All

Date: Monday, 12 April 2010

Time: 2:40–3:30pm

Room: ILC 404

Abstract: In 1991, Geoffrey Moore wrote the book “Crossing the Chasm”, which describes the process of bringing a new technology to market. The chasm represents the rift between a technology's use by early innovators and its adoption by the mainstream. Some technologies make it across the chasm (the VCR, until it was replaced by the DVD), while others do not (Betamax).

In 2000, online homework technologies for mathematics found their way into the hands of innovators. Initially there were both successes and failures. Over time, a sufficient and growing number of innovators were obtaining remarkable passing rates of 70–80% in entry-level math courses (20–30% higher than traditional textbook courses). Such results helped push online homework systems over the chasm around 2008. Today usage is growing exponentially as these systems enter the mainstream.

This presentation will examine the cultural changes that are necessary to create a successful transition from textbook-based assignments to online assignments. Educational theories that support the move to a more successful process using online technologies will be presented. Myths that arose in the textbook era, and were perhaps necessary then, will be debunked. The presentation will conclude with a vision that online homework systems present the opportunity for building a mathematical environment which is richer for all students. (And even possibly removing the term “math anxiety” from the spoken language!)

**Xinyu Sun**, Xavier University of Louisiana

Title: Several Games and Things we can learn from them

Date: Monday, 17 March 2010

Time: 2:40–3:30pm

Room: ILC 201

Abstract: We will discuss a few simple but interesting games and find winning strategies for them. With the help of these examples, we will develop the basic theory of Nim-like games and use it to solve a wide range of seemingly unrelated problems. More importantly, we will discuss why we want to study those games.
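To give the flavor of the Nim-like theory (our illustrative sketch, not material from the talk): by Bouton's classical theorem, a Nim position is a loss for the player to move exactly when the bitwise XOR of the pile sizes, the nim-sum, is zero, and from any position with nonzero nim-sum some move restores a zero nim-sum.

```python
from functools import reduce
from operator import xor

# Bouton's theorem for Nim: the player to move loses iff the nim-sum
# (XOR of all pile sizes) is 0; otherwise some pile can be reduced to
# re-establish nim-sum 0.

def nim_sum(piles):
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return (pile index, new size) restoring nim-sum 0, or None if the
    position is already losing for the player to move."""
    s = nim_sum(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        if p ^ s < p:            # reducing pile i to p XOR s is legal
            return i, p ^ s

print(winning_move([3, 4, 5]))  # (0, 1): move to [1, 4, 5], nim-sum 0
```

After the move to [1, 4, 5] the nim-sum is 1 XOR 4 XOR 5 = 0, so every reply hands the winning position back.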

**Scott MacLachlan**

Title: Fast Solvers for Geodynamic Flows

Date: Tuesday, 16 March 2010

Time: 2:00–3:00pm

Room: tba

Abstract: Geodynamic flows, such as the convection within the Earth’s mantle, are characterized by the extremely viscous nature of the flow, as well as the dependence of the viscosity on temperature. As such, a PDE-based approach, coupling the (stationary) Stokes Equations for viscous flow with a time-dependent energy equation, offers an accurate mathematical model of these flows. While the theory and practice of solving the Stokes Equations is well-understood in the case of an isoviscous fluid, many open questions remain in the variable-viscosity case that is relevant to mantle convection, where large jumps occur in the fluid viscosity over short spatial scales. I will discuss recent progress on developing efficient parallel solvers for geodynamic flows, using algebraic multigrid methods within block-factorization preconditioners.

**Zach Teitler**, Texas A&M University

Title: Ranks of Polynomials

Date: Monday, 15 March 2010

Time: 2:40–3:30pm

Room: MG 226

Abstract: The Waring rank of a polynomial of degree d is the least number of terms in an expression for the polynomial as a sum of dth powers. The problem of finding the rank of a given polynomial and studying rank in general has been a central problem of classical algebraic geometry, related to secant varieties; in addition, there are applications to statistics, signal processing and computational complexity. Other than a well-known lower bound for rank in terms of catalecticant matrices, there has been relatively little progress on the problem of determining or bounding rank for a given polynomial (although related questions have proved very fruitful). I will describe new upper and lower bounds, with especially nice results for some examples including monomials and cubic polynomials. This is joint work with J.M. Landsberg.
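As a toy illustration of the definition (our example, not taken from the abstract): the monomial xy has Waring rank 2, since

```latex
% xy written with two squares of linear forms; one square cannot suffice,
% since xy is not proportional to a perfect square.
xy \;=\; \tfrac{1}{4}(x+y)^{2} \;-\; \tfrac{1}{4}(x-y)^{2}.
```

So rank(xy) ≤ 2, while rank 1 would force xy to be proportional to the square of a linear form. (Over the complex numbers the minus sign is harmless: replace x − y by i(x − y) to get a genuine sum of two squares.)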

**Peter Scheiblechner**, Purdue University

Title: A Quick Tour through Algebra and Geometry

Date: Friday, 12 March 2010

Time: 2:40–3:30pm

Room: MG 226

Abstract: We start with efficient parallel algorithms for linear algebra and show how these can be used to solve certain problems in algebra and geometry over the complex numbers. In particular, we consider the problems of deciding whether a system of polynomial equations is feasible, counting the irreducible factors of a multivariate polynomial, and counting the connected components of an algebraic variety. Ultimately, the computation of the topological Betti numbers of smooth projective varieties can also be reduced to linear algebra.

**Jens Harlander**, Boise State University

Title: Groups as Geometric Objects

Date: Friday, 11 September 2009

Time: 2:40–3:30pm

Room: MG 120

Abstract: A 3-manifold is a topological space that locally looks like R^3. An important example is the one we live in. What is the global nature of our universe? The first classification question concerning 3-manifolds is the Poincare Conjecture, a problem that has driven low dimensional topology for a century. In 2003, Grigori Perelman proved Thurston’s Geometrization Conjecture, which provides a classification of 3-manifolds in terms of the eight 3-dimensional geometries, the most important one being hyperbolic geometry.

Geometric group theory studies analogies between 3-manifold groups and arbitrary discrete groups. Parts of Thurston’s classification program can be carried over to groups and hyperbolic groups have emerged as objects of central interest.

In my talk I will explain what hyperbolic groups are and report on work done with one of my graduate students on the construction of hyperbolic virtual link groups.

**Tevian Dray and Corinne A. Manogue**, Oregon State University

Title: Bridging the Gap between Mathematics and the Physical Sciences

Date: Friday, 4 September 2009

Time: 2:40–3:30pm

Room: MG 120

Abstract: As with Britain and America, mathematicians are separated from other scientists by a common language. Casual discussions with those in other disciplines suggest far more agreement than exists in fact. In a nutshell, mathematics is about functions, but science is about physical quantities. This has far-reaching implications not only for the teaching of lower-division mathematics “service” courses, but also for the training of mathematicians.

For the last decade, we have led an NSF-supported effort to bridge this gap at the level of second-year calculus. The unifying theme we have discovered is to emphasize geometric reasoning, not (just) algebraic computation. In this talk, we will illustrate the language differences between mathematicians and physicists, and discuss what these differences imply for the teaching of mathematics in general and of vector calculus in particular.

## Schedule for 2008–2009

**Inanc Senocak**, Boise State University

Title: Rapid-Response Simulation of Forward and Inverse Problems in Atmospheric Transport and Dispersion

Date: Thursday, 19 March 2009

Time: 3:40–4:30pm

Room: MG 139

Abstract: Environmental sensors have been deployed in various cities for early detection of contaminant releases into the atmosphere. Rapid-response contaminant dispersion and event reconstruction capabilities are needed to backtrack the dispersion source and then project the contamination extent with quantified uncertainty. To enable rapid-response simulations on small-footprint desktop supercomputers, we have undertaken a major computational effort to develop a computational fluid dynamics code for contaminant dispersion in urban environments using the GPU computing paradigm. GPUs, traditionally designed for graphics rendering, have emerged as massively parallel “co-processors” to the Central Processing Unit (CPU). Small-footprint desktop supercomputers with hundreds of stream processors that can deliver several teraflops peak performance at the price of conventional workstations have been realized. In this talk, we describe the development of a novel Cartesian grid CFD code for urban environments using the NVIDIA CUDA programming model on desktop supercomputers with up to four GPUs. Harnessing the full compute-potential of GPUs from NVIDIA requires a clear understanding of the fundamentally new CUDA programming models, device architectures and memory-access patterns. Our results have confirmed the tremendous compute-potential of the GPU computing paradigm with two orders of magnitude speedup over a serial CFD code executed on a conventional CPU.

The second part of the seminar focuses on the inverse modeling problem and presents a stochastic event reconstruction capability that can process information from an environmental sensor network. The inference is based on the Bayesian paradigm with Markov chain Monte Carlo (MCMC) sampling. Given the observations, simple approximate dispersion models are substantially enhanced by introducing stochastic variables in turbulent diffusion parameterization schemes and estimating them within the Bayesian inference framework. Additionally, parameters in the event reconstruction model are estimated in a principled way using data and prior probabilities to avoid tuning in the overall method. The event reconstruction method is successfully validated for both real and synthetic dispersion problems, and posterior distributions of the model parameters are used to generate probabilistic plume envelopes with specified confidence levels to aid emergency decisions.
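The Metropolis step at the heart of MCMC sampling can be sketched generically. The toy one-dimensional sampler below is our illustration only; the talk's event-reconstruction models and priors are of course far more elaborate.

```python
import math
import random

def metropolis(log_post, x0, n, step=0.5, seed=1):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    probability min(1, post(x')/post(x)), computed in log space."""
    random.seed(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        prop = x + random.gauss(0.0, step)       # symmetric proposal
        lp_prop = log_post(prop)
        if math.log(random.random()) < lp_prop - lp:
            x, lp = prop, lp_prop                # accept; else keep x
        samples.append(x)
    return samples

# Toy posterior: a standard normal, so the sample mean should settle
# near 0 and the sample variance near 1.
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
print(sum(draws) / len(draws))
```

The same accept/reject skeleton applies unchanged when `log_post` is an expensive dispersion-model likelihood; only the target function and proposal change.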

**Longin Jan Latecki**, Temple University

Title: Multiscale Random Fields with Application to Contour Grouping

Date: Friday, 5 December 2008 (cancelled)

Time: 2:40–3:30pm

Room: MG 120

Abstract: We introduce a new interpretation of multiscale random fields (MSRFs) that admits efficient optimization in the framework of regular (single level) random fields (RFs). It is based on a new operator, called append, that combines sets of random variables (RVs) to single RVs. We assume that a MSRF can be decomposed into disjoint trees that link RVs at different pyramid levels. The append operator is then applied to map RVs in each tree structure to a single RV. We demonstrate the usefulness of the proposed approach on a challenging task involving grouping contours of target shapes in images. It provides a natural representation of multiscale contour models, which is needed in order to cope with unstable contour decompositions. The append operator allows us to find optimal image segment labels using the classical framework of relaxation labeling. Alternative methods like Markov Chain Monte Carlo (MCMC) could also be used.

**Charles Livingston**, Indiana University

Title: A survey of knot invariants

Date: Friday, 21 November 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: A knot invariant is simply a function that assigns to each knot in 3-space a value, usually numeric or algebraic. Basic examples include the crossing number and the Jones polynomial. Recent years have seen the introduction of many new invariants that are offering us deep insights into the nature of knotting. In this talk I will survey knot invariants of current research interest, describe surprising connections between these invariants, and mention some of their relationships to other areas of geometric topology. I will also present some of the long standing, easily stated, problems in knot theory that remain open.

**Uwe Kaiser**, Boise State University

Title: Three-manifold Topology after Perelman

Date: Friday, 14 November 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: The recently proved Poincare and Geometrization theorems of Perelman are now considered to be part of the established theory of 3-manifolds. We discuss some of their impact on our understanding of the role of the fundamental group of 3-manifolds. In 2007 Calegari, Freedman and Walker applied the Geometrization theorem to show that universal topological quantum field theory captures the classification of 3-manifolds. This is very interesting because the corresponding statement in all dimensions greater than or equal to 4 is known to be false. The talk will try to give an idea of the spirit of the Calegari-Freedman-Walker theorem.

**Rosemary A. Renaut**, Arizona State University

Title: Statistical properties of the regularized least squares functional and a hybrid LSQR Newton method for finding the regularization parameter: application in image deblurring and signal restoration

Date: Wednesday, 29 October 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: Image deblurring or signal restoration can be formulated as a data fitting least squares problem, but the problem is found to be ill-posed and regularization is needed, hence introducing the need to find a regularization parameter. We study the properties of the regularized least squares functional

||Ax−b||^2_{W_b} + ||D(x−x_0)||^2_{W_x}

for the solution of discretely ill-posed systems of equations. The functional was recently shown to follow a χ2 distribution when the a priori information x_0 on the solution is assumed to represent the mean of the solution x. Of course for image deblurring no prior information is available, but information on the mean value of the right-hand side b may be available, which again yields a χ2 distribution, but a non-central one. These results can be used to design a Newton method, using a hybrid LSQR approach, for the determination of the optimal regularization parameter λ when the weight matrix is W_x = λ^2 I. Numerical results using test problems demonstrate the efficiency of the method, particularly for the hybrid LSQR implementation. Results are compared to another statistical method, the unbiased predictive risk estimator (UPRE) algorithm. We also illustrate the results for image deblurring and a real-data seismic signal deblurring problem.
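To fix ideas, here is a minimal sketch of the regularized solution in the special case W_b = I, D = I, W_x = λ²I, where the functional reduces to ||Ax − b||² + λ²||x − x_0||² and is minimized by the normal equations (AᵀA + λ²I)x = Aᵀb + λ²x_0. The toy problem below (rapidly decaying singular values plus noise) is our own, not one of the paper's test problems.

```python
import numpy as np

def tikhonov(A, b, lam, x0=None):
    """Minimize ||A x - b||^2 + lam^2 ||x - x0||^2 via the normal
    equations (A'A + lam^2 I) x = A'b + lam^2 x0."""
    n = A.shape[1]
    if x0 is None:
        x0 = np.zeros(n)
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n),
                           A.T @ b + lam**2 * x0)

# A discretely ill-posed toy problem: singular values decaying by powers
# of ten, so the naive least-squares solution amplifies the noise wildly.
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10)) @ np.diag(10.0 ** -np.arange(10))
x_true = np.ones(10)
b = A @ x_true + 1e-6 * rng.normal(size=30)

x_reg = tikhonov(A, b, lam=1e-4)
print(np.linalg.norm(x_reg))   # a stable, bounded reconstruction
```

Choosing λ well is precisely the problem the χ2 tests and the Newton/hybrid-LSQR machinery of the talk address; here it is simply set by hand.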

**Barbara Zubik-Kowal**, Boise State University

Title: Numerical solutions of thalamo-cortical systems

Date: Friday, 17 October 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: A new algorithm for accurate and efficient solutions of thalamo-cortical systems will be introduced. The idea of the new algorithm is based on the properties of the kernels, which are applied to the systems. According to these properties, a moderate value t_0>0 is determined and the interval [0,T] of the time integration of the systems is divided into two parts, [0,t_0] and [t_0,T]. Large values of T are investigated, with T significantly larger than t_0. Classical methods are applied to solve the systems in the first short subinterval [0,t_0]. Then, new iterative schemes are applied on the remaining longer subinterval [t_0,T].

The new algorithm is fast and brings two additional advantages: (1) the length of the interval [t_0,T] can be arbitrarily large, and (2) the thalamo-cortical systems can be efficiently integrated in a parallel computing environment. Error bounds for the new iterative schemes are derived and their rapid convergence on long time intervals [t_0,T] is demonstrated by numerical experiments.

**Mike Hitchman**, College of Idaho

Title: Cosmic topology

Date: Friday, 10 October 2008

Time: 2:40–3:30pm

Room: MG 124

Abstract: What is the shape of the universe? In this talk, aimed at undergraduate math and physics majors, we investigate this question, its ties to geometry and topology, and current research programs in cosmic topology designed to answer it.

**Jeffrey Boerner**, University of Iowa

Title: An introduction to knot invariants

Date: Friday, 3 October 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: A fundamental problem in knot theory is to determine when two knots are the same. This is often effectively done by using knot invariants. An interesting knot invariant called the Jones polynomial was discovered in 1984. In 1999 this invariant was improved by the discovery of Khovanov homology. I will introduce a simple invariant and then I will construct Khovanov homology from the ground up. If time allows I will compute a simple example.

**Stefan Geschke**, Boise State University

Title: Extensions of Ramsey’s Theorem

Date: Friday, 26 September 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: Ramsey’s Theorem says that given a partition of all n-element subsets of an infinite set X into two classes, there is an infinite subset H of X which is homogeneous in the sense that all n-element subsets of H lie in the same class of the partition.

I will discuss extensions of Ramsey’s Theorem that not only talk about possible sizes of homogeneous sets but also take into account whether the homogeneous sets can be chosen so that they are in some sense close to the original set X.

We will see strong uncountable versions of the theorem that deal with separable complete metric spaces and we will also see countable and even finite versions for graphs.
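The finite analogue of Ramsey's theorem can be checked by brute force in small cases (our illustration, not part of the abstract): R(3,3) = 6, i.e. every 2-coloring of the edges of the complete graph K_6 contains a monochromatic triangle, while K_5 admits a coloring with none.

```python
from itertools import combinations, product

# Verify R(3,3) = 6 exhaustively: all 2^15 edge 2-colorings of K_6 have a
# monochromatic triangle, while K_5 has a coloring (e.g. the 5-cycle /
# pentagram split) avoiding them.

def has_mono_triangle(n, coloring):
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def all_colorings_have_mono_triangle(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

print(all_colorings_have_mono_triangle(6))  # True
print(all_colorings_have_mono_triangle(5))  # False
```

The infinite theorem quoted above replaces "triangle" by n-element subsets and guarantees an infinite homogeneous set, which no finite search can exhibit.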

**Andres Caicedo**, Boise State University

Title: Intersecting families and definability

Date: Friday, 19 September 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: A family F of sets is called intersecting if any two sets in F have nonempty intersection. In extremal set theory one investigates the largest size that such a family can have, subject to certain restrictions. Motivated by questions in descriptive set theory, we investigate this problem when the restriction comes from definability conditions in the sense of mathematical logic. This is joint work with John Clemens, Clinton Conley, and Benjamin Miller.
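For contrast with the definable setting of the talk, the classical finite prototype of such extremal questions is the Erdos-Ko-Rado theorem: for n ≥ 2k, the largest intersecting family of k-element subsets of an n-set has size C(n−1, k−1), attained by the "star" of all k-sets through one fixed point. A brute-force check of small cases (our illustration):

```python
from itertools import combinations
from math import comb

def max_intersecting(n, k):
    """Size of the largest pairwise-intersecting family of k-subsets of
    an n-set, found by exhaustive search over all subfamilies."""
    sets = [frozenset(s) for s in combinations(range(n), k)]
    best = 0
    for mask in range(1 << len(sets)):            # every subfamily
        fam = [s for i, s in enumerate(sets) if mask >> i & 1]
        if all(a & b for a in fam for b in fam):  # pairwise intersecting
            best = max(best, len(fam))
    return best

# Erdos-Ko-Rado bound C(n-1, k-1) in two tiny cases.
print(max_intersecting(5, 2), comb(4, 1))  # 4 4
print(max_intersecting(4, 2), comb(3, 1))  # 3 3
```

The exhaustive search is only feasible for tiny n and k; the point is just to see the extremal star families appear.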

**Kyungduk Ko**, Boise State University

Title: Wavelet-based Bayesian estimation of partially linear regression models with long memory errors

Date: Friday, 12 September 2008

Time: 2:40–3:30pm

Room: MG 124

Abstract: In this talk we focus on partially linear regression models with long memory errors and propose a wavelet-based Bayesian procedure that allows the simultaneous estimation of the model parameters and the nonparametric part of the model. Employing discrete wavelet transforms is crucial in order to simplify the dense variance-covariance matrix of the long memory error. We achieve a fully Bayesian inference by adopting a Metropolis algorithm within a Gibbs sampler. We evaluate the performances of the proposed method on simulated data. In addition, we present an application to Northern hemisphere temperature data, a benchmark in the long memory literature.

**Jaechoul Lee**, Boise State University

Title: A Reformulation of Weighted Least Squares Estimators in Autocorrelated Regressions

Date: Friday, 5 September 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: This paper studies weighted, generalized least squares estimators in simple linear regression with serially correlated errors. Closed-form expressions for the weighted least squares estimation are presented for a given inverse variance-covariance matrix with general stationary covariance structure. For the linear trend plus autoregressive moving-average error regression model, the presented formulations yield further explicit expressions for the weighted least squares trend estimator and its variance. As an application of these reformulated expressions, a new weighted least squares computation method is developed that reduces the effort of inverting the covariance matrix but produces an equivalent estimate and variance. A necessary and sufficient condition for the weighted least squares estimators to coincide with the ordinary least squares estimators is also provided in closed form.
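For readers unfamiliar with the setup, the generic weighted (generalized) least squares estimator can be sketched directly. The AR(1) covariance, sample size, and parameter values below are our own illustrative choices, and the explicit matrix inversion is exactly the step the paper's closed-form reformulations avoid.

```python
import numpy as np

def gls(X, y, Sigma):
    """Weighted (generalized) least squares:
    beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y."""
    Si = np.linalg.inv(Sigma)
    return np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

# Linear trend with AR(1) errors (illustrative parameter choices).
n, phi = 200, 0.6
t = np.arange(n, dtype=float)
X = np.column_stack([np.ones(n), t])             # intercept + trend
Sigma = phi ** np.abs(t[:, None] - t[None, :])   # AR(1) covariance, unit variance

rng = np.random.default_rng(0)
y = 2.0 + 0.05 * t + np.linalg.cholesky(Sigma) @ rng.normal(size=n)
print(gls(X, y, Sigma))   # estimates close to the true (2.0, 0.05)
```

For stationary error structures like this AR(1) case, Sigma inverse is banded, which is what makes closed-form, inversion-free expressions possible.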

**Stephan Rosebrock**, Paedagogische Hochschule Karlsruhe

Title: The Whitehead Conjecture – An Overview

Date: Friday, 29 August 2008

Time: 2:40–3:30pm

Room: MG 120

Abstract: A space is called aspherical if all its higher homotopy groups vanish, that is, if it has no higher-dimensional holes. The 3-space we inhabit does not have any holes. A donut does contain a 1-dimensional hole but no higher-dimensional ones. In the early 20th century Max Dehn showed that knot complements are aspherical, but his proof contains a serious gap. Whitehead noted in the 1930s that one could establish the asphericity of knot complements by proving that asphericity is a hereditary property for 2-complexes, which seemed almost obvious from a combinatorial viewpoint. Asphericity of knot complements was finally shown to hold by Papakyriakopoulos in 1957, but Whitehead’s question of whether a subcomplex of an aspherical 2-complex is aspherical remains unanswered to this day. In my talk I intend to survey the mathematics that grew out of Whitehead’s fundamental question.

## Schedule for 2007–2008

**William A. Bogley**, Oregon State University

Title: The Stallings Proof of the Grushko-Neumann Theorem on Free Products of Groups

Date: Friday, 4 April 2008

Time: 3:40–4:30pm

Room: MG 108

Abstract: In his 1959 PhD dissertation, John R. Stallings gave a topological proof of the Grushko-Neumann theorem about generating sets of free products of groups. Specifically, any generating set can be transformed into a very simple and transparent one by a sequence of simple moves called Nielsen transformations. The theorem, which was originally proved using delicate cancellation arguments, does not specify exactly what those moves might be and so has an existential character to it. When Stallings published his proof in 1965, he noted that his topological approach had an ‘effective’ character to it. With apologies to Professor Stallings, this talk will describe his approach in an attempt to isolate what is effective about it.

**Edward J. Fuselier**, West Point

Title: Customized Approximation with Radial Basis Functions

Date: Thursday, 20 March 2008

Time: 3:50–4:40pm

Room: MG 139

Abstract: Radial basis functions (RBFs) have been gaining popularity in recent years. They were used initially to reconstruct unknown “target functions” given data at a few points. However, RBFs turn out to be very flexible and have a broad range of applications. Today they are used in topography, medical imaging, and artificial intelligence, to name a few.

Recently it has been shown that radial basis functions can be “customized” so that the RBF approximant has similar physical properties as that of the underlying target function, such as being divergence-free or curl-free. Further, these ideas can be extended to manifolds. We will give a brief introduction to RBFs, see how they can be applied to vector fields, and discuss a few simple examples. This talk should be accessible to a general audience.

**Bart Kastermans**, University of Wisconsin

Title: Maximal Cofinitary Groups

Date: Friday, 7 March 2008

Time: 2:40–3:30pm

Room: MG 107

Abstract: Cofinitary Groups are subgroups of the symmetric group on the natural numbers in which all elements other than the identity have at most finitely many fixed points. In the study of these groups there is a definite need for set theoretical methods. Analogies between these groups and certain set theoretical objects suggest many interesting questions. We will explain this need for set theory, and some of the questions suggested by the analogy. Then we will explain what we know about the answers so far (e.g., possible isomorphism types, orbit structure, and cardinal numbers associated with these groups). We will also explain some of the combinatorics that goes into working with these groups.

**Jennifer Brown**, College of William and Mary

Title: Finding topologically interesting points in ß(X) under the continuum hypothesis

Date: Wednesday, 5 March 2008

Time: 3:40–4:30pm

Room: MG 107

Abstract: Let X be a completely regular topological space. A compactification of X is a compact space Y together with a homeomorphic embedding c : X → Y such that c[X] is dense in Y. The Stone-Cech compactification βX of X has the following property: whenever f : X → [0, 1] is a continuous function from X to the closed unit interval, f extends continuously to a function βf : βX → [0, 1]. βX contains a homeomorphic copy of X, plus many new points that are added in the construction of βX. The collection of these new points is called the remainder, or growth, and is denoted βX \ X.

Beginning in the 1950s, topologists were trying to understand the structure of the remainder. For example, they asked: is βX \ X homogeneous? (That is, given two points p, q ∈ βX \ X, is there a homeomorphism taking p to q?) The complex structure of βX for even quite simple spaces X meant that answers to such questions often depended on the continuum hypothesis (CH). W. Rudin proved in 1956 that, assuming CH, βX \ X is not homogeneous for a certain class of spaces X. In 1967, Z. Frolík was able to prove Rudin’s result without invoking CH. However, Frolík’s proof that βX \ X is not homogeneous was “noneffective” in the sense that it did not produce any topologically interesting points.

Remote points are one example of such topologically interesting points. A remote point of X is a point p ∈ βX \ X such that p is not in the closure (in βX) of any nowhere-dense subset of X. van Douwen in 1981 gave an effective proof, using remote points, of the non-homogeneity of βX \ X (for another class of topological spaces X). It is an open question whether CH implies that every non-pseudocompact space with the countable chain condition (ccc) has remote points. We have shown that under CH, a ccc, nonpseudocompact product of ccc spaces each of weight ω_2 has remote points. In this talk we outline our proof, which uses elementary submodels and an analysis of ccc Boolean algebras to impose a hierarchy on the basic open sets in a space.

**Dawn Teuscher**, University of Missouri

Title: Integrated versus single-subject paths in High School: what and how students learn

Date: Wednesday, 5 March 2008

Time: 1:40–2:30pm

Room: MG 113

Abstract: Two types of mathematics curricula (integrated and single-subject) are currently used in high school classrooms. The difference in organizational structure and expectations between integrated and single-subject mathematics curricula raises the question: what are the similarities and differences in student understanding of mathematics after completing either an integrated or a single-subject curriculum path? The key question is what mathematical knowledge students will develop from these two different curriculum paths. Specifically, is there a difference in what and how students learn in each curriculum?

This presentation will focus on results from a research study conducted with 505 high school students who studied from either the integrated or the single-subject mathematics curricula. Results will be presented on how well students performed on calculus readiness concepts. Findings will be focused on errors and misconceptions that students have after completing four years of college preparatory mathematics (integrated or single-subject).

**Grady Wright**, Boise State University

Title: A Colloquium in Commemoration of Prof. Gene Golub. Probability, linear algebra, and numerical analysis: the mathematics behind Google’s PageRank(TM) algorithm

Date: Friday, 29 February 2008

Time: 2:40–3:30pm

Room: MG 108

Abstract: PageRank(TM) is a scoring system used by Google(TM) for determining the “importance” of a webpage. As stated on their website, PageRank(TM) is “the heart of [Google’s] software … and continues to provide the basis for all of [their] web search tools”. We survey the mathematics associated with the basic PageRank model and discuss how it is computed. This latter topic was a recent focus of Prof. Golub’s research. If time permits, we also discuss more advanced topics such as acceleration methods for computing PageRank, personalizing PageRank, and ideas for improving a webpage’s PageRank score. This talk should be accessible to undergraduates in mathematics, engineering, and science.
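The basic model can be sketched in a few lines: PageRank is the stationary vector of the column-stochastic "Google matrix" G = dA + (1−d)/n, computed by power iteration. This is the standard textbook formulation; the damping factor d = 0.85 and the tiny example graph are our illustrative choices.

```python
import numpy as np

def pagerank(links, n, d=0.85, tol=1e-12):
    """Power iteration on the Google matrix G = d*A + (1-d)/n, where
    column j of A spreads page j's score evenly over its outlinks."""
    A = np.zeros((n, n))
    for j, outs in links.items():
        for i in outs:
            A[i, j] = 1.0 / len(outs)
    G = d * A + (1.0 - d) / n
    r = np.full(n, 1.0 / n)         # start from the uniform distribution
    while True:
        r_next = G @ r
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

# Tiny three-page web: 0 links to 1 and 2, 1 links to 2, 2 links to 0.
r = pagerank({0: [1, 2], 1: [2], 2: [0]}, 3)
print(r)   # scores sum to 1; page 2 ranks highest
```

Because G is column-stochastic with all entries positive, the iteration contracts at rate d per step, which is why the basic method converges reliably (and why accelerating it, as mentioned above, is an interesting problem when d is close to 1).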

**Andres Caicedo**, Boise State University

Title: Forcing axioms and inner models

Date: Friday, 27 February 2008

Time: 3:40–4:30pm

Room: MG 139

Abstract: Since the results of Goedel in 1930, we know that no axiomatization of set theory can capture all its truths. Goedel himself suggested a program for systematically extending the standard list of axioms. Goedel’s suggestion led to what I call the first generation of extensions, by means of large cardinal hypotheses, which postulate that the universe is “large.” The second generation, which we are only now beginning to understand, postulates that the universe is “wide,” and consists of forcing axioms.

I will discuss some recent results in set theory about the relationship between the universe of sets and its “close” inner models. These results were motivated by a series of conjectures in the theory of forcing axioms.

**Gunter Fuchs**, University of Muenster

Title: Souslin Trees in Topology, Forcing and Algebra

Date: Friday, 25 February 2008

Time: 3:40–4:30pm

Room: MG 107

Abstract: I will describe the role of Souslin trees in the three areas mentioned in the title. In topology, the existence of Souslin trees is equivalent to the failure of Souslin’s hypothesis, i.e., to the existence of complete, dense linear orders which satisfy the countable chain condition but are not separable. Rigidity properties of the trees correspond to rigidity properties of the associated lines. The study of strong rigidity degrees of Souslin trees gives rise to subtle forcing techniques for adding automorphisms of trees with minimal disturbance of the universe. If time permits, I will sketch an application of these techniques in the context of the automorphism tower problem from algebra.

**Zdzislaw Jackiewicz**, Arizona State University

Title: Numerical Solution of Problems with Functional Dependence in Medicine and Biology

Date: Friday, 22 February 2008

Time: 3:40–4:30pm

Room: MG 106

Abstract: Many problems in medicine and biology are modeled by equations with functional dependence, i.e., equations where the right hand side depends not only on the present state of the system but also on the history of the solution and/or the history of the derivative of the solution. Such equations fall into the general class of ordinary or partial functional differential equations and include as special cases delay-differential equations and Volterra integral and integro-differential equations.

The equations with functional dependence are quite diverse, and different numerical techniques, appropriately designed for specific types of models, are required. Our research on new efficient methods for these equations is motivated by specific problems in the applied sciences which we encountered while collaborating with colleagues working in medicine and mathematical biology. Some of these problems include:

1. Threshold models in the theory of epidemics and population dynamics. We consider a model for the spread of infection in which an individual becomes infectious at time t after accumulated dosage of infection reaches a known threshold. This model can be described by the system of delay-differential equations, where the delay function is not known explicitly and must be determined from appropriate threshold conditions as the integration progresses from step to step.

2. Integro-differential equations modeling neural networks. This problem can be described by the initial-value problem for an integro-differential equation of convolution type, where the kernel is assumed to be a nonnegative integrable function defined on the real line. This problem is a continuous analog of a discrete Voltage Controlled Oscillator Neuron (VCON) model of a transmission line in neural networks.

3. Calcium-mediated dendritic branch model. We present a new numerical method for the simulation of calcium-mediated dendritic branch model. Using this method we will illustrate the impact of time-dependent changes in spine density and spine shape on the input-output properties of the dendritic branch.

All these problems require nonstandard algorithms for their efficient numerical solution and some of them will be described in this talk.
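The history dependence described above can be made concrete with a deliberately naive example. Below is an Euler-type step for the delay-differential equation y'(t) = -y(t - 1) with constant history (an illustrative equation, not one of the speaker's models); the step size is chosen so the delay is a whole number of steps, making the delayed value simply an earlier entry of the stored solution.

```python
# Naive Euler scheme for the delay-differential equation y'(t) = -y(t - 1)
# with constant history y(t) = 1 for t <= 0.  The step h divides the delay
# exactly, so y(t - 1) is just an earlier entry of the stored history --
# the simplest instance of the "history of the solution" dependence.
def solve_dde(h=0.01, t_end=2.0):
    lag = round(1.0 / h)              # the delay, measured in steps
    ys = [1.0] * (lag + 1)            # history on [-1, 0]
    for _ in range(round(t_end / h)):
        y_delayed = ys[-1 - lag]      # stored value at time t - 1
        ys.append(ys[-1] - h * y_delayed)
    return ys

ys = solve_dde()
# On [0, 1] the delayed value is the constant history 1, so y(t) = 1 - t
# exactly, and the numerical solution reaches 0 at t = 1 up to rounding.
```

Real solvers for the threshold models mentioned above must additionally determine the delay itself from threshold conditions as the integration proceeds, which is what makes nonstandard algorithms necessary.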

**Jenny Salls**, University of Nevada

Title: Do Professional Development Practices Impact the Implementation of a Reform Mathematics Curriculum as Measured by Student Achievement?

Date: Tuesday, 5 February 2008

Time: 2:40–3:30pm

Room: MG 107

Abstract: Reform in the teaching of K-12 mathematics has resulted in the development of new curriculum materials emphasizing mathematical concepts and understandings as well as problem solving skills. Implementation of reform involves changing teacher beliefs as well as practices. Reformers suggest such changes require long-term, intensive professional development situated within the context of the school. Providing this professional development requires financial and human resources not typically available to schools. This study sought to identify whether typical professional development provided during textbook adoption does impact the implementation of a reform mathematics curriculum as measured by student achievement.

Fourth and fifth grade teachers in the second year of implementing a reform curriculum were surveyed regarding their professional development experiences during the previous five years. Backward regression identified no teaching practices or professional development experiences that were related to gain in test scores. Additional analyses indicated differences between teachers in Title 1 and non-Title 1 schools, suggesting increased professional development or increased attention to academic standards may support implementation of a reform mathematics curriculum.

**Jessica Strowbridge**, Oregon State University

Title: Middle School Teachers’ Formative Use of a Feedback Guide

Date: Monday, 28 January 2008

Time: 3:40–4:30pm

Room: MG 107

Abstract: As part of a professional development program focused on mathematics problem solving, middle school teachers were introduced to a feedback guide intended to help them provide feedback to students and make instructional decisions. The teachers’ use of this feedback guide is the focus of this talk. I will discuss the extent to which teachers use the guide reliably, as well as the evidence of the teachers’ use of the feedback guide to inform follow-up instruction. Although the subjects of the study were middle school teachers, the discussion about instructional planning has implications for all levels of instruction.

**Robert D. Guy**, UC Davis

Title: Modeling Fibrin Gel Formation: Continuous to Discrete

Date: Tuesday, 18 December 2007

Time: 2:00–3:00pm

Room: MG 115

Abstract: Hemostasis is the normal physiological response to blood vessel injury and is essential to maintaining the integrity of the vascular system. It consists of two interacting processes: platelet aggregation and coagulation. The first involves cell-cell adhesion resulting in a platelet aggregate, and the second involves an enzyme network that leads to the formation of a fibrin gel. Though both processes contribute to the formation of blood clots, those formed at high shear rates are composed primarily of platelets and clots formed at low shear rates are composed predominantly of fibrin gel. In order to understand this phenomenon, a simple mathematical model of chemically-induced monomer production, polymerization, and gelation under shear flow is presented. The model is used to explore how the shear rate and other parameters control the formation of fibrin gel. The results show that the thrombin inhibition rate, the gel permeability, and the shear rate are key parameters in regulating the height of the clot. Experiments show that the gel permeability depends on the chemical environment in which it was made. However, the reasons for these structural differences are unclear. Discrete, Monte Carlo simulations of fibrin polymerization are used to explore what factors determine the microstructure of the gel.

**Rosemary A. Renaut**, Arizona State University

Title: Determining the Regularization Parameters for the Solution of Ill-posed Inverse Problems

Date: Friday, 30 November 2007

Time: 2:40–3:30pm

Room: MG 139

Abstract: Determining the solution of some overdetermined systems of equations Ax = b, A ∈ R^(m×n), x ∈ R^n and b ∈ R^m, may not be a well-posed problem. Specifically, this means that in some cases small changes in the right hand side vector b can lead to relatively larger changes in the solution vector x. Problems for which this occurs are called ill-posed. For example the deblurring of an image or the restoration of a signal from its blurred and noisy data typically yields an ill-posed problem. In such cases, a standard approach is to include a regularization term which constrains the obtained solution with respect to some expected characteristics of the solution. This approach, however, raises a new question on the relative weights of the regularization term and the measure of how well the obtained x fits the system of equations. In this talk, I will illustrate the problem of ill-posedness for signal restoration, and show how the solution obtained depends on the regularization term and its relative weight. I will review typical approaches that have been used for finding the weighting of the regularization, the regularization parameter, such as the L-curve and cross-validation methods. I will also then introduce a new method, based on a technique introduced by Mead (2007) in which the regularization weighting may be found assuming a statistical result. This yields an optimization problem using the observation that the cost functional follows a χ2 distribution with n degrees of freedom, where n is the dimension of the data space. I will discuss the development of an algorithm which uses this result, and also provides best possible confidence intervals on the parameter estimates, given the covariance structure on the data. Experiments to show the validity of the new model, and a practical application from seismic signal restoration will be presented.
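The sensitivity that defines ill-posedness can be seen in a toy computation. For a diagonal system A = diag(s) the Tikhonov-regularized solution has the closed form x_i = s_i b_i / (s_i^2 + λ); in the illustrative numbers below, a perturbation of 10^-6 in the data moves the unregularized solution by 10^-3, while a modest regularization weight suppresses the amplification.

```python
# Tikhonov regularization for a diagonal system A = diag(s): the minimizer of
# ||Ax - b||^2 + lam*||x||^2 decouples into x_i = s_i*b_i/(s_i^2 + lam).
# All numbers are illustrative.
def tikhonov(s, b, lam):
    return [si * bi / (si * si + lam) for si, bi in zip(s, b)]

s = [1.0, 1e-3]               # tiny second singular value: ill-conditioned
b1 = [1.0, 0.0]
b2 = [1.0, 1e-6]              # same data perturbed by 1e-6

# Unregularized: the 1e-6 data change is amplified 1000x in the solution.
dx_naive = abs(tikhonov(s, b1, 0.0)[1] - tikhonov(s, b2, 0.0)[1])   # = 1e-3
# With a small penalty the same perturbation moves the solution far less.
dx_reg = abs(tikhonov(s, b1, 1e-4)[1] - tikhonov(s, b2, 1e-4)[1])   # ~ 1e-5
```

The price of this stability is bias: the penalty also pulls the noise-free components away from the true solution, which is exactly the trade-off the regularization parameter must balance.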

## Schedule for 2006–2007

**Rama Mishra**, Boise State University

Title: Polynomial Knot theory

Date: Friday, 27 April 2007

Time: 3:40pm

Room: MG 106

Abstract: Polynomials are the easiest functions to work with. If we have a space curve parametrized by polynomials which is an embedding of the real numbers in 3-space then its one point compactification will be a smooth embedding of the unit circle in the unit three sphere which is nothing but a knot in the classical sense. On the other hand it can easily be seen that if we take an open knot K, then, up to equivalence, we can find a polynomial embedding from the real numbers to 3-space that can represent K. A knot represented by a polynomial embedding is referred to as a polynomial knot. Polynomial representations for equivalent knots are connected by a one parameter family of polynomial embeddings. Thus, there is a bijection between equivalence classes of knots and equivalence classes of polynomial knots. In this talk we show some estimates on the degree of the polynomials to represent a given knot type. We also discuss that polynomial knot theory may be employed to compute some known knot invariants.

**Nathan Geer**, Georgia Institute Of Technology

Title: Multivariable quantum invariants of links arising from Lie superalgebras

Date: Friday, 9 March 2007

Time: 3:40pm

Room: MG 139

Abstract: There are deep connections between quantum algebra and knot theory. Every representation of a semisimple Lie algebra gives rise to a quantum group invariant of knots. The Jones, Kauffman, and HOMFLY knot invariants are all examples of such invariants. Invariants arising from Lie algebras can be extended to Lie superalgebras. These new invariants are more powerful than invariants arising from Lie algebras and have interesting new properties. In this talk I will speak about multivariable invariants of links arising from finite dimensional modules of Lie superalgebra of classical type. In particular, I will start with a gentle introduction into knot theory. Then I will give the construction of the standard Reshetikhin-Turaev quantum group invariant of links and discuss how to modify this construction (in the case of Lie superalgebras) in order to define a non-trivial invariant of links. Finally, I will touch on how these invariants are related to other well known invariants including the multivariable Alexander polynomial and Kashaev’s quantum dilogarithm invariants of links. I plan on making this talk accessible to a general mathematical audience.

**Jens Harlander**, Western Kentucky University

Title: Cells, Collisions, Curvature: an Introduction to Combinatorial Topology

Date: Friday, 2 March 2007

Time: 3:40pm

Room: MG 139

Abstract: In 1993 the Russian mathematician Anton Klyachko observed the following property, which he described as “suitable for a school mathematics tournament”: Given a tessellated 2-sphere, i.e. a subdivision of the surface of the ball into regions, let a car drive around the boundary of each region in an anti-clockwise direction. The cars travel at arbitrary speed, never stop, and visit each point on the boundary infinitely often. Then there must be at least two places on the sphere where complete crashes occur. He used this result to prove the Kervaire Conjecture for torsion-free groups (which had been open for 30 years). In my talk I will discuss Klyachko’s Car Crash Lemma and other properties of the 2-sphere and give applications to Combinatorial Topology and Group Theory.

**Alexander Felshtyn**, Boise State University

Title: Periodic orbits and Knots

Date: Friday, 23 February 2007

Time: 3:40pm

Room: MG 139

Abstract: The lecture will discuss several interesting phenomena (topological, algebraic, analytic and arithmetic) which are observed in the study of periodic orbits of a dynamical system. The main topics are:

1. Poincare-Hopf theorem and Euler characteristic.
2. Lorenz knots.
3. Sharkovsky ordering. Period 3 implies Chaos. Julia sets.
4. Reidemeister torsion, dynamical zeta functions and counting periodic orbits.

**Andreas Zastrow**, University of Gdansk

Title: Generalized covering spaces

Date: Monday, 29 January 2007

Time: 3:40pm

Room: MG 139

Abstract: This talk will be devoted to presenting a concept for generalizing the theory of covering spaces, which Hanspeter Fischer (Ball State University, Muncie, Indiana) and I have been working on in recent years. The classical covering space has proven to be a very useful concept for semilocally simply connected spaces. It allows one to present a space as a quotient of a simply connected space and gives a natural presentation of its fundamental group. Our covering spaces are constructed with the idea of rescuing these properties for a wider class of spaces. Provided that the natural homomorphism from the fundamental group into the shape group is an embedding, we obtain a simply connected universal covering space X’ together with a natural projection p from X’ to X such that the group of p-equivariant autohomeomorphisms is naturally isomorphic to the fundamental group of X. However, p has weaker properties than in the classical case, although it still has the path- and homotopy-lifting properties. Our work also covers other aspects, such as a universal property, intermediate covering spaces, and the weakening of the criteria when the fundamental group is countable or the base space is first countable. These results, and the particular phenomena and difficulties of this covering construction, will be illustrated with a number of examples. If time permits, the talk will compare our concept with some of the other (recent and less recent) attempts at generalizing covering spaces that I am aware of.

**Andres Caicedo**, California Institute of Technology

Title: Point set topology and determinacy

Date: Thursday, 16 November 2006

Time: 12:40pm

Room: MG 108

Abstract: Determinacy is the assertion that all infinite perfect information games on integers are determined. In these games, players I and II alternate playing integers, and an analysis of the sequences so obtained decides the winner. Such a game is determined if either player has a winning strategy, a way of choosing its integers that guarantees victory. Continuing work of A. Hogan, we study the structure of topological spaces under the assumption of determinacy, with an emphasis on metric spaces of the least well-ordered uncountable size.

**Razvan Gelca**, Texas Tech University

Title: On the physics of the Jones polynomial of a knot

Date: Friday, 3 November 2006

Time: 3:40pm

Room: MG 108

Abstract: The Jones polynomial is a knot invariant that is easy to compute but difficult to study. It has profound links to various areas of physics and geometry, among which: quantum field theory, statistical mechanics, and quantum groups. In the present talk we will discuss a surprising relation with the Weyl quantization procedure.

**Shawn Martin**, Sandia National Laboratories

Title: Molecular Design Using Chemical Fragments

Date: Thursday, 19 October 2006

Time: 12:40pm

Room: MG 108

Abstract: I will describe a mathematical framework developed for the design of molecular structures with desired properties. This method uses fragments of molecular graphs to predict chemical properties. Linear Diophantine equations with inequality constraints are then used to re-organize the fragments into novel molecular structures. The method has been previously applied to problems in drug and materials design, including LFA-1/ICAM-1 inhibitory peptides, linear homopolymers, and hydrofluoroether foam blowing agents. I will provide a complete description of the method, including a new approach to overcome previous limitations due to combinatorial complexity. The new approach uses the Fincke-Pohst algorithm for lattice enumeration, implemented using the PARI/GP computer algebra library.

**Leming Qu**, Boise State University

Title: Determination of regularization parameter using L-curve by the LARS-LASSO algorithm

Date: Friday, 15 September 2006

Time: 3:40pm

Room: MG 115

Abstract: Regularization is a common technique to obtain reasonable solutions to ill-posed problems. In Tikhonov regularization, both the data-fitting and the penalty terms are in the L2 norm. The L-curve is a plot of the size of the regularized solution versus the size of the corresponding residual for all valid regularization parameters. It is a useful tool for determining a suitable value of the regularization parameter in Tikhonov regularization. LASSO replaces the L2 norm by the L1 norm for the penalty term. The LARS algorithm computes the whole path of the LASSO with a computational complexity of the same magnitude as ordinary least squares. Thus, the L-curve for LASSO can be obtained very efficiently by the LARS-LASSO algorithm. The corner of the L-curve is chosen as the value of the regularization parameter. We compare the L-curve method with the existing cross-validation method. The simulation suggests a better performance for the L-curve method.
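The trade-off behind the L-curve can be sketched directly. The toy below uses Tikhonov regularization on a diagonal model problem (standing in for the LASSO/LARS path purely to keep the sketch self-contained): each regularization weight yields one point (residual norm, solution norm), and sweeping the weight traces the curve whose corner balances the two terms.

```python
import math

# One L-curve point per regularization weight: for a diagonal model problem
# A = diag(s), the Tikhonov solution is x_i = s_i*b_i/(s_i^2 + lam), and we
# record (residual norm ||Ax - b||, solution norm ||x||).  Illustrative only.
def lcurve_points(s, b, lams):
    pts = []
    for lam in lams:
        x = [si * bi / (si * si + lam) for si, bi in zip(s, b)]
        res = math.sqrt(sum((si * xi - bi) ** 2
                            for si, xi, bi in zip(s, x, b)))
        sol = math.sqrt(sum(xi * xi for xi in x))
        pts.append((res, sol))
    return pts

s = [1.0, 0.5, 1e-2, 1e-3]                    # decaying singular values
b = [1.0, 0.5, 1e-2 + 1e-4, 1e-3 + 1e-4]      # exact data for x = 1, plus noise
pts = lcurve_points(s, b, [10.0 ** k for k in range(-8, 1)])
# Sweeping lam from small to large trades a growing residual norm for a
# shrinking solution norm; the corner of the curve balances the two.
```

Plotted on log-log axes these points form the characteristic "L" shape, and the corner marks the weight at which further regularization starts discarding signal rather than noise.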

## Schedule for 2005–2006

**Liljana Babinkostova**, Boise State University

Title: Selection principles and infinite games

Date: Monday, 1 May 2006

Time: 2:40pm

Room: MG 106

Abstract: Cantor’s diagonal argument is one of the classical tools of set theory. In classical literature several measure-like properties, basis properties or covering properties have been defined in terms of diagonalization processes. The area of selection principles unifies these studies by showing that each of these properties can be characterized by a typical diagonalization process, called a selection principle. Selection principles have natural infinite games associated with them. These games are powerful tools for developing the theory of these selection principles. In this talk we survey the selection principles S1(A,B), Sfin(A,B) and Sc(A,B) and their associated games. We show how they are related to basis properties, measure like properties, and also to Lebesgue’s covering dimension. We present some recent results in connection with the questions whether the Sierpinski basis property implies the Rothberger property, whether the product of a strictly o-bounded group and an o-bounded group is o-bounded and when finite powers of Haver spaces are Haver spaces.

**Scott MacLachlan**, University of Minnesota

Title: A Variational Approach to Upscaling Heterogeneous Media

Date: Monday, 24 April 2006

Time: 11:40am

Room: MG 121

Abstract: Sufficient resolution of fine-scale variations in material properties is often needed to achieve the high levels of accuracy demanded of computational simulation in biological and geophysical applications. In many cases, this variation is resolved on a scale that is finer than is practical to use for computation, requiring mathematical tools for coarsening (or upscaling) the medium or the model. The mathematical tools of homogenization address the question of determining effective properties of the medium on a coarser scale, but are, in general, not justified for media that arise in nature. In this talk, I discuss a new approach for upscaling PDE models with heterogeneous media based on variational principles. The variational framework used is based on that of Galerkin finite element discretizations and is closely related to that used in robust multilevel solvers, such as multigrid. As in robust geometric and algebraic multigrid methods, the coarsening procedure is induced by the fine-scale model itself. In this way, we construct a hierarchy of models that resolve the effects of the fine-scale structure at multiple scales. This research is in collaboration with J. David Moulton from Los Alamos National Laboratory.

**Gary Gruenhage**

Title: The double arrow space

(Canceled)

**Stanislav Jabuka**, University of Nevada

Title: Smooth 4-manifolds and Heegaard Floer homology

Date: Friday, 14 April 2006

Time: 3:40pm

Room: MG 107

Abstract: This talk is a survey of important recent results from the theory of smooth 4-manifolds. Dimension 4 lives on the threshold between the low dimensions (1, 2, 3) and the higher dimensions (5 and above) and as such exhibits phenomena not encountered in other dimensions. This makes the study of 4-manifolds particularly cumbersome, and the main tools for studying them, invariants of the smooth structure, have been very hard to construct. One such invariant, Heegaard Floer homology, was discovered in 2001 by P. Ozsvath and Z. Szabo. It is the first example of a 3+1 dimensional TQFT and holds great potential to answer questions which have not been accessible via previously existing theories.

**Alexander Felshtyn**, Boise State University

Title: Dynamical Zeta Functions, Nielsen Theory and Reidemeister Torsion

Date: Friday, 7 April 2006

Time: 3:40pm

Room: MG 107

Abstract: The study of dynamical zeta functions is part of the theory of dynamical systems, but is also intimately related to algebraic geometry, number theory, topology and statistical mechanics. In the talk I will discuss the Reidemeister and Nielsen zeta functions. These zeta functions count periodic points of dynamical systems in the presence of fundamental group. Arithmetical congruences for Reidemeister numbers will be described. I will explain how dynamical zeta functions give rise to the Reidemeister torsion. This is an important topological invariant, which has useful applications in topology, quantum field theory and dynamical systems. The connection between symplectic Floer homology for surfaces and Nielsen fixed point theory will be described.

**Boaz Tsaban**, Weizmann Institute of Science

Title: A taste of infinite-combinatorial real analysis

Date: Tuesday, 21 March 2006

Time: 2:40pm

Room: MG 107

Abstract: Cantor’s diagonalization argument was invented and used to obtain a beautifully elegant proof of the existence of transcendental real numbers. Since then, the diagonalization method went a long way, and formed the field of set theory. There are still questions about the real line that are best treated using this approach. We give an overview of a subfield of set theory dealing with the real line from this point of view. The rapid development of this field in recent years owes much to the works of the mathematicians here at Boise. We will limit ourselves to a selected choice of topics, and introduce, step by step, all that is needed in order to open a window to this fascinating field of mathematics.

**Lin Wang**, University of Victoria

Title: Competition in the chemostat

Date: Tuesday, 28 February 2006

Time: 2:40pm

Room: MG 107

Abstract: In this talk, a chemostat model with general nonmonotone response functions is considered. The nutrient conversion process involves time delay. It is shown that under certain conditions, when several species with differential removal rates compete in the chemostat for a single resource that is allowed to be inhibitory at high concentrations, the competitive exclusion principle holds. In addition, a local stability analysis is provided that includes sufficient conditions for the bistability of the single species survival equilibrium and the washout equilibrium, thus showing initial condition dependent outcome is possible. Some related questions suitable for undergraduate students and graduate students will also be presented.

**Junling Ma**, McMaster University

Title: Why does influenza come back every winter?

Date: Tuesday, 21 February 2006

Time: 2:40pm

Room: MG 107

Abstract: A key characteristic of Influenza epidemics is that they occur in the winter. Traditionally, this seasonality is thought to arise from seasonal changes in transmission rates. However, fitting a seasonally forced transmission model to influenza mortality time series reveals that the periodic introduction of new flu variants may also play a fundamental role. In fact, we can fit the mortality curve very well with no seasonal variation in transmission rates. In this talk, we will see that flu-like cyclic dynamics can emerge from the coupling of the epidemic process (described by a deterministic compartmental model) and the viral mutation process (described by a nonhomogeneous Poisson process). While not required to generate periodicity, seasonal forcing ensures that the average period between epidemics is exactly one year. The results that I will describe suggest a variety of ways to develop tractable mathematical models that can further increase our understanding of influenza dynamics and evolution.

**Grady Wright**, University of Utah

Title: Recent developments in radial basis functions interpolation with applications to the geosciences

Date: Friday, 17 February 2006

Time: 2:40pm

Room: MG 106

Abstract: Radial basis functions (RBFs) are a powerful tool for interpolating/approximating scattered data. They easily generalize to multiple dimensions, handle arbitrarily scattered data, and can be spectrally accurate both for interpolation and for numerically solving partial differential equations (PDEs). Since their discovery in the early 1970s, both the knowledge about RBFs and their range of applications have grown tremendously. Some of these more recent applications include geophysics, neural networks, pattern recognition, and graphics and imaging. We will first review the basic properties of RBF interpolation and briefly discuss some recent computational algorithms for the resulting linear systems. We will then focus on two new RBF approaches for numerically solving PDEs. The first is a spectral collocation method for PDEs arising in climate modeling on the surface of a sphere. The second is on a local finite difference-type technique for PDEs on irregularly shaped domains.
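A minimal one-dimensional sketch of the RBF interpolation the abstract reviews, with a Gaussian kernel and an arbitrarily chosen shape parameter (both choices illustrative): build the interpolation matrix A_jk = φ(|x_j − x_k|), solve for the coefficients, and evaluate the interpolant s(x) = Σ c_k φ(|x − x_k|).

```python
import math

# 1-D radial basis function interpolation with a Gaussian kernel
# phi(r) = exp(-(eps*r)^2); the shape parameter eps is an arbitrary choice.
# Interpolation means the resulting s(x) reproduces the data at the nodes.
def rbf_interpolant(nodes, values, eps=1.0):
    phi = lambda r: math.exp(-(eps * r) ** 2)
    n = len(nodes)
    # Augmented interpolation system [A | f], solved by Gaussian elimination
    # with partial pivoting (fine for a tiny dense system like this one).
    A = [[phi(abs(xj - xk)) for xk in nodes] + [fj]
         for xj, fj in zip(nodes, values)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= m * A[col][c]
    coeff = [0.0] * n
    for r in range(n - 1, -1, -1):
        coeff[r] = (A[r][n] - sum(A[r][c] * coeff[c]
                                  for c in range(r + 1, n))) / A[r][r]
    return lambda x: sum(ck * phi(abs(x - xk))
                         for ck, xk in zip(coeff, nodes))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]          # arbitrarily scattered nodes
s = rbf_interpolant(xs, [x * x for x in xs])   # interpolate f(x) = x^2
```

The same construction carries over verbatim to scattered nodes in any dimension, since the kernel depends only on distances; the practical issues are the conditioning of A and the choice of eps.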

**Jozef Przytycki**, George Washington University

Title: From Khovanov homology to Hochschild homology and back in 50 Minutes

Date: Friday, 4 November 2005

Time: 3:40pm

Room: MG 115

Abstract: We start this talk by describing the Tait construction of link diagrams from signed plane graphs, and conversely, the construction of signed plane graphs from link diagrams. In 1999 M. Khovanov introduced a homology theory which categorifies the Jones polynomial of links. We use the Tait construction to argue that one can understand Khovanov homology of links by describing first graph homology. Hochschild homology is the older theory, developed in 1945 to analyze rings and algebras. We show that Khovanov homology and Hochschild homology share common structure. In fact they overlap: Khovanov homology of a (2,n)-torus link can be interpreted as Hochschild homology of the algebra underlying the Khovanov homology. In the classical case of Khovanov homology we prove the concrete connection. In the general case of Khovanov-Rozansky sl(n) homology and its deformations we conjecture the connection. The best framework to explore our ideas is to use a comultiplication free version of Khovanov homology for graphs developed by Y. Rong and L. Helme-Guizon. In this framework we prove that for any unital algebra A the Hochschild homology of A is isomorphic to graph homology over A of a polygon. We expect that this observation (that two theories meet) will encourage a flow of ideas in both directions between Hochschild/cyclic and Khovanov homology theories.

**Peter Zvengrowski**, University of Calgary

Title: Riemann and his Zeta Function

Date: Friday, 7 October 2005

Time: 3:40pm

Room: MG 108

Abstract: This talk is intended as an elementary introduction to the Riemann zeta function and the related Riemann Hypothesis. Much of it will be from an historical perspective, e.g. we will attempt to ask and (possibly) answer questions such as “How much did Riemann actually know about the RH?” and “Did he consider it very important?” The developments since Riemann’s 1859 paper will also be discussed, as well as some recent research of the author.

**Fan Chung**, UC San Diego

Title: Random Graphs and Internet Graphs

Date: Monday, 19 September 2005

Time: 10:40am

Room: MEC 106

Abstract: Many very large graphs that arise in Internet and telecommunications applications share various properties with random graphs (while some differences remain). We will discuss some recent developments and mention a number of problems and results in random graphs and algorithmic design suggested by the study of these “massive” graphs.

**Ron Graham**, UC San Diego

Title: Searching for the Shortest Network

Date: Monday, 19 September 2005

Time: 3:40pm

Room: MP 106

Abstract: Suppose you are given some set of cities and you would like to connect them all together with a network with the shortest possible length. How hard is it to find such a short network? This classical problem has challenged mathematicians for nearly two centuries, and today has great relevance in such diverse areas as telecommunication networks, design of VLSI chips and molecular phylogenetics. In this talk we will summarize past accomplishments, present activities and future challenges in this fascinating topic.

## Schedule for 2004–2005

**Christina Hayes**, Montana State University

Title: A Generic Property of the Infinite Population Genetic Algorithm

Date: Monday, 2 May 2005

Time: 3:40pm

Room: MG 108

Abstract: Genetic Algorithms (GAs) are a class of stochastic search algorithms based on the idea of natural selection. I will give a brief introduction to GAs, as well as a dynamical systems model of GAs acting on an infinite population, in which an iteration of the algorithm corresponds to an iteration of a map G. The map G is a composition of a selection operator and a mixing operator, where the latter models the effects of both mutation and crossover. We examine the hyperbolicity of fixed points of this model and show that for a typical (generic) mixing operator all the fixed points are hyperbolic.
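The map G can be illustrated on the smallest possible example: with two genotypes, the infinite-population state is a point on the 1-simplex, selection reweights by fitness, and mixing applies a mutation matrix. The fitness values and mutation rate below are invented for illustration; for these numbers the fixed point the iteration finds is attracting, hence hyperbolic.

```python
# Infinite-population GA map G = mixing after selection, on two genotypes.
# Fitnesses (2, 1) and mutation rate 0.1 are illustrative choices only.
def G(p, fitness=(2.0, 1.0), mu=0.1):
    # Selection operator: reweight the population by fitness, renormalize.
    sel = [f * pi for f, pi in zip(fitness, p)]
    tot = sum(sel)
    sel = [x / tot for x in sel]
    # Mixing operator: here just a symmetric mutation matrix.
    M = [[1 - mu, mu], [mu, 1 - mu]]
    return [M[0][0] * sel[0] + M[0][1] * sel[1],
            M[1][0] * sel[0] + M[1][1] * sel[1]]

p = [0.5, 0.5]
for _ in range(200):      # iterate G; the orbit converges to a fixed point
    p = G(p)
```

For this choice of parameters the fixed point solves a quadratic in p[0] and the derivative of the one-dimensional map there has absolute value below 1, which is the hyperbolicity (here: attraction) the talk studies in general.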

**Xiao-Song Lin**, UC Riverside

Title: An unfolding problem for polygonal arcs in 3-space

Date: Friday, 29 April 2005

Time: 3:40pm

Room: MG 106

Abstract: Motivated by the protein folding problem in molecular biology, we will propose a mathematical problem about the unfolding of a certain kind of polygonal arc in 3-space. We will also discuss the possibility of extending the method developed in the recent solution of the classical carpenter’s ruler problem in the plane to this unfolding problem.

**Michael McLendon**, Washington College

Title: Studying 3-manifolds using knots

Date: Friday, 22 April 2005

Time: 3:40pm

Room: MG 108

Abstract: The skein module of a 3-manifold is an algebraic object formed by the types of knots and links that the manifold can contain. In the words of Jozef Przytycki, skein theory is “algebraic topology based on knots.” We will look at the skein module of a 3-manifold when the manifold is defined via a Heegaard splitting, M=H_0 U_F H_1. Here H_0 and H_1 are two solid handlebodies glued together to form M along their common boundary surface F. Using this Heegaard splitting of M in conjunction with the skein modules of H_0, H_1, and F, we can define an indexed list of modules called the Hochschild homology of the Heegaard splitting. The zeroth Hochschild homology recovers the information in the skein module of the manifold and the higher Hochschild homology modules may provide additional information about the manifold.

**Stefan Geschke**, Freie Universitat Berlin

Title: Convex Geometry, Continuous Colorings and Metamathematics

Date: Tuesday, 15 March 2005

Time: 2:40pm

Room: MG 118

Abstract: I will give an overview of a number of results obtained by Kojman, Kubis, Schipperus, and myself concerning certain cardinal invariants arising in convex geometry. For a subset S of a real vector space we consider the *convexity number* γ(S), the least cardinality of a family F of convex subsets of S which covers S. We are mainly interested in uncountable convexity numbers of closed subsets of R^n. In R^1 the situation is simple: for every closed subset S of R^1, either γ(S) is countable or there is a nonempty perfect subset P of S such that every convex subset of S intersects P in at most 2 points. In the latter case γ(S)=|R|. The situation is more complicated in R^2. For every closed subset S of R^2 exactly one of the following two statements holds: (1) There is a nonempty perfect subset P of S such that every convex subset of S intersects P in at most 3 points (and hence γ(S)=|R|). (2) There is a forcing extension of the set-theoretic universe in which γ(S)<|R| (and hence there is no set P as in (1)). The convexity numbers of closed sets satisfying (2) turn out to have a combinatorial characterization as so-called *homogeneity numbers* of continuous pair colorings on the Baire space N^N. The metamathematical issues involved in statement (2) will be discussed briefly. The dichotomy for closed subsets of R^2 cannot be generalized to higher dimensions. I will mention some results that are provable in higher dimensions.

**Bernhard Koenig**, Boise State University

Title: More than the sum of its parts

Date: Thursday, 10 March 2005

Time: 2:40pm

Room: MG 108

Abstract: We consider a couple of interesting phenomena in combinatorial set theory concerning trees (a tree is a partial ordering with the property that the predecessors of every point form a linear wellordering). We ask the following question: assume we are given two trees S and T such that all proper initial segments of S are isomorphic to proper initial segments of T and vice versa (we call S and T “locally isomorphic” in this case). Does this mean that S and T are isomorphic? It seems paradoxical to have two trees S and T that are locally isomorphic but not isomorphic, since this would mean that they are constructed using the very same building blocks, yet they would be different. We present a couple of results (some are classical, some are more recent results of the speaker) that show that the question can have different answers, depending on the height of the trees and on the axioms of set theory.

**Andres Caicedo**, Kurt Goedel Research Center

Title: Stationary subsets of ω_{1} and models of set theory

Date: Tuesday, 22 February 2005

Time: 1:40pm

Room: MG 113

Abstract: A set X has size ω_{1} if it is uncountable, and every infinite subset of X has either the size of the natural numbers, or else it has the size of X, i.e., the version of the Continuum Hypothesis for X holds. Fixing such a set X, among its uncountable subsets a notion of ‘size’ can be introduced from which a rich combinatorial theory can be developed. A stationary subset of X is one that is ‘medium sized’ with respect to this notion. A model of set theory is a collection of sets that satisfies the axioms of set theory (in the same way that ‘a model of group theory’ is a collection M of objects that satisfies the axioms of group theory, i.e., M is a group). Given a model of set theory V and a submodel M, some element of M may be ‘stationary in M’ but not be ‘stationary in V’. We present two results that investigate these notions. The first says that there is always some preservation, i.e., certain stationary sets in M are stationary in V. The second studies restrictions on what M can be if an additional assumption (a so-called forcing axiom, related to preservation of stationary sets) is assumed of both M and V.

**Amir Togha**, George Washington University

Title: What initial segments do automorphisms fix?

Date: Thursday, 17 February 2005

Time: 2:00pm

Room: MG 108

Abstract: An automorphism of a model M is simply a permutation of M’s universe that preserves the structure of M. In this talk we discuss certain models of Arithmetic and Set Theory and their automorphisms. The notion of “initial segment” will be introduced for these models and the properties of the *largest* initial segments that are fixed by automorphisms will be investigated. The goal is to give a characterization of such initial segments for sufficiently rich models of Set Theory.

**Barbara Zubik-Kowal**, Boise State University

Title: How do elements of semi-discrete systems affect convergence of waveform relaxation?

Date: Thursday, 27 January 2005

Time: 1:40pm

Room: MG 108

Abstract: Waveform relaxation is an iterative method. It has the advantage that it can be applied in parallel computing environments. A further advantage is that the implementation of implicit ODE solvers applied to the resulting waveform relaxation schemes is straightforward: no algebraic systems need to be solved at any time step, unlike the situation where waveform relaxation is not applied. These advantages are useful when solving both linear and nonlinear differential systems. We can replace nonlinear systems of ODEs by sequences of linear problems which can be effectively integrated by A(α)-stable backward differentiation methods or A-stable implicit Runge-Kutta methods. This allows much larger time steps than those used for explicit methods. In this talk we present a new approach to the analysis of convergence of waveform relaxation, in which we investigate the magnitudes of the elements of the differential systems. New results about the relations between these elements and the convergence of waveform relaxation are presented. Our theoretical results are new for both delay and non-delay linear and nonlinear differential equations, and are confirmed by numerical experiments.
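
The decoupling mentioned in the abstract can be illustrated with a small sketch. The example below is illustrative, not from the talk: it applies Jacobi-type waveform relaxation with an implicit Euler time discretization to a hypothetical linear system x' = Ax, splitting A into its diagonal part D and off-diagonal remainder R, so each sweep performs only decoupled scalar updates (no coupled algebraic system at any time step).

```python
import numpy as np

# Illustrative 2x2 linear system x' = A x on [0, T], split A = D + R.
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
D = np.diag(np.diag(A))
R = A - D

T, n_steps = 1.0, 100
h = T / n_steps
x0 = np.array([1.0, 0.5])

def sweep(x_prev):
    """One waveform-relaxation sweep over the whole time window:
    solve x' = D x + R x_prev with implicit Euler; D is diagonal,
    so each time step is a scalar division, not a linear solve."""
    x = np.empty_like(x_prev)
    x[0] = x0
    d = np.diag(D)
    for n in range(n_steps):
        x[n + 1] = (x[n] + h * (R @ x_prev[n + 1])) / (1.0 - h * d)
    return x

# Initial waveform: constant in time; then iterate sweeps to convergence.
x = np.tile(x0, (n_steps + 1, 1))
for _ in range(50):
    x = sweep(x)

# Reference: implicit Euler applied to the full coupled system.
ref = np.empty_like(x)
ref[0] = x0
M = np.eye(2) - h * A
for n in range(n_steps):
    ref[n + 1] = np.linalg.solve(M, ref[n])
```

The fixed point of the sweep coincides with the implicit Euler solution of the full coupled system, and on a finite window the iteration converges superlinearly, which is what makes the parallel, decoupled formulation attractive.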

**Justin Moore**, Boise State University

Title: The L space problem, the oscillation function, and its applications

Date: Friday, 19 November 2004

Time: 3:40pm

Room: MG 106

Abstract: This talk will introduce the oscillation function and some of its applications in mathematics. A recent development in this area has been the use of this function to define a nonseparable topological space with no uncountable discrete subspaces (an L space), answering a problem asked by Hajnal and Juhasz in 1968. Examples of such spaces have long been known to be consistent with the usual axioms of mathematics (a Souslin Line is an L space), but this construction requires no additional axiomatic assumptions. The L space also has an interpretation as a coloring of a bipartite graph. In particular, there is an edge coloring of an uncountable complete bipartite graph with infinitely many colors so that all colors appear on any uncountable bipartite subgraph (here uncountable bipartite means that both “camps” of vertices in the graph are uncountable). The oscillation function has also had applications to the continuum problem. Forcing Axioms, certain strengthenings of Baire’s Category Theorem, often impose considerable restrictions on the cardinality of the real line and frequently imply that it is the second uncountable cardinal. Two of these proofs make crucial use of the oscillation function and its properties.

**Cynthia Hernon**, Northern Arizona University

Title: Teacher Exploration of Instructional Strategies to Promote Algebraic Thinking

Date: Monday, 15 November 2004

Time: 3:00pm

Room: MG 108

Abstract: The research study investigates the influence of teacher participation in a graduate course fostering the development of algebraic thinking for K-8 students on teacher understanding of the nature of algebraic thinking and on the incorporation of the teaching of algebraic thinking, guided by student discourse, into practice. This study explores how three elementary teachers introduce the algebraic concepts of equivalence, relational thinking, and the development and justification of conjectures to first and third grade students. The research is framed against the examination of teacher change in practice within the context of a professional development experience. The qualitative case study of these three elementary teachers is focused on the personal, situational, and institutional factors that are conducive to effecting this change in practice.

The constant comparative analysis of the data collected from interviews, classroom observations, journal reflections, and survey responses revealed six common themes across the cases. All three teachers possess a high level of interest in teaching mathematics, believe that traditional teaching strategies are not working for their students, demonstrate ambiguity about the definition of algebraic thinking, cite a lack of curriculum resources to support the teaching of algebraic thinking, desire collaboration with like-minded teachers, and are committed to continuing the teaching of algebraic thinking. These teachers successfully added to their mathematics content knowledge and either incorporated new pedagogy into their teaching or refined an existing constructivist approach to teaching and learning as they integrated the teaching of algebraic thinking into the classroom.

**Marta Asaeda**, University of Iowa

Title: Introduction to Khovanov homology and Recent Developments

Date: Monday, 18 October 2004

Time: 3:40pm

Room: MG 108

Abstract: I will give an introduction to Khovanov homology: a homology theory for links whose graded Euler characteristic is the Jones polynomial. I will further give an overview of recent developments.

**Matthias K. Gobbert**, University of Maryland Baltimore County

Title: Numerical Simulations of Process Models in Microelectronics Manufacturing on Beowulf Clusters with High-Performance Networks

Date: Friday, 15 October 2004

Time: 3:40pm

Room: MG 139

Abstract: Production steps in the manufacturing of microelectronic devices involve gas flow at a wide range of pressures. We develop a kinetic transport and reaction model (KTRM) based on a system of time-dependent linear Boltzmann equations. A deterministic numerical solution for three-dimensional kinetic models requires the discretization of the three-dimensional velocity space, the three-dimensional position space, and time. We design a spectral Galerkin method to discretize the velocity space by specially chosen basis functions. For the spatial discretization, we use the discontinuous Galerkin finite element method. As an application example, we simulate chemical vapor deposition at the feature scale in two and three spatial dimensions and analyze the effect of pressure. Finally, we present parallel performance results which indicate that the implementation of the method possesses excellent scalability on a Beowulf cluster with a high-performance Myrinet network. I will describe the hardware of this system in some detail and discuss general issues involved in the assessment of performance in parallel computing.

## Schedule for 2003–2004

**Zbigniew Bartoszewski**, Gdansk University of Technology

Title: The existence and uniqueness of solutions and convergence of iterative methods for general differential-algebraic systems

Date: Friday, 7 May 2004

Time: 2:40pm

Room: MG 115

Abstract: The talk will be devoted to quite general classes of integro-algebraic and differential-algebraic systems. We do not require the operators involved in the definitions of the systems under consideration to be of Volterra type. The existence and uniqueness of solutions to such systems, as well as the convergence of different iterative processes for their solution, including waveform relaxation methods, will be discussed. Constructive sufficient conditions will be given under which the solutions exist and the iterative processes are convergent. Special attention will be paid to quasi-linear systems of differential-algebraic equations. A number of different iterative processes for their solution will be considered. The relationship between the spectral radii of the matrices that define the corresponding majorizing iterative processes will be provided. The talk will be concluded with numerical examples which illustrate how these spectral radii influence the rate of convergence of the iterative processes.

**Jodi L. Mead**, Boise State University

Title: Estimating Parameters in Mathematical Models

Date: Friday, 30 April 2004

Time: 2:40pm

Room: MG 115

Abstract: Partial differential equations are used by scientists and engineers to model many different physical phenomena. For example, the wave equation (as its name suggests) can describe sound, light, and water waves: d^{2}u/dt^{2} = c^{2} d^{2}u/dx^{2}, where c is the wave speed (wavelength/period), which depends on the type of wave being modeled and the medium through which the wave travels, and u(x,t) is the measure of intensity of the wave at a particular location x and time t. Given the wave speed c and initial and boundary conditions, applied mathematicians are typically concerned with estimating u at some future point in time. On the other hand, geophysicists often send seismic or electromagnetic waves through the Earth’s subsurface, measure the intensity u, and estimate the wave speed c. If they can accurately determine the wave speed, they have learned something about the composition of the Earth’s subsurface. Thus while applied mathematicians are typically concerned with solving the partial differential equation, scientists and engineers are in addition concerned with estimating the parameters in the model. I will discuss some traditional methods for estimating the parameters m in the linear model d=Gm, and introduce an approach I am working on for estimating parameters and their uncertainty when the noise in the data is not necessarily Gaussian.
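
The classical starting point for the linear model d = Gm is ordinary least squares. The snippet below is a minimal sketch with synthetic data; the design matrix, true parameters, and noise level are all illustrative, not from the talk.

```python
import numpy as np

# Least-squares estimation of parameters m in the linear model d = G m,
# with a hypothetical G and Gaussian noise for illustration.
rng = np.random.default_rng(0)
m_true = np.array([2.0, -1.0])
G = rng.normal(size=(50, 2))
d = G @ m_true + 0.01 * rng.normal(size=50)

# Classical estimate: minimize ||d - G m||_2.
m_hat, *_ = np.linalg.lstsq(G, d, rcond=None)
```

Least squares is the natural estimator under Gaussian noise; the talk's point is precisely that other approaches are needed when that assumption fails.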

**Kyungduk Ko**, Texas A&M University

Title: Bayesian wavelet approaches for parameter estimation and change point detection in long memory processes

Date: Thursday, 29 April 2004

Time: 2:40pm

Room: MG 108

Abstract: The main goal of this research is to estimate the model parameters and to detect multiple change points in the long memory parameter of Gaussian ARFIMA(p, d, q) processes. Our approach is Bayesian and inference is done in the wavelet domain. Long memory processes have been widely used in many scientific fields such as economics, finance, computer science, and hydrology. Wavelets, being self-similar, have a strong connection with these processes. The ability of wavelets to simultaneously localize a process in the time and scale domains results in representing many dense variance-covariance matrices of the process in a sparse form. A wavelet-based Bayesian estimation procedure for the parameters of a Gaussian ARFIMA(p, d, q) process is proposed. This entails calculating the exact variance-covariance matrix of the given ARFIMA(p, d, q) process and transforming it into the wavelet domain using the two-dimensional discrete wavelet transform (DWT2). The Metropolis algorithm is used for sampling the model parameters from the posterior distributions. Simulations with different values of the parameters and of the sample size are performed. A real data application to the U.S. GNP data is also reported. Detection and estimation of multiple change points in the long memory parameter is also investigated. Reversible jump MCMC is used for posterior inference. Performances are evaluated on simulated data and on the benchmark Nile river dataset.
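
The Metropolis step used for posterior sampling can be sketched in a few lines. The toy target below is a standard normal log-density, not the ARFIMA wavelet posterior from the talk, and the proposal scale is likewise illustrative.

```python
import numpy as np

# Random-walk Metropolis sampler of the kind used to draw model
# parameters from a posterior; the target here is a toy unnormalized
# log-density (standard normal, up to an additive constant).
def log_target(x):
    return -0.5 * x**2

rng = np.random.default_rng(1)
x, chain = 0.0, []
for _ in range(20000):
    prop = x + 0.8 * rng.normal()
    # Accept with probability min(1, target(prop) / target(x)).
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)

samples = np.array(chain[2000:])  # discard burn-in
```

In the actual procedure the log-target would be the log-posterior of the ARFIMA parameters, evaluated via the sparse wavelet-domain covariance representation.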

**Margaret Kinzel**, Boise State University

Title: Does Studying Other Bases Help? Pre-service Elementary Teachers’ Strategies for Solving Place-Value Tasks

Date: Friday, 16 April 2004

Time: 2:40–3:30pm

Room: MG 115

Abstract: Bases other than ten have long been used in mathematics courses for elementary education majors, the rationale being that studying other bases leads to a deeper understanding of the familiar base-ten system. In spring 2001 the mathematics education research group conducted a small study to evaluate the effectiveness of studying other bases. We conducted interviews with 7 students selected from two sections of MATH 157. Within the interviews, the students were asked to talk about and solve problems in base ten as well as in other bases. In addition, students were asked to convert the “non-decimal” 2.314 in base five to a base-ten numeral. This was a new representation for the students; it is not part of the content of MATH 157. We analyzed the students’ work and postulated two components necessary for a robust understanding of place value. The presentation will use interview data to illustrate the students’ strategies and articulate the components of understanding. Implications for instruction will be discussed.

**Matt Bognar**, University of Iowa

Title: Bayesian Inference for Pairwise Interacting Point Processes

Date: Friday, 2 April 2004

Time: 3:40–4:30pm

Room: MG 115

Abstract: In the past, inference in pairwise interacting point processes has been performed via frequentist methods. However, some frequentist methods can produce severely biased estimates when the data exhibit strong interaction. Furthermore, interval estimates are typically obtained by parametric bootstrap methods, but, in the current setting, the behavior of such estimates is unclear. We propose Bayesian methods for inference in pairwise interacting point processes. The requisite application of Markov chain Monte Carlo (MCMC) techniques is complicated by the existence of an intractable function of the parameters in the likelihood. The acceptance probability in a Metropolis-Hastings algorithm involves the ratio of two likelihoods evaluated at differing parameter values. The intractable functions do not cancel, and hence an intractable ratio must be estimated within each iteration of a Metropolis-Hastings sampler. Our unique implementation involves the use of importance sampling techniques within an MCMC sampler to estimate this intractable ratio. Although computationally costly, the ability to obtain interpretable posterior distributions justifies our Bayesian model-fitting strategy.
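
The trick the abstract describes, estimating an intractable ratio of normalizing constants by importance sampling inside each Metropolis-Hastings iteration, can be sketched on a deliberately simple stand-in model (not a point process): a single observed vector x with unnormalized density q(x; θ) = exp(-θ‖x‖²/2), whose normalizer Z(θ) we pretend is unknown. All parameter values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
theta_true = 2.0
x_obs = rng.normal(0.0, 1.0 / np.sqrt(theta_true), size=n)

def log_q(x, theta):
    """Unnormalized log-density q(x; theta) = exp(-theta * ||x||^2 / 2)."""
    return -0.5 * theta * np.sum(x * x, axis=-1)

theta, chain = 1.0, []
for _ in range(5000):
    prop = abs(theta + 0.2 * rng.normal())  # symmetric reflected proposal
    # Importance sampling: draws from the model at `prop` estimate
    # Z(theta) / Z(prop) as the mean unnormalized density ratio.
    x_is = rng.normal(0.0, 1.0 / np.sqrt(prop), size=(500, n))
    log_w = log_q(x_is, theta) - log_q(x_is, prop)
    log_Z_ratio = np.log(np.mean(np.exp(log_w - log_w.max()))) + log_w.max()
    # MH acceptance with a flat prior on theta > 0: the intractable
    # constants do not cancel, so the estimated ratio enters here.
    log_alpha = log_q(x_obs, prop) - log_q(x_obs, theta) + log_Z_ratio
    if np.log(rng.uniform()) < log_alpha:
        theta = prop
    chain.append(theta)

post = np.array(chain[1000:])  # posterior draws after burn-in
```

As in the abstract, the ratio estimate is recomputed within each iteration, so the chain is a noisy approximation to the exact sampler; for this toy model the posterior concentrates near the true θ.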

**Thomas Fiedler**, Universite Paul Sabatier Toulouse 3

Title: Knot theory for braids

Date: Thursday, 18 March 2004

Time: 2:40–3:30pm

Room: MG 108

Abstract: We give an overview of the conjugacy problem in braid groups. This problem was solved by Garside in 1969 in an algebraic manner, but the solution is of exponential complexity. It is conjectured by Thurston that there should be a solution of polynomial complexity. We will indicate such a solution, obtained in a topological manner.

**Robert B. Lund**, University of Georgia

Title: Markov Chain and Renewal Rates of Convergence

Date: Monday, 8 March 2004

Time: 3:40–4:30pm

Room: MG 118

Abstract: We consider the problem of finding good geometric convergence rates for discrete-time renewal sequences and Markov chains. The goal is to identify an explicit rate bound and first constant that can be computed via minimal information. A general renewal convergence rate is first derived from the hazard rates of the renewal lifetimes. This result is used to obtain renewal convergence rates for lifetimes possessing the new worse than used, new better than used, increasing hazard rate, decreasing hazard rate, and stochastically monotone structures. Attention is then directed to Markov chain convergence issues.

**Justin Moore**, Boise State University

Title: A five element basis for the uncountable linear orders

Date: Friday, 27 February 2004

Time: 2:40–3:30pm

Room: MG 115

Abstract: I will present a recent result in set theory: the class of uncountable linear orders consistently has a five-element basis. That is, there are five uncountable linear orders such that (consistently) any other uncountable linear order contains an isomorphic copy of one of these five. The list has long been known to be minimal; it is provable from the usual axioms of mathematics that any basis must have at least five elements. It is not possible to prove such a result without appealing to additional axioms since, for instance, the Continuum Hypothesis implies that any basis must have as many elements as there are subsets of the reals. The talk will present each of the elements of the basis and discuss some of their properties. Some general remarks will also be made on the method of proof, which can be considered as an infinitary version of Erdos’s probabilistic method.

**Zdzislaw Jackiewicz**, Arizona State University

Title: Construction and Implementation of General Linear Methods for Ordinary Differential Equations

Date: Friday, 20 February 2004

Time: 3:40–4:30pm

Room: MG 106

Abstract: In the first part of this lecture we will give an overview of different approaches to the construction of diagonally implicit multistage integration methods for both nonstiff and stiff systems of ordinary differential equations. The identification of high order methods with appropriate stability properties requires the solution of large systems of nonlinear equations for the coefficients of the methods. For low orders these systems can be generated and solved by symbolic manipulation packages. For high orders the approach to the construction of such methods is based on the computation of the coefficients of the stability function by a variant of the Fourier series method and then solving the resulting large systems of polynomial equations of high degree by least squares minimization. Using these approaches both explicit and implicit methods were constructed up to order eight with good stability properties (Runge-Kutta stability for explicit methods, A-stability and L-stability for implicit methods). In the second part of this talk we will address different issues related to the implementation of general linear methods. They include selection of initial stepsize and starting values, computation of the Nordsieck representation, efficient and reliable estimation of the local discretization errors for nonstiff and stiff equations, step size and order changing strategies, construction of continuous interpolants, and updating the vector of external approximations to the solution. Experiments with variable step, variable order experimental Matlab codes for both nonstiff and stiff differential systems on interesting test problems will be presented and compared with appropriate codes from the Matlab ODE suite. These experiments demonstrate the high potential of diagonally implicit multistage integration methods, especially for stiff systems of differential equations.

**Karen L. Ricciardi**, Bard College

Title: Developing a groundwater remediation system subject to uncertainty

Date: Friday, 16 January 2004

Time: 2:40–3:30pm

Room: MG 118

Abstract: A cost-effective groundwater pump-and-treat remediation design algorithm has been developed that takes into account the uncertainty of the hydraulic conductivity of a given aquifer. The resultant design is subject to both gradient constraints, which are linear with respect to changes in pumping rates, and concentration constraints, which are nonlinear with respect to changes in pumping rates. The uncertainty in the hydraulic conductivity is taken into account in this model by using a multi-scenario approach whereby different hydraulic-conductivity fields, obtained through a representative sampling technique, are analyzed simultaneously. The tunneling method is a novel method of solving global-optimization problems of this form. This method has been modified to efficiently solve this optimization problem. An application to a hypothetical problem demonstrates the efficacy of this approach.

**Xiao-Wen Chang**, McGill University

Title: Numerical computations for GPS based position estimation

Date: Thursday, 11 December 2003

Time: 3:40–4:30pm

Room: MG 108

Abstract: It is now possible to find where you are. The measurements of position come from GPS (the Global Positioning System). GPS is a satellite-based navigation system which transmits signals that allow one to determine the location of GPS receivers. In this talk, it will be shown how numerical linear algebra techniques can be applied to this interesting area. I will use relative positioning (two receivers are used) as an example to show how to use the structures of the mathematical model to design an efficient and numerically reliable least squares algorithm for computing the position estimates. Real data test results will be presented to demonstrate the performance of our algorithm. This is joint work with Professor Chris Paige.

**Uwe Kaiser**, Boise State University

Title: What is going on with the Poincare conjecture?

Date: Wednesday, 3 December 2003

Time: 2:40–3:30pm

Room: MG 121

Abstract: About a year ago the Russian mathematician Grigori Perelman announced a proof of the Geometrization Conjecture. This is concerned with the existence of certain geometric structures on 3-dimensional manifolds. It is known that the Geometrization Conjecture implies the famous Poincare Conjecture, a central problem in topology, open since 1904, and one of the seven Clay Problems. In this talk I will explain the mathematical contents of the Geometrization and Poincare Conjecture, and some ideas of Perelman’s approach.

**Nikos Apostolakis**, Boise State University

Title: Coloring Knots

Date: Wednesday, 19 November 2003

Time: 2:40–3:30pm

Room: MG 120

Abstract: Colorings of knots correspond to representations of the group of the knot into some symmetric group. We will examine such colorings both as a method of distinguishing knots and as representations of 3-dimensional manifolds. All terms will be explained during the talk.

**Karel in ‘t Hout**, Boise State University

Title: Direct methods for estimating Greeks with Monte Carlo

Date: Wednesday, 15 October 2003

Time: 2:40–3:30pm

Room: MG 121

Abstract: After a brief introduction to the pricing of options using Monte Carlo simulation and stochastic differential equations, the talk will focus on estimating the sensitivities, the so-called Greeks, of option prices with respect to parameters that occur in the differential equation, such as the interest rate and the volatility. These quantities play an important role in applications.

## Schedule for 2002–2003

**Vladimir Chernov**, Dartmouth College

Title: Affine Linking Numbers and the Causality Relation for Wave Fronts

Date: Tuesday, 13 May 2003

Time: 3:40–4:30pm

Room: MG 120

Abstract: The linking number is the classical invariant of a pair of knots: the number of intersections of one knot with a surface bounded by the other. We construct affine linking numbers that extend linking numbers to the case where the knots in question do not bound any surface. A CR causality invariant of the pair of fronts of two events is the algebraic number of times the earlier front has passed through the origin of the later front before the later front appeared, and it measures how strongly the earlier front influenced the event that caused the second front. We show that affine linking numbers can be effectively used to calculate the CR causality relation invariant from the current picture of the fronts of two signals without any knowledge of the signal propagation law. We also use them to count the algebraic number of times a front passed through a marked point between two time moments.

**Liljana Babinkostova**, Boise State University

Title: Topological analysis of binary images and its applications in pattern recognition

Date: Friday, 25 April 2003

Time: 2:40–3:30pm

Room: MG 121

Abstract: Pattern recognition is the study of methods and the design of systems to recognize patterns in data. This area has applications in many fields, including image analysis, character recognition, speech analysis, and identification. In this talk we emphasize pattern recognition as classification: deciding if a given input belongs to a given category. The application of topology to image analysis started in the 1970s; terms such as connectivity, boundary, and interior are often encountered in this application. We use the notion of combinatorial homotopy equivalence to classify binary images into different categories, using such concepts as the 0-Betti number, the 1-Betti number, and the Euler characteristic.

**Jaechoul Lee**, University of Georgia

Title: Trends in United States Temperature Extremes

Date: Wednesday, 16 April 2003

Time: 2:40–3:30pm

Room: MG 120

Abstract: In this study, we investigate any linear trend inherent in monthly temperature minimums recorded at Lewiston, ME during the period 1887–2000. A statistical model is developed to quantify any temperature changes. Ordinary least squares estimates are computed with their standard errors under modeling of periodic features and site changes in the temperatures. An extreme value modeling approach based on generalized extreme value distributions is suggested. Maximum likelihood estimates are obtained and compared to the ordinary least squares estimates. A plan for future analysis toward a complete description of the trends in United States temperature extremes is outlined.

**Stefan Pittner**, Northeastern University

Title: Correlation and Predictability: Applications in Manufacturing

Date: Friday, 11 April 2003

Time: 2:40–3:30pm

Room: MG 121

Abstract: The problem of predicting a dependent numerical variable from an independent variable (or several independent variables) is generally approached with methods from approximation theory. In this talk it is demonstrated that establishing the predictability of a numerical variable is, however, mainly a problem of statistics. The importance of the predictability problem is demonstrated with an application in manufacturing process control. It is shown how existing concepts in statistics, such as statistical dependence, the Pearson correlation coefficient, and the Spearman rank correlation coefficient, can be used for the evaluation of the predictability of a numerical variable. The merits and drawbacks of these concepts are discussed. Next, a nonparametric correlation measure called the g-correlation coefficient is derived. The idea behind g-correlation is to replace the function approximation concept of predictability by a classification concept. The g-correlation concept allows one (a) to detect any monotonic relationship between the dependent and the independent variable and (b) to obtain a classification procedure in situations where accurate predictions are not possible. A method is proposed to estimate g-correlation from a set of samples for the variables under consideration. It is also sketched how g-correlation can be extended to more than one independent variable using Fisher linear discriminant functions. Results of the application of different correlation coefficients in a manufacturing application show that g-correlation has a central role among all standard concepts of correlation.

**Lisa Madsen**, Cornell

Title: Regression With Spatially Misaligned Covariates

Date: Tuesday, 18 March 2003

Time: 2:40–3:30pm

Room: MG 115

Abstract: When a response Y and a covariate X are measured in different spatial locations, we say the data are misaligned. This may occur when one of the variables is more difficult to measure, or when X and Y are measured by different agencies. Suppose we are interested in assessing the relationship of Y and X by estimating the parameters of a linear regression of Y on X, with X and Y misaligned. When X is generated by a spatially autocorrelated process, we can use the observed X’s to predict (krige) the covariate at the locations Y was observed. The predicted X’s can then be used in a standard regression analysis. This naive approach has an attractive simplicity. We will explore this method, obtaining expressions for the mean and variance of the estimator of the slope parameter, and assessing the performance of this estimator. We will show that it may be used with caution, and when the regression model has no intercept and E(X) is large, it performs nearly as well as if the data were not misaligned.
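
The naive "krige, then regress" procedure described above can be sketched end to end. Everything below is illustrative: an assumed exponential covariance on 1-D locations, a known process mean, and a no-intercept model with large E(X), matching the favorable case mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

def cov(s, t, sill=1.0, rng_par=0.3):
    """Assumed exponential covariance between location vectors s and t."""
    return sill * np.exp(-np.abs(s[:, None] - t[None, :]) / rng_par)

# X process observed at locations sx; Y = beta * X + noise observed at sy.
sx = np.sort(rng.uniform(0, 1, 40))
sy = np.sort(rng.uniform(0, 1, 40))
s_all = np.concatenate([sx, sy])
C_all = cov(s_all, s_all)
z = np.linalg.cholesky(C_all + 1e-8 * np.eye(80)) @ rng.normal(size=80)
mu_X = 5.0                          # large E(X), the favorable case
X_at_sx, X_at_sy = z[:40] + mu_X, z[40:] + mu_X

beta = 2.0
Y = beta * X_at_sy + 0.1 * rng.normal(size=40)

# Simple kriging of X from sx onto sy (known mean mu_X).
w = np.linalg.solve(cov(sx, sx) + 1e-8 * np.eye(40), cov(sx, sy))
X_pred = mu_X + w.T @ (X_at_sx - mu_X)

# No-intercept regression of Y on the kriged covariate.
beta_hat = (X_pred @ Y) / (X_pred @ X_pred)
```

Because the simple-kriging predictor is uncorrelated with its own prediction error, the kriged covariate acts like an error-free regressor here, and the naive slope estimate stays close to the truth in this favorable setting.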

**Dan Canada**, Portland State University

Title: A Taste of Variation

Date: Friday, 14 March 2003

Time: 1:40–2:30pm

Room: MG 107

Abstract: Variation is a concept central to the twin domains of probability and statistics, yet research focusing on how children and their teachers understand variation has only been recently emerging. After discussing some earlier research and theoretical underpinnings, the methodologies of two current studies about conceptions of variation will be presented: An NSF-sponsored project focuses on the conceptions held by middle and high school students and their teachers, while ongoing doctoral research concerns the conceptions held by elementary preservice teachers. Sample tasks and responses will be shared to illustrate five key aspects comprising a conceptual framework for studying conceptions of variation.

**Kate Riley**, Montana State University

Title: An Investigation of Prospective Secondary Mathematics Teachers’ Conceptions of Proof and Refutations

Date: Friday, 7 March 2003

Time: 1:40–2:30pm

Room: MG 107

Abstract: A quantitative, descriptive research study was conducted to investigate prospective secondary mathematics teachers’ conceptions of proof and refutations as they were near completion of their preparation program. To research the primary question of the study, the researcher addressed two components of participants’ conceptions of proof: 1) understanding of the logical underpinnings of proof, and 2) ability to complete mathematical proofs. Both components focused on direct proof, indirect proof, and refutations. These components are common proof themes emphasized by the MAA (1998) and the NCTM Standards 2000.

**Craig Johns**, University of Colorado Denver

Title: Infilling Sparse Records of Spatial Fields

Date: Monday, 3 March 2003

Time: 2:40–3:30pm

Room: MG 121

Abstract: Historical records of weather such as monthly precipitation and temperatures from the last century are an invaluable database to study changes and variability in climate. These data also provide the starting point for understanding and modeling the relationship among climate, ecological processes and human activities. However, these data are irregularly observed over space and time. The basic statistical problem is to create a complete data record that is consistent with the observed data and is useful to other scientific disciplines. We modify the Gaussian-Inverted Wishart spatial field model to accommodate irregular data patterns and to facilitate computations. Novel features of our implementation include the use of cross-validation to determine the relative prior weight given to the regression and geostatistical components and the use of a space-filling subset to reduce the computations for some parameters. We feel the overall approach has merit, treading a line between computational feasibility and statistical validity. Furthermore, we are able to produce reliable measures of uncertainty for the estimates.

**Pete Caithamer**, West Point

Title: Stochastic Differential Equations and their Applications

Date: Thursday, 27 February 2003

Time: 2:40–3:30pm

Room: MG 121

Abstract: This talk will discuss the distributional properties of multiple stochastic integrals and their symmetrizations. Stochastic partial differential equations with both additive and multiplicative noises will then be considered. Particular attention will be paid to the energy of the associated system and to the properties of the solution of that system. Extensions of results from the case of Brownian motion to the case of fractional Brownian motion will also be discussed. Finally, an application of stochastic differential equations to radiative transfer may be considered.
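
As background for the multiplicative-noise equations mentioned, the simplest numerical scheme for a stochastic differential equation is Euler–Maruyama. A sketch for geometric Brownian motion dX = μX dt + σX dW (parameter values illustrative; this is standard textbook material, not the speaker's results):

```python
import numpy as np

rng = np.random.default_rng(42)

def euler_maruyama_gbm(x0=1.0, mu=0.05, sigma=0.2, T=1.0, steps=200, paths=20000):
    # simulate dX = mu*X dt + sigma*X dW by the Euler-Maruyama scheme:
    # X_{n+1} = X_n + mu*X_n*dt + sigma*X_n*dW_n, with dW_n ~ N(0, dt)
    dt = T / steps
    x = np.full(paths, x0)
    for _ in range(steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=paths)
        x = x + mu * x * dt + sigma * x * dw
    return x

x_T = euler_maruyama_gbm()
```

Averaging over many paths recovers the exact mean E[X_T] = X_0 e^{μT}, a quick sanity check on the scheme.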

**Paul Larson**, Fields Institute

Title: Set Theory, Independence and Absoluteness

Date: Wednesday, 12 February 2003

Time: 2:40–3:30pm

Room: MG 121

Abstract: A sentence is independent of a set of axioms if there is no proof from the axioms of the sentence or its negation. Our primary means for demonstrating independence, forcing, was invented by Paul Cohen in the early 1960s, and has been used since then to show that independence is widespread in set theory. Results in the other direction, limiting the independence phenomenon, are called absoluteness results. I will briefly sketch the history of these two lines of research, leading up to my own contributions. No previous knowledge of set theory or logic will be assumed.

**Justin Moore**, Boise State University

Title: Cantor’s Continuum Problem

Date: Wednesday, 11 Dec 2002

Time: 3:40–4:30pm

Room: MG 118

Abstract: In the early stages of the study of sizes of infinite sets, Cantor showed that the set of reals was uncountable. Thus he showed that there are two infinite sets of real numbers which have different “sizes”—the set of all natural numbers and the set of all the real numbers. He asked whether it was possible to find a third set which, from the point of view of cardinality, lay strictly between these two sizes. It is now known that this problem cannot be decided within the framework of the usual axioms of mathematics. The purpose of this talk is to give an introduction to Cantor’s Continuum Problem, its resolution and the modern research which relates to it. First I will present a probabilistic interpretation of Cohen’s method of forcing and how he used it to solve the Continuum Problem. Next I will discuss Solovay’s results on the properties (including cardinality) of definable sets of reals and contrast this with Cohen’s work. Finally (time permitting) I will mention some of the work aimed at gaining a better understanding of the relationship between the size of the set of all reals and infinitary combinatorics (such as the study of uncountable graphs).
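
Cantor's uncountability proof rests on the diagonal argument, which is concrete enough to sketch: given any list of binary sequences, flipping the k-th digit of the k-th sequence produces a sequence that differs from every entry on the list. A toy illustration over a finite prefix:

```python
def diagonal(seqs):
    # seqs: a list of binary sequences, each given as a function n -> {0, 1};
    # return a prefix of the diagonal sequence, which by construction
    # disagrees with the k-th listed sequence at position k
    return [1 - seqs[k](k) for k in range(len(seqs))]

# three sample sequences: all zeros, all ones, alternating
seqs = [lambda n: 0, lambda n: 1, lambda n: n % 2]
d = diagonal(seqs)
```

Since the diagonal sequence disagrees with the k-th sequence at position k for every k, no enumeration of binary sequences (equivalently, of reals) can be complete.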

**Uwe Kaiser**, Boise State University

Title: String and skein topology of oriented 3-manifolds

Date: Wednesday, 13 November 2002

Time: 3:40–4:30pm

Room: MG 118

Abstract: The skein (or quantum) topology of links in oriented 3-dimensional manifolds has been studied intensely during the last 15 years. But the precise relation of the “new” invariants with the classical geometric topology of 3-manifolds is still not fully understood. Recently Moira Chas and Dennis Sullivan discovered new interesting algebraic structures on the (equivariant) homology groups of the space of maps from a circle into the 3-manifold. These structures are defined from “string interactions” in the manifold and are motivated by string theory in physics (a theory trying to unify quantum mechanics and general relativity). In the talk I will describe results concerning the relation between the string topology of Chas-Sullivan and the skein topology of oriented 3-manifolds.

**Will Alexander**, Capital One

Title: Analytics and Careers in the Financial Services Industry

Date: Wednesday, 23 October 2002

Time: 3:40–4:30pm

Room: MG 118

Abstract: The financial services industry is huge and tremendously varied. It spans the gulf from consumer finance to Wall Street, straight-forward amortizing loans to complex derivatives, billion dollar companies to mom-and-pop collection agencies. Throughout it all, analytics is the common currency. This talk addresses the types of work that are conducted, who’s doing it, their qualifications and potential career paths.

**Randall Holmes**, Boise State University

Title: Automated Reasoning in Predicate Logic and Set Theory using a Sequent Calculus

Date: Wednesday, 11 Sept 2002

Time: 3:40–4:30pm

Room: MG 118

Abstract: A formal system for reasoning in propositional logic, predicate logic and set theory will be described, and a computer implementation of reasoning in this system will be described and demonstrated. The formal system is a sequent calculus adapted from a paper of Marcel Crabbe, which incorporates a safe version of Quine’s set theory New Foundations. One will not need to understand this description to follow the talk. The computer program is an interactive proof checker rather than an automatic theorem prover: the user needs to construct the proofs of theorems (although automation in the program helps with organization of proofs); the role of the prover is to check the validity of the proof steps proposed by the user and display what remains to be proved after each step.
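
To give a flavor of sequent-calculus reasoning, here is a tiny decision procedure for the classical propositional fragment using the standard invertible rules. This is generic textbook material, not the NF-based system of the talk; formulas are nested tuples, and a sequent is a pair of formula lists:

```python
# formulas: ('var', name) | ('not', f) | ('and', f, g) | ('or', f, g) | ('imp', f, g)
def provable(left, right):
    # axiom: the sequent is provable if some atom appears on both sides
    atoms_l = {f[1] for f in left if f[0] == 'var'}
    atoms_r = {f[1] for f in right if f[0] == 'var'}
    if atoms_l & atoms_r:
        return True
    # decompose the first non-atomic formula on the left
    for i, f in enumerate(left):
        rest = left[:i] + left[i + 1:]
        if f[0] == 'not':
            return provable(rest, right + [f[1]])
        if f[0] == 'and':
            return provable(rest + [f[1], f[2]], right)
        if f[0] == 'or':
            return provable(rest + [f[1]], right) and provable(rest + [f[2]], right)
        if f[0] == 'imp':
            return provable(rest, right + [f[1]]) and provable(rest + [f[2]], right)
    # decompose the first non-atomic formula on the right
    for i, f in enumerate(right):
        rest = right[:i] + right[i + 1:]
        if f[0] == 'not':
            return provable(left + [f[1]], rest)
        if f[0] == 'and':
            return provable(left, rest + [f[1]]) and provable(left, rest + [f[2]])
        if f[0] == 'or':
            return provable(left, rest + [f[1], f[2]])
        if f[0] == 'imp':
            return provable(left + [f[1]], rest + [f[2]])
    return False
```

Because every rule is invertible, decomposing formulas in any order and checking for a shared atom at the leaves decides provability; for instance, the law of excluded middle and Peirce's law both come out provable, while an unadorned atom does not.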

## Schedule for 2001–2002

**Justin Moore**, Boise State University

Title: The Method of Minimal Walks

Date: Tuesday, 30 April 2002

Time: 2:40 pm

Room: MG 118

Abstract: This talk will give an introduction to the first uncountable cardinal and some of the combinatorics and set theory associated with it. The method of minimal walks was developed by Todorcevic both to solve specific problems and to give a unified method for analyzing the first uncountable cardinal. The aim of the talk will be to give a “light” introduction to this method and mention some of the ways in which it can be used.

**Robert Sulanke**, Boise State University

Title: Moments and the Cut and Paste for Lattice Paths

Date: Wednesday, 10 April 2002

Time: 3:30pm

Room: MG 106

Abstract: Let U(2n) denote the set of lattice paths that run from (0,0) to (2n,0) with the permitted steps (1,1) and (1,-1). Let E(2n+2) denote the set of paths in U(2n+2) that run strictly above the horizontal axis except initially and finally. Starting with Wallis’ well-known formula for computing pi as an infinite product, we first establish an interest in lattice path configurations and their moments. We then introduce the cut and paste bijection which relates points under paths of E(2n+2) to points on paths of U(2n). We apply this bijection to obtain enumerations, some involving the Narayana distribution. We also extend the bijection to a formula relating factorial moments for the paths of E(2n+2) to moments for the paths of U(2n).
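
For small n these path families can be enumerated directly, which also confirms the familiar counts: |U(2n)| is the central binomial coefficient C(2n, n), and the elevated paths of E(2n+2) are counted by the Catalan number C(2n, n)/(n+1). A brute-force check (the cut and paste bijection itself is not reproduced here):

```python
from itertools import product
from math import comb

def U(length):
    # all step sequences of (1,1)/(1,-1) of the given length ending at height 0,
    # encoded by their vertical components +1/-1
    return [s for s in product((1, -1), repeat=length) if sum(s) == 0]

def is_elevated(steps):
    # strictly above the horizontal axis except at the start and end
    h, heights = 0, []
    for s in steps:
        h += s
        heights.append(h)
    return all(x > 0 for x in heights[:-1])

n = 4
u_count = len(U(2 * n))                               # |U(2n)|   = C(2n, n)
e_count = sum(is_elevated(s) for s in U(2 * n + 2))   # |E(2n+2)| = C(2n, n)/(n+1)
```

For n = 4 this gives |U(8)| = 70 and |E(10)| = 14, the fourth Catalan number, since removing an elevated path's first and last steps leaves a nonnegative path of length 2n.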

**Jodi Mead**, Boise State University

Title: Modeling Floats and Pollutants in the Ocean

Date: Wednesday, 5 December 2001

Time: 3:40pm

Room: MG 118

Abstract: A good physical understanding of the ocean is necessary to (1) produce accurate short term weather forecasts, (2) give long term climate predictions, and (3) understand the effect of pollutants in the water. Deterministic partial differential equations, such as the Navier-Stokes equations, describe the dynamic process between pressure, temperature and velocity in the ocean. Data collected from the ocean can supplement these solutions. There are two basic ways to collect data from the ocean. One is to place moorings attached to the ocean floor. A second, less costly way is to place floats in the ocean, let them drift, and collect their data by satellite. My work in oceanography involves solving a variant of the Navier-Stokes equations (the shallow water equations) from the viewpoint of floats, or pollutants in the water. This is a novel approach because most researchers find solutions at a fixed point in space (similar to what a mooring does). I will show some results from this model, and outline future work.
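
The float-following (Lagrangian) viewpoint described here amounts to integrating dx/dt = u(x) along the trajectory, rather than evaluating the field at a fixed point as a mooring does. A toy sketch in a steady solid-body-rotation velocity field (not the shallow water equations of the talk):

```python
import math

def advect(x, y, dt=1e-3, steps=6283):
    # follow a float in the steady velocity field u(x, y) = (-y, x)
    # (solid-body rotation) with midpoint (RK2) time stepping
    for _ in range(steps):
        # position after a half step, used to evaluate the midpoint velocity
        hx = x - 0.5 * dt * y
        hy = y + 0.5 * dt * x
        x, y = x - dt * hy, y + dt * hx
    return x, y

x, y = advect(1.0, 0.0)   # integrate for time ~2*pi: one full revolution
```

A float started at (1, 0) traces the unit circle; after time 2π the midpoint scheme returns it near its starting point with the radius preserved to high accuracy, whereas a naive forward-Euler step would spiral outward.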

**Paul Corazza**, Boise State University

Title: Has Modern Mathematics Finally Understood The Infinite? The Good News And The Bad News

Date: Wednesday, 7 November 2001

Time: 3:40 pm

Room: MG 118

Abstract: Prior to the beginning of the 20th century, there was an almost superstitious fear among mathematicians of the concept of the infinite. It was believed, for example, that although the natural numbers “go on forever”, they cannot all be collected together into a single set. Such concerns were tied both to philosophical beliefs about the infinite and to paradoxes that were popular at the time. The work of Georg Cantor showed that unless infinite sets were allowed in mathematics, it would not be possible to have a completely rigorous calculus—even the concept of a “real number” depends on the concept of an infinite set. Nearly single-handedly, Cantor developed the foundation for the modern theory of infinite sets, which eventually became today’s axiomatic set theory, usually denoted ZFC. ZFC was seen by its creators to be a kind of ultimate foundation for all of mathematics: ZFC not only provided a framework for Cantor’s theory of infinite sets; it also provided a set of axioms from which all known theorems of mathematics could be derived. Ironically, even as ZFC was being developed, research in other areas of mathematics was uncovering certain bizarre mathematical entities, now known as large cardinals, which would eventually be shown to lie outside the framework of ZFC. In this talk, I’ll describe what a large cardinal is, and why they are important in mathematics. I’ll then describe an axiomatic framework that can provide the same kind of unifying foundation for “set theory + large cardinals” that ZFC has provided for the rest of mathematics.

**Justin Moore**, Boise State University

Title: Comparing braids, winning games, and other uses for really big numbers

Date: Wednesday, 24 October 2001

Time: 3:40 pm

Room: MG 118

Abstract: This talk will present some uses of infinity (of various sizes) in proofs of statements which do not, a priori, involve the infinite. In particular I will mention the Finite Kruskal Theorem (from graph theory), Goodstein’s Theorem (number theory/logic), Dehornoy’s braid comparison algorithm (knot theory), and Martin’s theorems concerning the determinacy of certain games (analysis/descriptive set theory).
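
One of these statements, Goodstein's Theorem, is easy to experiment with: write n in hereditary base-b notation (base b applied to exponents as well), replace every b by b+1, subtract 1, and repeat with the next base. The theorem says every such sequence reaches 0, yet this is provable only by appeal to infinite ordinals; it is independent of Peano arithmetic. A sketch:

```python
def bump(n, b):
    # rewrite n in hereditary base-b notation, then replace every b by b+1
    if n == 0:
        return 0
    result, power = 0, 0
    while n:
        n, d = divmod(n, b)
        if d:
            # digit d times b**power becomes d times (b+1)**bump(power)
            result += d * (b + 1) ** bump(power, b)
        power += 1
    return result

def goodstein(n, max_steps=100):
    # the Goodstein sequence of n, truncated after max_steps terms
    seq, b = [n], 2
    while seq[-1] != 0 and len(seq) <= max_steps:
        seq.append(bump(seq[-1], b) - 1)
        b += 1
    return seq
```

Starting from 3 the sequence terminates after a few steps, while starting from 4 it already runs for an astronomically long time before terminating, which is why really big (indeed infinite) numbers enter the proof.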

**Scott Stevens**, University of Montana Missoula

Title: Cardiac Forcing in Models of Human Hemodynamics

Date: Friday, 12 October 2001

Time: 1:40pm

Room: MG 120

Abstract: Many mathematical models of human circulatory dynamics require a preliminary function which accurately describes cardiac output. Approximations of human cardiac output are often given in terms of mean output, such as 5,000 mL/min. However, this description is of little use in resolving the pressure pulses caused by oscillations about the mean value. This talk develops and presents a “smooth”, periodic function describing ventricular output which accurately depicts these oscillations. The function is based on the heart rate and stroke volume as well as some preliminary assumptions regarding cardiac systole. The simplicity combined with the flexibility of this function makes it a practical forcing term for models of the human circulatory system describing normal physiology and pathophysiology. In particular, this function is set in the context of a model describing human circulatory dynamics in microgravity. A good deal of this talk will be devoted to exploring the many aspects of this model.
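
The talk's actual forcing function is not reproduced here, but the idea of a periodic ejection profile built from heart rate and stroke volume can be sketched with a toy version: a half-sine pulse during systole, zero during diastole, normalized so each beat ejects exactly one stroke volume (all parameter values are illustrative):

```python
import math

def cardiac_output(t, hr=75.0, sv=70.0, systole_frac=0.35):
    # instantaneous ventricular outflow (mL/s) at time t (s):
    # a half-sine pulse during systole, zero during diastole
    period = 60.0 / hr           # seconds per beat
    ts = systole_frac * period   # duration of systole
    phase = t % period
    if phase >= ts:
        return 0.0               # diastole: no ejection
    # the integral of sin(pi*x/ts) over [0, ts] is 2*ts/pi,
    # so this prefactor makes each beat eject exactly sv mL
    return sv * (math.pi / (2.0 * ts)) * math.sin(math.pi * phase / ts)
```

With a 70 mL stroke volume at 75 beats/min the mean output is 70 × 75 = 5,250 mL/min, of the same order as the 5,000 mL/min mean quoted in the abstract, while the pulse shape resolves the oscillation within each beat.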

**John Doherty**, Watermark Numerical Computing, Brisbane, Australia

Title: Environmental Modeling: Unveiling the Truth beneath the Fantasy

Date: Thursday, 4 October 2001

Time: 3:40pm

Room: MPC 201

Abstract: Alongside the growing use of models in environmental management is a growing skepticism of the usefulness of these models. While many outside the modeling profession still cling to the idea that “if it comes from a computer it must be right”, there are a growing number of cases where the use of models in environmental management has been disappointing at best, and misleading at worst. In fact, there are signs of a crisis within the modeling profession. On the one hand there is general recognition that an attempt to simulate environmental processes numerically can provide a sounder basis for the making of important decisions. On the other hand, many modelers are loath to raise the expectations of their clients, or stakeholder groups, too high with regard to the usefulness of their models in the management process. So in this time of re-assessment, just how high should expectations be raised? And where exactly should modeling fit into the environmental management process? And should a modeler suffer a severe identity crisis if his/her model cannot provide the impossible “answer at the back of the book” that many are seeking from it?

The talk will attempt to address these questions by first demonstrating that predictions made by an environmental model will, by their very nature, be accompanied by a (sometimes frighteningly large) margin of uncertainty. It is demonstrated that the higher the level of “system detail” that a model attempts to simulate (e.g., contaminant movement in areas of high geological heterogeneity, the response of a catchment to extreme climatic events, nuances of groundwater-surface water interaction, etc.), the greater will be the uncertainty with which such predictions are made. Methodologies whereby model predictive uncertainty can be quantified will be discussed. Finally, a rationale will be presented for determining the “point of diminishing return” in the model construction process—this being the point where the data at hand, and our understanding of environmental processes on a field scale, preclude devoting any further resources to the building of the model.
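
One standard way to quantify model predictive uncertainty, in the spirit of the methodologies alluded to above, is Monte Carlo propagation of parameter uncertainty: sample the poorly constrained parameters, rerun the model, and read off a predictive interval. A deliberately simple sketch (the decay model and all numbers are invented for illustration):

```python
import math
import random

random.seed(1)

def contaminant_conc(k, t=5.0, c0=100.0):
    # toy transport model: first-order decay, c(t) = c0 * exp(-k*t)
    return c0 * math.exp(-k * t)

# the decay rate k is only loosely constrained by calibration data;
# propagate that uncertainty by sampling k and rerunning the model
draws = sorted(contaminant_conc(random.lognormvariate(math.log(0.3), 0.4))
               for _ in range(10000))

# empirical 90% predictive interval for the concentration
lo, hi = draws[len(draws) // 20], draws[-len(draws) // 20]
```

Even for this one-parameter toy model the 90% predictive interval spans nearly an order of magnitude, illustrating how quickly predictive margins widen as the parameters a prediction depends on become less constrained.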

**Margaret Kinzel**, Boise State University

Title: Analyzing College Calculus Students’ Interpretation and Use of Algebraic Notation

Date: Wednesday, 3 Oct 2001

Time: 3:40 pm

Room: MG 118

Abstract: My dissertation study investigated students’ interpretation and use of algebraic notation through task-based interviews. Ten calculus students participated in a sequence of four interviews in which they worked on nonroutine algebraic tasks. The analysis of the interviews focused on the ways in which students used and interpreted algebraic notation within the contexts of the tasks. Trends across an individual’s work as well as trends across participants were identified. In general, it was found that the participants, while comfortable with notation, lacked sufficient insight into algebraic notation to take full advantage of its potential. That is, the participants seemed able to apply a notational approach to familiar or routine tasks but lacked the tendency and skill to apply a notational approach to a less familiar situation or to interpret notational expressions within that situation. I propose the notion of “concept of representation” to explain the students’ work.

**Tomek Bartoszynski**, Boise State University

Title: On Applications of Set Theory

Date: Wednesday, 19 Sept 2001

Time: 3:40–4:30pm

Room: MG 118

Abstract: In this talk I will discuss several examples of statements that are neither provable nor disprovable, yet have the appearance of ordinary mathematical propositions.