Everyone is required to participate in the large project somehow.

One way to do this is to pick up one or more subprojects and work on them.
This page lists a number of subprojects that *may* be appropriate.
(The word "may" here means that they also *may not* be appropriate,
so before starting work on one of these in a serious way, talk to me about
it.) I'll indicate the status of each project as time goes on.

- Implement code for triangulating a polygon that does not give long,
skinny triangles.
It will turn out that error bounds depend on the *shape* of the
triangles in a triangulation. Triangles that have very small angles lead
to greater errors, so fixing the code in `polygon.scm` to avoid this is
good.

- Implement higher-order (quadratic, cubic, etc.) elements.
This will be a lot of work; two or three people should work together on
this project.

- Implement Nitsche's method for handling Dirichlet boundary conditions.
Neumann and Robin boundary conditions are often called *natural*, since
they are easier to handle than Dirichlet boundary conditions. Nitsche's
trick, which penalizes the difference between the approximate solution
and the Dirichlet boundary data rather than trying to interpolate that
data directly, leads to symmetric, positive definite linear systems that
can be solved very quickly using conjugate gradient or multigrid
methods.

- The current code does only uniform refinements; i.e., it refines
every triangle if it refines one triangle. Implement faster code for
matrix multiplication, smoothing, etc., that takes advantage of this
knowledge.
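As a sanity check on the claim that Nitsche's method yields symmetric, positive definite systems, here is a minimal one-dimensional sketch, written in Python with NumPy rather than the course's Scheme; the function name, the penalty value `gamma = 10`, and the test problem are illustrative choices of mine, not part of the course code. It solves -u'' = 0 on [0, 1] with P1 elements, imposing u(0) = g0 and u(1) = g1 weakly through the symmetric Nitsche terms.

```python
import numpy as np

def nitsche_poisson_1d(n, g0, g1, gamma=10.0):
    """Solve -u'' = 0 on [0, 1] with u(0) = g0, u(1) = g1 imposed
    weakly by the symmetric Nitsche method, using P1 elements on a
    uniform mesh with n elements.  Returns the system matrix and the
    nodal solution."""
    h = 1.0 / n
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    # standard P1 stiffness matrix for -u''
    for e in range(n):
        A[e:e+2, e:e+2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    # Nitsche terms at x = 0, where u'(0) = (u[1] - u[0]) / h:
    A[0, 0] -= 1.0 / h; A[0, 1] += 1.0 / h    # +u'(0) v(0)
    A[0, 0] -= 1.0 / h; A[1, 0] += 1.0 / h    # +v'(0) u(0)  (symmetrizing term)
    A[0, 0] += gamma / h                      # penalty term
    b[0] += (gamma - 1.0) * g0 / h            # +v'(0) g0 and penalty data at node 0
    b[1] += g0 / h
    # Nitsche terms at x = 1, where u'(1) = (u[n] - u[n-1]) / h:
    A[n, n] -= 1.0 / h; A[n, n-1] += 1.0 / h  # -u'(1) v(1)
    A[n, n] -= 1.0 / h; A[n-1, n] += 1.0 / h  # -v'(1) u(1)  (symmetrizing term)
    A[n, n] += gamma / h                      # penalty term
    b[n] += (gamma - 1.0) * g1 / h            # -v'(1) g1 and penalty data at node n
    b[n-1] += g1 / h
    return A, np.linalg.solve(A, b)

A, u = nitsche_poisson_1d(8, 2.0, 5.0)
x = np.linspace(0.0, 1.0, 9)
print(np.allclose(u, 2.0 + 3.0 * x))      # exact linear solution is recovered
print(np.allclose(A, A.T))                # the system is symmetric...
print(np.linalg.eigvalsh(A).min() > 0)    # ...and positive definite
```

Because the method is consistent and the exact solution here is linear (hence in the P1 space), the discrete solution reproduces it exactly; no row-elimination of Dirichlet degrees of freedom is needed, which is what keeps the system symmetric.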
Here are some background thoughts on this last project. Often, after studying algorithms and data structures, a student comes away with the idea that for each problem there is one natural data structure and algorithm: to implement a priority queue, say, you build a heap and provide routines for adding and removing items while preserving its invariants.

Well, in many cases one can move the complexity of an implementation back and forth between the data structures and the algorithms. In our case, the existing triangulation refinement code builds a complete, complex data structure, with all nodes, edges, and triangles explicitly constructed and explicit links denoting their inter-relationships. Our sparse matrix-vector multiplication code is therefore extremely simple: it just follows these explicit links to do the multiply in a small loop. It is also very general: it works on any unstructured triangulation.

Loading all that explicit information, however, slows our matrix-vector operations considerably. If we do only uniform refinements of triangulations, then we can use this knowledge to simplify the refinement code substantially. In fact, one can "refine" the triangulation simply by keeping the coarsest triangulation and recording the number of uniform refinements that have been "applied" to it. Then, inside a triangle of the original triangulation, one knows the relationships among all interior basis elements *without needing explicit links between triangles, edges, etc.* Using this implicit information will make the matrix-vector multiplication faster, at the cost of more complex multiplication and smoothing code.
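To make the implicit-information idea concrete, here is a small Python sketch (the names are hypothetical; the actual project would live in the course's Scheme code). Given only a coarse triangle's corners and the refinement count k, it computes triangle and vertex counts and vertex coordinates directly, with no mesh data structure or explicit links ever built.

```python
import numpy as np

def refined_counts(k):
    """Triangle and vertex counts inside one coarse triangle after k
    uniform refinements -- computed from k alone, with no mesh data
    structure built."""
    n = 2 ** k                             # subdivisions along each edge
    return n * n, (n + 1) * (n + 2) // 2   # 4**k small triangles; lattice vertices

def lattice_vertex(corners, k, i, j):
    """Coordinates of lattice vertex (i, j), with 0 <= i + j <= 2**k,
    inside a coarse triangle, computed on the fly from barycentric
    coordinates; `corners` is a 3x2 array of the coarse triangle's
    vertices."""
    n = 2 ** k
    l1, l2 = i / n, j / n
    return (1.0 - l1 - l2) * corners[0] + l1 * corners[1] + l2 * corners[2]

corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(refined_counts(3))                  # (64, 45)
print(lattice_vertex(corners, 3, 2, 2))   # [0.25 0.25]
```

The same index arithmetic identifies the neighbors of vertex (i, j) and the small triangles touching it, which is exactly the knowledge a structured matrix-vector multiply or smoother would exploit instead of stored links.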