Lecture No. | Date | Topics Covered
Lecture 1 | Tuesday, 10/27 | Introduced the course. Began covering Ch. 6: Approximating Functions with Sec. 6.1: Polynomial Interpolation. Introduced the polynomial interpolation problem and the existence-uniqueness theorem for the interpolating polynomial (Theorem 1, p. 309). Introduced the Newton form of the interpolating polynomial and the nested multiplication algorithm for evaluating it.
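For reference, a minimal Python sketch of nested multiplication for the Newton form p(x) = c_0 + c_1(x - t_0) + ... + c_n (x - t_0)...(x - t_{n-1}); the coefficients c and nodes t are assumed already known, and the names are illustrative rather than from the text:

```python
def newton_eval(c, t, x):
    """Evaluate the Newton-form polynomial with coefficients c[0..n] and nodes t[0..n-1]
    at the point x by nested (Horner-like) multiplication: n multiplications, n additions."""
    n = len(c) - 1
    v = c[n]
    for i in range(n - 1, -1, -1):   # work from the innermost factor outward
        v = c[i] + (x - t[i]) * v
    return v
```

For example, with c = [2, 3, -1] and t = [1, 2], p(3) = 2 + 3(3 - 1) - (3 - 1)(3 - 2) = 6, and newton_eval(c, t, 3) returns 6.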
Lecture 2 | Thursday, 10/29 | Continued with Sec. 6.1. Noted the Lagrange form of the interpolating polynomial. Stated Theorem 2, p. 315, which gives a useful expression for the error in the interpolating polynomial.
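In its standard form (stated here from memory; the numbering and precise hypotheses are as in the text), the error expression says: if f \in C^{n+1}[a,b] and p_n interpolates f at distinct nodes x_0, ..., x_n in [a,b], then for each x in [a,b] there is a point \xi in (a,b) with

    f(x) - p_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!} \prod_{i=0}^{n} (x - x_i).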
Lecture 3 | Friday, 10/30 | Concluded Sec. 6.1. Explored the consequences of Theorem 2, p. 315. Applied the error expression with a "nice" function to find n such that the error in the interpolating polynomial p_n is uniformly less than a given tolerance. Gave a demo with the Runge function to show that some functions are not "nice," i.e., the approximation error in p_n is unbounded as n grows large. Noted Theorems 6-9, p. 320, to indicate the subtleties of approximation with interpolating polynomials. Noted in particular the Weierstrass Approximation Theorem (Theorem 8, p. 320).
Lecture 4 | Monday, 11/2 | Began Sec. 6.2: Divided Differences. The goal is to develop an efficient algorithm for computing the coefficients in the Newton form of the interpolating polynomial. Toward this, we introduced the Newton divided differences of a function at a given set of points, defined to be the coefficients in the Newton form of the interpolating polynomial. Stated and proved Theorem 1, p. 330, which gives the fundamental recursive relationship for defining Newton divided differences. Used this relationship to develop an efficient algorithm for evaluating the Newton divided differences for a given set of data.
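A Python sketch of the resulting O(n^2) computation (this is the usual in-place filling of the divided-difference table, assuming distinct nodes; not necessarily the exact form written in class):

```python
def divided_differences(t, y):
    """Newton-form coefficients c[j] = f[t_0, ..., t_j] for data y[i] = f(t[i])."""
    c = list(y)                               # c[i] starts as f[t_i]
    n = len(t)
    for j in range(1, n):                     # column j of the divided-difference table
        for i in range(n - 1, j - 1, -1):     # overwrite from the bottom up
            c[i] = (c[i] - c[i - 1]) / (t[i] - t[i - j])
    return c
```

Combined with the nested multiplication sketch above, this gives O(n^2) setup work and O(n) work per evaluation of the interpolating polynomial.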
Lecture 5 | Tuesday, 11/3 | Concluded Sec. 6.2 with an example of divided-difference computations and a discussion of divided-difference properties as expressed in Theorems 2-4 on p. 333.
Lecture 6 | Thursday, 11/5 | Began Sec. 6.3: Hermite Interpolation. The general problem (called Birkhoff interpolation) is to find a polynomial that takes on some set of specified values and derivative values (including higher-order derivative values) at specified points. This problem may not have a unique solution. We will consider only Hermite interpolation, in which the values of the polynomial and all lower-order derivatives are specified wherever higher-order derivative values are specified. This problem always has a unique polynomial solution of degree less than or equal to m, where m+1 is the number of interpolation conditions (see Theorem 2, p. 340).
Lecture 7 | Friday, 11/6 | Concluded Sec. 6.3. Focused on the Hermite interpolation problem of major interest, in which the interpolating polynomial and its first derivative are required to take on specified values at specified points. Defined the Newton form of the interpolating polynomial in this case, and extended our divided-difference technique to determine the coefficients in the Newton form. Noted the Lagrange form of the Hermite interpolating polynomial. Noted the expression for the error in the Hermite interpolating polynomial (Theorem 2, p. 344) and applied it in a simple example.
Lecture 8 | Monday, 11/9 | Began Sec. 6.4: Spline Interpolation. Saw in two examples that using polynomials of increasingly high degree to interpolate given data may yield very unsatisfactory results. Considered the alternative approach of using low-degree polynomials that are "pieced together" at the interpolation points (now called the "knots") to result in a "smooth" function. As an example, explored piecewise-linear interpolation. Worked out a bound on the error in a piecewise-linear interpolating function in a simple example. For given knots t_0 < t_1 < ... < t_n, defined a spline of degree k to be a function S that is (k-1)-times continuously differentiable on [t_0,t_n] and is a polynomial of degree less than or equal to k on each subinterval [t_i,t_(i+1)] for i = 0, ..., n-1. A spline is an interpolating spline if it takes on specified values at t_0, t_1, ..., t_n. Saw that a piecewise-linear interpolating function is an interpolating spline of degree one.
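For f with a continuous second derivative and knot spacing at most h, the standard bound for the piecewise-linear interpolant S is

    \max_{t_0 \le x \le t_n} |f(x) - S(x)| \le \frac{h^2}{8} \max_{t_0 \le x \le t_n} |f''(x)|;

the in-class example presumably produced a bound of this form.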
Lecture 9 | Tuesday, 11/10 | Continued Sec. 6.4. Developed quadratic interpolating splines. Saw that there is one "degree of freedom" in determining a quadratic interpolating spline. As an example of how this might be specified, we saw how specifying S'(t_0) (i.e., "clamping" the left end of the spline) completely determines the spline S(t). Began developing cubic interpolating splines, the most important splines used in practice.
Lecture 10 | Thursday, 11/12 | Continued Sec. 6.4. Completed the development of cubic interpolating splines. Saw that there are two "degrees of freedom" in determining a cubic interpolating spline. Illustrated how these can be specified to determine the spline by specifying "end conditions," i.e., the values of S''(t_0) and S''(t_n). This leads to a tridiagonal symmetric positive-definite system of linear equations for the second derivatives at the knots t_1, ..., t_(n-1). This can be solved very efficiently to yield the second derivatives and, in turn, the spline. An important special case is the "natural" cubic spline, which results from specifying S''(t_0) = S''(t_n) = 0. Noted the "optimality" of natural cubic splines (Theorem 1, p. 355).
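A Python sketch of this construction for the natural cubic spline, writing z_i for S''(t_i); for clarity the tridiagonal system is assembled and solved densely, whereas an efficient code would use a tridiagonal (Thomas) solver as discussed in lecture:

```python
import numpy as np

def natural_cubic_spline(t, y):
    """Second derivatives z[0..n] of the natural cubic spline through (t_i, y_i)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    n = len(t) - 1
    h = np.diff(t)                                # h[i] = t[i+1] - t[i]
    A = np.zeros((n - 1, n - 1))
    b = np.zeros(n - 1)
    for i in range(1, n):                         # one equation per interior knot t_i
        if i > 1:
            A[i - 1, i - 2] = h[i - 1]
        A[i - 1, i - 1] = 2.0 * (h[i - 1] + h[i])
        if i < n - 1:
            A[i - 1, i] = h[i]
        b[i - 1] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
    z = np.zeros(n + 1)                           # natural end conditions: z[0] = z[n] = 0
    z[1:n] = np.linalg.solve(A, b)                # dense solve stands in for a tridiagonal solver
    return z

def spline_eval(t, y, z, x):
    """Evaluate the cubic spline with knots t, values y, second derivatives z at a point x."""
    i = int(np.clip(np.searchsorted(t, x) - 1, 0, len(t) - 2))
    hi = t[i + 1] - t[i]
    return (z[i] * (t[i + 1] - x) ** 3 / (6 * hi)
            + z[i + 1] * (x - t[i]) ** 3 / (6 * hi)
            + (y[i + 1] / hi - z[i + 1] * hi / 6) * (x - t[i])
            + (y[i] / hi - z[i] * hi / 6) * (t[i + 1] - x))
```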
Lecture 11 | Friday, 11/13 | Continued Sec. 6.4. Demonstrated the behavior of cubic splines using MATLAB's spline command. Explained the "not a knot" end conditions used by spline. Also noted the appropriateness of other end conditions in some circumstances, e.g., "periodic" end conditions when the data are from a periodic function.
Lecture 12 | Monday, 11/16 | Began Sec. 7.1: Numerical Differentiation and Richardson Extrapolation. Derived the forward- and central-difference approximations of f'(x) for a given function f(x), assuming we can evaluate f anywhere near x. Began discussing the effects of truncation error and roundoff error on the accuracy of these approximations.
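In a minimal Python sketch (names illustrative), the two approximations are:

```python
def forward_diff(f, x, h):
    # (f(x + h) - f(x)) / h approximates f'(x) with O(h) truncation error
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h):
    # (f(x + h) - f(x - h)) / (2h) approximates f'(x) with O(h^2) truncation error
    return (f(x + h) - f(x - h)) / (2 * h)
```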
Lecture 13 | Tuesday, 11/17 | Reviewed for the mid-term exam.
Lecture 14 | Friday, 11/20 | Continued Sec. 7.1. Gave an analysis of the effects of truncation and roundoff error on the accuracy of the forward- and central-difference derivative approximations. For each approximation, we assumed a relative bound ("function epsilon") on errors in f-evaluations and found an "optimal |h|" that minimizes a bound on the total error, leading to useful "Rules of Thumb": (1) Assuming f, f', and f'' are about the same in magnitude, the optimal |h| for the forward-difference approximation is about the square root of function epsilon; for this |h|, the computed approximation has about half as many accurate decimal digits as the computed values of f. (2) Assuming f, f', and f''' are about the same in magnitude, the optimal |h| for the central-difference approximation is about the cube root of function epsilon; for this |h|, the computed approximation has about two-thirds as many accurate decimal digits as the computed values of f.
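A quick Python illustration of the two rules of thumb, using f(x) = e^x at x = 1 (so f, f', f'', and f''' all equal e there) and taking function epsilon to be roughly the unit roundoff; the particular numbers are only representative:

```python
import math

f, x = math.exp, 1.0
eps_f = 2.0 ** -52                     # assumed relative error in computed f-values

h_fwd = math.sqrt(eps_f)               # rule of thumb (1): optimal |h| ~ sqrt(eps_f)
h_cen = eps_f ** (1.0 / 3.0)           # rule of thumb (2): optimal |h| ~ eps_f^(1/3)

err_fwd = abs((f(x + h_fwd) - f(x)) / h_fwd - math.exp(x)) / math.exp(x)
err_cen = abs((f(x + h_cen) - f(x - h_cen)) / (2 * h_cen) - math.exp(x)) / math.exp(x)

print(err_fwd)   # on the order of 1e-8: about half as many correct digits as f itself
print(err_cen)   # on the order of 1e-11: about two-thirds as many correct digits as f itself
```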
Lecture 15 | Monday, 11/23 | Continued Sec. 7.1. Discussed Richardson extrapolation, a wonderfully simple idea by which higher-order approximations can be obtained from a lower-order approximation that has an error expressible in a Taylor series.
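A generic Python sketch of the idea, applied here to the central-difference approximation of f'(x), whose error is a series in h^2 (the factor 4^j - 1 reflects that h^2 expansion; names are illustrative):

```python
import math

def richardson(phi, h, levels):
    """Extrapolation table for phi(h) = L + a_1 h^2 + a_2 h^4 + ...; R[k][k] has error O(h^(2k+2))."""
    R = [[0.0] * (levels + 1) for _ in range(levels + 1)]
    for k in range(levels + 1):
        R[k][0] = phi(h / 2 ** k)
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R

phi = lambda h: (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2 * h)   # central difference at x = 1
R = richardson(phi, 0.5, 3)
# R[3][3] is far closer to cos(1) than the unextrapolated value R[3][0].
```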
Lecture 16 | Tuesday, 11/24 | Began Section 7.2: Numerical Integration Based on Interpolation. The problem of interest is to approximate a definite integral of an integrand f using f-values. Our approximations (rules) will be in the form of weighted sums of f-values at designated points (nodes) within the interval of integration. Outlined an approach based on integrating polynomials that interpolate f at the nodes. Using a linear interpolating polynomial in the case of two nodes, we derived the (simple) trapezoid rule and gave an expression for the error. Assuming equally spaced nodes a = x_0 < x_1 < ... < x_n = b, we derived the composite trapezoid rule together with an expression for the error that is O(1/n^2) or, equivalently, O(h^2), where h = (b-a)/n.
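A minimal Python version of the composite rule (with the error behavior noted above, not derived here):

```python
def composite_trapezoid(f, a, b, n):
    """Composite trapezoid rule on n equal subintervals; error is O(h^2) with h = (b - a)/n."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))
```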
Lecture 17 | Monday, 11/30 | Continued Section 7.2. In an example, saw how many nodes are needed to guarantee specified accuracy with the composite trapezoid rule. Developed the (simple) Simpson's rule and the composite Simpson's rule together with an expression for the error that is O(1/n^4) or, equivalently, O(h^4), where h = (b-a)/n.
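A matching Python sketch of the composite Simpson's rule (the number of subintervals n must be even):

```python
def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on n subintervals (n even); error is O(h^4) with h = (b - a)/n."""
    if n % 2:
        raise ValueError("n must be even")
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))   # odd-indexed nodes get weight 4
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))   # interior even-indexed nodes get weight 2
    return h * s / 3
```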
Lecture 18 | Tuesday, 12/1 | Finished Section 7.2 by continuing the example started previously. Saw that the number of nodes needed to guarantee specified accuracy with the composite Simpson's rule is much smaller than with the composite trapezoid rule. Touched on Section 7.3 with a qualitative discussion of Gaussian quadrature.
Lecture 19 | Thursday, 12/3 | Covered Section 7.4: Romberg Integration. First observed (without proof) that the error in the composite trapezoid rule for a particular h can be expressed as a power series in h^2. Then applied Richardson extrapolation to obtain the Romberg integration process. Developed a recursive relationship for efficiently evaluating the composite trapezoid rules necessary for Romberg integration.
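A Python sketch combining both ingredients: the recursively computed trapezoid values R[k][0] (each reuses the previous level and adds only the new midpoints) and the Richardson-extrapolated columns, with the same 4^j - 1 factors as before:

```python
def romberg(f, a, b, levels):
    """Romberg integration of f on [a, b]; R[k][0] is the trapezoid rule on 2^k subintervals."""
    R = [[0.0] * (levels + 1) for _ in range(levels + 1)]
    h = b - a
    R[0][0] = 0.5 * h * (f(a) + f(b))
    for k in range(1, levels + 1):
        h *= 0.5
        # Trapezoid rule on 2^k subintervals, reusing R[k-1][0] and adding the new midpoints.
        R[k][0] = 0.5 * R[k - 1][0] + h * sum(f(a + (2 * i - 1) * h)
                                              for i in range(1, 2 ** (k - 1) + 1))
        for j in range(1, k + 1):
            R[k][j] = R[k][j - 1] + (R[k][j - 1] - R[k - 1][j - 1]) / (4 ** j - 1)
    return R[levels][levels]
```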
Lecture 20 | Friday, 12/4 | Covered Section 7.5: Adaptive Quadrature. Began by observing that some integrands vary slowly in some parts of the interval of integration and vary rapidly in others. For these, equally spaced nodes may inevitably be too close together for efficiency in some parts of the interval and too far apart for accuracy in others. In adaptive quadrature, nodes are placed (only) where needed to achieve a desired level of accuracy; this is done automatically by the algorithm based on estimates of the local error. Discussed how this is carried out using Simpson's rule, as in MATLAB's quad routine.
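A Python sketch of adaptive Simpson quadrature in the spirit of this discussion (the textbook-style recursive scheme, not MATLAB's actual quad implementation): on each interval the local error is estimated by comparing one Simpson step with two half-width steps, and the interval is subdivided only if that estimate exceeds its share of the tolerance.

```python
def adaptive_simpson(f, a, b, tol):
    """Adaptive Simpson quadrature of f on [a, b] with absolute tolerance tol."""
    def simpson(fa, fm, fb, a, b):
        return (b - a) * (fa + 4 * fm + fb) / 6

    def recurse(a, b, fa, fm, fb, whole, tol):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        err = (left + right - whole) / 15           # standard local error estimate for Simpson's rule
        if abs(err) <= tol:
            return left + right + err               # accept, with the cheap extrapolated correction
        return (recurse(a, m, fa, flm, fm, left, tol / 2)
                + recurse(m, b, fm, frm, fb, right, tol / 2))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol)
```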
Lecture 21 | Monday, 12/7 | Began Chapter 8: Numerical Solution of Ordinary Differential Equations. Covered Section 8.1: The Existence and Uniqueness of Solutions, focusing on the first-order ODE initial-value problem and conditions under which it has a unique solution for t near the initial value t_0. Began Section 8.2: Taylor-Series Methods and derived the simplest such method, the (forward) Euler method.
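The (forward) Euler method, x_{k+1} = x_k + h f(t_k, x_k), in a minimal Python sketch:

```python
def euler(f, t0, x0, h, nsteps):
    """Forward Euler for x' = f(t, x), x(t0) = x0, with fixed step size h."""
    t, x = t0, x0
    ts, xs = [t], [x]
    for _ in range(nsteps):
        x = x + h * f(t, x)
        t = t + h
        ts.append(t)
        xs.append(x)
    return ts, xs
```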
Lecture 22 | Tuesday, 12/8 | Continued Section 8.2. Outlined the general approach to deriving Taylor-series methods and illustrated it by deriving the second method of this type. (The Euler method is the first such method.) Introduced the fundamentally important notions of global error and local error. The global error at the kth step is x_k - x(t_k), where x(t) is the solution of the initial-value problem. The local error in x_k is x_k - y(t_k), where y(t) is the solution of the "local" initial-value problem y' = f(t,y), y(t_{k-1}) = x_{k-1}. We would like to know a bound on the global error so that we can make it small; however, there is no way to estimate it efficiently. As we will see, there are efficient ways to estimate the local error; however, its relationship to the global error is unclear. It can be shown, though, that the following holds for most numerical methods for initial-value problems: if the local error for the method is O(h^{k+1}), then, assuming f satisfies mild assumptions and the solution x(t) exists on a bounded interval [t_0,T], there is a constant C such that the absolute value of the global error |x_k - x(t_k)| is bounded by Ch^k for t_0 \le t_k \le T as h -> 0. In this case, the method is said to be convergent and of order k. Saw that, for our Taylor-series methods, the local error is given by the truncation error in the Taylor series. Thus the Euler method has O(h^2) local error and is a first-order method, while the second Taylor-series method has O(h^3) local error and is a second-order method.
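In the usual notation, the second Taylor-series method referred to above comes from differentiating x' = f(t, x) once, so that x'' = f_t + f_x f, giving

    x_{k+1} = x_k + h f(t_k, x_k) + \frac{h^2}{2} \left[ f_t(t_k, x_k) + f_x(t_k, x_k) f(t_k, x_k) \right],

with O(h^3) local error.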
Lecture 23 | Thursday, 12/10 | Began Section 8.3: Runge-Kutta Methods. Illustrated the derivation of these methods by deriving the one-parameter family of second-order Runge-Kutta methods. Noted (but did not derive) the classical 4th-order Runge-Kutta method. NOTE: The method identified in the text as Heun's method is really the modified Euler method; the method identified in the text as the modified Euler method is really the midpoint method.
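The classical 4th-order Runge-Kutta method in a minimal Python sketch (one step for x' = f(t, x)):

```python
def rk4_step(f, t, x, h):
    """One step of the classical 4th-order Runge-Kutta method for x' = f(t, x)."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```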
Lecture 24 | Friday, 12/11 | Discussed adaptive methods, which determine step sizes dynamically to achieve overall accuracy and efficiency. The strategy is to estimate the local error at each step and control it to efficiently maintain desired accuracy. Mentioned in passing step-halving (or step-doubling) to estimate local error. (This works but is relatively inefficient.) Focused on error estimation using embedded methods. (This is especially effective with Runge-Kutta methods.) In these, one simultaneously implements a lower-order method and a higher-order method. The difference between the two resulting values is the estimate of the local error in the lower-order method. Also discussed how to adjust the step size to efficiently satisfy an error criterion. Embedded 4th-5th order pairs are efficiently implemented in the RKF45 routine discussed in the text and in MATLAB's ode45 routine.
Lecture 25 | Monday, 12/14 | Discussed "problematic" problems and special methods. Noted that some problems cannot be accurately solved over even modestly long t-intervals, no matter what method is used. Gave examples that involved exponentially diverging solution curves; for these, small errors that inevitably occur grow rapidly over a series of steps. Discussed stiff ODEs and stiff ODE methods. Stiffness typically occurs when an ODE has solutions that vary over very different time scales, in particular when some solutions decay very rapidly relative to others. Phenomena that often give rise to stiff ODEs are chemical reactions, electrical circuits, and some mechanical systems. If an adaptive Runge-Kutta method is applied to a stiff ODE, it will typically take very small (perhaps intolerably small) stepsizes in order to maintain stability; such small stepsizes are typically much smaller than would be required for accuracy alone. Effective stiff solution methods are necessarily implicit and require solving a linear or nonlinear equation at each time step. However, the cost of this is usually more than offset by the much larger stepsizes that can be taken with the method.
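As a small stability illustration (on the scalar test problem x' = -100x, x(0) = 1, standing in for a genuinely stiff system), forward Euler with h = 0.1 blows up while the implicit backward Euler method decays, at the cost of solving the (here trivial) implicit equation at each step:

```python
lam, h = -100.0, 0.1
x_fe = x_be = 1.0
for _ in range(10):
    x_fe = x_fe + h * lam * x_fe      # forward Euler: multiplies by (1 + h*lam) = -9 each step
    x_be = x_be / (1.0 - h * lam)     # backward Euler: multiplies by 1/(1 - h*lam) = 1/11 each step
print(x_fe, x_be)                      # about 3.5e9 versus about 3.9e-11; the true solution is ~0 at t = 1
```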
Lecture 26 | Tuesday, 12/15 | Reviewed for the final exam.