If you have difficulty accessing these materials due to visual impairment, please email me at mohlenka@ohio.edu; an alternative format may be available.
MATH 6640-100 (11486), Spring 2019
Numerical Analysis: Linear Algebra
Syllabus
Catalog Description:
In-depth analysis of numerical aspects of problems and algorithms in linear algebra.
Desired Learning Outcomes:
Students will have a deep understanding of numerical methods
for linear algebra. They will know the standard methods and be able
to analyze and learn new methods on their own.
Prerequisites:
MATH 5600 Introduction to Numerical Analysis
Instructor:
Martin J. Mohlenkamp,
mohlenka@ohio.edu,
(740)593-1259, 315-B Morton Hall.
Please communicate with me using your @ohio.edu email address. I will not send any private information to other addresses.
Office hours:
Monday, Wednesday, and Friday 2:00-2:55pm.
By appointment. Do not hesitate to ask for an appointment.
Class meetings:
Monday, Wednesday, Friday 3:05-4:00pm in 314 Morton Hall.
How this course is structured:
In most courses, the material is developed from basic to
more advanced and keeps building. That order is nice and
logical, but means that the material is often poorly motivated and
the course never gets to current research in the area.
This course is organized backwards. We will start with
papers on numerical linear algebra that were published in
2018 and then learn whatever background material we need in
order to understand them. This order is illogical and
chaotic, but much closer to how mathematics is learned and
used in research and professionally.
Homework:
We will have some homework due each week, but the size may vary.
Types of homework will include:
computer calculations,
problem solving and proofs,
reading tasks, such as reading a paper and formulating questions about it,
writing tasks, such as answering questions about a paper you have read or describing a background concept, and
short presentations.
On the writing tasks we will also work on writing skills using the
Good Problems program.
Tests:
We will have three tests. They will test definitions and proofs,
rather than calculations.
Final Project:
You will individually do a final project to validate (or invalidate) a recently published work.
You will produce:
An oral presentation.
Presentation slides, in \(\LaTeX\) using the beamer
class.
A Jupyter notebook with:
Analytic checks of the paper, such as filling in missing steps in proofs, checking that results cited from other papers are really in those papers, etc.
Numerical checks of the paper, such as reproducing their numerical results, using simulations to test inequalities, etc.
Final Exam:
The final exam is scheduled on Wednesday, May 1, 12:20-2:20pm. There will not be an exam, but your final project notebook is due at that time.
Attendance:
Your attendance and participation are essential for the operation of this class. You
are allowed 4 absences (out of 41 classes) without penalty;
these include university excused absences for illness, death in
the immediate family, religious observance, jury duty, or
involvement in University-sponsored activities. Each additional
absence will reduce your final average by 0.5%. Your attendance
record will be available in Blackboard.
Grade:
Your grade is based on homework at 40%, tests at
30%, and the final project at 30%. An average
of 90% guarantees you at least an A-, 80% a B-, 70% a C-, and
60% a D-. Grades are not the point.
Computational Environment:
We will use the cloud computing
environment CoCalc. Sign
up for a free account, using your @ohio.edu email
address.
Academic Misconduct:
Homework and Project:
These must be your own work, but you can use any help
that you can find as long as you acknowledge in writing what
help you got and from whom or where. (You do not have to
acknowledge me.) Your words must be your own; any text not
your own must be properly quoted and cited.
Tests:
You may not give or receive any
assistance during a test, including but not
limited to using notes, phones, calculators, computers, or
another student's solutions. (You may ask me questions.) A
minor, first-time violation will result in a zero grade on
that test.
Serious or second violations will result in failure in the
class and be reported to the Office of
Community Standards and Student Responsibility, which may
impose additional sanctions. You may appeal any sanctions
through the grade appeal process.
Special Needs:
If you have specific physical,
psychiatric, or learning disabilities and require
accommodations, please let me know as soon as possible so that
your learning needs may be appropriately met. You should also
register with Student Accessibility
Services to obtain written documentation and to learn about
the resources they have available.
Responsible Employee Reporting Obligation:
If I learn of any instances of sexual harassment, sexual
violence, and/or other forms of prohibited discrimination, I
am required to report them. If you wish to share such
information in confidence, then use one of the confidential
resources listed by
the Office
of Equity and Civil Rights Compliance.
Learning Resources:
People:
Your classmates are your best resource. Use them!
Books:
Numerical Linear Algebra,
by Lloyd N. Trefethen and David Bau III.
Society for Industrial and Applied Mathematics, 1997;
ISBN 978-0-898713-61-9. (Become a student member of SIAM and
buy it
through them at a discount.)
This is a very nice book that I used as the textbook when I last taught this course. I am not using it now because it does not include any developments in the last 20 years and because of posted homework solutions.
Internet searches will reveal many other sources and
copies of books. If we find some particularly useful
(and not copyright violations), then I will add them to
this list.
January 14
Sign up for a free account, using
your @ohio.edu email address.
Create a project. Within it hit "Settings". Within "Add new collaborators" search for me (mohlenka@ohio.edu) and add me as a collaborator.
Within the project click "New", from the library select "First Steps in CoCalc" and hit "Get a Copy". Click the file first-steps.tasks. Do the tasks up to and including the Jupyter Notebook.
Get our first paper Concave Mirsky Inequality and Low-Rank Recovery by
Simon Foucart.
SIAM J. Matrix Anal. Appl. 39-1 (2018), pp. 99-103, doi:10.1137/16M1090004. Upload it into CoCalc. Read it by the next class and make a list of topics it relies on that you do not know enough about.
January 16
Our goal today is to learn how to write in Jupyter notebooks using markdown cells and \(\LaTeX\) encoding.
Within your CoCalc project, click "New" and get
"Markdown in CoCalc" from the
library. Read markdown-intro.md
and markdown-in-jupyter.ipynb.
Create a Jupyter notebook week1.ipynb. Put
your name in a markdown cell.
Put the list of topics that you made about the paper in
another markdown cell.
For each of the following topics, make a markdown cell
with a brief (e.g. 1-3 sentences and a formula) summary and
links/citations to the sources you used. Use \(\LaTeX\)
encoding for any formulas.
January 18
Today we continue learning how to write and refresh our
knowledge of definitions.
Read more
about markdown
syntax. In particular, learn about blockquotes;
use them whenever you have copied something into
your homework.
For each of the following topics, make a markdown cell
with a brief (e.g. 1-3 sentences and a formula) summary and
links/citations to the sources you used. Use \(\LaTeX\)
encoding for any formulas.
Today we will work on some topics that you identified in the paper.
In \(\LaTeX\) either $...$
or \(...\) can be used to indicate inline
math. The form \(...\) is better within
html using MathJax, but markdown works better
with $...$. Similarly, markdown
prefers $$...$$ over \[...\]
for displayed math.
Homework:
Create a Jupyter
notebook week2.ipynb. Put your name in a
markdown cell. Do your work this week in it.
Read
about Norm
and Matrix
norm. Using these or other sources, answer the
following, each in a markdown cell. Remember to cite
your sources.
State the defining properties of a norm.
Let \(f(\mathbf{x})\) be the function defined
on vectors \(\mathbf{x}=(x_1,x_2,x_3)\in
\mathbb{R}^3\) by
\[f(\mathbf{x})=|x_1|+2|x_2|+3|x_3|\,.\] Determine
whether or not \(f\) is a norm.
Define what it means for a matrix norm to be
induced by a vector norm.
Show that if \(A\) and \(B\) are square matrices
and the matrix norm is induced by a vector norm,
then \(\|AB\|\le \|A\|\, \|B\|\).
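Before attempting the last proof, a quick numerical sanity check is possible (a sketch; in numpy, ord=2 gives the matrix norm induced by the vector 2-norm):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))
# ord=2 is the matrix norm induced by the vector 2-norm,
# so submultiplicativity ||AB|| <= ||A|| ||B|| should hold.
lhs = np.linalg.norm(A @ B, ord=2)
rhs = np.linalg.norm(A, ord=2) * np.linalg.norm(B, ord=2)
print(lhs <= rhs)
```

A check like this does not prove anything, but a failure would immediately reveal a misunderstanding of the definitions.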
Today we will work on some topics that you identified in the paper.
We will also go through an existence proof for the SVD.
When \(\LaTeX\) is rendered on an html page using MathJax (like this page), you can see the underlying \(\LaTeX\) code by right-clicking on the math, selecting "Show Math As", and then selecting "TeX Commands".
Create a Jupyter
notebook week3.ipynb for your work this week. Make sure it is using a python kernel.
In a python cell, run
import numpy
from numpy import identity, zeros, dot
from numpy.random import randn
from numpy.linalg import norm
Use randn to generate a random scalar \(a\)
and three random \(10\times 10\) matrices \(A\), \(B\),
and \(C\). Use identity to create the identity
matrix \(I\) and zeros to create the zero matrix \(O\).
Test the following equalities:
\(O+A=A\)
\(A+B=B+A\)
\(A+(B+C)=(A+B)+C\)
\(IA=A\)
\(AB=BA\)
\(A(BC)=(AB)C\)
\(a(B+C)=aB+aC\)
\(A(B+C)= AB+AC\)
(To test an equality like \(A=B\), check \(\|A-B\|\approx 0\).)
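One way such checks might look in a python cell (a sketch using the imports above; @ denotes matrix multiplication and plays the role of dot):

```python
from numpy import identity, zeros
from numpy.random import randn
from numpy.linalg import norm

a = randn()                                   # random scalar
A, B, C = randn(10, 10), randn(10, 10), randn(10, 10)
I = identity(10)
O = zeros((10, 10))

def close(X, Y, tol=1e-12):
    # Test X = Y by checking that ||X - Y|| is numerically zero.
    return norm(X - Y) <= tol * (1 + norm(X) + norm(Y))

print("O+A=A:", close(O + A, A))
print("A+B=B+A:", close(A + B, B + A))
print("AB=BA:", close(A @ B, B @ A))          # expect this one to fail
```

The remaining identities follow the same pattern.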
Tuesday January 29, 9am: homework week2 due
January 30
Information:
Today we work more on coding and test the classical Mirsky inequality.
To get the documentation on a python function within a Jupyter notebook, use "?". So, to get information about range, run range? in a python cell.

Homework:
Generate a random \(10\times 7\) matrix \(A\), compute its
SVD using numpy.linalg.svd,
and run tests to show the results satisfy the definition
of the SVD.
Write a program that inputs the size \(n\), generates random matrices \(X\) and \(Y\), computes both sides of the classical Mirsky inequality as stated in the paper, and reports whether or not the inequality was satisfied. Test it.
Get our second paper A Generalized Matrix
Inverse That Is Consistent with Respect to Diagonal
Transformations by Jeffrey Uhlmann. SIAM
J. Matrix Anal. Appl. 39-2 (2018),
pp. 781-800, doi:10.1137/17M113890X. We
will only read sections 1. and 2., ending on page
786. Make a list of topics it relies on that you do not
know enough about.
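For the SVD and Mirsky items above, a sketch like the following may help. (I state Mirsky's inequality here in its Frobenius-norm form; check it against the exact form stated in the paper.)

```python
import numpy as np

rng = np.random.default_rng(1)

# SVD check: A = U diag(s) V^* with orthonormal columns and s nonincreasing.
A = rng.standard_normal((10, 7))
U, s, Vh = np.linalg.svd(A, full_matrices=False)
print(np.allclose((U * s) @ Vh, A))
print(np.allclose(U.conj().T @ U, np.identity(7)))
print(np.all(np.diff(s) <= 0) and np.all(s >= 0))

def mirsky_holds(n, rng=rng):
    # Classical Mirsky inequality, Frobenius form (as commonly stated):
    # || sigma(X) - sigma(Y) ||_2 <= || X - Y ||_F.
    X = rng.standard_normal((n, n))
    Y = rng.standard_normal((n, n))
    sX = np.linalg.svd(X, compute_uv=False)
    sY = np.linalg.svd(Y, compute_uv=False)
    return np.linalg.norm(sX - sY) <= np.linalg.norm(X - Y, 'fro') + 1e-10

print(all(mirsky_holds(10) for _ in range(20)))
```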
February 1
Today we will start going through the proof of Theorem 1 in the first paper.
Week 4 (February 4)
February 4
Information:
Today we will continue going through the proof of Theorem 1 in the first paper.
Homework:
After we finish the proof together, you will be assigned one of the following parts of the proof:
Step 1a. through "... \((t_m,f(t_m))\)."
Step 1a. from "By concavity of \(f\),..."
Step 1b.
Step 2.
Step 3.
As homework, prepare to present that part of the proof.
Tuesday February 5, 9am: homework week3 due
February 6
Today you will each have 10 minutes to present as much of your part of the proof of Theorem 1 as you can.
At the end I will announce which part will be on the test.
February 8
Test. The test will be on paper, without computers, and consist of:
Stating some of the definitions and formulas you have done in your homework.
Proving the existence theorem for the SVD when the matrix is square and invertible. You will be given the spectral theorem.
Proving some part of Theorem 1.
Week 5 (February 11)
February 11
Information:
Today we practice with python and check the concave Mirsky inequality.
Write python functions for \(g(x)=x\), \(s(x)=\sqrt{x}\), \(h(x)=\min\{1,x\}\), and \[p(x)=\begin{cases}4x & \text{if \(x \le 1\)}\\ 4+3(x-1) & \text{if \(1 \lt x \le 2\)}\\ 7+2(x-2) & \text{if \(2 \lt x\)}\end{cases}\,.\]
Write a program that inputs sizes \(n_1\) and \(n_2\) and function \(f\), generates random \(n_1\times n_2\) matrices \(X\) and \(Y\), computes both sides of the concave Mirsky inequality as stated in Theorem 1 in the paper, determines whether or not the inequality was satisfied, and returns True or False.
Using nested for loops, test all combinations with f in [g,s,h,p], n1 in [2,10,100], and n2 in [2,5,15,20].
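A sketch of how this test might look. (The inequality is written here in one common form of the concave Mirsky inequality, for concave nondecreasing \(f\) with \(f(0)=0\); compare carefully with Theorem 1 in the paper.)

```python
import numpy as np

def g(x): return x
def s(x): return np.sqrt(x)
def h(x): return np.minimum(1.0, x)
def p(x):
    # piecewise-linear concave function from the homework
    return np.where(x <= 1, 4 * x,
           np.where(x <= 2, 4 + 3 * (x - 1), 7 + 2 * (x - 2)))

def concave_mirsky(n1, n2, f, rng=np.random.default_rng()):
    # One common form (check against Theorem 1 in the paper):
    # sum_i f(|sigma_i(X) - sigma_i(Y)|) <= sum_i f(sigma_i(X - Y)).
    X = rng.standard_normal((n1, n2))
    Y = rng.standard_normal((n1, n2))
    sX = np.linalg.svd(X, compute_uv=False)
    sY = np.linalg.svd(Y, compute_uv=False)
    sXY = np.linalg.svd(X - Y, compute_uv=False)
    return np.sum(f(np.abs(sX - sY))) <= np.sum(f(sXY)) + 1e-10

for f in [g, s, h, p]:
    for n1 in [2, 10, 100]:
        for n2 in [2, 5, 15, 20]:
            print(f.__name__, n1, n2, concave_mirsky(n1, n2, f))
```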
February 13
Information:
Today we get started on definitions from the second paper.
Each of you will be assigned a proof bit, which you will present next week:
Proof of Lemma 2.2
Proof of Theorem 2.3, Equations (39) to (44)
Proof of Theorem 2.3, Equations (45) to (48)
Proof of Theorem 2.3, Equations (49) to (50)
Proof of Theorem 2.3, Equations (51) to (60)
Homework:
Write the definition for each of the following. Include citations of your sources.
Make a python function that returns a random \(n\times n\) permutation matrix \(P\). Run a test to show \(P\) is orthogonal.
Generate a random matrix, compute its polar
decomposition, and show that the results satisfy the
definition of the polar decomposition.
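A possible numpy sketch for both items. (numpy has no polar-decomposition routine, so the decomposition is built from the SVD here; scipy.linalg.polar is an alternative.)

```python
import numpy as np

rng = np.random.default_rng(2)

def random_permutation_matrix(n):
    # Shuffle the rows of the identity matrix.
    return np.identity(n)[rng.permutation(n)]

P = random_permutation_matrix(6)
# A permutation matrix is orthogonal: P^T P = I.
print(np.allclose(P.T @ P, np.identity(6)))

def polar_decomposition(A):
    # Polar decomposition A = Q H from the SVD:
    # A = U diag(s) V^* gives Q = U V^* (unitary) and
    # H = V diag(s) V^* (hermitian positive semidefinite).
    U, s, Vh = np.linalg.svd(A)
    Q = U @ Vh
    H = (Vh.conj().T * s) @ Vh
    return Q, H

A = rng.standard_normal((5, 5))
Q, H = polar_decomposition(A)
print(np.allclose(Q @ H, A))
print(np.allclose(Q.conj().T @ Q, np.identity(5)))
print(np.allclose(H, H.conj().T) and np.all(np.linalg.eigvalsh(H) >= -1e-12))
```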
February 15
Information:
Today we learn about the Moore-Penrose pseudoinverse and reproduce an example from the paper.
Homework:
Read about the Moore-Penrose pseudoinverse. State its definition.
Use numpy.linalg.pinv
to compute the pseudoinverse and run tests to show it
satisfies the definition.
Use the SVD of \(A\) produced by svd() to construct
the pseudoinverse of \(A\) and compare with the one
produced by pinv().
Reproduce the example in the paper in Equations (13) to (16).
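A sketch of the pseudoinverse checks (the four Penrose conditions, plus the construction from the SVD):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))

Apinv = np.linalg.pinv(A)
# Check the four Penrose conditions.
print(np.allclose(A @ Apinv @ A, A))
print(np.allclose(Apinv @ A @ Apinv, Apinv))
print(np.allclose((A @ Apinv).T, A @ Apinv))
print(np.allclose((Apinv @ A).T, Apinv @ A))

# Pseudoinverse from the SVD: A^+ = V diag(1/s) U^*.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
Apinv_svd = (Vh.T * (1.0 / s)) @ U.T
print(np.allclose(Apinv, Apinv_svd))
```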
Week 6 (February 18)
February 18
Information:
Today we first discuss Unit Consistency.
Then you will do some coding in preparation for Definition 2.1 and Lemma 2.2
The person assigned to prove Lemma 2.2 should be ready for Wednesday and the others should be ready for Friday.
Homework:
Make a python function that returns a random \(n\times n\) diagonal matrix \(D_+\) with positive entries on the diagonal.
Make a python function that returns a random \(n\times n\) diagonal matrix \(D_u\) that is also real and unitary.
Make a python function that returns a random \(n\times n\) unitary matrix \(U\). (Hint: use the output of svd().)
Make a python function that has a matrix \(A\) as input and returns the matrix \(D\) in equation (29). Use \(\|\cdot\|_2\) as the vector norm.
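Sketches for the first three functions. (The function for \(D\) in equation (29) depends on details in the paper, so it is not sketched here.)

```python
import numpy as np

rng = np.random.default_rng(4)

def random_Dplus(n):
    # Diagonal matrix with positive diagonal entries.
    return np.diag(np.exp(rng.standard_normal(n)))

def random_Du(n):
    # Real and unitary diagonal matrix: diagonal entries are +1 or -1.
    return np.diag(rng.choice([-1.0, 1.0], size=n))

def random_unitary(n):
    # The U factor of the SVD of a random matrix is unitary.
    U, _, _ = np.linalg.svd(rng.standard_normal((n, n)))
    return U

U = random_unitary(5)
print(np.allclose(U.T @ U, np.identity(5)))
D = random_Du(5)
print(np.allclose(D @ D.T, np.identity(5)))
```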
Tuesday February 19, 9am: homework week5 due
February 20
Information:
Today we go through the statement and proof of Lemma 2.2, with one of you presenting the proof.
Then we do numerical tests to verify Lemma 2.2.
Homework:
Write tests to check Lemma 2.2. Specifically, show
that the matrix \(D\) in equation (29) satisfies equations (25)-(28). Test for both square and rectangular \(A\) and with some \(A\) that have a zero row.
Get our third paper Subspace Acceleration for the Crawford Number and Related Eigenvalue Optimization Problems by Daniel Kressner, Ding Lu, and Bart Vandereycken. SIAM
J. Matrix Anal. Appl. 39-2 (2018),
pp. 961-982, doi:10.1137/17M1127545. We
will only read through section 2.1, ending on page
966. Make a list of topics it relies on that you do not
know enough about.
February 22
Information:
Today we go through the statement and proof of Theorem 2.3, with four of you presenting the proof.
Then we do numerical tests to verify Theorem 2.3.
Homework:
Write tests to check Theorem 2.3.
Week 7 (February 25)
February 25
Today we catch up.
Tuesday February 26, 9am: homework week6 due
February 27
Information:
Today we learn connections between the SVD, pseudoinverse, and least-squares problems.
For a linear system \(Ax=b\), the residual is defined by \(r=b-Ax\).
A least-squares solution is an \(x\) such that \(\|r\|_2\) is minimized.
One goal today is to show that \(x = A^+b\) is a least-squares solution, where \(A^+\) is the (Moore-Penrose) pseudoinverse.
Homework:
Prove that \(x\) is a least-squares solution to \(Ax=b\) if and only if it is a least-squares solution to \((UA)x=(Ub)\) for any conformable unitary matrix \(U\).
Prove that \(x\) is a least-squares solution to \(Ax=b\) if and only if \(y=U^*x\) is a least-squares solution to \((AU)y=b\) for any conformable unitary matrix \(U\).
Prove that if \(A\) is a diagonal (not necessarily square) matrix then \(x = A^+b\) is a least-squares solution to \(Ax=b\).
Using the above and the definition of the pseudoinverse in terms of the SVD, show that \(x = A^+b\) is a least-squares solution to \(Ax=b\) for any \(A\).
Run numerical tests of the claim: \(x\) is a least-squares solution to \(Ax=b\) if and only if \(A^*r=0\). Test for both square and rectangular \(A\) and with some \(A\) that have a zero row or a zero column.
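A sketch of how the numerical test of the last claim might look (here the least-squares solution is computed via the pseudoinverse, and the claim is checked in the direction that \(A^*r=0\)):

```python
import numpy as np

def lstsq_normal_eq_check(m, n, rng):
    # x = A^+ b should minimize ||r||_2, which is equivalent
    # to the normal equations A^* r = 0.
    A = rng.standard_normal((m, n))
    b = rng.standard_normal(m)
    x = np.linalg.pinv(A) @ b
    r = b - A @ x
    return np.linalg.norm(A.T @ r) <= 1e-8 * (1 + np.linalg.norm(b))

rng = np.random.default_rng(5)
for (m, n) in [(5, 5), (8, 5), (5, 8)]:
    print(m, n, lstsq_normal_eq_check(m, n, rng))
```

The zero-row and zero-column cases from the homework can be tested by overwriting a row or column of \(A\) with zeros before the solve.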
March 1
Information:
Today we start on definitions from the third paper and related coding.
Each of you is assigned a proof bit from our third paper, which you will present after spring break:
Proof of Lemma 2.1 (a)
Proof of Lemma 2.1 (b) through "obtain \(\phi(\theta;V)=\phi(\theta)\)."
Proof of Lemma 2.1 (b) from "If \(\phi(\theta)\) is a simple".
Proof of Theorem 2.4 through \(s_1=s_0+\ln \varepsilon_1 \lt 0\).
Proof of Theorem 2.4 starting "By standard techniques..."
Homework:
State the Hausdorff-Toeplitz Theorem and explain the terms in it.
Write a python function that inputs \(n\) and returns a random matrix in \(\mathbb{C}^{n\times n}\). (Hint: Make two real matrices and add them as A1+1j*A2.)
Write a python function that inputs a square matrix \(A\) and an integer \(N\) and returns \(N\) random points in the numerical range of \(A\).
Generate a random matrix in \(\mathbb{C}^{4\times 4}\), compute 10000 random points in its numerical range, and plot the points using matplotlib.pyplot.plot. Compare with Figure 1 in the paper.
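A sketch of the coding items (random points in the numerical range are obtained as Rayleigh quotients \(x^*Ax/(x^*x)\) for random \(x\)):

```python
import numpy as np

rng = np.random.default_rng(6)

def random_complex_matrix(n):
    # Two real matrices combined as A1 + 1j*A2 (per the hint).
    return rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def numerical_range_points(A, N):
    # Rayleigh quotients x^* A x / (x^* x) lie in the numerical range of A.
    n = A.shape[0]
    pts = np.empty(N, dtype=complex)
    for k in range(N):
        x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
        pts[k] = (x.conj() @ A @ x) / (x.conj() @ x)
    return pts

A = random_complex_matrix(4)
pts = numerical_range_points(A, 1000)
# To plot: matplotlib.pyplot.plot(pts.real, pts.imag, '.')
print(pts.shape)
```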
Week 8 (March 4)
March 4
Information:
Today we experiment with the Crawford number.
Homework:
Write a python function AtoSK that inputs \(A \in \mathbb{C}^{n\times n}\) and returns the matrices \(S\) and \(K\) as used in Equation (2).
Write a python function mineigSKtheta that inputs hermitian matrices \(S\) and \(K\) in \(\mathbb{C}^{n\times n}\) and \(\theta \in [0,2\pi]\) and returns
\(\lambda_{\mathrm{min}}(S \cos(\theta)+K\sin(\theta))\)
as in Equation (2). (Use numpy.linalg.eigvalsh to get the eigenvalues.)
Generate a random matrix in \(\mathbb{C}^{4\times 4}\), compute \(S\) and \(K\), run mineigSKtheta for 100 values of \(\theta \in [0,2\pi]\), and plot the resulting \(\lambda_{\mathrm{min}}\) as a function of \(\theta\). Compare with Figure 1 in the paper. Based on your plot, what is the Crawford number of your \(A\)?
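A sketch of AtoSK and mineigSKtheta. (I use the standard splitting \(A=S+iK\) with \(S=(A+A^*)/2\) and \(K=(A-A^*)/(2i)\); verify that this matches Equation (2) in the paper.)

```python
import numpy as np

def AtoSK(A):
    # Standard splitting: A = S + iK with S, K both hermitian.
    S = (A + A.conj().T) / 2
    K = (A - A.conj().T) / (2j)
    return S, K

def mineigSKtheta(S, K, theta):
    # Smallest eigenvalue of the hermitian matrix S cos(theta) + K sin(theta);
    # eigvalsh returns eigenvalues in ascending order.
    return np.linalg.eigvalsh(S * np.cos(theta) + K * np.sin(theta))[0]

rng = np.random.default_rng(7)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S, K = AtoSK(A)
print(np.allclose(S, S.conj().T), np.allclose(K, K.conj().T))
print(np.allclose(S + 1j * K, A))
print(mineigSKtheta(S, K, 1.0))
```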
Tuesday March 5, 9am: homework week7 due
March 6
Information:
Today we work on some topics that you identified in the paper.
First, I will discuss optimization terms: Objective function, Univariate optimization, Eigenvalue optimization
Then we will discuss the main ideas of low-dimensional surrogate models, subspace methods, subspace acceleration, and monotonicity.
Homework:
State the Min-max Theorem, which gives a variational characterization of the eigenvalues of a Hermitian matrix.
March 8
Test. The test will be on paper, without computers, and consist of:
Stating some of the definitions and formulas you have done in your homework in weeks 5-7.
Formulas and proofs related to the Moore-Penrose pseudoinverse, such as:
State the defining properties of the Moore-Penrose pseudoinverse \(A^+\).
State the formula for \(A^+\) using the SVD of \(A\).
Show that the \(A^+\) defined using the SVD satisfies the properties of the pseudoinverse.
Prove that \(x=A^+b\) is a least-squares solution of \(Ax=b\).
Formulas and proofs from the second paper, such as:
Explain the similarities and differences between a unitary consistent generalized matrix inverse and a unit consistent generalized matrix inverse.
From Lemma 2.2, prove that (29) satisfies (25), (26), and (28).
From Theorem 2.3, prove that (34) satisfies (35) and (36).
Spring Break
Week 9 (March 18)
March 18
Information:
Your final project is to individually read a paper, do some checks on the proofs, do some numerical tests, and give a presentation. Today we work on finding a suitable paper.
Homework:
Pick two papers from the SIAM Journal on Matrix
Analysis and Applications, volume 39 (2018), which has
issues 1,
2,
3, and
4.
They cannot be ones that we have done as a
class. Choose ones that seem interesting and
accessible to you. (Avoid picking the same ones as
your classmates, since the final project papers must
be different.) For each paper:
Upload it to Cocalc.
Make a list of topics it relies on that you do not
know enough about.
Identify 1-3 proofs that you could check.
Identify 1-3 numerical tests that you could do.
Tuesday March 19, 9am: homework week8 due
March 20
Information:
Today we first go through the proof of Lemma 2.1(a), with one of you presenting.
Then we do a simple check on Lemma 2.1(a).
Homework:
Make a python function that inputs hermitian matrices \(S\) and \(K\) in \(\mathbb{C}^{n\times n}\), a matrix \(V \in \mathbb{C}^{n\times k} \) with orthonormal columns, and a scalar \(\theta \in [0,2\pi]\), and returns \(\lambda_{\mathrm{min}}(V^*(S \cos(\theta)+K\sin(\theta))V)\). This is called \(\phi(\theta;V)\) in the paper.
Generate a random matrix \(A\in\mathbb{C}^{4\times 4}\) and compute its \(S\) and \(K\) (as you did in week 8). Generate a random unitary matrix \(W\in\mathbb{C}^{4\times 4}\). Let \(U\in\mathbb{C}^{4\times 2} \) be the first two columns of \(W\) and \(V\in\mathbb{C}^{4\times 3} \) be the first three columns of \(W\).
Plot \(\phi(\theta;U)\), \(\phi(\theta;V)\), and \(\phi(\theta)\) on the same axis to visually test Lemma 2.1(a).
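A sketch of \(\phi(\theta;V)\) and the ordering one would expect from Lemma 2.1(a): restricting to a smaller subspace can only raise the minimum eigenvalue, so \(\phi(\theta)\le\phi(\theta;V)\le\phi(\theta;U)\) when the columns of \(U\) are a subset of the columns of \(V\).

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = (A + A.conj().T) / 2
K = (A - A.conj().T) / (2j)

def phi_V(S, K, V, theta):
    # phi(theta; V) = lambda_min( V^* (S cos(theta) + K sin(theta)) V ).
    M = S * np.cos(theta) + K * np.sin(theta)
    return np.linalg.eigvalsh(V.conj().T @ M @ V)[0]

W, _, _ = np.linalg.svd(rng.standard_normal((4, 4)))  # random orthogonal W
U2, V3, I4 = W[:, :2], W[:, :3], np.identity(4)
theta = 1.0
# Expect phi(theta) <= phi(theta; V3) <= phi(theta; U2).
print(phi_V(S, K, I4, theta) <= phi_V(S, K, V3, theta) <= phi_V(S, K, U2, theta))
```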
March 22
Information:
Today we first go through the proof of Lemma 2.1(b), with 2 of you presenting.
Then we will do a simple numerical test of Lemma 2.1(b).
Homework:
Generate a random matrix \(A\in\mathbb{C}^{4\times 4}\) and compute its \(S\) and \(K\) (as you did in week 8). Let \(B=S \cos(1)+K\sin(1)\). Use numpy.linalg.eigh to compute the eigenvalues and eigenvectors of \(B\) and let \(V \in\mathbb{C}^{4\times 2}\) be the matrix whose columns are the eigenvectors of \(B\) with smallest eigenvalues.
Plot \(\phi(\theta;V)\) and \(\phi(\theta)\) on the same axis. What does Lemma 2.1(b) say about these two graphs? Do your graphs agree with the Lemma?
Week 10 (March 25)
March 25
Information:
Today we work on the concept of rate of convergence.
Homework:
Write a python function sequencemaker that inputs \(M \gt 0\), \(x_0 \gt 0\), \(q \ge 1 \), and \(N \gt 0\) and outputs the list \([x_0,x_1,\dots,x_N]\) where \(x_k = M x_{k-1}^q\).
What condition is needed on \((M,x_0,q)\) for a sequence generated this way to converge? What will it converge to?
It is claimed that the sequence
\[q_k = \frac{\ln\left(\left|\frac{x_{k+3}-x_{k+2}}{x_{k+2}-x_{k+1}}\right|\right)
}{\ln\left(\left|\frac{x_{k+2}-x_{k+1}}{x_{k+1}-x_k}\right|\right)}\] will converge to the order of convergence of the sequence \(\{x_k\}\).
Write a python function orderestimator that inputs a list \([x_0,x_1,\dots]\) and outputs the list \([q_0,q_1,\dots]\).
Using \((M,x_0,q)=(0.5,10,1)\):
Generate a list using sequencemaker.
Apply orderestimator.
What does it indicate is the order of convergence?
Repeat using \((M,x_0,q)=(0.99,10,1)\), \((1.1,0.5,1.5)\), and \((1.1,0.5,2)\).
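A sketch of sequencemaker and orderestimator, checked on the first parameter set:

```python
import numpy as np

def sequencemaker(M, x0, q, N):
    # x_k = M * x_{k-1}^q, starting from x_0.
    xs = [x0]
    for _ in range(N):
        xs.append(M * xs[-1] ** q)
    return xs

def orderestimator(xs):
    # q_k from successive difference ratios, per the formula above.
    qs = []
    for k in range(len(xs) - 3):
        num = np.log(abs((xs[k+3] - xs[k+2]) / (xs[k+2] - xs[k+1])))
        den = np.log(abs((xs[k+2] - xs[k+1]) / (xs[k+1] - xs[k])))
        qs.append(num / den)
    return qs

xs = sequencemaker(0.5, 10.0, 1, 10)
print(orderestimator(xs))   # for q=1 the estimates are all 1.0
```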
Tuesday March 26, 9am: homework week9 due
March 27
Information:
Today you choose the paper for your project.
Homework:
Select the paper for your project. It can be one of the two you looked at last week or another. Two students cannot do the same paper.
Expand the list of topics it relies on that you do not
know enough about. Include Wikipedia or other web links to them if available.
Give further details on which proof(s) you propose to check. List the main elements of the argument and include any you are unfamiliar with in your topic list above.
Give further details on the numerical tests that you propose to do.
Specify which sections of the paper you propose to cover.
March 29
Information:
Today we first go through Algorithm 1.
Then we will go through the statement of Theorem 2.3.
Then we will go through the proof of Theorem 2.4, with 2 of you presenting.
Week 11 (April 1)
April 1
Information:
First we finish up any proofs from Friday.
Then we will do a simple numerical test of Theorem 2.4.
Homework:
Make a python function that inputs \(e_0\), \(e_1\), \(C\), and \(K\) and returns \([e_0,e_1,e_2,\dots,e_K]\) with \(e_{k+1}\) determined by equation (7) with \(=\) instead of \(\le\).
Use your function to generate a sequence with \((e_0,e_1,C,K)=(0.4,0.3,2,12)\).
Apply orderestimator.
What does it indicate is the order of convergence? Does it agree with Theorem 2.4?
Tuesday April 2, 9am: homework week10 due
April 3
Information:
Today we get started on the files for your project.
Homework:
Create a Jupyter notebook for your project. In it, fill in (at least) the following:
A title for the project.
Your name, the course, the semester, and the year.
Full bibliographic information for the paper.
An introductory section with a brief summary of what the paper is about.
A section stating in full the theorems, lemmas, etc. whose proofs you will check. Include definitions for things mentioned. Divide into subsections as appropriate.
A section stating the things you will test numerically. Quote the relevant material from the paper, rather than just referring to the paper. Divide into subsections as appropriate.
Upload slides.tex and OHIOCLR.pdf. Edit slides.tex and fill in (at least) the following:
A title for the project.
Your name.
Full bibliographic information for the paper on the slides references page.
An abstract for your talk (not the paper's abstract).
An introduction of 1-3 slides with a brief summary of what the paper is about.
\section{...} and \subsection{...} commands so that the table of contents on page 3 gives an outline of your talk.
April 5
Information:
Today we learn about condition number.
Homework:
Read about Condition
Number. Summarize in your own words. State the
relationship between condition number and the error in
solving a linear system. State the formula for the
condition number of a matrix with respect to
\(\|\cdot\|_2\) in terms of the singular values.
Write a python function with inputs \((m,n,k)\) that returns a random-looking matrix \(A\) such that
\(A \in \mathbb{C}^{m\times m}\),
\(\|A\|_2=n\), where \(\|A\|_2\) is the matrix norm induced by the vector norm \(\|\cdot\|_2\), and
the condition number of \(A\) with respect to
\(\|\cdot\|_2\) is \(k\).
Validate your function using A.shape, numpy.linalg.norm(A,ord=2), and numpy.linalg.cond(A,p=2).
Write a python function with inputs \((m,n,k)\) that calls your function above to create \(A\), creates an \(m\times 1\) vector \(y\),
computes \(b=Ay\), solves \(Ax=b\) for \(x\) using numpy.linalg.solve, and returns
the relative error \(\|x-y\|_2/\|y\|_2\).
State the theoretical dependence of that relative
error on \(m\), \(n\), and \(k\). Use your function to
test this dependence. Include the case \((m,n,k)=(50,10^3,10^{14})\) in your tests.
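One way to build such a matrix is to prescribe its singular values directly. (The function name here is my own; \(n\) is the desired norm and \(k\) the desired condition number, matching the homework's inputs.)

```python
import numpy as np

def normcond_matrix(m, n, k, rng=np.random.default_rng()):
    # Build A = U diag(s) V^T with prescribed singular values:
    # largest = n, so ||A||_2 = n; smallest = n/k, so cond_2(A) = k.
    U, _, Vh = np.linalg.svd(rng.standard_normal((m, m)))
    s = np.geomspace(n, n / k, m)   # singular values from n down to n/k
    return (U * s) @ Vh

A = normcond_matrix(5, 10.0, 100.0)
print(A.shape)
print(np.isclose(np.linalg.norm(A, ord=2), 10.0))
print(np.isclose(np.linalg.cond(A, p=2), 100.0))
```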
Week 12 (April 8)
April 8
Information:
Today we work on the concept of computational cost.
In theory, the computational complexity of adding two vectors in \(\mathbb{R}^n\) is \(\mathcal{O}(n)\). The following code tests this.
import timeit
# setup, not timed
def s(n): return "import numpy as np; x=np.random.randn(%s); y=np.random.randn(%s)" % (n, n)
# operation to test
op = "z=x+y"
# sizes to try
ns = [2**j for j in range(20)]
# theoretical order of cost
def c(n): return n
# number of repetitions to use (must be an integer)
def r(n): return max(1, 2**21 // c(n))
# time to do the operation
T = [timeit.timeit(op, setup=s(n), number=r(n)) / r(n) for n in ns]
# time compared to theoretical order
Tc = [t / c(n) for t, n in zip(T, ns)]
for i in range(len(ns)):
    print(i, ns[i], T[i], Tc[i])
Run this code and interpret the results.
We would also like to test the following operations:
Adding two square matrices.
Multiplying a square matrix times a vector.
Multiplying two square matrices.
Computing the SVD of a square matrix.
For each of these, modify the above code, run a test, and interpret the results.
How does the time it takes to compute the SVD compare to the time for other operation(s) that have the same theoretical order?
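As one example, the code above might be adapted for matrix-matrix multiplication like this (a sketch; the repetition budget and size range are smaller since the operation is \(\mathcal{O}(n^3)\)):

```python
import timeit

# Same pattern as above, adapted to matrix-matrix multiplication.
def s(n): return "import numpy as np; A=np.random.randn(%s,%s); B=np.random.randn(%s,%s)" % (n, n, n, n)
op = "C = A @ B"
ns = [2**j for j in range(1, 8)]
def c(n): return n**3                     # theoretical O(n^3) cost
def r(n): return max(1, 2**18 // c(n))    # integer repetition count
T = [timeit.timeit(op, setup=s(n), number=r(n)) / r(n) for n in ns]
Tc = [t / c(n) for t, n in zip(T, ns)]
for i in range(len(ns)):
    print(i, ns[i], T[i], Tc[i])
```

The other operations only require changing s, op, and c to match.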
Tuesday April 9, 9am: homework week11 due
April 10
Information:
Today is for working on your project.
In week 14 you will do a presentation about it.
It must be at least 20 and at most 25 minutes long.
It must use the \(\LaTeX\) slides template you started last week.
Aim for half the presentation to explain to the
class what the paper is about and half to show what
you did.
If you would like to do a practice talk next week, then let me know.
April 12
Test. The test will be on paper, without computers, and consist of:
Stating some of the definitions and formulas you have done in your homework in weeks 8-11.
Material from the third paper.
Lemma 2.1.
State the lemma.
Explain in words and graphically what it means.
(You do not need to prove it.)
Algorithm 1.
Write down the algorithm.
Explain what each line does. Use graphs to illustrate.
Week 13 (April 15)
April 15
Information:
First we set the order for the presentations next week.
Then we learn a little about some common matrix decompositions.
Homework:
Read about the QR Decomposition, state its definition, and state one use for it.
Generate some random \(A \in \mathbb{C}^{m\times m}\), get their QR decomposition using numpy.linalg.qr, and show that the results satisfy the
definition.
Read about
the LU
Decomposition, state its definition, and state one
use for it. Generate some random \(A \in
\mathbb{C}^{m\times m}\), get their LU decomposition
using scipy.linalg.lu, and show that the
results satisfy the definition.
Read about
the Cholesky
Decomposition, state its definition, and state one
use for it. Generate some random positive definite \(A
\in \mathbb{C}^{m\times m}\), get their Cholesky
decomposition using numpy.linalg.cholesky,
and show that the results satisfy the definition.
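A sketch of the three checks (the positive definite matrix for the Cholesky item is constructed as \(AA^*+mI\), which is one convenient choice):

```python
import numpy as np
from scipy.linalg import lu

rng = np.random.default_rng(9)
m = 5
A = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))

# QR: A = QR with Q unitary and R upper triangular.
Q, R = np.linalg.qr(A)
print(np.allclose(Q @ R, A))
print(np.allclose(Q.conj().T @ Q, np.identity(m)))
print(np.allclose(R, np.triu(R)))

# LU with pivoting: A = P L U_ with L unit lower and U_ upper triangular.
P, L, U_ = lu(A)
print(np.allclose(P @ L @ U_, A))
print(np.allclose(L, np.tril(L)) and np.allclose(U_, np.triu(U_)))

# Cholesky: for positive definite B, B = C C^* with C lower triangular.
B = A @ A.conj().T + m * np.identity(m)   # a positive definite matrix
C = np.linalg.cholesky(B)
print(np.allclose(C @ C.conj().T, B))
```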
April 17
Today is time for working on projects.
Thursday April 18, 9am: homework week12-13 (through April 15) due