Week 3: Exercises#
Exercises – Long Day#
1: Orthonormal Basis (Coordinates). By Hand.#
Question a#
Do the vectors
constitute an orthonormal basis in
Hint
The vectors need to be pairwise orthogonal, meaning they must be perpendicular to each other. How do we check for that?
Hint
Two vectors are orthogonal precisely when their scalar product (dot product) is 0. But more is needed for them to be orthonormal.
Hint
A basis is orthonormal if the vectors are mutually orthogonal and each of them has a length (norm) of 1.
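If you want to verify your by-hand check in SymPy, a minimal sketch could look as follows (the vectors below are placeholders, not the vectors from the exercise):
from sympy import Matrix, sqrt
# Placeholder vectors (not the vectors from the exercise)
u1 = Matrix([1, 1, 0]) / sqrt(2)
u2 = Matrix([1, -1, 0]) / sqrt(2)
u3 = Matrix([0, 0, 1])
# Pairwise orthogonality: all mutual dot products must equal 0
print(u1.dot(u2), u1.dot(u3), u2.dot(u3))
# Normalization: each vector must have norm 1
print(u1.norm(), u2.norm(), u3.norm())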
Answer
Question b#
Consider the vector
Hint
These are just the usual dot products
Question c#
Let’s denote the basis by
Question d#
Create the
2: Orthonormal Basis (Construction). By Hand.#
Create an orthonormal basis for
is the first basis vector.
Hint
You can, for example, start by guessing a unit vector that is perpendicular to the given vector.
Hint
What about
Hint
Can you guess one more? Simply start by finding a vector that is perpendicular to both the given vector and
Hint
What about
Hint
Answer
A suggestion for an orthonormal basis is
but there are other possibilities! Think about why that is the case.
3: Orthonormalization. By Hand.#
Find the solution set to the homogeneous equation
and justify that it constitutes a subspace in
Hint
If we are to find an orthonormal basis for the solution space, we first need to find a basis for the solution space, meaning we must solve the equation.
Hint
Let
Hint
With
How do we form a basis for this solution space?
Hint
The solution space is
Now they just need orthonormalization.
Hint
When two vectors need to be orthonormalized, we must first rotate them relative to each other within the space they span so that they become orthogonal (perpendicular to each other) while still spanning the same space. Second, they must be normalized, meaning each of them is lengthened or shortened so that it has length 1.
Hint
Fortunately, we can simply carry this out by following the Gram-Schmidt procedure.
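If you want to cross-check your hand calculation, the two Gram-Schmidt steps can be sketched in SymPy as follows (with placeholder vectors, not the basis of the solution space from this exercise):
from sympy import Matrix
# Placeholder vectors (not the basis found from the equation)
a1 = Matrix([1, 1, 0])
a2 = Matrix([1, 0, 1])
# Step 1: orthogonalize a2 against the normalized a1
u1 = a1 / a1.norm()
w2 = a2 - a2.dot(u1) * u1
# Step 2: normalize
u2 = w2 / w2.norm()
u1, u2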
Answer
An orthonormal basis for the solution space of the equation is formed by the vectors
4: Orthogonal Projections#
Let
where
Question a#
Let
Question b#
As usual, we consider all vectors as column vectors. Now, find the
5: An Orthonormal Basis for a Subspace of #
Question a#
Find an orthonormal basis
v1 = Matrix([I, 1, 1, 0])
v2 = Matrix([0, I, I, sqrt(2)])
Hint
You can find it using the Gram-Schmidt procedure. Check your result with the SymPy command GramSchmidt([v1, v2], orthonormal=True).
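For reference, a runnable version of the check suggested in the hint, using the vectors v1 and v2 given above:
from sympy import *
from dtumathtools import *
v1 = Matrix([I, 1, 1, 0])
v2 = Matrix([0, I, I, sqrt(2)])
# The check suggested in the hint
GramSchmidt([v1, v2], orthonormal=True)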
Question b#
Let
Calculate
What does this linear combination give? Does
6: A Python Algorithm#
Question a#
Without running the following code, explain what it does and what you expect the output to be. Then run the code in a Jupyter Notebook.
from sympy import *
from dtumathtools import *
init_printing()
x1, x2, x3 = symbols('x1:4', real=True)
eqns = [Eq(1*x1 + 2*x2 + 3*x3, 1), Eq(4*x1 + 5*x2 + 6*x3, 0), Eq(5*x1 + 7*x2 + 8*x3, -1)]
eqns
A, b = linear_eq_to_matrix(eqns,x1,x2,x3)
T = A.row_join(b) # Augmented matrix
A, b, T
Question b#
We will continue the Jupyter Notebook by adding the following code (as before, do not run it yet). Go through the code by hand (think through the for loops). What do you expect the output to be? Note that T.shape[0] gives the number of rows in the matrix T.
for col in range(T.shape[0]):
    for row in range(col + 1, T.shape[0]):
        T[row, :] = T[row, :] - T[row, col] / T[col, col] * T[col, :]
    T[col, :] = T[col, :] / T[col, col]
T
Question c#
Write Python code that ensures zeros above the diagonal in the matrix
Note
You do not need to take possible divisions by zero into account (for general
Hint
You will need two for loops, such as:
for col in range(T.shape[0] - 1, -1, -1):
    for row in range(col - 1, -1, -1):
Remember that index
Hint
Ask an AI tool, such as https://copilot.microsoft.com/ (log in with your DTU account), for help if you get stuck. However, you should first attempt the exercise on your own.
Answer
# Backwards Elimination: Create zeros above the diagonal
for col in range(T.shape[0] - 1, -1, -1):
    for row in range(col - 1, -1, -1):
        T[row, :] = T[row, :] - T[row, col] * T[col, :]
T
Question d#
What kind of algorithm has been implemented? Test the same algorithm on:
x1, x2, x3, x4 = symbols('x1:5', real=True)
eqns = [Eq(1*x1 + 2*x2 + 3*x3, 1), Eq(4*x1 + 5*x2 + 6*x3, 0), Eq(4*x1 + 5*x2 + 6*x3, 0), Eq(5*x1 + 7*x2 + 8*x3, -1)]
A, b = linear_eq_to_matrix(eqns,x1,x2,x3,x4)
T = A.row_join(b) # Augmented matrix
Answer
This is the Gaussian elimination procedure for solving linear systems of equations (see also Math1a). The output is the reduced row-echelon form of the input matrix. We have not taken row switches (or column switches) into account, and thus there is a risk of division by zero if a pivot entry T[col, col] is zero during the elimination.
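As a cross-check (not part of the exercise), the output of the elimination loops can be compared with SymPy's built-in reduced row-echelon form, here rebuilding the augmented matrix from question a:
from sympy import *
x1, x2, x3 = symbols('x1:4', real=True)
eqns = [Eq(1*x1 + 2*x2 + 3*x3, 1), Eq(4*x1 + 5*x2 + 6*x3, 0), Eq(5*x1 + 7*x2 + 8*x3, -1)]
A, b = linear_eq_to_matrix(eqns, x1, x2, x3)
T = A.row_join(b)   # the augmented matrix from question a
T.rref()[0]   # should equal the matrix produced by the two elimination loops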
7: Orthogonal Polynomials#
This is an exercise from the textbook. You can find help there.
Consider the list
Question a#
Argue that
Hint
The assumption is that
Hint
Choose four different
Hint
We choose, for example,
Hint
Check that the four equations only have the zero solution. Conclude that the four polynomials are linearly independent.
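If you want to verify the conclusion in SymPy, a sketch could look as follows; the evaluation points -1, 0, 1, 2 are just one possible choice of four different points:
from sympy import symbols, linsolve
c1, c2, c3, c4 = symbols('c1:5', real=True)
def p(x):
    # The linear combination c1*1 + c2*x + c3*x**2 + c4*x**3
    return c1 + c2*x + c3*x**2 + c4*x**3
# Evaluate at four different points (here -1, 0, 1, 2)
eqs = [p(-1), p(0), p(1), p(2)]
linsolve(eqs, [c1, c2, c3, c4])   # expect only the zero solution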
Question b#
Apply the Gram-Schmidt procedure on
Hint
To calculate the norms and the inner products you will need integrate in SymPy (or calculate the integrals by hand). For example, use x = symbols('x', real=True) and integrate(x**2, (x,-1,1)).
Hint
The polynomial
x = symbols('x', real=True)
v1 = 1
v2 = x
v3 = x**2
v4 = x**3
v1_norm = sqrt(integrate(v1**2, (x,-1,1)))
u1 = v1/v1_norm
u1
Hint
The polynomial
w2 = v2 - integrate(v2*u1, (x,-1,1)) * u1
u2 = w2/sqrt(integrate(w2**2, (x,-1,1)))
u2
Hint
The polynomial
w3 = v3 - integrate(v3*u1, (x,-1,1)) * u1 - integrate(v3*u2, (x,-1,1)) * u2
u3 = w3/sqrt(integrate(w3**2, (x,-1,1)))
Hint
The polynomial
w4 = v4 - integrate(v4*u1, (x,-1,1)) * u1 - integrate(v4*u2, (x,-1,1)) * u2 - integrate(v4*u3, (x,-1,1)) * u3
u4 = w4/sqrt(integrate(w4**2, (x,-1,1)))
Answer
The answer is obtained by running the code as given in the above hints. Compare the result with legendre(0,x), legendre(1,x), legendre(2,x).factor() and legendre(3,x).factor(). They should differ by a scaling factor.
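For convenience, the Legendre polynomials used for the comparison can be generated like this:
from sympy import legendre, symbols
x = symbols('x', real=True)
# The first four Legendre polynomials, for comparison with u1, u2, u3, u4
[legendre(k, x) for k in range(4)]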
Exercises – Short Day#
1: Matrix Multiplications. By Hand.#
Define
Let
Question a#
Method 1: As a linear combination of the columns. Calculate the linear combination
Question b#
Method 2: As “dot product” of the rows in
Note
Since
Question c#
Calculate
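Although this exercise is meant to be done by hand, the two methods from questions a and b can be illustrated in SymPy with a small example; the matrix and vector below are hypothetical, not the ones from the exercise:
from sympy import Matrix
# Hypothetical example (not the matrix and vector from the exercise)
A = Matrix([[1, 2], [3, 4], [5, 6]])
x = Matrix([7, 8])
# Method 1: A*x as a linear combination of the columns of A
method1 = x[0] * A[:, 0] + x[1] * A[:, 1]
# Method 2: each entry of A*x as the "dot product" of a row of A with x
method2 = Matrix([A[i, :].dot(x) for i in range(A.rows)])
method1 == A * x, method2 == A * x   # both should be True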
2: A Subspace in and its Orthogonal Complement#
Let the following vectors be given in
A subspace
Question a#
v1 = Matrix([1,1,1,1])
v2 = Matrix([3*I,I,I,3*I])
v3 = Matrix([2,0,-2,4])
v4 = Matrix([4-3*I,2-I,-I,6-3*I])
Run the command GramSchmidt([v1,v2,v3,v4], orthonormal=True) in Python. What does Python tell you?
# GramSchmidt([v1, v2, v3, v4], orthonormal = True)
Question b#
Now show that
Answer
The vectors
constitute the coordinate vector with respect to basis
Question c#
Provide an orthonormal basis for
Hint
The vectors
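A sketch of how this can be checked in SymPy, assuming (as suggested by question b) that the subspace is spanned by v1, v2 and v3:
from sympy import *
from dtumathtools import *
v1 = Matrix([1, 1, 1, 1])
v2 = Matrix([3*I, I, I, 3*I])
v3 = Matrix([2, 0, -2, 4])
# Gram-Schmidt on the three vectors assumed to span the subspace
GramSchmidt([v1, v2, v3], orthonormal=True)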
Answer
constitute an orthonormal basis for
Question d#
Determine the coordinate vector of
Hint
This can be calculated with inner products (as we are used to by now) or with a fitting matrix-vector product.
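A sketch of the matrix-vector approach in SymPy, assuming the subspace is spanned by v1, v2, v3; the vector b below is only a placeholder choice (v4 from question a) and should be replaced by the vector asked about in this question:
from sympy import *
from dtumathtools import *
v1 = Matrix([1, 1, 1, 1])
v2 = Matrix([3*I, I, I, 3*I])
v3 = Matrix([2, 0, -2, 4])
v4 = Matrix([4 - 3*I, 2 - I, -I, 6 - 3*I])
# Orthonormal basis from question c, collected as the columns of U
u1, u2, u3 = GramSchmidt([v1, v2, v3], orthonormal=True)
U = Matrix.hstack(u1, u2, u3)
b = v4   # placeholder choice; replace by the vector from this question
U.H * b   # the i'th entry is the inner product of b with the i'th basis vector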
Question e#
Determine the orthogonal complement
Hint
First, guess a vector and check if your guess is located in
Hint
You can, for example, try the guess
Hint
A vector belongs to the orthogonal complement precisely when it is orthogonal to every vector in a basis for
Hint
Use the Gram-Schmidt procedure: continue from where you left off in question b and simply include the vector you have just guessed. You do not need to normalize the vector in the last step, since it is enough to orthogonalize. (Why?)
Answer
The orthogonal complement is a 1-dimensional subspace of
An Alternative Approach
It is not necessary to continue with the Gram-Schmidt procedure as described above. Here is an alternative approach that starts directly from the definition of the orthogonal complement: Let
Then solve this linear system of equations, and determine/choose a vector that spans the null space (the kernel).
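A sketch of this alternative approach in SymPy, assuming the subspace is spanned by v1, v2 and v3 from question a:
from sympy import *
v1 = Matrix([1, 1, 1, 1])
v2 = Matrix([3*I, I, I, 3*I])
v3 = Matrix([2, 0, -2, 4])
# A vector x lies in the orthogonal complement exactly when it is orthogonal to each
# basis vector, i.e. when v1, v2, v3 (conjugate-transposed) times x all give 0.
M = Matrix.vstack(v1.adjoint(), v2.adjoint(), v3.adjoint())
M.nullspace()   # vector(s) spanning the kernel, i.e. the orthogonal complement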
Question f#
Choose a vector
3: Orthogonal Projection on a Plane#
Let the matrix
Question a#
Show that
Answer
By calculation, it is shown that
Question b#
Let
Answer
From this it is easy to show that
Question c#
Choose a vector
Answer
One can, for example, choose
U = Matrix([[sqrt(3)/3, sqrt(2)/2], [sqrt(3)/3, 0], [-sqrt(3)/3, sqrt(2)/2]])
P = U * U.T
x = Matrix([1,2,3])
t1, t2 = symbols('t1 t2')
r = t1 * U[:,0] + t2 * U[:,1]
p = dtuplot.plot3d_parametric_surface(r[0], r[1], r[2], (t1, -5, 5), (t2, -5, 5), show = False, rendering_kw = {"alpha": 0.5})
p.extend(dtuplot.scatter((x, P*x), show = False))
p.xlim = [-5,5]
p.ylim = [-5,5]
p.zlim = [-5,5]
p.show()
Question d#
Show that
Hint
With the choice
Hint
An arbitrary vector in
Hint
Show that
Answer
Since
4: Unitary Matrices#
Let a matrix
n = 4
F = 1/sqrt(n) * Matrix(n, n, lambda k,j: exp(-2*pi*I*k*j/n))
F
State whether the following propositions are true or false:
F is unitary
F is invertible
F is orthogonal
F is symmetric
F is Hermitian
The columns in F constitute an orthonormal basis for
The columns in F constitute an orthonormal basis for
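Several of the propositions can be checked directly in SymPy before looking at the answers; here F.H denotes the conjugate transpose and F.T the ordinary transpose:
from sympy import *
n = 4
F = 1/sqrt(n) * Matrix(n, n, lambda k, j: exp(-2*pi*I*k*j/n))
# Unitary check, orthogonality check, symmetry check, Hermitian check
simplify(F.H * F), simplify(F.T * F), F == F.T, F == F.H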
Answer
True
True
False
True
False
True
False
False
Note: The matrix is called the Fourier matrix. It fulfills for all