Demo 3: Inner-Product Spaces#

Demo by Magnus Troen. Revised March 2026 by shsp.

from sympy import *
from dtumathtools import *
init_printing()

The usual inner product \(\left<\cdot, \cdot \right>\) in an inner-product space \(V\subseteq \mathbb{F}^n\) is given by:

\[ \left<\cdot, \cdot \right>: V \times V \to \mathbb{F},\phantom{...} \left<\boldsymbol{x}, \boldsymbol{y} \right> = \sum_{k=1}^{n} x_k\overline{y}_k\,\,, \]

for all \(\boldsymbol{x},\boldsymbol{y} \in V\). See the textbook for more details. Sympy does not have a dedicated inner-product command, so we will have to handle it manually for each vector space.
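As a sketch of this definition, the sum \(\sum_{k} x_k\overline{y}_k\) can be typed out directly (the helper name usual_inner is ours, not a Sympy command) and compared with Sympy's .dot with the conjugate_convention argument:

```python
from sympy import Matrix, I, conjugate, simplify

# A minimal sketch of the definition: sum over x_k * conj(y_k).
# The helper name `usual_inner` is our own, not a Sympy command.
def usual_inner(x: Matrix, y: Matrix):
    return sum(x[k] * conjugate(y[k]) for k in range(x.rows))

x = Matrix([1, 2])
y = Matrix([2 - I, 2])
print(usual_inner(x, y))  # 6 + I
# The difference to Sympy's built-in variant simplifies to zero:
print(simplify(usual_inner(x, y) - x.dot(y, conjugate_convention='right')))  # 0
```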

Inner Product on \(\Bbb R^n\)#

For \(\Bbb R^n\), the above-mentioned usual inner product reduces to the well-known dot product. So, we can simply use Sympy’s .dot command in the case of real vectors:

x1 = Matrix([1,2,3])
x2 = Matrix([4,5,6])
x1.dot(x2), x2.dot(x1)
\(\displaystyle \left( 32, \ 32\right)\)

A quick manual check with \(\boldsymbol x_1 = (1,2,3)\) and \(\boldsymbol x_2 = (4,5,6)\) agrees:

\[ \left<\boldsymbol x_1,\boldsymbol x_2\right> = 1\cdot4+2\cdot5+3\cdot6=32. \]

Note that the order of the vectors in this inner product doesn’t matter, since the dot product is symmetric on real vectors.
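That symmetry can also be confirmed symbolically; a minimal sketch with symbol names of our own choosing:

```python
from sympy import Matrix, symbols, expand

a1, a2, a3, b1, b2, b3 = symbols('a1 a2 a3 b1 b2 b3', real=True)
a = Matrix([a1, a2, a3])
b = Matrix([b1, b2, b3])

# For real vectors <a, b> = <b, a>: the difference vanishes identically.
print(expand(a.dot(b) - b.dot(a)))  # 0
```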

Inner Product on \(\Bbb C^n\)#

For \(\Bbb C^n\), the above-mentioned inner product is not simply the dot product; it is a variant in which the second vector is conjugated. Fortunately, all it takes in Sympy is adding the argument conjugate_convention = 'right' to the .dot command:

x3 = Matrix([1, 2])
x4 = Matrix([2-I, 2])
x3.dot(x4, conjugate_convention = 'right')
\(\displaystyle 6 + i\)

If this feels too long to type out, then feel free to define your own helper function such as this one:

def inner(x: Matrix, y: Matrix):
    '''
    Computes the inner product of two vectors of the same size,
    conjugating the second (right) vector
    '''
    
    return x.dot(y, conjugate_convention = 'right')

MutableDenseMatrix.inner = inner
ImmutableDenseMatrix.inner = inner

Instead of x3.dot(x4, conjugate_convention = 'right') you can now simply type x3.inner(x4) or inner(x3,x4):

x3.inner(x4), inner(x3,x4), x4.inner(x3), inner(x4,x3)
\(\displaystyle \left( 6 + i, \ 6 + i, \ 6 - i, \ 6 - i\right)\)

Note how the order of the vectors does matter in this case, as expected: swapping them conjugates the result. A quick manual check with the two complex vectors \(\boldsymbol x_3 = (1,2)\) and \(\boldsymbol x_4 = (2 - i,2)\) agrees:

\[ \left<\boldsymbol x_3,\boldsymbol x_4\right> = 1\cdot(\overline{2-i}) + 2\cdot \overline{2} = 2+i + 4= 6 + i. \]

Note that our home-made Python function inner is also usable for \(\Bbb R^n\), since it reduces to the dot product on real vectors, where conjugation has no effect.

Norm#

The textbook defines the norm as \(\Vert \boldsymbol{x} \Vert = \sqrt{\left<\boldsymbol{x}, \boldsymbol{x} \right>}\), which with the usual inner product on vectors is simply the Pythagorean theorem. For real vectors in particular, the norm is typically thought of as the length of the vector.
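As a sketch, the defining formula can be typed out and compared with Sympy's built-in .norm() (the helper name norm_from_inner is ours):

```python
from sympy import Matrix, sqrt, simplify, I

def norm_from_inner(x: Matrix):
    # ||x|| = sqrt(<x, x>), with the conjugating inner product so that
    # the result is real also for complex vectors. Helper name is ours.
    return sqrt(x.dot(x, conjugate_convention='right'))

x = Matrix([1, 2, 3])
z = Matrix([2 - I, 2])
print(simplify(norm_from_inner(x) - x.norm()))  # 0
print(simplify(norm_from_inner(z)), z.norm())   # both give 3
```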

Instead of manually typing out the Pythagorean theorem, Sympy has the .norm() method for us to use:

x1.norm(), x2.norm(), x3.norm(), x4.norm() 
\(\displaystyle \left( \sqrt{14}, \ \sqrt{77}, \ \sqrt{5}, \ 3\right)\)

Note that this version of the norm is also known as the 2-norm (or the Euclidean norm or the \(\ell^2\)-norm), denoted by \(||\cdot||_2\) when that needs to be clarified. You can compute other norms as follows - here \(||\boldsymbol x_1||_2\), \(||\boldsymbol x_1||_3\), \(||\boldsymbol x_1||_4\), and \(||\boldsymbol x_1||_\infty\) - but we won’t dive further into these in this course:

x1.norm(2), x1.norm(3), x1.norm(4), x1.norm(oo)
\(\displaystyle \left( \sqrt{14}, \ 6^{2/3}, \ \sqrt[4]{2} \sqrt{7}, \ 3\right)\)

If you prefer, the general norm formula \(\Vert \boldsymbol{x} \Vert = \sqrt{\left<\boldsymbol{x}, \boldsymbol{x} \right>}\) is easily typed out in code manually:

sqrt(x4.inner(x4)).simplify() , sqrt(x1.dot(x1)).simplify()
\(\displaystyle \left( 3, \ \sqrt{14}\right)\)

But be careful to use the correct inner-product calculation (if the computed norm is not a real, non-negative number, then a wrong definition has been used):

sqrt(x4.inner(x4)).simplify() , sqrt(x4.dot(x4)).simplify()
\(\displaystyle \left( 3, \ \sqrt{7 - 4 i}\right)\)

Projections onto a Line#

The Projection Formula#

The textbook explains how the projection of a vector \(\boldsymbol{x} \in \mathbb{F}^n\) onto a line \(Y = \operatorname{span}\{\boldsymbol{y}\}\) spanned by a vector \(\boldsymbol{y} \in \mathbb{F}^n\) can be computed as

\[ \operatorname{Proj}_Y(\boldsymbol{x}) = \frac{\left<\boldsymbol{x},\boldsymbol{y} \right>}{\left<\boldsymbol{y},\boldsymbol{y} \right>}\boldsymbol{y} = \left<\boldsymbol{x},\boldsymbol{u}\right>\boldsymbol{u}, \]

where \(\boldsymbol{u} = \frac{\boldsymbol{y}}{||\boldsymbol{y}||}\).

As an example, let \(\boldsymbol{x}_1, \boldsymbol{x}_2 \in \mathbb{R}^2\) be given by:

\[\begin{split} \boldsymbol{x}_1 = \begin{bmatrix} 3\\6\end{bmatrix}, \boldsymbol{x}_2 = \begin{bmatrix} 2\\1\end{bmatrix}. \end{split}\]

Let us project \(\boldsymbol{x}_1\) onto the line given by \(U = \operatorname{span}\{\boldsymbol{x}_2\}\):

x1 = Matrix([3,6])
x2 = Matrix([2,1])

projU_x1 = x1.inner(x2)/x2.inner(x2) * x2 # Alternatively: inner(x1,x2)/x2.norm()**2 * x2
projU_x1
\[\begin{split}\displaystyle \left[\begin{matrix}\frac{24}{5}\\\frac{12}{5}\end{matrix}\right]\end{split}\]
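A defining property of the projection is that the residual \(\boldsymbol x_1 - \operatorname{Proj}_U(\boldsymbol x_1)\) is orthogonal to \(\boldsymbol x_2\); a quick sketch (vectors redefined so the cell stands alone):

```python
from sympy import Matrix, simplify

x1 = Matrix([3, 6])
x2 = Matrix([2, 1])
projU_x1 = x1.dot(x2) / x2.dot(x2) * x2

# The residual must be orthogonal to the spanning vector of the line U.
residual = x1 - projU_x1
print(simplify(residual.dot(x2)))  # 0
```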

Since we are working in \(\mathbb{R}^2\), this can be illustrated as follows:

x = symbols('x')
plot_x1 = dtuplot.quiver((0,0), x1, rendering_kw={'color':'r', 'label': '$x_1$'}, xlim=(-1,7), ylim=(-1,7), show=False, aspect='equal')
plot_x2 = dtuplot.quiver((0,0), x2, rendering_kw={'color':'c', 'label': '$x_2$', 'alpha': 0.7}, show=False)
plot_projU = dtuplot.quiver((0,0), projU_x1, rendering_kw={'color':'k', 'label': '$proj_U(x_1)$'}, show=False)
plot_U = dtuplot.plot(x2[1]/x2[0] * x, label='$U$', rendering_kw={'color':'c', 'linestyle': '--'}, legend=True, show=False)

(plot_x1 + plot_U + plot_projU + plot_x2).show()
(Figure: the vectors \(\boldsymbol x_1\) and \(\boldsymbol x_2\), the line \(U\), and the projection of \(\boldsymbol x_1\) onto \(U\).)

The Projection Matrix#

As an example, let us define some vectors in \(\Bbb C^4\):

c1 = Matrix([2+I, 3, 5-I, 6])
c2 = Matrix([1, I, 3-I, 2])
c1,c2
\[\begin{split}\displaystyle \left( \left[\begin{matrix}2 + i\\3\\5 - i\\6\end{matrix}\right], \ \left[\begin{matrix}1\\i\\3 - i\\2\end{matrix}\right]\right)\end{split}\]

The textbook explains how the projection \(\operatorname{Proj}_{c_2}\), projecting input vectors onto the line spanned by \(\boldsymbol c_2\), can be described as the linear map:

\[ \boldsymbol P: \mathbb{C}^4 \to \mathbb{C}^4, \phantom{...} \boldsymbol P(\boldsymbol{c}_1) = \boldsymbol{u}\boldsymbol{u}^* \boldsymbol{c}_1 = P\boldsymbol{c}_1, \]

where \(\boldsymbol{u} = \boldsymbol c_2/||\boldsymbol c_2||\). The mapping matrix \(P=\boldsymbol{u}\boldsymbol{u}^*\) is often called a projection matrix, and the asterisk \(^*\) denotes the adjoint of a vector/matrix (see these terms defined in the textbook):

u = simplify(c2/c2.norm())
P = expand(u*u.adjoint())
u,P
\[\begin{split}\displaystyle \left( \left[\begin{matrix}\frac{1}{4}\\\frac{i}{4}\\\frac{3}{4} - \frac{i}{4}\\\frac{1}{2}\end{matrix}\right], \ \left[\begin{matrix}\frac{1}{16} & - \frac{i}{16} & \frac{3}{16} + \frac{i}{16} & \frac{1}{8}\\\frac{i}{16} & \frac{1}{16} & - \frac{1}{16} + \frac{3 i}{16} & \frac{i}{8}\\\frac{3}{16} - \frac{i}{16} & - \frac{1}{16} - \frac{3 i}{16} & \frac{5}{8} & \frac{3}{8} - \frac{i}{8}\\\frac{1}{8} & - \frac{i}{8} & \frac{3}{8} + \frac{i}{8} & \frac{1}{4}\end{matrix}\right]\right)\end{split}\]

With this matrix it is easy to do projections of vectors, e.g. a projection of \(\boldsymbol c_1\) onto the line spanned by \(\boldsymbol c_2\):

simplify(P*c1)
\[\begin{split}\displaystyle \left[\begin{matrix}\frac{15}{8}\\\frac{15 i}{8}\\\frac{45}{8} - \frac{15 i}{8}\\\frac{15}{4}\end{matrix}\right]\end{split}\]

A comparison with the previous method agrees:

simplify(c1.inner(u)/u.inner(u)*u)
\[\begin{split}\displaystyle \left[\begin{matrix}\frac{15}{8}\\\frac{15 i}{8}\\\frac{45}{8} - \frac{15 i}{8}\\\frac{15}{4}\end{matrix}\right]\end{split}\]

The projection of \(\boldsymbol c_2\) onto the line spanned by \(\boldsymbol c_2\) should of course not change the vector \(\boldsymbol c_2\) - let’s check:

P*c2, c2
\[\begin{split}\displaystyle \left( \left[\begin{matrix}\frac{3}{8} + \left(\frac{3}{16} + \frac{i}{16}\right) \left(3 - i\right)\\\frac{3 i}{8} + \left(- \frac{1}{16} + \frac{3 i}{16}\right) \left(3 - i\right)\\\frac{45}{16} - \frac{15 i}{16} + i \left(- \frac{1}{16} - \frac{3 i}{16}\right)\\\frac{3}{4} + \left(\frac{3}{8} + \frac{i}{8}\right) \left(3 - i\right)\end{matrix}\right], \ \left[\begin{matrix}1\\i\\3 - i\\2\end{matrix}\right]\right)\end{split}\]

At first, this doesn’t look promising, but don’t forget to simplify:

simplify(P*c2), c2
\[\begin{split}\displaystyle \left( \left[\begin{matrix}1\\i\\3 - i\\2\end{matrix}\right], \ \left[\begin{matrix}1\\i\\3 - i\\2\end{matrix}\right]\right)\end{split}\]

As expected. The projection matrix \(P\) is by definition Hermitian, meaning \(P = P^*\). This can be verified manually:

simplify(P-P.adjoint())
\[\begin{split}\displaystyle \left[\begin{matrix}0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\\0 & 0 & 0 & 0\end{matrix}\right]\end{split}\]

or with a dedicated command:

P.is_hermitian
True
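Besides being Hermitian, a projection matrix is idempotent: \(P^2 = P\), since projecting twice changes nothing. A self-contained sketch, rebuilding \(P\) from \(\boldsymbol c_2\):

```python
from sympy import Matrix, I, simplify, zeros

c2 = Matrix([1, I, 3 - I, 2])
u = simplify(c2 / c2.norm())   # c2 has norm 4, so u = c2/4
P = u * u.adjoint()

# Projecting twice is the same as projecting once: P*P - P is zero.
print(simplify(P * P - P) == zeros(4, 4))  # True
```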

Orthonormal Bases#

The textbook defines a set of vectors \(\boldsymbol{u}_1,\boldsymbol{u}_2,\cdots,\boldsymbol{u}_n\) as orthonormal if their inner products satisfy:

\[\begin{split}\left< \boldsymbol{u}_i, \boldsymbol{u}_j\right> = \begin{cases} 0 & i \neq j\\ 1 & i = j\end{cases},\phantom{...} i,j = 1,\cdots ,n.\end{split}\]

In words, the inner product of a vector with itself should be \(1\) (unit length), while the inner product of two different vectors should be \(0\) (orthogonality).
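All \(n^2\) conditions can be packed into a single check: stack the vectors as columns of a matrix \(A\) and test whether \(A^*A = I\), since the entries of \(A^*A\) are exactly the inner products above. A sketch, where the helper name is_orthonormal is our own:

```python
from sympy import Matrix, eye, simplify

def is_orthonormal(*vectors) -> bool:
    # Entry (i, j) of A.adjoint()*A is the inner product <v_j, v_i>,
    # so the vectors are orthonormal exactly when this is the identity.
    A = Matrix.hstack(*vectors)
    return simplify(A.adjoint() * A) == eye(A.cols)

e1 = Matrix([1, 0])
e2 = Matrix([0, 1])
print(is_orthonormal(e1, e2))               # True
print(is_orthonormal(e1, Matrix([1, 1])))   # False
```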

Orthonormal bases prove to be incredibly practical. One example is change of basis, which is far easier with an orthonormal basis. If we, for example, are given an orthonormal basis \(\beta=(\boldsymbol{u}_1,\boldsymbol{u}_2,\cdots,\boldsymbol{u}_n)\) for a vector space \(V\), then, according to the textbook, the coordinate vector \(\phantom{ }_\beta\boldsymbol{x}\) with respect to the basis \(\beta\) of a vector \(\boldsymbol x\in V\) can be found as:

\[\begin{split} \phantom{ }_\beta\boldsymbol{x} = \begin{bmatrix} \left<\boldsymbol{x}, \phantom{ }\boldsymbol{u}_1 \right>\\ \left<\boldsymbol{x}, \phantom{ }\boldsymbol{u}_2 \right>\\ \vdots\\ \left<\boldsymbol{x}, \phantom{ }\boldsymbol{u}_n \right> \end{bmatrix}. \end{split}\]

(Compare this with Mathematics 1a, where we had to solve systems of linear equations in order to find coordinate vectors.)

Manual Check of Orthonormality#

As an example, we are given the following list of three vectors in \(\Bbb R^3\):

\[\begin{split} \beta = (\boldsymbol{q}_1,\boldsymbol{q}_2,\boldsymbol{q}_3) = \left( \left[\begin{matrix}\frac{\sqrt{3}}{3}\\\frac{\sqrt{3}}{3}\\\frac{\sqrt{3}}{3}\end{matrix}\right], \left[\begin{matrix}\frac{\sqrt{2}}{2}\\0\\- \frac{\sqrt{2}}{2}\end{matrix}\right], \boldsymbol q_1\times \boldsymbol q_2 \right), \end{split}\]

where \(\boldsymbol q_3\) is given as the cross product of the first two.

q1 = Matrix([sqrt(3)/3, sqrt(3)/3, sqrt(3)/3])
q2 = Matrix([sqrt(2)/2, 0, -sqrt(2)/2])
q3 = q1.cross(q2)
q1,q2,q3
\[\begin{split}\displaystyle \left( \left[\begin{matrix}\frac{\sqrt{3}}{3}\\\frac{\sqrt{3}}{3}\\\frac{\sqrt{3}}{3}\end{matrix}\right], \ \left[\begin{matrix}\frac{\sqrt{2}}{2}\\0\\- \frac{\sqrt{2}}{2}\end{matrix}\right], \ \left[\begin{matrix}- \frac{\sqrt{6}}{6}\\\frac{\sqrt{6}}{3}\\- \frac{\sqrt{6}}{6}\end{matrix}\right]\right)\end{split}\]

It is claimed that these form an orthonormal basis for \(\mathbb{R}^3\). Let us verify that claim by computing each inner-product combination manually:

q1.inner(q1), q1.inner(q2), q1.inner(q3), q2.inner(q2), q2.inner(q3), q3.inner(q3) 
\(\displaystyle \left( 1, \ 0, \ 0, \ 1, \ 0, \ 1\right)\)

The requirements are fulfilled, so we have confirmed that \(\beta\) consists of three orthonormal vectors. Orthonormal vectors are linearly independent, and three linearly independent vectors in \(\Bbb R^3\) span \(\Bbb R^3\). Hence we conclude that \(\beta\) forms an orthonormal basis for \(\mathbb{R}^3\).

Now we can find the coordinate vector of a vector, say \(\boldsymbol{x} = (1,2,3)\), with respect to the basis \(\beta\):

x = Matrix([1,2,3])
beta_x = Matrix([x.inner(q1) , x.inner(q2) , x.inner(q3)])
beta_x
\[\begin{split}\displaystyle \left[\begin{matrix}2 \sqrt{3}\\- \sqrt{2}\\0\end{matrix}\right]\end{split}\]
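As a sanity check, multiplying the matrix with the basis vectors as columns by the coordinate vector should recover \(\boldsymbol x\); a self-contained sketch:

```python
from sympy import Matrix, sqrt, simplify

q1 = Matrix([sqrt(3)/3, sqrt(3)/3, sqrt(3)/3])
q2 = Matrix([sqrt(2)/2, 0, -sqrt(2)/2])
q3 = q1.cross(q2)
x = Matrix([1, 2, 3])

# Real vectors, so plain .dot is the inner product here.
beta_x = Matrix([x.dot(q1), x.dot(q2), x.dot(q3)])
Q = Matrix.hstack(q1, q2, q3)
print(simplify(Q * beta_x))  # recovers Matrix([[1], [2], [3]])
```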

Check of Orthonormality using Rank#

As another example, consider the list of vectors \(\gamma = (\boldsymbol v_1,\boldsymbol v_2,\boldsymbol v_3,\boldsymbol v_4)\), all from \(\mathbb{C}^4\), where

\[\begin{split} \boldsymbol{v}_1 = \left[\begin{matrix}2 i\\0\\0\\0\end{matrix}\right], \: \boldsymbol{v}_2 = \left[\begin{matrix}i\\1\\1\\0\end{matrix}\right], \: \boldsymbol{v}_3 = \left[\begin{matrix}0\\i\\1\\1\end{matrix}\right], \: \boldsymbol{v}_4 = \left[\begin{matrix}0\\0\\0\\i\end{matrix}\right]. \end{split}\]
v1 = Matrix([2*I,0,0, 0])
v2 = Matrix([I, 1, 1, 0])
v3 = Matrix([0, I, 1, 1])
v4 = Matrix([0, 0, 0, I])

We wish to form an orthonormal basis for the vector space spanned by \(\gamma\). First, let us check whether the vectors in \(\gamma\) span all of \(\Bbb C^4\) or just a subspace of it:

V = Matrix.hstack(v1,v2,v3,v4)
V.rref(pivots=False)
\[\begin{split}\displaystyle \left[\begin{matrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{matrix}\right]\end{split}\]

With a rank of four, we have confirmed that \(\gamma\) spans \(\Bbb C^4\) and therefore constitutes a basis for it - but not necessarily an orthonormal basis. If you check for orthogonality and unit length first, you will see that the vectors need adjustment. The Gram-Schmidt procedure will fix this for us - see the textbook for the procedural steps and details that we use in the following.
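The orthogonality and unit-length check can be done in one shot: if the vectors in \(\gamma\) were orthonormal, stacking them as columns of \(V\) would give \(V^*V = I\). A sketch showing that this fails here:

```python
from sympy import Matrix, I, eye, simplify

v1 = Matrix([2*I, 0, 0, 0])
v2 = Matrix([I, 1, 1, 0])
v3 = Matrix([0, I, 1, 1])
v4 = Matrix([0, 0, 0, I])

V = Matrix.hstack(v1, v2, v3, v4)
G = simplify(V.adjoint() * V)  # Gram matrix of all pairwise inner products
print(G == eye(4))             # False: gamma is a basis, but not orthonormal
print(G[0, 0])                 # 4, i.e. ||v1||^2 = 4 rather than 1
```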

The Gram-Schmidt Procedure#

We will carry out the Gram-Schmidt procedure on the example with four vectors \(\boldsymbol v_1,\boldsymbol v_2,\boldsymbol v_3,\boldsymbol v_4\) from \(\Bbb C^4\) in the previous section.

First, let \(\boldsymbol{w}_1 = \boldsymbol{v}_1\). Then, \(\boldsymbol{u}_1\) is found by normalizing \(\boldsymbol{w}_1\):

w1 = v1
u1 = w1/w1.norm()
u1
\[\begin{split}\displaystyle \left[\begin{matrix}i\\0\\0\\0\end{matrix}\right]\end{split}\]

The rest of the new orthonormal vectors can be computed one at a time by applying:

\[ \boldsymbol{w}_k = \boldsymbol{v}_k - \sum_{j = 1}^{k-1}\left<\boldsymbol{v}_k, \boldsymbol{u}_j\right>\boldsymbol{u}_j \]

on each of them, followed by:

\[ \boldsymbol{u}_k = \frac{\boldsymbol{w}_k}{||\boldsymbol{w}_k||}. \]
w2 = simplify(v2 - v2.inner(u1)*u1)
u2 = expand(w2/w2.norm())

w3 = simplify(v3 - v3.inner(u1)*u1 - v3.inner(u2)*u2)
u3 = expand(w3/w3.norm())

w4 = simplify(v4 - v4.inner(u1)*u1 - v4.inner(u2)*u2 - v4.inner(u3)*u3)
u4 = expand(w4/w4.norm())

u1,u2,u3,u4
\[\begin{split}\displaystyle \left( \left[\begin{matrix}i\\0\\0\\0\end{matrix}\right], \ \left[\begin{matrix}0\\\frac{\sqrt{2}}{2}\\\frac{\sqrt{2}}{2}\\0\end{matrix}\right], \ \left[\begin{matrix}0\\- \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4}\\\frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4}\\\frac{\sqrt{2}}{2}\end{matrix}\right], \ \left[\begin{matrix}0\\\frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4}\\- \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4}\\\frac{\sqrt{2} i}{2}\end{matrix}\right]\right)\end{split}\]

We should now check that \(\boldsymbol{u}_1,\boldsymbol{u}_2,\boldsymbol{u}_3,\boldsymbol{u}_4\) indeed are orthonormal - we will do one check here and leave the rest to the reader:

simplify(u1.inner(u2)) # And so on
\(\displaystyle 0\)
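The per-vector steps can also be collected into a small loop; a sketch of the same procedure (the function name gram_schmidt is our own):

```python
from sympy import Matrix, I, eye, simplify, expand

def gram_schmidt(vectors):
    # Sketch of the textbook procedure: orthogonalize against the
    # previous u's, then normalize. Assumes linearly independent input.
    us = []
    for v in vectors:
        w = v
        for u in us:
            w = w - v.dot(u, conjugate_convention='right') * u
        w = simplify(w)
        us.append(expand(w / w.norm()))
    return us

v1 = Matrix([2*I, 0, 0, 0])
v2 = Matrix([I, 1, 1, 0])
v3 = Matrix([0, I, 1, 1])
v4 = Matrix([0, 0, 0, I])

u1, u2, u3, u4 = gram_schmidt([v1, v2, v3, v4])
U = Matrix.hstack(u1, u2, u3, u4)
print(simplify(U.adjoint() * U) == eye(4))  # True: the u's are orthonormal
```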

These new orthonormal vectors \(\boldsymbol{u}_1,\boldsymbol{u}_2,\boldsymbol{u}_3,\boldsymbol{u}_4\) satisfy

\[\begin{split} \operatorname{span}\left\{\boldsymbol{u}_1\right\} = \operatorname{span}\left\{\boldsymbol{v}_1\right\},\\ \operatorname{span}\left\{\boldsymbol{u}_1,\boldsymbol{u}_2\right\} = \operatorname{span}\left\{\boldsymbol{v}_1,\boldsymbol{v}_2\right\},\\ \operatorname{span}\left\{\boldsymbol{u}_1,\boldsymbol{u}_2,\boldsymbol{u}_3\right\} = \operatorname{span}\left\{\boldsymbol{v}_1,\boldsymbol{v}_2,\boldsymbol{v}_3\right\},\\ \operatorname{span}\left\{\boldsymbol{u}_1,\boldsymbol{u}_2,\boldsymbol{u}_3,\boldsymbol{u}_4\right\} = \operatorname{span}\left\{\boldsymbol{v}_1,\boldsymbol{v}_2,\boldsymbol{v}_3,\boldsymbol{v}_4\right\}. \end{split}\]

See the corresponding theorem in the textbook.

The same results can be obtained using the Sympy command GramSchmidt:

y1,y2,y3,y4 = GramSchmidt([v1,v2,v3,v4], orthonormal=True)
y1,y2,expand(y3),expand(y4)
\[\begin{split}\displaystyle \left( \left[\begin{matrix}i\\0\\0\\0\end{matrix}\right], \ \left[\begin{matrix}0\\\frac{\sqrt{2}}{2}\\\frac{\sqrt{2}}{2}\\0\end{matrix}\right], \ \left[\begin{matrix}0\\- \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4}\\\frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4}\\\frac{\sqrt{2}}{2}\end{matrix}\right], \ \left[\begin{matrix}0\\\frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4}\\- \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4}\\\frac{\sqrt{2} i}{2}\end{matrix}\right]\right)\end{split}\]

Unitary and (Real) Orthogonal Matrices#

According to the textbook, a square matrix \(U\) is called unitary if it satisfies

\[ UU^* = U^*U = I, \]

where \(^*\) symbolizes the conjugate transpose, also called the adjoint, \(U^*=\bar U^T\). For a real matrix \(Q\in \mathsf M_n(\mathbb{R})\), the adjoint \(Q^*\) simplifies to just the transpose \(Q^T\), and analogously, a real square matrix is called (real) orthogonal if it satisfies

\[QQ^T=Q^TQ=I.\]

Note that (real) orthogonal matrices are also unitary. The textbook follows up by proving that a matrix is unitary (respectively, (real) orthogonal) if and only if its columns are orthonormal.

A matrix \(U = \left[\boldsymbol{u}_1, \boldsymbol{u}_2, \boldsymbol{u}_3, \boldsymbol{u}_4\right]\) formed from the vectors we obtained via the Gram-Schmidt procedure earlier will thus be unitary.

U = Matrix.hstack(u1,u2,u3,u4)
U
\[\begin{split}\displaystyle \left[\begin{matrix}i & 0 & 0 & 0\\0 & \frac{\sqrt{2}}{2} & - \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4} & \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4}\\0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4} & - \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4}\\0 & 0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2} i}{2}\end{matrix}\right]\end{split}\]

The adjoint matrix \(U^*\) of \(U\) is:

U.adjoint(), conjugate(U.T)
\[\begin{split}\displaystyle \left( \left[\begin{matrix}- i & 0 & 0 & 0\\0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & 0\\0 & - \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4} & \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4} & \frac{\sqrt{2}}{2}\\0 & \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4} & - \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4} & - \frac{\sqrt{2} i}{2}\end{matrix}\right], \ \left[\begin{matrix}- i & 0 & 0 & 0\\0 & \frac{\sqrt{2}}{2} & \frac{\sqrt{2}}{2} & 0\\0 & - \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4} & \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4} & \frac{\sqrt{2}}{2}\\0 & \frac{\sqrt{2}}{4} - \frac{\sqrt{2} i}{4} & - \frac{\sqrt{2}}{4} + \frac{\sqrt{2} i}{4} & - \frac{\sqrt{2} i}{2}\end{matrix}\right]\right)\end{split}\]

That \(U\) is indeed unitary can be verified by:

simplify(U*U.adjoint()), simplify(U.adjoint()*U)
\[\begin{split}\displaystyle \left( \left[\begin{matrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{matrix}\right], \ \left[\begin{matrix}1 & 0 & 0 & 0\\0 & 1 & 0 & 0\\0 & 0 & 1 & 0\\0 & 0 & 0 & 1\end{matrix}\right]\right)\end{split}\]
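Equivalently, a unitary matrix is inverted simply by taking its adjoint, \(U^{-1} = U^*\). A quick sketch with a small \(2\times 2\) unitary matrix of our own choosing:

```python
from sympy import Matrix, I, sqrt, simplify, eye, zeros

# A hand-picked 2x2 unitary matrix (its columns are orthonormal in C^2).
U = Matrix([[1, 1], [I, -I]]) / sqrt(2)

print(simplify(U.adjoint() * U) == eye(2))             # True: U is unitary
print(simplify(U.inv() - U.adjoint()) == zeros(2, 2))  # True: U^{-1} = U^*
```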

A matrix \(Q = \left[\boldsymbol{q}_1, \boldsymbol{q}_2, \boldsymbol{q}_3\right]\) formed from the real \(\boldsymbol q\) vectors that we confirmed to be orthonormal earlier is hence (real) orthogonal.

Q = Matrix.hstack(q1,q2,q3)
Q
\[\begin{split}\displaystyle \left[\begin{matrix}\frac{\sqrt{3}}{3} & \frac{\sqrt{2}}{2} & - \frac{\sqrt{6}}{6}\\\frac{\sqrt{3}}{3} & 0 & \frac{\sqrt{6}}{3}\\\frac{\sqrt{3}}{3} & - \frac{\sqrt{2}}{2} & - \frac{\sqrt{6}}{6}\end{matrix}\right]\end{split}\]

That \(Q\) is indeed (real) orthogonal can be verified by:

simplify(Q*Q.T), simplify(Q.T*Q)
\[\begin{split}\displaystyle \left( \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right], \ \left[\begin{matrix}1 & 0 & 0\\0 & 1 & 0\\0 & 0 & 1\end{matrix}\right]\right)\end{split}\]