We're now going to move on from the space of geometric vectors

from which we learned new terminology, new concepts,

new ideas, and we're now inspired to carry over those ideas to other linear spaces

so we're now going to talk about the spaces of polynomials, and more general functions

so we'll consider two different spaces,

and we'll also consider two different transformations

our first transformation will be the derivative operator

and the second derivative operator

the derivative operator will be denoted by the letter D and the second derivative will be quite

appropriately denoted by D squared

because after all the second derivative is

the derivative applied to the derivative

so it's two successive applications of the derivative operator

and you're used to slightly different notation in Calculus,

you're used to the prime or d/dx or something like that

but we're going to use something a little bit more consistent with linear algebra

and I just want to point out again that we've kind of forgotten this for some time,

but we're now going to go back to celebrating the fact that linear algebra is applicable

to all different sorts of objects. And even though your objects may be very different,

many of the ideas are the same. The ideas may come from geometric vectors

but then they find applications in other spaces

including these spaces that are as different from geometric vectors as they can possibly be

spaces of expressions, or functions. So keep that in mind

as we're having this discussion. So you know exactly what the derivative operator does

now we're going to interpret it as a transformation, and we're going to

ask all the same questions as we did before: just another example of linear algebra

being the science of applying similar ideas inspired by geometry, to all different kinds of objects.

the questions we'll be asking are: are these operators linear?

If they're linear, what are their eigenvalues, and what are the eigenvectors?

(We're thinking of functions as vectors). Very appropriate questions.

One change in terminology: instead of calling them eigenvectors,

we're going to call them eigenfunctions. But of course it's the same thing. And maybe in the

context of a linear algebra discussion it would be good to stick to eigenvectors,

but that's just not the tradition -- the tradition is to call them eigenfunctions -- so we'll go along with that tradition

so just a couple examples: for example D of x squared plus x equals

of course you know what it equals: it's the derivative of x squared plus x

you know what a transformation is: it's a mapping. Input is a function, the output is another function.

there's a rule for converting the input function to the output function and

it is up to me to give you that rule and then once the rule is given, we'll analyze it together

so of course the derivative of x squared plus x is two x plus one, and that's our first example.
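As a quick aside (a sketch of my own, not from the lecture): if we represent a polynomial by its coefficient list [a0, a1, a2, ...], the derivative operator becomes a simple function on lists. The name `D` and this representation are illustrative choices only.

```python
def D(coeffs):
    """Derivative operator on a polynomial stored as [a0, a1, a2, ...]."""
    # d/dx (a_k x^k) = k * a_k * x^(k-1), so scale each coefficient by its
    # index and shift everything down one slot.
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

# D(x^2 + x): coefficients [0, 1, 1] -> [1, 2], i.e. 1 + 2x
print(D([0, 1, 1]))  # [1, 2]
```
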

We should probably consider a couple more. Well how about we go straight for the second derivative

So here we're doing four things at once: two different transformations --

we don't know yet if they're linear -- and two different spaces

so let's stick to polynomials. Here, x to the fourth minus 3 x cubed

and we're now taking the second derivative, so this becomes

12 x squared minus 18 x.

how about an example (still with the second derivative)

with a function that's not a polynomial

how about e to the 2x

and of course the answer is 4 times e to the 2x

let's throw in a sine

the derivative squared -- the second derivative -- of sine of 5x

equals minus 25 sine of 5x.
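These second-derivative examples can be checked numerically with a central second difference, f''(x) ≈ (f(x+h) - 2f(x) + f(x-h)) / h². This is my own sketch, not part of the lecture; the helper name is an assumption.

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation to f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 0.7
print(second_derivative(lambda t: math.exp(2 * t), x))  # approx. 4 * e^(2x)
print(second_derivative(lambda t: math.sin(5 * t), x))  # approx. -25 * sin(5x)
```
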

and this is just about all the calculus you need to know

for this discussion. Just being able to find elementary derivatives

so we're not going back to calculus. This is not a course in calculus.

there won't be any calculus review, of course.

We're just extracting the part of calculus that has the linear algebra framework.

or that can be described in linear algebra terms

and talking about derivatives as operators or transformations.

And determining whether or not they're linear is very much

a subject of linear algebra

OK. Is this transformation linear or are these transformations linear?

And the answer is, of course they are.

What you think of as sum formulas in calculus (let me use the prime notation just so that it looks familiar):

the derivative of the sum is of course the sum of the derivatives

We usually use this rule without giving it much thought,

and you needn't -- it's completely trivial -- but it's fundamental:

calculus wouldn't exist as we know it if this rule weren't there.

But from our point of view, it means that we're halfway to linearity.

It means that the derivative operator (and of course the same would go for the second derivative, doesn't matter)

is the sort of operator for which it doesn't matter whether you

add the two vectors first and then transform the result,

or whether you transform the individual vectors and then add the results together

so that's half of linearity, and the other half of course comes

from this rule. This is not the product rule; this is

the multiplication-by-a-scalar rule (maybe it's a special case of the product rule),

but it reads like this, and that's the second half of linearity

works just as well for the second derivative. So we have both of these rules for first and second derivative

So yes, these transformations are linear.
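Both halves of linearity can be verified directly on polynomial coefficient lists. This is a sketch of my own; `D`, `add`, and `scale` are hypothetical helper names, not anything from the lecture.

```python
def D(coeffs):
    """Derivative of a polynomial stored as [a0, a1, a2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

def add(f, g):
    """Sum of two polynomials in coefficient form."""
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def scale(c, f):
    """Scalar multiple of a polynomial in coefficient form."""
    return [c * a for a in f]

f, g = [0, 1, 1], [2, 0, 0, 5]           # x + x^2  and  2 + 5x^3
assert D(add(f, g)) == add(D(f), D(g))   # D(f + g) = Df + Dg
assert D(scale(3, f)) == scale(3, D(f))  # D(cf) = c * Df
```
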

And once we have a linear transformation, what's the next obvious question, the natural question?

The important question is: what are the eigenvalues and, in this case, the eigenfunctions of these two transformations?

So let's try to answer these questions by trial and error

first, let's stick to the space of polynomials.

and then we'll talk about functions in general

can you think of a polynomial

whose derivative is a multiple of the original polynomial, and whose second derivative is also a multiple of it?

Now your first thought may be, well there's no such polynomial

because anything with x to the fourth will end up with something less than x to the fourth.

Right, because the derivative knocks the power down.

So how can the result be a multiple of the preimage

if it has a different power? That's a very correct way of reasoning:

typically the result won't be a multiple of what you started out with.

But there is one little exception, and that's the constant function.

The derivative of a constant function is zero.

So this is constant the function, not constant the number.

Let me draw a graph of this function.

As any function, it has a graph. If c = 3, then maybe that's the graph

Alright, this axis is f of x. Is it even in the shot? Barely.

But you know what I mean. F of x equals 3.

There you go. A function like that (a constant function)

Its derivative is zero. And this is our second example of an

eigenfunction (or eigenvector, as you may think of it) with a zero

corresponding eigenvalue. So an eigenvector cannot be zero --

that would be uninteresting, inapplicable -- but an eigenvalue may very much be zero.

And here it's actually one of the more interesting eigenvalues: we're

definitely seeing another example of the zero eigenvalue.

That's, that's it for the first derivative. For the second derivative, this eigenspace is a little bit richer,

because you can take any linear function -- any linear function ends up being zero

So this eigenspace corresponding to the eigenvalue zero, is two dimensional

This was 1 dimensional, and in the case of the second derivative, the eigenspace corresponding to the eigenvalue zero

is richer. It is two dimensional. The 3rd derivative would have 3-dimensional eigenspace corresponding

to the eigenvalue zero, and so forth. Very interesting. But that's it for polynomials.

Polynomials don't have any other eigenfunctions.

Because that logic of the derivative knocking down that power is the correct logic.

It's just that sometimes, you end up with zero. That's the only exception.
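The zero-eigenvalue cases can be seen directly in code (again a sketch of my own, with `D` a hypothetical helper): the derivative kills a constant, and the second derivative kills any linear function.

```python
def D(coeffs):
    """Derivative of a polynomial stored as [a0, a1, a2, ...]."""
    return [k * c for k, c in enumerate(coeffs)][1:] or [0]

print(D([3]))        # constant f(x) = 3 -> [0]: eigenvalue 0 for D
print(D(D([7, 4])))  # linear f(x) = 7 + 4x -> [0]: eigenvalue 0 for D^2
```
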

Fits well within that logic. Now what if we now broaden our view to general functions?

What are the eigenfunctions of the derivative and second derivative transformations?

When we allow all functions?

Well, of course these still apply -- polynomials are still functions --

but there is now a richer space of eigenfunctions.

And actually there are some hints here. Well, for the first derivative

we can take e to a power -- let's call it p, for power. e to the px, where p is a constant,

is of course, by the celebrated derivative chain rule, p times e to the p x.

Now once again, very familiar from calculus,

But now you have a very interesting linear algebra perspective.

This identity states that e to the px, where p is any number, is an eigenfunction of the derivative operator,

with p being the corresponding eigenvalue

And as you can see, this is our first example where there are infinitely many eigenvalues.

Any number is an eigenvalue of the derivative transformation.
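A quick numerical sanity check of this claim (my own sketch, using a central difference rather than symbolic calculus): for several values of p, the ratio of the derivative of e^(px) to e^(px) recovers p itself.

```python
import math

def derivative(f, x, h=1e-6):
    """Central-difference approximation to f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.5
for p in (-3.0, 0.5, 2.0):
    f = lambda t, p=p: math.exp(p * t)
    # f'(x) / f(x) recovers the eigenvalue p.
    print(round(derivative(f, x) / f(x), 3))  # -3.0, 0.5, 2.0
```
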

And we have infinitely many completely linearly independent eigenfunctions.

Very interesting example, something that's very good to keep in mind.

And that's actually it for the space of functions.

Alright. If you have studied differential equations, you can actually show

rather easily that these are the only eigenfunctions there are.

Let's talk about the second derivative.

So, whenever you apply a transformation twice,

Whatever was an eigenfunction or eigenvector of the original transformation

is still an eigenfunction or eigenvector of the repeated transformation.

Why? because applying the transformation once produces a multiple times the same thing.

And then, applying it again squares that multiple, and once again produces the same thing

So it's that multiple squared times the same thing.

So e to the px (and that's what you're seeing here)

is very much still an eigenvector -- an eigenfunction --

and the corresponding eigenvalue is p squared.

Very interesting. So once again, still infinitely many

eigenvalues, but now this only gives us positive numbers.

And to every positive value, such as 25,

there are two corresponding eigenfunctions.

So the eigenspace corresponding to the eigenvalue 25

is two dimensional

It's e to the 5x and e to the minus 5x.

Both will produce the same eigenvalue of 25.
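We can check numerically (a sketch of my own, reusing a central second difference) that e^(5x) and e^(-5x) both satisfy D² f = 25 f, so both sit in the eigenvalue-25 eigenspace.

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation to f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 0.3
for p in (5, -5):
    f = lambda t, p=p: math.exp(p * t)
    # f''(x) / f(x) recovers the eigenvalue p^2 = 25 in both cases.
    print(round(second_derivative(f, x) / f(x), 3))  # approx. 25.0 twice
```
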

So very interesting. Still infinitely many eigenvalues.

But now they're only positive numbers.

and the corresponding eigenspaces are two dimensional.

Whereas in this case, the corresponding eigenspaces

were one dimensional. Very interesting. Kind of similar to this.

It preserved the eigenvalue, but now the eigenspace is 2 dimensional

Seems like the dimensionality of all the eigenspaces

corresponds to the order of the derivative

Alright, but of course there is one more

and you can kind of see the clue right here

Well, maybe two more, sines and cosines

We would have to think very carefully about the dimensionality

of the corresponding eigenspaces, because

sines and cosines mix in a very nice way according

to the trigonometric formulas.

But let's leave that for now. I'll just adapt this formula.

Second derivative of sine of px.

And the same thing for cosine.

is minus p squared. Right, so let's make sure that

holds for both sine and cosine. When you take

the derivative of sine it becomes cosine. And then

take the derivative of cosine, that's where you pick up the minus sign.

And with cosine, the derivative of cosine is minus sine, and

then the derivative of minus sine is minus cosine, so it's on the

first step that you pick up the minus sign.

equals minus p squared sine of px, or, minus p

squared cosine of px. So sine and cosine

work very similarly

So that's where the negative numbers as eigenvalues come from:

the eigenvalues we get from sines and cosines

are all the negative numbers. Very interesting.

So, exponents give us all the positive numbers.

Sines and cosines all the negative numbers.
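The same numerical check (my own sketch) shows sin(5x) and cos(5x) both satisfying D² f = -25 f, the negative-eigenvalue counterpart of the exponential case.

```python
import math

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation to f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x = 0.4
for f in (lambda t: math.sin(5 * t), lambda t: math.cos(5 * t)):
    # f''(x) / f(x) recovers the eigenvalue -25 for both functions.
    print(round(second_derivative(f, x) / f(x), 3))  # approx. -25.0 twice
```
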

And I'll just leave it up to you to work with a little bit of calculus

and a little bit of trigonometry to show

that the dimension of the corresponding eigenspace is 2 in this case as well

You'll need to do a little bit of trigonometry.

You have to transform -- modify -- a linear combination

of a sine and a cosine

into a single sine or cosine. I think you'll be

able to do it. That's just the hint. But

there you go. I think this completes

this example. And we have now encountered

the exact same concept in a completely different

vector spaces: the spaces of polynomials and of functions.

but we were still able to talk about

transformations, their linearity, and the

associated eigenvalues and eigenvectors.