Convolution

Of Vectors

For Deep Learning.

What deep learning calls convolution is just cross-correlation: the dot product of the two vectors obtained by flattening the patch and the kernel. Theoretically the kernel should be flipped (rotated 180°) for it to be a true convolution rather than a cross-correlation, but since the kernel is learned anyway, nobody cares.

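A minimal numpy sketch of that dot-product view (the patch and kernel values here are made up):

```python
import numpy as np

# One output pixel of a "convolution" layer is just the dot product of the
# flattened patch with the flattened kernel.
patch = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
kernel = np.array([[1., 0., -1.],
                   [2., 0., -2.],
                   [1., 0., -1.]])

# Cross-correlation: what deep learning frameworks actually compute.
xcorr_value = patch.ravel() @ kernel.ravel()

# True convolution: the same dot product, but with the kernel rotated 180 degrees.
conv_value = patch.ravel() @ np.flip(kernel).ravel()

print(xcorr_value, conv_value)
```
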
True convolution is associative with respect to applying filters: convolving with filter A and then with filter B is the same as convolving once with A * B. Cross-correlation is not associative in general.

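A quick numerical check of this, with made-up 1-D signals and numpy's `convolve`/`correlate`:

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
h = np.array([2.0, -1.0, 3.0])

# True convolution is associative: (f * g) * h == f * (g * h).
lhs = np.convolve(np.convolve(f, g), h)
rhs = np.convolve(f, np.convolve(g, h))
print(np.allclose(lhs, rhs))   # True

# Cross-correlation is not: the two groupings generally disagree.
lhs = np.correlate(np.correlate(f, g, mode="full"), h, mode="full")
rhs = np.correlate(f, np.correlate(g, h, mode="full"), mode="full")
print(np.allclose(lhs, rhs))   # False here
```
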
https://www.youtube.com/watch?v=4ERudRAxyGE: this connects the vectors with the gradients.

For image processing we just flip the kernel, not the image. This affects the output size, hence padding, stride, etc. If we use true convolution rather than cross-correlation, i.e. we do the flipping, the convolution theorem applies and the operation can be computed via the FFT.

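For reference, with input size n, kernel size k, padding p and stride s, the output size is floor((n + 2p - k) / s) + 1. And a minimal sketch of the FFT route (1-D, made-up data): the convolution theorem says convolution in the signal domain is pointwise multiplication of the spectra.

```python
import numpy as np

x = np.random.default_rng(0).standard_normal(64)   # signal
k = np.array([1.0, -2.0, 1.0])                     # kernel

direct = np.convolve(x, k)                         # full linear convolution

# FFT route: zero-pad both to the full output length, multiply the spectra, invert.
n = len(x) + len(k) - 1
via_fft = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(k, n), n)

print(np.allclose(direct, via_fft))                # True
```
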
Of Functions

For Signal Processing.

The discrete and continuous cases are quite different: discrete convolution is a sum over shifts, continuous convolution is an integral.

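The standard definitions, for reference:

$$
(f * g)[n] = \sum_{m=-\infty}^{\infty} f[m]\, g[n-m]
\qquad\qquad
(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t-\tau)\, d\tau
$$
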
The convolution of two functions is another function where one of them is flipped about the vertical axis (time-reversed) and then slowly slid along the x-axis over the other; at each shift, the integral of the pointwise product gives one output value. Both should probably be expressed in the same units so the result is easier to interpret. If the resulting function is the same as one of the inputs, that does not mean the two inputs are the same; it means the other input acts as the identity element, i.e. the Dirac delta. Cross-correlation is the same sliding operation without the flip, and it can be read as a measure of similarity between the two signals as a function of the lag between them.
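
A small discrete sketch of those two interpretation points (made-up numbers):

```python
import numpy as np

f = np.array([1.0, 3.0, -2.0, 5.0, 4.0, -1.0, 2.0])

# 1) If convolving with something returns the input unchanged, that something
#    is the identity element (the discrete delta), not a copy of the input.
delta = np.array([1.0])
print(np.allclose(np.convolve(f, delta), f))   # True

# 2) Cross-correlation as similarity vs. lag: correlating a delayed copy of a
#    signal against the original peaks at the delay.
shift = 5
g = np.concatenate([np.zeros(shift), f])       # f delayed by `shift` samples
corr = np.correlate(g, f, mode="full")
lags = np.arange(-(len(f) - 1), len(g))        # lag of each output sample
print(lags[np.argmax(corr)])                   # 5
```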