From da5befa30bdece3fa113b109c0f0aa8e99bae8a3 Mon Sep 17 00:00:00 2001
From: "Steven G. Johnson"
Date: Fri, 21 Apr 2023 14:20:51 -0400
Subject: [PATCH] lecture 29 notes

---
 README.md | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/README.md b/README.md
index c8b88f3..4d57c5a 100644
--- a/README.md
+++ b/README.md
@@ -328,3 +328,11 @@ http://dx.doi.org/10.1137/S1052623499362822) — I used the "linear and separabl
 * Non-negative matrix factorization — guest lecture by [Prof. Ankur Moitra](https://people.csail.mit.edu/moitra/).
 
 **Further reading:** Coming soon.
+
+## Lecture 29 (Apr 21)
+
+* [(Discrete) convolutions](https://en.wikipedia.org/wiki/Convolution#Discrete_convolution), translation-invariance, [circulant matrices](https://en.wikipedia.org/wiki/Circulant_matrix), and convolutional neural networks (CNNs)
+* pset 5 solutions:
+* pset 6: coming soon, due Friday May 5.
+
+**Further reading:** Strang textbook sections IV.2 (circulant matrices) and VII.2 (CNNs), and [OCW lecture 32](https://ocw.mit.edu/courses/18-065-matrix-methods-in-data-analysis-signal-processing-and-machine-learning-spring-2018/resources/lecture-32-imagenet-is-a-cnn-the-convolution-rule/). See also these [Stanford lecture slides](http://cs231n.stanford.edu/slides/2016/winter1516_lecture7.pdf) and [MIT lecture slides](https://mit6874.github.io/assets/sp2020/slides/L03_CNNs_MK2.pdf).
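
The lecture topic added by this patch, circulant matrices and the convolution rule (Strang section IV.2), can be sketched numerically as follows. This is an illustrative example assuming NumPy, not part of the patch or the course materials: it checks that multiplying by a circulant matrix performs a circular convolution, and that the DFT diagonalizes the circulant matrix with eigenvalues equal to the DFT of its first column.

```python
import numpy as np

# Illustrative sketch (not from the course notes): a circulant matrix C is
# built by cyclically shifting a kernel c, so C @ x is a circular convolution,
# and the DFT diagonalizes C (the "convolution rule").

n = 4
c = np.array([2.0, 1.0, 0.0, 3.0])   # convolution kernel = first column of C
x = np.array([1.0, -1.0, 4.0, 0.5])

# Circulant matrix: column j is the kernel cyclically shifted down by j,
# so C[i, j] = c[(i - j) mod n].
C = np.column_stack([np.roll(c, j) for j in range(n)])

# (1) C @ x equals the circular convolution of c and x, computable by FFTs.
conv = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x))
print(np.allclose(C @ x, conv))               # True

# (2) The DFT matrix F diagonalizes C; the eigenvalues are the DFT of c.
F = np.fft.fft(np.eye(n))                     # DFT matrix, F[j, k] = exp(-2*pi*1j*j*k/n)
D = F @ C @ np.linalg.inv(F)
print(np.allclose(D, np.diag(np.fft.fft(c)))) # True: D is diagonal with fft(c) entries
```

Because of property (2), a circulant (i.e., translation-invariant) system can be applied or inverted in O(n log n) time via the FFT instead of O(n²) dense matrix multiplication, which is the structural fact CNNs exploit.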