13th June 2022, 6:42 PM
Actually, visual representations of higher-dimensional spaces are really important. I don't know anything about these astral projection things, but in signal processing you've got principal component analysis, which takes higher-dimensional data and projects it onto a lower-dimensional space. If you've reduced down to 3 dimensions, you can pan around it in MATLAB.
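To make that concrete, here's a minimal PCA sketch in Python/NumPy (the data and the choice of k=3 are just made-up examples): center the data, take the SVD, and keep the top three components, which gives you the 3-D point cloud you'd pan around.

```python
import numpy as np

def pca_project(X, k=3):
    """Project rows of X onto the top-k principal components (PCA via SVD)."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # coordinates in the top-k subspace

# e.g. 200 points in 10 dimensions squashed down to 3 for plotting
X = np.random.default_rng(0).normal(size=(200, 10))
Y = pca_project(X, k=3)
print(Y.shape)  # (200, 3)
```

From there `Y` is just a 3-column array you can hand to any 3-D scatter plot and rotate around, same idea as in MATLAB.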
Machine learning methods really dig into this a lot! If you've got a multidimensional tensor, there was non-linear principal component analysis using neural networks (autoencoders), but nowadays I hear UMAP and t-SNE are the main visual representations of higher-dimensional data. There's a lot of cool stuff in all this. Variational autoencoders are probably my favorite method for smearing between dimensions and understanding what the hyperplanes in your neural network are handling. I don't think VAEs are very... well, good, but it is really fun to look at your data with them.
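The autoencoder version of this can be sketched in a few lines too. This is a toy linear autoencoder trained by plain gradient descent on made-up data (all the sizes and the learning rate are assumptions, not anyone's real setup); with linear layers and squared error it learns the same subspace as PCA, and swapping in nonlinearities is what gets you the "non-linear PCA" idea.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: 300 points in 8 dimensions, centered
X = rng.normal(size=(300, 8))
X -= X.mean(axis=0)

# Linear autoencoder: encode 8 dims -> 2, decode 2 -> 8
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))

init_loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))

lr = 0.01
for _ in range(2000):
    Z = X @ W_enc                      # 2-D codes (what you'd actually plot)
    err = Z @ W_dec - X                # reconstruction error
    # gradients of mean squared reconstruction error
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss = float(np.mean((X @ W_enc @ W_dec - X) ** 2))
```

The 2-D `Z` is the "looking at your data" part: scatter it and you're seeing the compressed view the network settled on. A VAE adds a probabilistic latent on top, which is what makes smearing between points in that space work so nicely.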
I know this was kinda jokey, but I do love this stuff.