In How the web should be, Anil points out a Flash rendition of a Coltrane tune that feels like an abstract version of the sonic wire sculptor. I wonder how difficult it would be to write code that generates this sort of visual automatically from audio analysis. Obviously this is what visualizers do when playing MP3s in iTunes or WinAmp, but they don't have the same sense of narrative. I think to pull that off, the entire audio file would have to be analyzed as a whole instead of as a stream. Eh, cloudy thoughts for a cloudy morning.
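A rough sketch of the whole-file idea, using nothing but Python's standard library (the `rms_envelope` name and the windowing choice are mine, not from any real visualizer): because the code reads the entire file before computing anything, it can see the quiet intro and the loud climax at once, which a frame-by-frame stream analyzer never can.

```python
import math
import struct
import wave

def rms_envelope(path, window=1024):
    """Read an entire WAV file and return a per-window RMS loudness curve.

    Unlike a streaming visualizer, this sees the whole "narrative" at
    once, so later code could locate the climax or the quiet passages
    before drawing a single frame.
    """
    with wave.open(path, "rb") as wav:
        frames = wav.readframes(wav.getnframes())
    # Assumes mono, 16-bit little-endian samples for simplicity.
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    env = []
    for i in range(0, len(samples) - window, window):
        chunk = samples[i:i + window]
        env.append(math.sqrt(sum(s * s for s in chunk) / window))
    return env

# Synthesize a one-second 440 Hz tone that fades in, just to have input.
n, rate = 8000, 8000
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(rate)
    wav.writeframes(b"".join(
        struct.pack("<h", int((i / n) * 20000 * math.sin(2 * math.pi * 440 * i / rate)))
        for i in range(n)
    ))

env = rms_envelope("tone.wav")
print(env[0] < env[-1])  # the fade-in shows up as a rising envelope
```

From an envelope like this, a renderer could pick out global structure (where the piece peaks, how long the quiet stretches run) and pace its visuals accordingly, rather than just reacting to the last few milliseconds of sound.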