Machine learning analyses of MRI patterns have previously enabled machines to visualise ‘low-level’ images, such as basic shapes, or to match a thought to an example photo.
Recent work found that the “hierarchical features of a deep neural network” can be decoded from brain activity and translated into more complex images, providing the foundations for studies of this kind.
The study unveils a new reconstruction method, which puts greater emphasis on the way an image’s pixels are interpreted at multiple layers of a deep neural network, mirroring the hierarchical processing of the human visual system.
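The core idea of feature-matching reconstruction can be illustrated with a toy sketch: start from a blank image and iteratively adjust its pixels until the image’s network features match the features decoded from brain activity. The snippet below is a minimal illustration only, not the study’s actual method; it stands in a single fixed random linear map for the deep network, and random vectors for the decoded features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a neural network's feature extractor: a fixed random
# linear map from 64 "pixels" to 32 "features". (The real study used
# features from multiple layers of a deep convolutional network.)
W = rng.normal(size=(32, 64))

def features(image):
    return W @ image

# Pretend these are the features decoded from MRI activity while the
# subject viewed some target image.
target_image = rng.normal(size=64)
decoded_features = features(target_image)

# Reconstruct an image from scratch: gradient descent on the squared
# mismatch 0.5 * ||features(image) - decoded_features||^2.
image = np.zeros(64)
lr = 0.005
for _ in range(2000):
    err = features(image) - decoded_features  # feature mismatch
    grad = W.T @ err                          # gradient w.r.t. pixels
    image -= lr * grad

# After optimisation the image's features closely match the decoded ones.
print(float(np.linalg.norm(features(image) - decoded_features)))
```

The optimisation never sees the target image directly, only its features, which is the sense in which the picture is rebuilt “from scratch”.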
The study found that, for both natural images (e.g. those of animals) and artificial shapes, the AI was able to interpret MRI data and reconstruct an image from scratch.
Because the algorithm was trained only on natural images, its success with artificial shapes is significant: it suggests the algorithm genuinely interprets thought data rather than recalling and matching images from past inputs.
The algorithm is applicable to both seen and imagined images.