Meta has launched the “Image Decoder” application, an artificial intelligence technology that transforms brain activity into visual images.
This technology combines deep learning with magnetoencephalography (MEG), achieving an accuracy of up to 70% in optimal cases.
While promising, the “Image Decoder” raises ethical concerns, particularly regarding mental privacy.
According to VentureBeat, Meta, the owner of Facebook and Instagram, has unveiled an innovative deep-learning application known as the “Image Decoder.” This technology can convert brain activity into accurate visual representations of what a person is thinking in near real time.
In the future, computing interfaces may move beyond touchscreens and even hand gestures. Although this concept is still in its early stages, Meta’s “Image Decoder,” which is built on the open DINOv2 foundation model and capable of decoding visual imagery from brain signals, represents significant progress.
The Image Decoder can let researchers see what a person is perceiving or imagining while that person’s brain activity is recorded with an MEG scanner. The system rests on a combination of deep learning and MEG: the deep-learning component runs on computer systems that learn from data in order to analyze and classify new recordings accurately, while MEG measures brain activity non-invasively by detecting the subtle changes in the brain’s magnetic fields that occur when a person is thinking.
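To make the pipeline concrete, here is a deliberately simplified, hypothetical sketch of the decoding idea: images are represented by fixed embeddings (standing in for DINOv2 features), a simulated MEG recording is modeled as a noisy version of the embedding of the viewed image, and decoding retrieves the closest image embedding by cosine similarity. All names, the noise model, and the retrieval step are illustrative assumptions, not Meta’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 5 candidate images, each summarized by a 16-dim
# embedding (a stand-in for DINOv2 image features).
n_images, dim = 5, 16
image_embeddings = rng.normal(size=(n_images, dim))

def simulate_meg(image_id, noise=0.3):
    """Toy stand-in for an MEG recording taken while viewing `image_id`:
    the true image embedding plus Gaussian sensor noise."""
    return image_embeddings[image_id] + noise * rng.normal(size=dim)

def decode(meg_signal):
    """Retrieve the most likely image via cosine similarity between the
    brain-signal embedding and every candidate image embedding."""
    a = meg_signal / np.linalg.norm(meg_signal)
    b = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    return int(np.argmax(b @ a))

# Decode one simulated recording per image and count correct retrievals.
correct = sum(decode(simulate_meg(i)) == i for i in range(n_images))
print(f"retrieval accuracy: {correct}/{n_images}")
```

The real system replaces the toy noise model with a trained neural network that maps raw MEG sensor readings into the image-embedding space; the retrieval-by-similarity step, however, captures the core idea of embedding alignment.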
Although the Image Decoder system is not yet mature, researchers are optimistic about the results: accuracy reaches 70% in the best cases, a sevenfold improvement over existing methods. However, the technology raises ethical concerns about mental privacy, and the fact that research of this kind is funded by a large corporation adds further complexity and suspicion to the matter.