And the Spirit & the bride say, come...Revelation 22:17 - May We One Day Bow Down In The DUST At HIS FEET ......---QUESTION: ...when the Son of man cometh, shall he find faith on the earth? LUKE 18:8

Saturday, November 8, 2025

Creation Moment 11/9/2025 - Who Created the "geometry of human thought" & "semantic web"?

Q: Who Created the "geometry of human thought" & the "semantic web" in the first place? [This level of sophistication could never arise from time + chance.....much less interweave seamlessly together]
A: But now thus saith the LORD that created thee.... Isaiah 43:1


"In a dark MRI scanner outside Tokyo, a volunteer watches a video of someone hurling themselves off a waterfall. Nearby, a computer digests the brain activity pulsing across millions of neurons. A few moments later, the machine produces a sentence: “A person jumps over a deep water fall on a mountain ridge.”

No one typed those words. No one spoke them. They came directly from the volunteer’s brain activity.

That’s the startling premise of “mind captioning,” a new method developed by Tomoyasu Horikawa and colleagues at NTT Communication Science Laboratories in Japan. Published this week in Science Advances, the system uses a blend of brain imaging and artificial intelligence to generate textual descriptions of what people are seeing — or even visualizing with their mind’s eye — based only on their neural patterns.

As Nature journalist Max Kozlov put it, the technique “generates descriptive sentences of what a person is seeing or picturing in their mind using a read-out of their brain activity, with impressive accuracy.”

To build the system, Horikawa had to bridge two universes: the intricate geometry of human thought and the sprawling semantic web that language models use to understand words.
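
[The article gives no implementation details, but a common design in this line of decoding work is a linear map from fMRI voxel patterns into a sentence encoder's embedding space. A minimal sketch of that "bridge", with random placeholder arrays; nothing below is the paper's actual pipeline:]

```python
# Sketch: learn a linear map from brain activity (fMRI voxel
# patterns) into a language model's semantic embedding space.
# Shapes and data are illustrative placeholders, not from the paper.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_clips, n_voxels, embed_dim = 500, 2000, 384

X_brain = rng.normal(size=(n_clips, n_voxels))  # one voxel pattern per viewed clip
Y_text = rng.normal(size=(n_clips, embed_dim))  # that clip's caption, embedded

# Ridge regression is a standard workhorse for voxel-to-feature
# decoding: it learns W such that brain_pattern @ W ~ caption_embedding.
decoder = Ridge(alpha=100.0).fit(X_brain, Y_text)

# At test time, a new scan becomes a point in semantic space; a caption
# generator (or nearest-neighbor search) then turns that point into text.
x_new = rng.normal(size=(1, n_voxels))
z = decoder.predict(x_new)
print(z.shape)  # (1, 384): a decoded "thought" in embedding coordinates
```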

The researchers also made a surprising discovery when they scrambled the word order of the generated captions. The quality and accuracy dropped sharply, showing that the AI wasn’t just picking up on keywords but grasping something deeper — perhaps the structure of meaning itself, the relationships between objects, actions, and context.
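
[A hedged illustration of what such a scrambling control could look like. The `scramble` helper is hypothetical, and the scoring step is only sketched in comments, since the article does not publish the actual pipeline:]

```python
# Illustrative control: if the decoder only matched keywords, a
# word-scrambled caption should score about as well as the original.
# A sharp drop instead suggests sentence *structure* carries signal.
import random

def scramble(sentence: str, seed: int = 0) -> str:
    """Return the sentence with its word order randomly shuffled."""
    words = sentence.split()
    rng = random.Random(seed)
    rng.shuffle(words)
    return " ".join(words)

caption = "A person jumps over a deep waterfall on a mountain ridge"
print(scramble(caption))

# Scoring step (pseudocode, assuming a sentence encoder `embed` and a
# decoded brain vector `z` like the one sketched above):
#   score_intact    = cosine(embed(caption),           z)
#   score_scrambled = cosine(embed(scramble(caption)), z)
# Per the article, the scrambled score drops sharply.
```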

Even when subjects were only imagining the videos, the AI generated accurate sentences describing them, sometimes identifying the right clip out of a hundred. That result hinted at a powerful idea: the brain uses similar representations for seeing and visual recall, and those representations can be translated into language without ever engaging the traditional “language areas” of the brain.
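
[How "identifying the right clip out of a hundred" is typically scored, sketched with toy data: rank every candidate clip's caption embedding by similarity to the vector decoded from brain activity. All numbers are placeholders; only the scoring logic is the point:]

```python
# Sketch of the 100-way identification test: the true clip's caption
# embedding should outrank 99 distractors when compared against the
# embedding decoded from the subject's brain activity.
import numpy as np

rng = np.random.default_rng(1)
embed_dim = 384

decoded = rng.normal(size=embed_dim)            # from the brain decoder
candidates = rng.normal(size=(100, embed_dim))  # 100 clips' caption embeddings
true_idx = 42
# Make the true clip's embedding resemble the decoded vector (toy setup).
candidates[true_idx] = decoded + 0.1 * rng.normal(size=embed_dim)

def unit(v):
    """Normalize vectors so the dot product equals cosine similarity."""
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

sims = unit(candidates) @ unit(decoded)
rank = int((sims > sims[true_idx]).sum()) + 1
print(f"True clip ranked {rank} of 100")  # rank 1 = correct identification
```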

The ethical implications of this technology are hard to ignore. If machines can turn brain activity into words, even imperfectly, who controls that information?" 
ZMEscience