Encoding information into melodies

Have you ever wondered how melodies travel through walls and across large distances — and yet remain recognizable, untouched? Could we build artificial communication systems that behave the same way?

Follow this link for a live demo.

This work sits at the intersection of AI and information communication. Our inspiration came from observing how the human brain effortlessly recognizes musical melodies regardless of distortion, noise, reflections, or even note errors. This is the first public demo of our technology: an artificial information transmission system inspired by human auditory perception.

Let’s unpack this.

Imagine you’re in a two-story house. A piano is playing on the first floor. You’re upstairs in a closed room, with street noise coming through the window. The sound travels — bouncing off walls, mixing with echoes, street noises, partially absorbed and distorted by obstacles. And yet, when it reaches your ears, your brain easily picks up the melody.

Despite all the changes to the physical sound wave, the melody — the meaning — remains intact. So we asked: If melodies are this robust, can we transmit information using the same principle?

That’s exactly what we’ve done.

First,

Our system encodes information in artificial melodies. These melodies aren’t designed for human enjoyment; they’re designed to be easily recognized by our algorithms — robustly, even under distortion. We can support billions of such melodies.

Our algorithms separate two levels of information:

  • The semantic level — the level of melodies.

  • The representation level — the physical medium signals, like sound waves, radio waves, or even visual codes.

For this demo, we use sound waves. The system converts binary data into artificial melodies, projects them as acoustic signals via a speaker, and records them with a standard microphone. The receiver then recognizes the melody and decodes it back into binary data.
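To make the pipeline concrete, here is a minimal sketch of the encode-and-decode loop in Python. All specifics are assumptions for illustration, not details of the actual system: 4 bits per note, a 16-tone frequency alphabet, and simple sine synthesis in place of real speaker-and-microphone transmission.

```python
# Hypothetical sketch of the binary -> melody -> waveform -> binary loop.
# The alphabet, note length, and bits-per-note are illustrative assumptions.
import numpy as np

SAMPLE_RATE = 44100
NOTE_SECONDS = 0.1
# 16 candidate tones (Hz), so each note carries 4 bits.
ALPHABET = [440.0 * 2 ** (k / 12) for k in range(16)]

def bits_to_melody(bits):
    """Group bits into 4-bit symbols and map each symbol to a tone."""
    assert len(bits) % 4 == 0
    symbols = [int("".join(map(str, bits[i:i + 4])), 2)
               for i in range(0, len(bits), 4)]
    return [ALPHABET[s] for s in symbols]

def synthesize(melody):
    """Render the tone sequence as a mono waveform, one sine per note."""
    t = np.arange(int(SAMPLE_RATE * NOTE_SECONDS)) / SAMPLE_RATE
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in melody])

def decode(waveform):
    """Recover bits: FFT peak per note slot, snapped to the nearest alphabet tone."""
    n = int(SAMPLE_RATE * NOTE_SECONDS)
    bits = []
    for i in range(0, len(waveform), n):
        chunk = waveform[i:i + n]
        freqs = np.fft.rfftfreq(len(chunk), 1 / SAMPLE_RATE)
        peak = freqs[np.argmax(np.abs(np.fft.rfft(chunk)))]
        symbol = int(np.argmin([abs(peak - f) for f in ALPHABET]))
        bits.extend(int(b) for b in format(symbol, "04b"))
    return bits

data = [1, 0, 1, 1, 0, 0, 1, 0]
wave = synthesize(bits_to_melody(data))
assert decode(wave) == data
```

In a real acoustic channel the waveform would of course be distorted between `synthesize` and `decode`; robustness to that distortion is the subject of the next section.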

Second,

Our system can recognize and correct distortions such as amplitude attenuation, phase shifts, clock drift, and inter-symbol interference.

Classical wireless systems like 5G or Wi-Fi rely on pilot signals — predefined reference chunks — to estimate and correct signal distortions. But our brains don't need reference signals to perceive and recognize music. We perceive the meaning directly, intuitively correcting distortions and filtering out noise as irrelevant.

Our algorithm mirrors this human-like behaviour. It doesn't require pilot signals. It identifies the melody and, jointly, estimates and compensates for distortions in the transmission medium. It relies on the internal integrity of the melody — its semantic consistency — and effectively answers the question: "What must the distortions be such that the corrected signal represents a crystal-clear melody?"
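The "what must the distortions be" idea can be sketched as a search over distortion hypotheses. The toy example below is an assumption-laden illustration, not the actual algorithm: it models one distortion only (clock drift scaling every received tone by an unknown factor) and picks the correction under which the tones snap back onto the melody alphabet.

```python
# Hypothetical sketch of pilot-free distortion estimation: find the drift
# hypothesis r that makes the corrected melody "crystal clear", i.e. the
# corrected tones land closest to the known tone alphabet. The alphabet and
# drift model are illustrative assumptions, not the authors' algorithm.
import numpy as np

ALPHABET = np.array([440.0 * 2 ** (k / 12) for k in range(16)])

def misfit(freqs, r):
    """Total distance from drift-corrected tones to their nearest alphabet tones."""
    corrected = freqs / r
    return sum(np.min(np.abs(ALPHABET - f)) for f in corrected)

def estimate_drift(freqs, candidates=np.linspace(0.95, 1.05, 2001)):
    """Grid-search the drift hypothesis that minimizes the melody misfit."""
    return min(candidates, key=lambda r: misfit(freqs, r))

# A melody distorted by 2% clock drift:
true_symbols = [3, 7, 1, 12]
received = ALPHABET[true_symbols] * 1.02
r_hat = estimate_drift(received)
decoded = [int(np.argmin(np.abs(ALPHABET - f))) for f in received / r_hat]
assert decoded == true_symbols
```

No reference chunk is transmitted: the only "pilot" is the receiver's knowledge that a valid message must be a well-formed melody, which is the semantic-consistency idea in miniature.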

Follow this link for a live demo.
