Back in the 1980s I was fascinated by sound synthesis and analysis. The best-known work I did was a little application called SoundCap (for Sound Capture), coupled with an analog-to-digital converter initially sold by Fractal Software, my partnership with Tom Hedges, and eventually sold by MacNifty. Fortunately for many early Macintosh developers, the box hooked up to the back of a Mac through the serial port. Several sound-producing apps were built with it, including Airborne! by San Diego's Silicon Beach Software.
Stephen St. Croix was a friend of mine. He contacted me at Fractal Design in the 1990s and wowed me with a few of his wondrous stories. We spoke at length on several occasions about digital sound synthesis, one of my many hobbies. I was surprised to learn that he was one of the inventors, at Marshall Electronics, of the Time Modulator, the box that introduced digital delay-line flanging to more than a few famous musicians.
The most interesting story he told me was about the job he did for Lay's. Yes, the people who make the potato chips. It seems that their spokesman, Jack Klugman (of Quincy fame), had lost his voice as a result of throat cancer. This posed a real problem for them, because his commercials for Lay's potato chips were pulling quite well. After all, he was a very recognizable and well-loved actor. His voice was distinctive. People listened to him.
Stephen told me that they had invented a new kind of voice synthesis device, based on formant synthesis, to recreate his voice. Incredibly, they could exactly duplicate the distinctive gravelly sound of his voice in this manner! It seems that the very low-frequency warbling of his vocal cords, though inimitable by human voice impersonators, was entirely imitable by digital synthesis techniques.
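Formant synthesis, in brief, models the voice as an excitation source (a glottal pulse train) shaped by a few resonant filters standing in for the vocal tract, and a slow random jitter on the pitch can approximate exactly the kind of low-frequency warble Stephen described. Here is a minimal sketch of the general idea in Python; the formant values and the jitter model are illustrative assumptions, not Marshall Electronics' actual design:

```python
import numpy as np
from scipy.signal import lfilter

def resonator(freq, bandwidth, sr):
    """Coefficients for a two-pole resonator: one formant."""
    r = np.exp(-np.pi * bandwidth / sr)
    theta = 2.0 * np.pi * freq / sr
    # y[n] = x[n] + 2r*cos(theta)*y[n-1] - r^2*y[n-2]
    return [1.0], [1.0, -2.0 * r * np.cos(theta), r * r]

def formant_voice(f0=110.0, formants=((730, 90), (1090, 110), (2440, 170)),
                  jitter=0.03, dur=1.0, sr=22050):
    """Glottal pulse train with slow pitch jitter, shaped by formant filters.

    The jitter term is a crude, illustrative stand-in for the low-frequency
    warble of a damaged voice; the formants roughly match an 'ah' vowel.
    """
    n = int(dur * sr)
    # per-sample pitch with a slow random warble
    warble = 1.0 + jitter * np.interp(np.arange(n),
                                      np.linspace(0, n, 32),
                                      np.random.randn(32))
    phase = np.cumsum(2.0 * np.pi * f0 * warble / sr)
    # one impulse at the start of each pitch period
    excitation = (np.diff(np.floor(phase / (2.0 * np.pi)), prepend=0.0) > 0).astype(float)
    out = np.zeros(n)
    for freq, bw in formants:          # formants summed in parallel
        b, a = resonator(freq, bw, sr)
        out += lfilter(b, a, excitation)
    return out / np.max(np.abs(out))
```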
At Marshall Electronics, they spent quite some time analyzing sound. They had room analyzers, and so they also had room simulators. But the least-known cleverness involved voice analyzers. Imagine picking apart someone's voice, layer by layer: figuring out the pitch profiles and the syllabic inflections, hand-tuning the cadence of the words. Stephen's work constantly boggled my mind.
I told him about my work in music extraction. I had a special application called Do-Re-Mi that let you whistle a tune, which it could output over MIDI in key-duration format, complete with amplitude and pitch profiles suitable for modulating a pitch wheel and a volume pedal. It could tell you how many cents (hundredths of a semitone) sharp or flat you were when you whistled. I used a correlation technique built on a time-delta histogram, with pitch-multiple disambiguation, Lagrange peak-finding, and other methods for isolating the pitch accurately. This work was all done in the 1980s, before Fractal Design, as part of Fractal Software's work.
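The flavor of that kind of pitch extraction can be sketched with a generic autocorrelation detector: pick the lag of maximum self-similarity, refine it with a three-point Lagrange (parabolic) fit, and report the deviation from the nearest equal-tempered note in cents. This is a reconstruction of the general technique, not Do-Re-Mi's actual time-delta-histogram code:

```python
import numpy as np

def detect_pitch(frame, sr, fmin=80.0, fmax=1000.0):
    """Autocorrelation pitch estimate with parabolic (Lagrange) refinement.

    A sketch only -- a production detector also disambiguates pitch
    multiples (autocorrelation peaks repeat at every integer multiple of
    the period), e.g. by checking whether half the winning lag scores
    nearly as well, to avoid octave errors.
    """
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + int(np.argmax(corr[lo:hi]))
    # fit a parabola through the three points around the peak
    y0, y1, y2 = corr[lag - 1], corr[lag], corr[lag + 1]
    denom = y0 - 2.0 * y1 + y2
    delta = 0.5 * (y0 - y2) / denom if denom != 0 else 0.0
    return sr / (lag + delta)

def cents_off(freq, a4=440.0):
    """How many cents sharp (+) or flat (-) of the nearest tempered note."""
    semitones = 12.0 * np.log2(freq / a4)
    return 100.0 * (semitones - round(semitones))
```

On a windowed frame of a whistled note, cents_off(detect_pitch(frame, sr)) would come back near zero for an in-tune pitch; the parabolic refinement is what gets the estimate well under a cent of resolution rather than being quantized to whole sample lags.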
Tom Hedges, of course, was the hardware designer of the first Macintosh sound-sampling box, and my contribution was the software, much of it written in Motorola 68000 assembler. Our work with sound continued when we did a bit of work with Bogas Productions, involving Ed Bogas, Ty Roberts, Neil Cormia, and others. I met them through a mutual acquaintance, Steve Capps, who was working on the Finder in 1984.
I wrote a sequencing application in 1984 and Tom was fascinated by it. He modified it so it could sequence samples and then proceeded to digitize his piano, note by note. This was in a day when samplers existed, but were quite crude and expensive. He encoded Rhapsody in Blue (he was so proud of playing it) and also a perennial favorite, Wasted on the Way (a piece from Crosby, Stills & Nash, thick with vocal harmony). We were both musically literate, but in different ways. I was a composer who played piano and was fully familiar with sheet music (actually, I had to teach Tom the rudiments of it before he could digitize the songs, which took a week or so to get just right). Tom was a DJ at Stanford's KZSU, an advanced audiophile, and he had a very wide understanding of music. His father played piano, which explained Tom's interest in Gershwin's Rhapsody in Blue.
So when I began speaking with Stephen St. Croix, I was very deep into audio analysis and synthesis, and the author of a very popular application for sound manipulation on the coolest new computer around: the Macintosh.
It wasn't a big surprise at all that we spent hours and hours talking about sound synthesis, analysis, music, and the recording business. Crazy times and a really good guy.
Wednesday, June 4, 2014
Seven Ways
There are seven ways that we best retain information, and five of them are tied to our innate skills as humans: typing, handwriting, speaking, seeing, and hearing. The other two, mixing and anchoring, complete the process of learning through semantic cross-tagging.
Typing is a skill that we develop to codify something in symbolic notation: language. Using a keyboard for entry gives the language center of our brain, the part concerned with coding and symbolization, a workout. But what are these codes and symbols? In language, we break our writing into chapters, chapters into paragraphs, paragraphs into sentences, sentences into words, and words into letters. These symbols, their organization, and their semantic meanings are inherent to symbolic processing. And, as humans, we definitely excel at this.
But there are more kinds of codes and symbols. When we use a musical instrument, we usually produce music in a coded symbolic representation: note for note. We break songs into sections, such as verses, refrains, and bridges. We break sections into chords. We layer melody on top of accompaniment, on top of bass. We accent with drums. We break melodies and chords into notes. We even break notes into tone, duration, and volume. Unlike text, music has quite a number of internal properties of continuity, like staccato and the slurring of notes together. All of these are also kinds of language symbols that our brains use. Clearly we are using our brains' auditory centers when we make music.
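To make that hierarchy concrete, here is one purely illustrative encoding in Python, roughly how a MIDI-style sequencer might see a song; the type names and fields are my own, not any standard:

```python
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int        # tone, as a MIDI note number (60 = middle C)
    duration: float   # length in beats
    volume: int       # loudness, 0-127

@dataclass
class Section:
    kind: str                  # "verse", "refrain", or "bridge"
    melody: list[Note]         # layered on top of...
    accompaniment: list[Note]  # ...which sits on top of...
    bass: list[Note]           # ...the bass line
    drums: list[Note]          # accents
```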
I certainly didn't miss that writing is a bit like music, too. When we write creatively, we use plots the way we use interrelated melodies and leitmotifs. We make characters and develop alternate realities. We use metaphor and hyperbole. We season our writing with alliteration and onomatopoeia. A theme can pervade a novel. The resolution of a character's arc can stir us like a brilliant cadence. But the last chapter of a book struggles to compete with the finality and intense closure of a great piece of music's coda.
Handwriting is a perfect way to match our muscle memory to our brains. We coordinate our hands and eyes to denote what we hear or what we think. Taking notes can be a compelling way to retain your thoughts. When we combine it with symbolic representations, we can end up with text, mathematical equations, musical notation, or even scribbles, doodles, and drawings. Let's face it, we think a bit more when we are handwriting than when we type, because a different part of our brain is required to do it.
In some ways, handwriting utilizes the visual center of our brain. Typing does this also, because we use our eyes to verify the text we enter. It seems to be the connections between brain centers that reinforce our understanding of knowledge and help us retain and memorize.
Speaking is our natural form of expression. We use our voices conversationally and this method of communication is highly generative, using our cognitive powers to express a thought, a concept, to deliver commands, to convince or inform. We use our language processing centers in a different way, and this is evident in the way we often speak very differently than we write: less formally. When we are in front of a group, we speak from memory, following a train of thought. Actors and presenters learn to do this and shade their performances with attitude and gesticulation, making the art of speaking a multi-dimensional task.
When we sing, we are expressing much more than just notes and words. We are using emotion. We link our generative capabilities to our voice when we sing. When we learn to play piano and sing, we are using much more of our brain than we usually might employ.
Seeing is much more than just looking at a photograph or diagram. It's also seeing in the mind's eye. Some people are very visual and can instantly see a concept in their head before they can express it. They can see the directions on a map in their head when they drive. Our eyes are the key to visualizing, certainly. But even blind people can see concepts. We have spatial reasoning to thank for this. When you have a visual memory, you get to see an object when it is described.
There is more to visualizing than just what is real, though. We can thank our imaginations for this fact. We can imagine impossible figures, for instance, and this concisely illustrates that our imaginations can transcend the real.
Perhaps for many people the spark of an idea comes visually. Perhaps concepts are symbolic for others. Perhaps concepts are neither visual nor symbolic for some: just floating in consciousness waiting to be expressed in some way.
Hearing is a natural way to capture and acquire information. But few of us actually hear a sentence and turn it into text in our head. Maybe a few of us turn it into visual information. But most likely hearing is its own domain. Somehow what we hear simply gets directly converted to knowledge. Still, we often must write something down to retain it.
When I am composing or playing piano, I do not generally rely on my ear to remember the tune and the rhythm. Thankfully, I can record what I play. In other situations, I write down what I play (by hand), in common musical notation.
Even so, I can hear quite a bit of music in my head. It even seems like it is playing back. At 17 years old, I used to do this just before going to sleep, in that nebulous state in between waking and sleeping. I would consciously play a piece in my head. One that I was working on, or a familiar song. Or even a symphony. I guess I was practicing the ability to imagine polyphony. I was on the verge of being a composer at that age.
Mixing modes is the most powerful form of memorization. Sight-reading is a great way to commit a piece to memory. Playing a piece I'm composing, to cement its chord and melody structure, is a good way to hone it. Recording it and listening to it later lets me take a step back and form new ideas for where the piece is going.
Listening and taking notes is a good mixture of modes for memorization and retention. But if you really want to cement it into your memory, type up your notes later. Draw diagrams. Learning, though, is much more than memorization. True retention requires application of a concept.
Anchoring is an essential endgame for learning a subject properly. I have a friend who says, "I don't want to hire the people who can memorize terms and subjects; I want to hire people who can do something with what they've learned." Memorizing words in a foreign language is useful, but using those same words in sentences is much more powerful, because then the words will forever be connected to concepts and subjects in your mind.