
How does a speaker make more than one sound at a time?


essexbasscat

[quote name='essexbasscat' timestamp='1334922273' post='1623471']
Question. Using entirely hypothetical, random figures here;

If one instrument plays a note of say 240 Hz and a second instrument plays a note of 435 Hz, why does the speaker produce two notes and not the one note associated with the sound of 240 + 435 = 675 Hz ?
[/quote]

Ok well the thing is that frequency is just a measure of how many times particles of air move backwards and forwards in a second. That's all sound is: energy that makes air particles (or solid or liquid particles) oscillate backwards and forwards, i.e. a wave.

If one bit of energy pushes the air particles a distance x, and another bit of energy pushes them a distance y, the particles move a total distance of x + y. The important thing is that this depends on direction. If the second bit of energy is pushing the particles in the opposite direction to the first, the distance they move becomes x - y, and you get a quieter sound. That's where the idea of "phasing" comes from. If two equal waves are in phase, so both bits of energy act in the same direction at all times, they double the volume of the sound; conversely, if they are exactly out of phase, there is no movement of air and hence no sound.

You can't add the frequencies together, because two bits of energy moving the air at the same frequency don't make the air move twice as fast - they make it move twice the distance.
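The x + y idea is easy to check numerically. A minimal sketch in Python (the frequency and the sample instant are just illustrative figures, like the thread's):

```python
import math

def sine(freq_hz, t, phase=0.0):
    """Displacement of a unit-amplitude sine wave at time t (seconds)."""
    return math.sin(2 * math.pi * freq_hz * t + phase)

f = 240.0                    # Hz - one of the thread's example frequencies
t = 1.0 / (4 * f)            # a quarter-cycle in, the wave is at its peak

in_phase = sine(f, t) + sine(f, t)                # x + y: same direction
out_of_phase = sine(f, t) + sine(f, t, math.pi)   # x - y: opposite direction

print(in_phase)       # roughly 2.0 - double the displacement, louder
print(out_of_phase)   # roughly 0.0 - cancellation, silence
```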

Edited by EdwardHimself

[quote name='ShergoldSnickers' timestamp='1334924207' post='1623514']
We need a long skipping rope with someone doing oscillations at one frequency, and someone at the other end doing them at a different frequency. I'll post you your end. :lol:
[/quote]

How to create a big enough wave to reach your end? Hang on a minute, I'll just break the trampoline out :D Up, down, up, down, jiggle, jiggle, flick... got it!

Bugger :angry: everyone keeps driving over the rope and cancelling out my waves :blink:

Edited by essexbasscat

[quote name='essexbasscat' timestamp='1334922273' post='1623471']
Question. Using entirely hypothetical, random figures here;

If one instrument plays a note of say 240 Hz and a second instrument plays a note of 435 Hz, why does the speaker produce two notes and not the one note associated with the sound of 240 + 435 = 675 Hz ?
[/quote]

The speaker doesn't produce two notes. It combines the information from the original sources into a single waveform that's more complex than either of the originals (Mart's excellent post shows what it would look like if you could see it). It's your brain that works out that there are 2 notes being played.

To explain properly how the source notes are combined would take quite a lot of hard sums that basically explain what's happening in Mart's graphs. Are you sure you want that? (Before answering, read Eddie's astute post then put your hand on your heart and come back and tell us that you understood it.)

Edit: Or of course you could just take our word for it that it does.... ;)

Edited by leftybassman392

Gorillaz were right: "it's all in your head". I used to be a biology teacher, for what that's worth.

All your ears do is turn changes in air pressure into electrical disturbances in the nerves, which get transmitted to the brain, where the real work starts. If the air pressure changes made by a speaker were the same as those made by an instrument, the nerve impulses would be the same and we would hear the same thing. Hearing involves learning as well as nervous transmission: we learn to associate certain patterns of vibration, and hence patterns of impulses, with certain objects. We work out how far away they are from the small difference in arrival time between one ear and the other, and from things like reverberation decay and loudness. A lot of sound perception depends upon learning. I doubt most of us could tell an oboe from a cor anglais playing the same note, but we'd have no difficulty telling the difference between a J-bass and a P-bass; for classical musicians it would be the other way round.

The signal processing is complex, but similar to what happens when the echoes from an ultrasound scan are processed to make a picture of a baby in the womb. If you could hear or see the raw ultrasound it would make no sense at all, but the information is all there - it just needs a lot of computing power to extract. Hearing works the same way. Notice that we constantly 'hear' sounds that aren't actually there. If you ever record your gigs, or even a conversation, the mic picks up everything, and the background voices, bangs and clinking of glasses all seem incredibly loud, even compared with the PA. That is the real sound. You hear the band as louder because your brain filters out the sounds it doesn't think are important; the ears are still picking them up. Remember the brain is still more powerful than any computer we have built, and this is the sort of thing it is doing all the time.

I could give a much longer answer but, you know, your brain has other information to process and I have a bass to play.

Hope this helps


[quote name='essexbasscat' timestamp='1334922273' post='1623471']
Question. Using entirely hypothetical, random figures here;

If one instrument plays a note of say 240 Hz and a second instrument plays a note of 435 Hz, why does the speaker produce two notes and not the one note associated with the sound of 240 + 435 = 675 Hz ?
[/quote]

Thing is, if one instrument produced only 240Hz and a second produced only 435Hz then, apart from the pitch, both would sound identical. But they don't sound identical, because real instruments don't produce only a single frequency. This is why an "A" sounds completely different when played on different instruments - it's the harmonics that give the tone.

So, if a speaker could ONLY reproduce single frequencies at a time, there would be no long-winded debates about 'tone', because all instruments (and not just basses) would sound identical.
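That point can be sketched with made-up harmonic weights: two 'instruments' share the same fundamental but mix their harmonics differently, so the combined waveforms differ at almost every instant (pure Python; all the numbers below are invented for illustration):

```python
import math

def wave_sample(fundamental_hz, harmonic_weights, t):
    """Instantaneous displacement of a tone built from a fundamental plus
    overtones; harmonic_weights[n] is the level of harmonic n+1 (made up)."""
    return sum(w * math.sin(2 * math.pi * fundamental_hz * (n + 1) * t)
               for n, w in enumerate(harmonic_weights))

A = 220.0                        # both "instruments" play the same A
pluck = [1.0, 0.5, 0.3, 0.1]     # hypothetical bright, pluck-like harmonic mix
flute = [1.0, 0.05, 0.01]        # hypothetical almost-pure harmonic mix

t = 0.0013                       # any instant will do
print(wave_sample(A, pluck, t))  # same note...
print(wave_sample(A, flute, t))  # ...different waveform value
```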


I thought I'd have a go at answering the original question...

As previously pointed out if a speaker was producing a simple sine wave then the cone would move back and forth smoothly at the wave's frequency.

If it is having to produce a complex wave, say the result of two sine waves of different frequencies, the form might look like the lower frequency but with ripples on it of the higher frequency.

The cone will therefore follow the shape of that wave - in general moving back and forth at the lower frequency but rather than smoothly it will be making much smaller back and forth movements at the higher frequency.

As I understand it, that is one of the things that makes speakers imperfect (though not impracticably so), because the above behaviour introduces Doppler distortion - the same effect that makes a fire engine's horn apparently change pitch as it passes you. Imagine the cone is the fire engine and the higher frequency is its horn: the higher frequency is being produced by something that is itself moving at the lower frequency.
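To put rough numbers on that Doppler effect (the excursion and frequencies below are purely illustrative, not measurements of any real driver):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

low_freq = 50.0          # Hz - the note driving the big cone movement
excursion = 0.005        # m  - 5 mm peak excursion (illustrative figure)

# Peak velocity of a sinusoidally moving cone: v = 2*pi*f*A
peak_velocity = 2 * math.pi * low_freq * excursion    # about 1.6 m/s
shift_fraction = peak_velocity / SPEED_OF_SOUND       # about 0.5% peak shift

high_freq = 2000.0       # Hz - the "horn" riding on the moving cone
print(high_freq * shift_fraction)   # worst-case Doppler shift in Hz
```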


[quote name='thinman' timestamp='1334945583' post='1624002']
I thought I'd have a go at answering the original question...

As previously pointed out if a speaker was producing a simple sine wave then the cone would move back and forth smoothly at the wave's frequency.

If it is having to produce a complex wave, say the result of two sine waves of different frequencies, the form might look like the lower frequency but with ripples on it of the higher frequency.

The cone will therefore follow the shape of that wave - in general moving back and forth at the lower frequency but rather than smoothly it will be making much smaller back and forth movements at the higher frequency.

As I understand it that is what makes speakers imperfect (but not impracticably so) because the above behaviour introduces Doppler distortion. (The effect of a fire engine's horn apparently changing pitch as it passes you). Imagine the cone is the fire engine and its horn the higher frequency - the fact the higher frequency is being produced by something that is itself moving.
[/quote]

Now this is getting close to answering the original question I asked. There's no doubt however that the answer goes much deeper than I first realised, as it encompasses sound generation by the speaker, reception by the ear drum, neurological conversion and finally, interpretation by the brain. There's also the scientific analysis of sound waveforms.

The BBC could make a really interesting programme exploring the journey of discovery that led to our current understanding of this topic. Could be an interesting academic paper too.

Thanks for that answer thinman :)


[quote name='thinman' timestamp='1334945583' post='1624002']
I thought I'd have a go at answering the original question...

As previously pointed out if a speaker was producing a simple sine wave then the cone would move back and forth smoothly at the wave's frequency.

If it is having to produce a complex wave, say the result of two sine waves of different frequencies, the form might look like the lower frequency but with ripples on it of the higher frequency.

The cone will therefore follow the shape of that wave - in general moving back and forth at the lower frequency but rather than smoothly it will be making much smaller back and forth movements at the higher frequency.

As I understand it that is what makes speakers imperfect (but not impracticably so) because the above behaviour introduces Doppler distortion. (The effect of a fire engine's horn apparently changing pitch as it passes you). Imagine the cone is the fire engine and its horn the higher frequency - the fact the higher frequency is being produced by something that is itself moving.
[/quote]

Well yes, this is ok up to a point, but there's more going on than that. (The Doppler effect, although very complex in a real-life situation, is actually minimal, and hence not really much of an issue beside the main one in the OP's question; in any case the brain largely filters it out, so that what you hear is the sound you were supposed to hear.) In real-life sound production from loudspeakers the cone movement is extremely complex, having to resolve audio components from a whole range of instruments (anything up to around 100 in the case of a symphony orchestra), each of which has its own characteristic and complex waveform. However complex it gets, though, the basic principle behind it is pretty straightforward:

1. The speaker unit moves in response to electrical inputs from the amplifier, and if wired correctly will move the cone forward in response to a rising voltage and backwards in response to a falling voltage. Therefore we can move a step nearer the amp and study how the voltage is changing (we don't have to, but it's a bit easier to explain if we do because the voltage behaviour is a step nearer the wave behaviour of the instruments).

2. At any given instant, the voltage value (and hence cone position - sort of) is the mathematical sum of all the individual voltages derived from the different instruments (again, the maths gets a bit fiddly in a real-life scenario, so I'm trying to avoid getting too far into that side of it). At that instant some of the voltages will be positive and others negative. At the next instant all the individual voltages will be different, so the sum will be different (theoretically it could be the same, but in practice it just isn't); and on to the next instant, and the next, and so on. In other words, the voltage present at the speaker is constantly changing. The way it changes can be calculated if you have enough information about the waveforms of the individual instruments (which are themselves pretty complex!), but there is only ever one voltage present at the speaker at any given instant.

3. Therefore, and as has been said before, a single speaker (or even a collection of speakers in a single cabinet) can only produce a single waveform. However, because the speaker is responding as best it can to a voltage that is changing in a very complex way, the cone movements will generate a single, highly complex audio waveform. That is what reaches your ears - and hence your brain. The rest is up to how your brain processes the information.

So to answer the OP's original question 'how does a speaker make more than one sound?', the answer is as it was before - it doesn't.
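Point 2 can be sketched directly: at each instant the speaker sees a single number, the sum of the per-instrument voltages (the instrument list below is invented for illustration):

```python
import math

def voltage(freq_hz, amplitude, t):
    """Hypothetical voltage contributed by one instrument at time t (seconds)."""
    return amplitude * math.sin(2 * math.pi * freq_hz * t)

# Invented (frequency Hz, level) pairs standing in for three instruments.
instruments = [(240.0, 1.0), (435.0, 0.8), (110.0, 0.6)]

SAMPLE_RATE = 48000
for n in range(3):                           # a few consecutive instants
    t = n / SAMPLE_RATE
    total = sum(voltage(f, a, t) for f, a in instruments)
    print(f"t={t:.6f}s  voltage at speaker = {total:+.4f}")  # one number per instant
```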

Edited by leftybassman392

Hmm. Seems like this could do with some more detailed definitions of what's meant by "one sound" or even "one waveform".

I'd agree that a speaker can only produce one waveform, after all it can only move back and forth, but one waveform can contain many 'sounds'.
Thus a single 'sound' is a collection of individual frequencies, hence my earlier point about an "A" played on piano as sounding different to an "A" played on a guitar - a single 'sound' but many different frequencies at many different amplitudes.


[quote name='flyfisher' timestamp='1335016562' post='1624763']
Hmm. Seems like this could do with some more detailed definitions of what's meant by "one sound" or even "one waveform".

I'd agree that a speaker can only produce one waveform, after all it can only move back and forth, but one waveform can contain many 'sounds'.
Thus a single 'sound' is a collection of individual frequencies, hence my earlier point about an "A" played on piano as sounding different to an "A" played on a guitar - a single 'sound' but many different frequencies at many different amplitudes.
[/quote]

There's an analogy here that might be useful.

In the old days of Computer Science, teachers like me used to try to get students to understand the difference between 'data' and 'information'.

[b]Data [/b]is what the technology processes. It doesn't know or care about the data, it's simply a machine following instructions that it's been given.

[b]Information [/b]is what people make of that data. At this point it acquires 'meaning'.


Moving back to the speaker system, the waveform is data. The speaker is simply reacting to the changing voltages it's presented with. If instead of a speaker the voltages were fed into an oscilloscope, it would display the waveform. Neither the speaker nor the oscilloscope knows or cares about the data; each is simply reacting the way it's been designed to.

The words 'hearing' and 'sound' are associated with animals (in this case, humans) and are words we use to describe (i.e. give meaning to) what's going on.

I know this is hard to get to grips with (the notion of [i]objects [/i]making [i]sounds [/i]is deeply ingrained in our thinking - it's how we make sense of what our ears detect), but speakers don't produce sounds. They produce waveforms that travel through the air in the way Eddie Himself described. There needs to be somebody there to receive the data and make sense of it.

In an earlier post I talked about thinking of the ears as very sensitive and hugely complicated pickups. It's a very good analogy, because the ear detects a waveform and transmits an electrical signal in much the same way as a guitar pickup. Hearing goes on in our heads, not in our ears. The word 'sound' refers to our perception and analysis of the waveforms our ears have detected.

Long story short - speakers transmit waveforms, brains detect sounds. Simples. :)


[quote name='leftybassman392' timestamp='1335020246' post='1624822']
Long story short - speakers transmit waveforms, brains detect sounds. Simples. :)
[/quote]

Well, yes, but our brains detect [i]everything[/i] we perceive so I'm not sure that really helps answer the original question. Eyes receive light, brains make pictures, etc.

A speaker may only transmit a waveform, which as you rightly say can be displayed on an oscilloscope without needing a brain to hear the 'sound', but that waveform is comprised of many individual frequency components, which can also be displayed on a spectrum analyser without needing a brain to hear the 'sound'. That's basic physics.

How the brain makes sense of the received air pressure wave "data" and turns it into sound "information" is indeed a very complicated matter, but the incoming waveform has to somehow contain all the necessary "data" for the brain to decode in the first place. Surely the "data" are the individual frequency components that make up the complex waveform in question?


[quote name='flyfisher' timestamp='1335024600' post='1624927']
Well, yes, but our brains detect [i]everything[/i] we perceive so I'm not sure that really helps answer the original question. Eyes receive light, brains make pictures, etc.

A speaker may only transmit a waveform, which as you rightly say can be displayed on an oscilloscope without needing a brain to hear the 'sound', but that waveform is comprised of many individual frequency components, which can also be displayed on a spectrum analyser without needing a brain to hear the 'sound'. That's basic physics.

How the brain makes sense of the received air pressure wave "data" and turns it into sound "information" is indeed a very complicated matter, but the incoming waveform has to somehow contain all the necessary "data" for the brain to decode in the first place. Surely the "data" are the individual frequency components that make up the complex waveform in question?
[/quote]

If you'll forgive me saying so, and with great respect, I think we're talking at crossed purposes. The reason it answers the OP's question is that a speaker doesn't emit sound at all - it emits a pressure wave whose intensity at any given instant is related to the voltage level present in the speaker coil at approximately that same instant. And yes, the pressure wave is highly complex in nature even with a single instrument due to the harmonic overtones present in the original wave created by the instrument (which I think is what you're referring to and which enables an experienced listener to identify the instrument as, say, a piano).

However complex, what arrives at a listener's ears is still a single wave. Broadly speaking the data does contain all the necessary components for the brain to reconstruct it. (At very low frequencies, the brain analyzes the harmonic components and extrapolates downwards to 'insert' the fundamental frequency, even though the speaker is not physically reproducing it.)

As you have rightly pointed out, an audio frequency spectrum analyzer can identify and display the various components. In hearing sounds, the brain is acting as an extremely sophisticated spectrum analyzer, with the bonus that it can, with training, reassemble the components so as to be able to identify the instrument that made them. Moreover, it is able to do this even when there are multiple notes and even multiple instruments. Exactly how it does this is a little beyond my knowledge, but I suspect that it has to do with prioritising components from the incoming data in some way. Even more impressively, over time it can learn to recognise familiar patterns very rapidly, thereby releasing 'processing power' for less familiar elements. In truth there's even more than that going on in the brain when we listen to music, but perhaps another time?
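The spectrum-analyzer idea can be illustrated with a naive discrete Fourier transform: mix the OP's 240 Hz and 435 Hz into one waveform, then ask which frequencies carry energy. This is a deliberately slow pure-Python sketch, not an efficient FFT:

```python
import math

SAMPLE_RATE = 48000
N = 9600                     # 0.2 s of signal; bin spacing = 48000/9600 = 5 Hz

# One combined waveform containing the OP's two notes.
signal = [math.sin(2 * math.pi * 240 * n / SAMPLE_RATE)
          + math.sin(2 * math.pi * 435 * n / SAMPLE_RATE)
          for n in range(N)]

def magnitude(freq_hz):
    """DFT magnitude at one frequency: correlate the signal against a
    cosine and a sine at that frequency."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * n / SAMPLE_RATE)
             for n, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * n / SAMPLE_RATE)
             for n, s in enumerate(signal))
    return math.hypot(re, im) / N

for f in (240, 435, 675):
    print(f, round(magnitude(f), 3))   # energy at 240 and 435, none at 675
```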

The main difficulty in all this (as I hinted at in a previous post) is that the language we use when describing musical information is so centred on the source of the sound that we find it hard to think about it in any other way - "This bass has a great sound" is so much easier to deal with than "my brain is really enjoying analyzing this pressure wave". For everyday use there's no reason why we need to do anything different, but for the scientific analysis required to properly address this type of question, it's slightly misplaced.

I'm not trying to be contentious or argumentative in saying all this - all I'm trying to do is answer a question from a fellow forum member. If that answer happens to be difficult for people to grasp, then so be it. It is what it is.


[quote name='ThomBassmonkey' timestamp='1335111465' post='1625885']
Remember that an instrument isn't just one wave, it's a collection of sound waves that make up the sound of the instrument. The differences in the presence of different tones is what makes up the timbre of the instrument.
[/quote]

"It's all one wave" is kind of the whole point. They only become separate waves after you run a Fourier transform, or filter the signal through a crossover network, which is a bit artificial - although your brain functions in an analogous manner.
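A crossover doing that 'artificial' separation can be sketched with a single-pole filter pair (grossly simplified - real crossovers use steeper slopes and matched phase; the cutoff below is a made-up figure):

```python
import math

SAMPLE_RATE = 48000
CUTOFF = 800.0               # Hz - hypothetical crossover point

# One wave containing a low (100 Hz) and a high (5 kHz) component.
signal = [math.sin(2 * math.pi * 100 * n / SAMPLE_RATE)
          + math.sin(2 * math.pi * 5000 * n / SAMPLE_RATE)
          for n in range(SAMPLE_RATE // 10)]

# Single-pole low-pass: y[n] = y[n-1] + a * (x[n] - y[n-1])
a = 1 - math.exp(-2 * math.pi * CUTOFF / SAMPLE_RATE)
low, y = [], 0.0
for x in signal:
    y += a * (x - y)
    low.append(y)                        # mostly the 100 Hz part -> woofer

high = [x - lo for x, lo in zip(signal, low)]  # mostly the 5 kHz part -> tweeter
```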


[quote name='Mr. Foxen' timestamp='1335112548' post='1625914']
Its all one wave is kind of all of the point. Only separate waves after you run a Fourier Transform, or filter through a crossover network which is a bit artificial, although your brain functions in an analogous manner.
[/quote]

Ok sorry, let me change my wording. :)

It's not all one frequency.


[quote name='ras52' timestamp='1334854201' post='1622450']
Indeed, and this ties in with another thread which discussed people who couldn't/didn't separate individual instruments in a band. Hence the term "ear-training" for the process by which we learn to unscramble what we hear.
[/quote]

Very true. And when considering how the (not so) humble moving-coil driver can reproduce such complex waveforms with (varying degrees of) fidelity, remember that a microphone or two (ok - maybe the odd DI!) captured it all in the first place: a process which seems more intuitive, but which demonstrates that one device can perform such a task.


Not read the thread, but isn't the OP's question akin to:

How can our eardrums hear more than one sound at a time?
or
How can a microphone pick up more than one sound at a time?

Speakers are just eardrums/mics in reverse, aren't they?


[quote name='Twigman' timestamp='1335209988' post='1627400']
[b]not read the thread [/b]but isn't the OP question akin to :


How can our eardrums hear more than one sound at a time?
or
How can a microphone pick up more than one sound at time?

Speakers are just ear drums/mics in reverse, aren't they?
[/quote]

In all humility, might I respectfully suggest that you actually read the thread? A lot has gone by since the OP first posted this thread.

Edited by leftybassman392

[quote name='essexbasscat' timestamp='1334922273' post='1623471']
If one instrument plays a note of say 240 Hz and a second instrument plays a note of 435 Hz, why does the speaker produce two notes and not the one note associated with the sound of 240 + 435 = 675 Hz ?
[/quote]

Strictly speaking, a perfectly linear speaker reproducing those two notes emits a complex waveform containing just those two frequencies. Extra components at the sum and difference of the frequencies (675Hz and 195Hz in your example) only appear when something in the chain behaves non-linearly - that's intermodulation distortion, and real amps and drivers do add a little of it.

Do you ever tune up using harmonics, or the 5th fret and open string? The slow pulsing you hear is the beat: the two waves drift in and out of phase, so the combined amplitude rises and falls at the difference of the two principal frequencies. It's a loudness variation your ear picks up, not a separate note the speaker is generating.
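That tuning beat shows up directly in the maths: the sum of two close sine waves has an amplitude envelope that swells and collapses at the difference frequency (the figures below are illustrative):

```python
import math

f1, f2 = 440.0, 442.0          # two strings slightly out of tune
beat = f2 - f1                 # 2 Hz: two swells per second

# By the product-to-sum identity, sin(2*pi*f1*t) + sin(2*pi*f2*t) has
# envelope 2*cos(pi*(f2-f1)*t), so loudness rises and falls at 2 Hz.
def envelope(t):
    return abs(2 * math.cos(math.pi * beat * t))

print(round(envelope(0.0), 9))    # 2.0 - waves in phase, loudest
print(round(envelope(0.25), 9))   # 0.0 - waves cancelling, momentarily silent
```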

