
Recommended Posts

Posted

All ordered. Managed to buy from Thomann without adding in a guitar or indeed anything else, though sorely tempted by their Rickenbacker look-a-likes.

Posted
On 16/10/2023 at 16:20, horrorshowbass said:

I never answered this, @xgsjx, apologies. I'm on Windows.

If you want a freebie that’ll do what you want, then Cakewalk is a great DAW that comes bundled with some extra instruments & fx. 

  • 1 month later...
Posted

Hi guys

Having issues with audio coming out as mono when I use my Focusrite iTrack Solo.

Just ordered a Scarlett 2i2, and a mate suggested I use a webcam as opposed to the laptop camera I currently use. Should a Scarlett 2i2 and a decent camera allow me to record in glorious stereo?

Mostly using it to do bass covers.

Thanks

Michael

Posted
1 hour ago, horrorshowbass said:

Hi guys

Having issues with audio coming out as mono when I use my Focusrite iTrack Solo.

Just ordered a Scarlett 2i2, and a mate suggested I use a webcam as opposed to the laptop camera I currently use. Should a Scarlett 2i2 and a decent camera allow me to record in glorious stereo?

Mostly using it to do bass covers.

Thanks

Michael

 

What software are you using for recording..?

Posted
50 minutes ago, horrorshowbass said:

I was hoping I didn't need to use any, but my DAW is Audacity.

 

In 'Preferences', is your interface recognised, and are you creating Stereo tracks..? That should do the trick; Audacity is minimalist but should do the job.

Posted
59 minutes ago, horrorshowbass said:

I was hoping I didn't need to use any, but my DAW is Audacity.

If you need a better DAW for the same price (free), then go look at Cakewalk.

  • 4 months later...
  • 6 months later...
Posted (edited)
On 28/11/2023 at 16:31, horrorshowbass said:

Hi guys

Having issues with audio coming out as mono when I use my Focusrite iTrack Solo.

Just ordered a Scarlett 2i2, and a mate suggested I use a webcam as opposed to the laptop camera I currently use. Should a Scarlett 2i2 and a decent camera allow me to record in glorious stereo?

Mostly using it to do bass covers.

Thanks

Michael

 

edit: whoah did not realise I am reacting to something from 2023 lol

 

How are you recording your mono bass signal in stereo, and why? 🤔

Why is the camera important? All cameras record audio in rather crappy quality, and they connect directly via USB, not through a Scarlett.

If you have a Scarlett, isn't your bass plugged in directly?

I'm happy to try to help, I just don't understand your current situation 🙂

Edited by BabyBlueSound
Posted
13 hours ago, BabyBlueSound said:

 

edit: whoah did not realise I am reacting to something from 2023 lol

 

How are you recording your mono bass signal in stereo, and why? 🤔

 

I've often wondered this: how can you get a stereo output from a mono input? I often wondered about the point of stereo in-ears as well; maybe I'm missing something.

  • 1 month later...
Posted
On 16/10/2024 at 23:53, PaulWarning said:

I've often wondered this: how can you get a stereo output from a mono input? I often wondered about the point of stereo in-ears as well; maybe I'm missing something.

The solution is a cheap mixing desk, like a Behringer. I'd plug the bass into mix desk channel one (for example), then take the desk's L output into Scarlett input 1 and the R output into Scarlett input 2. You can use the spare channels on the desk to run backing tracks. Download OBS; it's free, and you can record and broadcast live on Facebook, YouTube etc. with superb quality. Plenty of tutorials on YouTube. Not sure if this answers your question.

This is one of the tutorials I used for OBS

 

Posted (edited)
On 16/10/2024 at 23:53, PaulWarning said:

I've often wondered this: how can you get a stereo output from a mono input? I often wondered about the point of stereo in-ears as well; maybe I'm missing something.

Stereo is about placing the instruments on the soundstage; a simple left-right pan control does that.

(That's one of the things mixers do.)

Of course, if you split your mono signal and route it via separate Fx channels, then you can have two completely different sounds.
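To make that concrete, here's a minimal sketch of what a pan control does to a mono source (Python/NumPy, my own illustration rather than anything from the posts above; the constant-power sine/cosine pan law is just one common choice):

import numpy as np

def pan_mono_to_stereo(mono, pan):
    """Place a mono signal in the stereo field.

    pan: -1.0 = hard left, 0.0 = centre, +1.0 = hard right.
    Uses a constant-power (sine/cosine) pan law so perceived
    loudness stays roughly even as the source moves.
    """
    theta = (pan + 1.0) * np.pi / 4.0   # map [-1, 1] -> [0, pi/2]
    left = np.cos(theta) * mono
    right = np.sin(theta) * mono
    return np.stack([left, right], axis=-1)

# Example: a 110 Hz mono tone placed slightly right of centre
sr = 44100
t = np.arange(sr) / sr
bass = 0.5 * np.sin(2 * np.pi * 110 * t)
stereo = pan_mono_to_stereo(bass, pan=0.3)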

Edited by prowla
  • 1 month later...
Posted
On 16/10/2024 at 23:53, PaulWarning said:

I've often wondered this: how can you get a stereo output from a mono input? I often wondered about the point of stereo in-ears as well; maybe I'm missing something.

In terms of producing stereo from a mono input, you fake it! 

Basically, you put the mono signal onto tracks 1 & 2, then phase some (or all) of the frequencies slightly differently (i.e. add a microdelay). But luckily, there are many plug-ins that'll do this for you so it sounds reasonably realistic - frinstance, I usually use Imager in Ozone 9. It does the microdelay-at-frequencies thing, and allows you to "stereoise" further. It does this by using the concept of "M&S" (not Marks and Sparks, but Middle and Side, or even Mittel und Seite, as it was invented in Germany!).

In this, M = track 1 + track 2 and S = track 1 - track 2, so S is the difference between the two tracks. In that way you can make something more stereo by boosting S, then recombining to get tracks 1 and 2 back (i.e. M + S = 2 x track 1 and M - S = 2 x track 2).
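Roughly what that boils down to, as a hedged sketch (Python/NumPy; this is just the basic microdelay-plus-M/S idea, not the actual Ozone Imager algorithm):

import numpy as np

def pseudo_stereo(mono, sr, delay_ms=8.0, width=1.0):
    """Fake a stereo image from a mono signal.

    One channel is delayed by a few milliseconds (the 'microdelay'),
    then the mid/side balance is adjusted: M = L + R, S = L - R,
    and boosting S (width > 1) widens the image before decoding
    back to left/right.
    """
    d = int(sr * delay_ms / 1000.0)
    left = mono
    right = np.concatenate([np.zeros(d), mono[:-d]]) if d > 0 else mono.copy()

    mid = left + right
    side = (left - right) * width

    out_left = 0.5 * (mid + side)    # M + S = 2 x left, hence the 0.5
    out_right = 0.5 * (mid - side)   # M - S = 2 x right
    return np.stack([out_left, out_right], axis=-1)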

As for stereo in-ears, they may work using "binaural" rather than straight stereo.

In ordinary stereo, you record with a pair of crossed mics, i.e. both in the same position but with capsules about 1-2" apart. In binaural, the mics are placed the width of a human head apart - the idea being that it gives a more natural reproduction of space, especially with headphones. It was popular in the 60s and 70s...

Posted
1 hour ago, Leonard Smalls said:

In terms of producing stereo from a mono input, you fake it! 

Basically, you put the mono signal onto tracks 1 & 2, then phase some (or all) of the frequencies slightly differently (i.e. add a microdelay). But luckily, there are many plug-ins that'll do this for you so it sounds reasonably realistic - frinstance, I usually use Imager in Ozone 9. It does the microdelay-at-frequencies thing, and allows you to "stereoise" further. It does this by using the concept of "M&S" (not Marks and Sparks, but Middle and Side, or even Mittel und Seite, as it was invented in Germany!).

In this, M = track 1 + track 2 and S = track 1 - track 2, so S is the difference between the two tracks. In that way you can make something more stereo by boosting S, then recombining to get tracks 1 and 2 back (i.e. M + S = 2 x track 1 and M - S = 2 x track 2).

As for stereo in-ears, they may work using "binaural" rather than straight stereo.

In ordinary stereo, you record with a pair of crossed mics, i.e. both in the same position but with capsules about 1-2" apart. In binaural, the mics are placed the width of a human head apart - the idea being that it gives a more natural reproduction of space, especially with headphones. It was popular in the 60s and 70s...

I sort of get all that; my point being, is there any point in faffing around with stereo for IEMs? A lot of effort, and money, for negligible improvement. As long as you can hear yourself and what's going on with the rest of the band, it's job done.

Posted
1 minute ago, PaulWarning said:

is there any point in faffing around with stereo for IEMs

Nope...

Only really for recording and then only if you're using bass reverb or fancy phasers'n'stuff. Though I quite like the OmniBass (TM) effect you get by turning Stereoise up to maximum below 200Hz!

Posted
4 minutes ago, PaulWarning said:

I sort of get all that; my point being, is there any point in faffing around with stereo for IEMs? A lot of effort, and money, for negligible improvement. As long as you can hear yourself and what's going on with the rest of the band, it's job done.

 

Having stereo placement in IEMs can help with clarity in the mix when you have lots of instruments and vocals. However, if you can hear everything well enough with a mono mix, then you probably don't need it.

Posted
2 minutes ago, Leonard Smalls said:

Nope...

Only really for recording and then only if you're using bass reverb or fancy phasers'n'stuff. Though I quite like the OmniBass (TM) effect you get by turning Stereoise up to maximum below 200Hz!

 

You do need to watch what you are doing with stereo, phase differences and low frequencies if you are intending to release your recordings as records, as these can render the track(s) uncuttable.

  • 10 months later...
Posted
On 07/01/2025 at 10:42, Leonard Smalls said:

In terms of producing stereo from a mono input, you fake it! 

Basically, you put the mono signal onto tracks 1 & 2, then phase some (or all) of the frequencies slightly differently (i.e. add a microdelay). But luckily, there are many plug-ins that'll do this for you so it sounds reasonably realistic - frinstance, I usually use Imager in Ozone 9. It does the microdelay-at-frequencies thing, and allows you to "stereoise" further. It does this by using the concept of "M&S" (not Marks and Sparks, but Middle and Side, or even Mittel und Seite, as it was invented in Germany!).

In this, M = track 1 + track 2 and S = track 1 - track 2, so S is the difference between the two tracks. In that way you can make something more stereo by boosting S, then recombining to get tracks 1 and 2 back (i.e. M + S = 2 x track 1 and M - S = 2 x track 2).

As for stereo in-ears, they may work using "binaural" rather than straight stereo.

In ordinary stereo, you record with a pair of crossed mics, i.e. both in the same position but with capsules about 1-2" apart. In binaural, the mics are placed the width of a human head apart - the idea being that it gives a more natural reproduction of space, especially with headphones. It was popular in the 60s and 70s...

 

A few things are incorrect here - with M/S recording, M is not track 1 + 2. M is simply track 1 and S is simply track 2. When they are decoded (easily done with a simple plugin, or you can do it manually), they then give an L/R output (instead of M/S). When decoded, L = M + S and R = M - S, and as you state, the difference gives the stereo information. When summed back to mono, the S signal is completely discarded, giving strong mono compatibility from the (hopefully) well-placed M mic.
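As a worked example of that decode, here's a minimal sketch (Python/NumPy, my own illustration, not any particular plugin):

import numpy as np

def decode_ms(mid, side):
    """Decode a recorded M/S pair to conventional left/right.

    mid:  the forward-facing mic (recorded as track 1)
    side: the sideways-facing figure-8 mic (recorded as track 2)
    """
    left = mid + side
    right = mid - side
    return left, right

# Mono compatibility: summing the decoded channels cancels the side
# signal entirely, leaving only the M mic:
#   (M + S) + (M - S) = 2M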

 

With 'ordinary stereo', it's not always a crossed pair of mics. There are two fundamental approaches:

*Co-incident stereo arrays (which is a crossed pair of mics, as close as possible to being in the same point in space) - these rely on the microphone polar patterns to achieve a level difference between the same sound hitting both mics. So a sound coming from the left (for example) hits both mics at the same time (because they're in the same point in space), but the left mic will be facing the sound, and therefore picks it up more strongly, so when played back, the sound will appear to be coming from the left. For this reason, omnidirectional mics don't work in a co-incident pair because they're equally sensitive all round (although in practice, you'd probably get some sense of width at higher frequencies, unless you managed to get hold of a perfectly matched pair of perfectly omnidirectional mics... which I don't think exist!).

There are many variations of co-incident stereo, using different polar patterns and mutual angles. Figure-8 mics at 90 degrees are the classic 'Blumlein pair', which gives great results in a good acoustic, but needs to be placed quite far back because the stereo recording angle is quite narrow (which exaggerates the width if placed close). Cardioids at 90 degrees are also seen often, but are often unsuitable because the recording angle is so wide it gives a narrow image, where everything in front of you is bunched up around the middle. It can be useful for recording ambience all around you, or when you have to be really close to the source.

Co-incident arrays usually give very pinpoint placement of sources, but can sometimes lack spaciousness.

 

*Spaced arrays - In a spaced array, the mics are spaced apart (by anything from a few cm to a metre or two), and are therefore not at the same point in space. Because of this, they depend on time-of-arrival differences to create a stereo image instead of level differences. Put simply, when the mics are spaced, a sound coming from the left will hit the left microphone before reaching the right microphone. Again, there are many variations using different polar patterns and spacings for a different recording angle, with spaced omnis probably being the most useful, and cardioids again being quite limited in their usefulness. Spaced arrays usually sound more spacious, but more vague in their imaging.

 

A lot of the most popular stereo techniques (at least in classical music) combine the two methods. Common techniques such as ORTF rely on a mutual angle for level difference (like a co-incident pair) combined with a slight spacing between the mics for time-of-arrival differences (like a spaced pair). In the case of ORTF, it's a mutual angle of 110 degrees between the mics, spaced 17 cm apart.
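To put rough numbers on those two mechanisms, here's a small back-of-envelope sketch (Python; the cardioid response formula and the far-field approximation are textbook simplifications, not anything from the post above):

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at roughly 20 degrees C

def coincident_level_difference_db(source_angle_deg, mutual_angle_deg=90.0):
    """Level difference (dB) between two coincident cardioids angled
    +/- mutual_angle/2 either side of centre. Negative source angles
    are to the left. Cardioid sensitivity: 0.5 * (1 + cos(off-axis))."""
    half = np.radians(mutual_angle_deg / 2.0)
    a = np.radians(source_angle_deg)
    left_mic = 0.5 * (1 + np.cos(a + half))    # left mic points at -half
    right_mic = 0.5 * (1 + np.cos(a - half))   # right mic points at +half
    return 20 * np.log10(left_mic / right_mic)

def spaced_time_difference_ms(source_angle_deg, spacing_m=0.17):
    """Time-of-arrival difference (ms) between two spaced mics for a
    distant source; the extra path to the far mic is spacing * sin(angle)."""
    return 1000.0 * spacing_m * np.sin(np.radians(source_angle_deg)) / SPEED_OF_SOUND

# A source 30 degrees to the left of centre:
print(coincident_level_difference_db(-30))   # ~ +4 dB louder in the left mic
print(spaced_time_difference_ms(-30))        # ~ -0.25 ms (reaches the left mic first)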

 

 

All this is quite irrelevant to bass playing though... these 'proper' stereo techniques are about giving a fairly natural representation of an acoustic soundstage. They are treated as 'one system', not 'two mics'. There are plenty of other ways to get stereo 'results' without taking the above approaches (drum overheads, for example, are rarely a 'proper' stereo technique, but the results are still fine), and of course it's quite common on acoustic instruments to use two mono mics panned apart (by this I mean two mics placed intentionally to pick up a different sound - e.g. one by the bridge and one nearer the neck. It's not a true stereo array, but it can give nice results in stereo.)

 

But stereo feeds for your IEMs can be very enjoyable, as you can move sources around to match where they are on stage, or wherever you'd like. It doesn't matter if the sources are mono - they can still be moved around the stereo field. Your own bass will be mono of course, unless you use stereo effects.

 

 

 

And now I'm realising that I'm responding to something posted in January...!!!

Posted
16 minutes ago, Ramirez said:

with M/S recording, M is not track 1 + 2. M is simply track 1 and S is simply track 2

You misunderstand... M is the sum of the 2 stereo tracks, call them tracks 1&2, or A&B, or X&Y, and S is the difference between the 2.

When I worked in BBC sound we didn't have a plug-in to do the coding or decoding for us; we'd do it via a jackfield with parallel blocks and a phase-reverse lead...

If you've recorded in M&S, then one of the tracks is the mid signal, and the other is side.

Posted
2 minutes ago, Leonard Smalls said:

You misunderstand... M is the sum of the 2 stereo tracks, call them tracks 1&2, or A&B, or X&Y, and S is the difference between the 2.

When I worked in BBC sound we didn't have a plug-in to do the coding or decoding for us; we'd do it via a jackfield with parallel blocks and a phase-reverse lead...

If you've recorded in M&S, then one of the tracks is the mid signal, and the other is side.

 

Sorry, yes, I was talking of recording - one channel is M and one is S. They are then decoded for conventional L/R playback - if done on a desk or patchbay as you mentioned, the S signal is panned left, then duplicated, phase-inverted, and panned right, and both are combined with the M signal. But almost all DAWs and a lot of hardware recorders and monitoring utilities now have the matrix built in, or offer it as a plugin, which is quicker and takes one track instead of three!

 

When auditioning L/R material through an M/S matrix (i.e. soloing the M or S elements) you're right: M = L + R and S = L - R, and boosting the S component will make things wider.
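And a quick NumPy check that the patchbay-style routing described above really does reconstruct left and right (my own illustration, not anyone's actual workflow):

import numpy as np

rng = np.random.default_rng(0)
left = rng.standard_normal(1000)    # stand-ins for the original L/R material
right = rng.standard_normal(1000)

# Encode through the M/S matrix: M = L + R, S = L - R
mid, side = left + right, left - right

# Patchbay-style decode: S panned left, a phase-inverted copy panned
# right, both summed with M (halved here to restore the original level)
out_left = 0.5 * (mid + side)
out_right = 0.5 * (mid + (-side))

assert np.allclose(out_left, left) and np.allclose(out_right, right)

# Widening: scale 'side' before decoding - e.g. side *= 1.5 makes the
# image wider, side *= 0 collapses it to mono.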
