I would like to make a streaming audio service.
There are three main things that people will be doing (there's a rough sketch of what I mean right after this list):
1 - Uploading audio as they record it.
2 - Downloading audio while it's still being recorded (i.e., listening live).
3 - Downloading audio that was previously uploaded, whether or not the recording has actually finished.
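To make that concrete, here's a minimal sketch of the server behaviour I have in mind, written against Go's net/http just as an example (the /upload and /download endpoints, the "id" query parameter, and the flat-file storage under ./streams are placeholders I made up for illustration, not decisions I've settled on):

```go
// Minimal sketch: uploads stream straight to disk; downloads stream whatever
// is on disk so far and keep tailing the file while the upload is still live.
package main

import (
	"io"
	"net/http"
	"os"
	"path/filepath"
	"sync"
	"time"
)

var (
	mu   sync.Mutex
	live = map[string]bool{} // stream ids currently being uploaded
)

func streamPath(id string) string { return filepath.Join("streams", id+".audio") }

// 1 - Uploading audio as it's recorded: the client sends a chunked request
// body and we append it to a file as it arrives.
func upload(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	mu.Lock()
	live[id] = true
	mu.Unlock()
	defer func() { mu.Lock(); delete(live, id); mu.Unlock() }()

	f, err := os.OpenFile(streamPath(id), os.O_CREATE|os.O_WRONLY|os.O_APPEND, 0o644)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	defer f.Close()
	io.Copy(f, r.Body) // stream the request body straight to disk
}

// 2 & 3 - Downloading, live or after the fact: send what's on disk, and if
// the upload is still in progress, poll for more audio instead of stopping.
func download(w http.ResponseWriter, r *http.Request) {
	id := r.URL.Query().Get("id")
	f, err := os.Open(streamPath(id))
	if err != nil {
		http.NotFound(w, r)
		return
	}
	defer f.Close()

	flusher, _ := w.(http.Flusher)
	buf := make([]byte, 32*1024)
	for {
		n, err := f.Read(buf)
		if n > 0 {
			w.Write(buf[:n])
			if flusher != nil {
				flusher.Flush()
			}
		}
		if err == io.EOF {
			mu.Lock()
			stillRecording := live[id]
			mu.Unlock()
			if !stillRecording {
				return // upload finished and we've sent everything we have
			}
			time.Sleep(200 * time.Millisecond) // wait for more audio to land
			continue
		}
		if err != nil {
			return
		}
	}
}

func main() {
	os.MkdirAll("streams", 0o755)
	http.HandleFunc("/upload", upload)
	http.HandleFunc("/download", download)
	http.ListenAndServe(":8080", nil)
}
```

So, roughly: one client could do `curl -T - 'http://localhost:8080/upload?id=demo'` to push audio as it's captured, while another runs `curl -N 'http://localhost:8080/download?id=demo'` and hears it live (or later, as a normal download). That's the shape of the thing; the question is what I should actually build it with.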
I'm wondering what kind of technologies I should be looking at for this. What languages/frameworks/libraries should I use for the server portion? And how well can I make it scale? Put another way: how many simultaneous users could I support with only my 2007 iMac or 2006 Mac mini as servers? I figure I'll expand onto AWS if the userbase gets too large for my personal machines to handle.
I think I know how the client end (iOS) should work... it's the server end that I'm more confused about.