So, for how long did you make music? And why did you quit doing it?

Wow, never would have guessed about you also doing FL Studio ! That's freaky cool !

I'm dead tired, so will respond in detail later, but want to leave you with a link to my latest (unfinished) music creation using FL Studio:
 

Nice one!
Well, you can hear it's not a completely finished song, but I like the sound design and especially the ambience.

I can give you an old contest entry of mine back from 2014. Not really a song, but a score to a movie trailer:

Only used FL Studio for that, and only the video was provided without samples or any audio.
Was fun to participate back then, especially creating all those weird sounds. Was also very challenging to get it roughly in sync with the movie clip. Sadly didn't win tho :p

Most of my other stuff I haven't uploaded anywhere. I don't have any finished songs anyway; I'm always getting trapped in a loop or in sound design. So taking a break from producing was probably the right thing for me, which I used to spend time on music theory, song structuring and listening to a lot of music while making 'notes' in my mind.

But anyway, not sure if this is the right thread for this conversation, maybe PM would be more appropriate for this, so we don't spam this thread with unrelated stuff to the original topic :D
 
So I tried optimizing your Aural UI using the Core Animation instrument, and was able to reduce blending almost completely, increasing performance by a LOT. Scrolling is now super smooth.

For this I adjusted the CALayers, removed them where they weren't necessary and enabled them where they gained performance.
Also adjusted the size of the table text boxes to wrap around the content, so they are as small as possible, which also reduces blending and simplifies/reduces autolayout at the same time.

Regarding expanding/collapsing groups, the performance can also be improved, but probably by changing some code itself to avoid redrawing. As it is now, all cells get redrawn when expanding a group.

So here's a before and after (the red one being before):
ca_layer_comparison.png


- Red means blending which is bad for performance (the lighter the red color the more blending occurs)
- Green means no blending
- No color means no layer

Edit:
Turning the layer back on for the main window reduces the CPU spike when showing/hiding the effects panel and also gets rid of some white flashes when pressing specific buttons.

Also measured the performance differences on the playlist with my changes. Reduced the CPU load while scrolling on a maximized playlist by half (~25% CPU instead of ~50%). That's why it got so much smoother.
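For anyone curious what that kind of adjustment looks like in code, here is a minimal sketch (class name and color are illustrative, not the actual Aural Player code): a layer-backed view with an alpha-free background that declares itself opaque, so the window server can skip blending it against whatever is underneath — the difference between a red and a green region in that kind of overlay.

```swift
import Cocoa

// Sketch only: an opaque, layer-backed cell view. The class name and
// background color are illustrative, not the real Aural Player code.
class OpaqueCellView: NSView {

    override func awakeFromNib() {
        super.awakeFromNib()
        wantsLayer = true                                 // back the view with a CALayer
        layer?.backgroundColor = NSColor.black.cgColor    // fully opaque fill, no alpha
        layer?.isOpaque = true                            // promise CA nothing shows through
    }

    // Lets AppKit skip drawing (and blending) the views behind this one.
    override var isOpaque: Bool { return true }
}
```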
 
Wow! Just ran across this app yesterday. I had not been in this part of the forum in a long time. This is an awesome app, much nicer and simpler to use than VLC. Although I did not notice any volume fluctuations between songs (I don't think there's any volume normalization within the app), do you think there will be a volume "Normalization" option added in the future? :)
 

Hi, thanks for the great feedback. Glad you like this app.

To be honest, although it provides a decent bit of sound tuning functionality, this app is not as sophisticated as VLC or some other audio players out there, in terms of functionality or even configurability. Where it scores high marks, as you noticed, is in its ease of use (and that is its mission).

About volume normalization, you're right that it is not currently a feature, but I will add it to the TODOs list, and see if it is something that I can squeeze in in the future. (It is fairly complex to implement, from what I know so far, and I'm still learning Swift/macOS programming).

Thanks again for the feedback.

So I tried optimizing your Aural UI using the Core Animation instrument, and was able to reduce blending almost completely, increasing performance by a LOT. Scrolling is now super smooth.

Bravo ! Now how do we merge your changes with mine :D

Or, instead of merging, I could just study (and then replicate) your changes on your branch in GitHub. Is this on your test branch ?

Thanks !
 
Bravo ! Now how do we merge your changes with mine :D

Well, I could do a pull request if you want to.

Or, instead of merging, I could just study (and then replicate) your changes on your branch in GitHub. Is this on your test branch ?

No, my clone of your branch isn't on GitHub, but I can change that.

Whether you want to adopt those changes, and which way you do it, is up to you.
So either a pull request, me uploading my clone to GitHub, or I can also explain the changes I made here.

So you decide which way you prefer :p
 

Hmm, from my sordid experience with (automated) merges, I'm going to say a resounding "No !" to pull request.

If you could upload your clone to a new branch, that would be best, I suppose. Now, when you say clone, do you mean that it is otherwise identical to my Swift 3 branch (except for your performance tweaks) ?

So, yeah, I think that would be easiest for you too. Upload the clone to a new branch under your repo, maybe call it "Smoothie" (a pun for "super smooth playlist scrolling" and a reference to blending :p).

Then, I will see if I can do a code diff on the files that you've tweaked (assuming all the XIBs).

Danke.

------------------------------

P.S. You do realize you're making me look bad :p :D LOL (JK)
 

I think it would be worth getting more into the whole merging stuff, since with that you can very easily compare two versions, and then merge the stuff manually. This way you don't have to look for the changes, since the version editor directly shows them. Also it should be possible to revert to the previous state in case something goes wrong with the merge (at least when doing an automated merge).

But if you want me to upload it separately I will see what I can do.
In the meantime I spent some hours with something I really wanted to try, since it's something I haven't done before in any way.

So, I got rid of the GIF animations and recreated them with Core Animation and CAReplicatorLayer.
To do this I implemented an '@IBDesignable AnimatedView: NSView' class.
The animations are super smooth and use way less CPU.

The Playlist and the NowPlaying view use the same animation, but with different sizes and a different number of 'visualizer bands'. The number of bands and the gap between them can be fully adjusted with @IBInspectable variables in IB, or in code via the initializer. Those @IBInspectables don't work yet, though; that's something I have yet to solve.

With this, all the code for pausing/resuming the GIF animation can probably be removed
(at least for the NowPlaying view, since there the animation uses literally about 0% CPU).

In the Playlist however the animation uses about 3% CPU, which is acceptable I think. Maybe I'll find a way to also reduce the CPU for this one some more. Tried a few things without success, but have 1-2 more things in mind, which may work.
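For context, a replicator-based visualizer along those lines could look roughly like this (a sketch with illustrative names and values, not the actual AnimatedView implementation):

```swift
import Cocoa

// Sketch only: one source "bar" layer plus a CAReplicatorLayer that
// stamps out the copies. Names, sizes and timings are illustrative.
@IBDesignable
class AnimatedBarsView: NSView {

    @IBInspectable var barCount: Int = 5
    @IBInspectable var gap: CGFloat = 2

    override func awakeFromNib() {
        super.awakeFromNib()
        wantsLayer = true

        let barWidth = (bounds.width - CGFloat(barCount - 1) * gap) / CGFloat(barCount)

        // One source bar; the replicator creates the rest.
        let bar = CALayer()
        bar.anchorPoint = CGPoint(x: 0.5, y: 0)   // scale from the bottom edge
        bar.frame = CGRect(x: 0, y: 0, width: barWidth, height: bounds.height)
        bar.backgroundColor = NSColor.systemGreen.cgColor

        let replicator = CAReplicatorLayer()
        replicator.frame = bounds
        replicator.instanceCount = barCount
        replicator.instanceTransform = CATransform3DMakeTranslation(barWidth + gap, 0, 0)
        replicator.instanceDelay = 0.15           // stagger the copies in time
        replicator.addSublayer(bar)
        layer?.addSublayer(replicator)

        // A single looping scale animation, offset per copy by instanceDelay,
        // produces the equalizer effect.
        let bounce = CABasicAnimation(keyPath: "transform.scale.y")
        bounce.fromValue = 0.2
        bounce.toValue = 1.0
        bounce.duration = 0.4
        bounce.autoreverses = true
        bounce.repeatCount = .infinity
        bar.add(bounce, forKey: "bounce")
    }
}
```

Because the animation runs entirely on the render server rather than the app's main thread, it costs next to no CPU compared to decoding GIF frames.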
 
I think it would be worth getting more into the whole merging stuff, since with that you can very easily compare two versions, and then merge the stuff manually. This way you don't have to look for the changes, since the version editor directly shows them. Also it should be possible to revert to the previous state in case something goes wrong with the merge (at least when doing an automated merge).

Ok, but merge from where ? Your Swift4 branch ?

And merge how ? Using a pull request ? Maybe you can send me a pull request, just so I can see the diffs, and then I can copy the changes manually. That is one possibility.

But, I'm really confused as to why you can't just upload the clone (assuming it is a near-clone of my code). Isn't that the easiest thing to do ? Or just zip the source and attach it here. I can then import it directly into my Xcode and use the built-in diff tool (just tell me which files you worked on).

I need more information from you to decide what is best.
 
Ok, but merge from where ? Your Swift4 branch ?

The thing is, I have my Swift4 branch as a fork. So the only remote on that one is my own GitHub.
I have a second clone which is just a direct clone from your remote. So the remote is on your GitHub.
And on that clone I made the changes. I'm not an expert on this, so I don't really know the best way to do this.
I thought I could just push to your remote, which would give you a pull request. So no import needed; at least that's how I think this should work.

Maybe you can send me a pull request, just so I can see the diffs, and then I can copy the changes manually. That is one possibility.

That's what I think would be best, but again I'm not exactly sure how to do a pull request directly from Xcode, since here I can just commit to your remote. Not sure if this would create a PR.
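For reference, the usual fork-based flow looks something like this (branch and file names are just examples); the PR itself is opened on github.com, not created by a push to the upstream remote:

```shell
# Sketch of the fork-based PR flow (branch/file names illustrative).
# Committing or pushing to the upstream remote won't create a PR (and
# needs write access anyway); you push a branch to your OWN fork and
# open the PR on github.com.

# scratch repo, so the sketch runs anywhere:
cd "$(mktemp -d)"
git init -q .
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "base"

git checkout -q -b ca-layer-tweaks           # topic branch for the changes
echo "tweaked" > Effects.xib
git add Effects.xib
git -c user.email=me@example.com -c user.name=me commit -q -m "Reduce CALayer blending"

# then: git push <your-fork-remote> ca-layer-tweaks
# ...and click "Compare & pull request" on github.com
```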

Also finished the animations and added pause/resume to it.

I will try to make a PR (hopefully I don't **** up, but with git it should be possible to restore any specific state anyway).
Then I will edit this post or reply again.
 
OMG! This whole git thing drives me totally crazy.
So I somehow cannot create a PR.
When creating a new repo and uploading it, it takes hours, since it's over 500 MB and GitHub upload is slow as ****.

So I tried reducing the size with git gc, but it made no difference. Then I did some research and found out about shallow copying, but this doesn't work on a local repo. Like, WTF!! Why shouldn't this work? Why can't I just take my ****ing local .git repo and make a clone with only the latest commit? This is the most stupid **** I've ever seen.

Well, now after some ranting about git, I decided to manually create a new repo and upload only the plain .swift and .xib files with the changes. So I hope this works out for you:

https://github.com/Dunkeeel/aural-player-clone

Keep in mind that there will be some changes listed in the XIB files because I use a newer version of Xcode. So just ignore those and don't merge them.
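For what it's worth, the "shallow copying doesn't work on a local repo" wall does have a workaround: a plain local path bypasses git's network transport, so --depth is silently ignored, but a file:// URL goes through the normal transport and honors it. A runnable sketch (repo names/paths are illustrative):

```shell
# Workaround: --depth is ignored for plain local-path clones, but works
# through a file:// URL, which uses git's normal transport.

# scratch repo with two commits, so this runs anywhere:
cd "$(mktemp -d)"
git init -q big
git -C big -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "one"
git -C big -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m "two"

# depth-1 clone of the LOCAL repo via file:// -- only the latest commit:
git clone -q --depth 1 "file://$PWD/big" small
git -C small rev-list --count HEAD
```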
 
I decided to manually create a new repo and upload only the plain .swift and .xib files with the changes. So I hope this works out for you

Yes, this is the best solution. This tells me exactly which files I need to look at. And, I can do diffing locally within my Xcode, so I don't really need a pull request for that.

I understand your frustration with Git. I've had similar 4-letter-word cursing sessions myself :D

Awesome work on the animations ! I will check 'em out, and see if I can add a little color gradient to the spectrogram bars to spice 'em up a bit ;)

BTW, you don't need to worry about writing code to pause the animation in the playlist; I already have that part done. I will absorb your animation code into my codebase, and upload the finished product. You can then do another pull on your end to see what it looks like.

BTW, last night, I played around with the AudioKit framework a bit and looked at their source. They're using AVAudioEngine under the covers for a lot of what they're doing, but they're also using plain Audio Units (which is lower level Obj-C / C code).

I have to do more research, but it is a possibility that Aural Player will have a Spectrogram visualization in the future, if AudioKit FFT analysis proves to be real-time. Since it looks like AK is using AVAudioEngine under the covers (and that is precisely what Aural Player currently uses), it may not be a lot of work to harness AudioKit for specific features like real-time analysis for Spectrograms or volume normalization.
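Whether via AudioKit's AKFFTTap or directly, both approaches presumably boil down to a node tap like this (a sketch; the buffer size and the empty handler are illustrative):

```swift
import AVFoundation

// Sketch only: AudioKit's AKFFTTap ultimately installs a tap like this
// on an engine node and runs an FFT over each delivered buffer.
let engine = AVAudioEngine()
let mixer = engine.mainMixerNode

mixer.installTap(onBus: 0,
                 bufferSize: 1024,
                 format: mixer.outputFormat(forBus: 0)) { buffer, _ in
    // buffer.floatChannelData holds the raw samples; an FFT over them
    // yields the magnitude spectrum for the visualizer. Taps deliver
    // buffers slightly behind playback, which is one reason neither
    // approach ends up perfectly in sync with the music.
}
```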
 
Yes, this is the best solution.

Yeah, at the end of the day, it probably is. It also means I have to upload only the relevant bits, which saves quite a lot of upload time.

I understand your frustration with Git. I've had similar 4-letter-word cursing sessions myself

I'm glad I'm not the only one. This makes me feel less stupid xD

BTW, last night, I played around with the AudioKit framework a bit and looked at their source. They're using AVAudioEngine under the covers for a lot of what they're doing, but they're also using plain Audio Units (which is lower level Obj-C / C code).

Yeah, I know AK is using underlying C code for some parts. I already had a look at AudioKit in the past, and someday I wanted to tackle the whole Analyzer thing myself as well. I'm really interested in that kind of stuff, not only for this project, but in general.

We already did some FFT and audio manipulation back in college. For example, we analyzed the engine sounds of a car with Matlab to see if there's engine knock occurring (not sure if that's the correct term in English). To think that you can diagnose a whole lot of stuff going on in a car just with some recorded audio file is amazing.

I'm also thinking of doing a very simple audio player on my own, just to learn things, since working with the whole audio engine part on an existing project seems so hard to follow.

BTW, you don't need to worry about writing code to pause the animation in the playlist; I already have that part done.

That's good to know. Didn't have the time to do so earlier, even though it's probably just a couple of lines.
 
if there's engine knock occurring (not sure if that's the correct term in English)

Yes, the right word is "knocking" in English. You mean knocking that occurs when the combustion cycle is offset by the premature ignition of the air-fuel mixture (i.e. prior to ignition by the spark plugs at the end of the compression stroke) due to the use of a lower-octane-content fuel, right ? If so, yes, "knocking".

In aviation crash investigation, they also use audio analysis for computing engine thrust levels, using the pitch of the engine sounds from the CVR (cockpit voice recorder) when the FDR (flight data recorder) is unavailable after a crash, to determine if the engines were operating normally at the time of impact. They used this kind of analysis after Air Florida 90 (the Potomac river crash). Similar in principle to knocking detection, I suppose.

And yes, it is amazing !

I'm also thinking of doing a very simple audio player on my own, just to learn things, since working with the whole audio engine part on an existing project seems so hard to follow.

That's cool. If you wanna see what I've done with AVAudioEngine, you really only need to look at 3 files: AudioGraph, Player, and BufferManager (Recorder if you're interested in recording). I've documented stuff quite well there. At least, looking at the code will give you pointers into the AVAudioEngine framework.

There are also some WWDC videos that talk about AVAudioEngine, but I found them lacking in depth. I had to figure things out the (really) hard way when I started Aural Player. For instance, nowhere on the "World Wide Web" is it documented that AVAudioPlayerNode.scheduleFile() and scheduleSegment() simply cannot handle MP3 files with incorrect duration metadata ... resulting in buffer overruns, massive memory leaks, and consequent system freezes ! That is why the class BufferManager exists. I had to figure out on my own that I needed to schedule audio buffers myself, to account for MP3 files that were ripped from YouTube (i.e. most of my collection), which may have inaccurate TLEN tags.
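That workaround might be sketched like this (names and chunk size are illustrative, not the actual BufferManager): read fixed-size chunks with AVAudioFile, so playback ends when the file actually runs out of frames, regardless of what the TLEN tag claims the duration is.

```swift
import AVFoundation

// Sketch only: schedule buffers one by one, stopping when read()
// delivers no more frames -- i.e. the REAL end of the audio data,
// no matter what the duration metadata said.
func scheduleNextBuffer(from file: AVAudioFile, on player: AVAudioPlayerNode) {
    let chunk: AVAudioFrameCount = 44100 * 5    // ~5 seconds of audio per buffer
    guard let buffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                        frameCapacity: chunk) else { return }
    try? file.read(into: buffer)                // advances file.framePosition

    // frameLength == 0 means the audio data is exhausted.
    guard buffer.frameLength > 0 else { return }

    player.scheduleBuffer(buffer) {
        scheduleNextBuffer(from: file, on: player)   // keep reading until done
    }
}
```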
 
You mean knocking that occurs when the combustion cycle is offset by the premature ignition of the air-fuel mixture

yeah that's exactly what I meant :D

In aviation crash investigation, they also use audio analysis for computing engine thrust levels using the pitch of the engine sounds

That's really interesting. Didn't know that. Don't really have much knowledge of aviation stuff. Will also reply to your PM, but I'll do that when I have enough time to spare, so don't worry, I already read that message. :p

I've documented stuff quite well there. At least, looking at the code will give you pointers into the AVAudioEngine framework.

Yeah, I think having your code as a backup to look at will speed up the learning process by a lot. Even though I'll try to do an unbiased implementation (don't wanna just copy your code), at least as long as I can keep going forward and don't hit a wall at some point. Then I will definitely make use of your nicely documented code to help me out ;)

Sometimes I even think your code is too well documented :p Like, there are comments on some really obvious stuff where I don't think they're needed. But then again, you never know who looks at the code, and maybe for someone it's helpful.

Thanks for the tips with AVAudioEngine. The whole buffer thing was probably hard to figure out on your own, I guess. So I very much appreciate your help :)
 
Even though I'll try to do an unbiased implementation (don't wanna just copy your code), at least as long as I can keep going forward and don't hit a wall at some point.

I know that you like to learn things on your own. Don't worry ... with AVAudioEngine, you will have no choice but to do just that :) There is little useful documentation for it, and like with anything else, you will have to sift through 999 pages of iOS horse$#!t for one single relevant page for macOS.

It took me weeks ... let me say that again ... weeks ... no, you're not misreading this ... weeks ... wochen ... W E E K S ... to figure out what the heck was the problem with AVAudioEngine ****ing up when playing my MP3 files that have incorrect duration metadata. My rationale was as follows: Apple is a big company, and they pay their "super-smart" developers well. There's no reason they would not have anticipated an exceedingly easily reproducible, and disastrous (in terms of consequences) problem such as playing a file with inaccurate duration metadata ? Right ?!

Well, I made an ass out of myself with that ass-umption.

The Aural Player project pretty much ended in its first couple of weeks, before it had hardly begun. I was ready to give up when I realized that I would have to do my own buffer reading/scheduling (because that IS ridiculous, and I still think it is). In the end, I'm glad I didn't give up, but that particular problem really really ... really ... tested my patience. To this day, I have not found one single page of anyone mentioning that fundamental bug in AVAudioEngine ... anywhere online.

If, someday, you do decide to write your own player, I would be very interested in seeing if you're able to reproduce the problem. So, let me know, and I will share an MP3 file with you. All you have to do is play it (using either playerNode.scheduleFile() or playerNode.scheduleSegment()) and wait for it to get to the end of the file. What will likely happen is ... you will notice a sudden system freeze. If you happen to have your Xcode performance monitoring on, you will see memory usage spiking to the GBs within seconds. If you record the spike with Instruments, you'll see a large number of audio buffers being scheduled.

So, yeah, I struggled with it for weeks ... you get to know about it for free ;)

EDIT - Holy crap ! I tried reproducing the problem just now - nope, it works fine ! :eek::rolleyes:

I was on Yosemite (and hence, using an older version of AVAudioEngine, which was still fairly new and untested when Yosemite came out) when I ran into that problem. I guess they've fixed it by now :D

I'm going to give their scheduleFile() and scheduleSegment() another try to see if they work reliably with different kinds of files.
 
to figure out what the heck was the problem with AVAudioEngine ****ing up when playing my MP3 files that have incorrect duration metadata

Well, while this sucks, the priority for Apple to fix this probably wasn't that big. I mean, generally, songs you buy somewhere are sure to have correct metadata. Why should Apple support playing audio extracted from YouTube? So I don't think Apple can really be blamed here.

Also, I have music players on Windows that also don't play songs downloaded from YouTube.
So it's not macOS specific.

Holy crap ! I tried reproducing the problem just now - nope, it works fine ! :eek::rolleyes:

Damn, this kind of sucks, I guess. Maybe Apple came across Aural and copied your code :rolleyes:
But your own implementation is probably better anyway. This way you can fix stuff on your own in case something breaks again.

---

Made some visual changes to the effect tab buttons and also have some suggestions for the effect tabs itself.

screenshot_72.png

So I removed the labels since there are now icons anyway.
Adding a label to the tab content itself would be sufficient I think.
Plus a help button (like a simple question mark button in the corner of each tab) which opens the User Guide for the selected effect. I mean, once you're familiar with an effect and its parameters, you usually don't need the description anymore anyway, so it just clutters the UI.

As a consequence, the box for the sliders can be removed and they can be added directly to the view, making things look cleaner and more streamlined.

If you like my proposal I can make those changes and upload the changed Effects.xib :)
 
Made some visual changes to the effect tab buttons and also have some suggestions for the effect tabs itself.

I like the changes to the FX tabs. I assume you've added tool tips that tell the dummy user (hahahaha ... LOL at their expense) what each tab is ?

Lemme tell you something, kid ... you're too smart for your own good :p When are you coming to California so I can buy you a beer ? (a nice Oregon beer like Rogue ... if you love beer, you surely know that Oregon is USA's beer capital)

Wie alt bist du ? Just curious.

If you like my proposal I can make those changes and upload the changed Effects.xib

Go ahead and upload it. Lemme see what it looks like (by running it locally), and then we can make it final. I definitely like the tab button changes.

Why should Apple support playing audio extracted from YouTube? So I don't think Apple can really be blamed here.

Agree to disagree on this, mein freund.

Are you really telling me that Apple can ignore support for music ripped from a provider as large as YouTube ? I mean ... my great-grandmother, were she still alive, would have heard of YouTube, and would likely use it.

Everybody listens to songs on YouTube, and a good portion of users rip MP3s from there. Not everyone jumps to the conclusion, "Let's buy this song !" I don't think so.

And, we're not even talking about anything illegal. Ripping songs from YT is perfectly legal, at least for now.

YouTube is just one such music source.
 
I assume you've added tool tips that tell the dummy user (hahahaha ... LOL at their expense) what each tab is ?

That's why I suggested adding a label to the top of each tab stating the effect's name :p
To stress that I could make the font size huge :D

When are you coming to California so I can buy you a beer ? (a nice Oregon beer like Rogue ... if you love beer, you surely know that Oregon is USA's beer capital)

Well, I already traveled through the States this year in June/July (California, Nevada, Utah, Arizona and Missouri/Illinois), so it will probably be a while before I come back.

I actually met some people from Oregon, so yeah, they told me about the beer, even though craft beer isn't really my thing.
But the American beer wasn't so bad at all, and there was a lot to choose from.

We also visited the Anheuser-Busch brewery in St. Louis, which was amazing to see. Even though I didn't like Budweiser so much.

So what beer from Oregon can you recommend the most? So I can try it when I visit the States again. ;)

haha, I see this going off-topic again :D

Wie alt bist du ? Just curious.

I'm 24, how about you?

Go ahead and upload it.

Will do ;)
 
someday I wanted to tackle the whole Analyzer thing myself as well.

In the past, I had played with Spectrogram rendering (using FFT code borrowed from different places), responding to a tap on the AVAudioEngine's main mixer node.

Now, I'm using AudioKit's AKFFTTap class to achieve the same thing. Honestly, I don't know that their tap is any more real-time than what I had tried in the past. It doesn't look to be more real-time (i.e. in sync with the music). Which is no surprise, given that their tap, under the covers, is the same AVAudioEngine tap I had used.

This is a pity :( If it ain't real-time, it may be a waste of time :)

But here's my first stab at the bar spectrogram rendering. This is just a shot in the dark at the moment. Several things need to be fleshed out. I need to refine the logic that picks the data points (there are 512 data points but only 10 bars drawn) and determines the height of the bar for each frequency band, and determine an appropriate multiplier to keep the bars within the appropriate height range (keeping in mind the varying player amplitude).
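The bin-picking step could start out as simple as averaging the bins that fall into each bar's slice of the spectrum (a sketch; the function and parameter names are made up):

```swift
// Sketch only: collapse N FFT magnitudes into barCount bars by averaging
// the bins in each bar's slice, then scale and clamp to the height range.
func barHeights(magnitudes: [Float], barCount: Int, multiplier: Float) -> [Float] {
    let binsPerBar = magnitudes.count / barCount
    return (0..<barCount).map { bar in
        let slice = magnitudes[bar * binsPerBar ..< (bar + 1) * binsPerBar]
        let avg = slice.reduce(0, +) / Float(binsPerBar)
        return min(avg * multiplier, 1.0)    // clamp to the bar's height range
    }
}
```

A real spectrogram would probably want logarithmically spaced bands, since most musical content lives in the low bins, but linear slices are the simplest starting point.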

Finally, of course, I also need to work your CA layering magic on it to make it performant :) But here it is:

The song playing is Toby Emerson - Secret Life (Dub) ... the tempo is 128 BPM.

vis.gif
 
Finished the basic UI for my own player ;)

basic_player_UI.gif


On the bottom will be the playlist.
Next thing is to make it play a song. After that make it display the cover and the metadata.
And last step is to make the playlist work.

Will try to keep it as minimal as possible.
Top Toolbar (where the Toggle UI button is) will be customizable with some more functions.
Full title information will be visible when clicking on the Title/Artist area.
The album cover will show in its original size upon clicking the cover art, provided cover art is available.

Since the UI is totally scalable, I will also try to implement a status bar popout mode like you did, and will make it draggable to undock/dock it to the status bar. Docked mode will have a fixed width.

So, that's the plan for the first release of my player.

Then I'll probably release it in the store.
After that I'll try to hopefully make SoundCloud and YouTube integration, so I can get some experience working with 3rd-party APIs.

But that's just distant future.
The only thing I know is, there are totally awesome desktop clients for SoundCloud and YouTube on Windows as UWP apps, but there is no really nice solution for macOS. I used to use the VOX player, but it constantly runs in the background even when closed. Also, I don't like their business model.

Finally, of course, I also need to work your CA layering magic on it to make it performant :) But here it is:

Well, I'll have a shot at it myself. I'll make a playground with an auto-looping MP3 file (so I can learn how to play audio files for my own player as well) and, based on that, build the spectrum analyzer.

The playground was also where I built the now playing animation view, since there you can have live display of the rendered output.

To make the rendering smooth, all you probably need is to use CAReplicatorLayer like I did with my animation, then scale each replicated CALayer by the amount of the corresponding frequency band's volume. But I'll have a look at it when you've uploaded the analyzer.
 
so I can learn how to play audio files for my own player as well

With AudioKit, this is a 3 or 4 liner.

I'm considering redoing (i.e. simplifying, hopefully) Aural Player's back end to use AudioKit instead of AVAudioEngine directly. Their player already has all the ugliness (buffering/looping) that I had to write in BufferManager. And, supposedly, it works reliably, coz a lot of people seem to be using it.
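For the record, the "3 or 4 liner" would look roughly like this against the AudioKit-4-era API (exact signatures varied between releases, and the file name is illustrative), e.g. in a playground:

```swift
import AudioKit

// Sketch only, against the AudioKit 4 API as of late 2017:
let file = try AKAudioFile(readFileName: "song.mp3")   // file name illustrative
let player = try AKAudioPlayer(file: file)
AudioKit.output = player
AudioKit.start()
player.play()
```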
 
That's pretty cool. Well, if you think it's worth changing the backend, then it might be worth a try. So, what's the motivation? Future features being easier to implement? Or is there some other advantage, too?

Right now I'm building the AudioKit framework via Carthage, so I can use it in some sample playgrounds and have some fun. :cool:

Ok, I guess some other advantages would be:
- by having the audio engine as a framework build times could get reduced
- plus you get all of the improvements/features other people add to AudioKit for free

Correct me if I'm wrong.
 
Then I'll probably release it in the store

You're aware that to release it in the store, you need an Apple developer membership, right ? It costs about USD 99 yearly.

I guess the main motivation is just the good ol' "Don't reinvent the wheel", and the possibility that I could cut my audio graph code in half. Not the fx code, but mostly the player code (buffer scheduling / segment looping).

When I started working on Aural Player, I had never heard of AudioKit. I started off with AVAudioEngine. And, frankly, there is nothing wrong with working directly with AVAudioEngine, because it gives you more control over the DSP chain. Also, I learned a lot about the fundamentals by working with AVAE. So, it was a good start.

However, at this point, having seen that AudioKit also works with AVAudioEngine under the covers, and having some confidence that their framework works, it may be worthwhile to just swap out my buffering code, i.e. the reinvented wheel, for a tested and trusted 3rd party framework that works, thus reducing my maintenance overhead (fewer lines of code).

From what I have seen, AudioKit exposes the underlying AVAudioEngine as a static var, so, if, at any time, I need access to it, I can grab it easily and work with it directly.

Yes, it is also nice for future extensibility and all the nice bonuses I get by plugging into AudioKit. For instance, maybe someday, I could add a tunable synth to play alongside the player track ;) If someone wants to do some live impromptu performance, adding a synth sound to the original track while it is playing, ... AK can do that. Who knows what might come of it someday :)

At this point, it is just an idea. I have to see and make sure this is possible, while not breaking any functionality. I suspect it should be fairly straightforward, but need to prove the concept.
 