
PetRock
macrumors newbie, original poster
Mar 16, 2005
It seems to me that GCD might make it easy to screw up in new ways.

Look at John Siracusa’s review of GCD at ArsTechnica. (Really, look at it, because I’m not going to summarize it here.)

Siracusa correctly describes the frustrations end users might have with the pre-GCD code. Something like:
USER: <click>

MAC: [“Analyze” button visibly depresses for a moment. The spinning Beach Ball appears.]

USER: [drums fingers for a few seconds...] “HOW COME THIS MAC IS SO &@%^)@ SLOW?!!!”
Siracusa says GCD can be easily used to make the analysis asynchronous, with just two extra lines of code. However, he neglects to consider what the user experience would be like with his suggested changes:
USER: <click>

MAC: [“Analyze” button visibly depresses for a moment.]

USER: [drums fingers for a few seconds...] “Hey, how come nothing’s happening?” <click> [pause] “What’s going on?” <click><click><click>
Great, now that long analysis is going to be performed repeatedly. Also, Siracusa’s suggested change invites the user to continue editing the document while it is being analyzed, which is asking for trouble.

There are ways to deal with these problems, but it’s not trivial. If programmers try to dispatch the analysis with just two lines of code like the ArsTechnica review suggests, they’re going to introduce some nasty bugs.
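For concreteness, the two-line change being discussed looks roughly like this (the `analyzeDocument:` action and the `myDoc`/`myModel`/`myStatsView` names follow the review’s example; this is a sketch, not a complete solution):

```objc
- (IBAction)analyzeDocument:(NSButton *)sender
{
    // Push the slow analysis onto a background (global concurrent) queue.
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSDictionary *stats = [myDoc analyze];
        // Hop back to the main queue before touching the UI.
        dispatch_async(dispatch_get_main_queue(), ^{
            [myModel setDict:stats];
            [myStatsView setNeedsDisplay:YES];
        });
    });
}
```

Note that nothing in those two dispatch_async() calls prevents the user from clicking “Analyze” again, or from editing the document while the analysis runs, which is exactly the problem described above.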

If developers believe all the hype going around about how simple GCD is, and that they can dispatch the analysis to another thread with just two lines of code like ArsTechnica tells us, we’re in for a flood of new bugs. If programmers who don’t understand the issues involved in multi-threaded programming start to believe that they can write multi-threaded programs because everybody’s telling them that GCD makes it easy, they’re going to write bad code.

On the other hand, if developers are careful when they use GCD, they will quickly realize that it’s a lot of work, and the luster will come off the hype that everybody is spreading about GCD making multi-threaded programming easy. Easier than POSIX threads or other multi-threading libraries, sure, but nowhere near as easy as is generally claimed. Creating, dispatching, and waiting for jobs takes less code with GCD than with POSIX threads, but really, dealing with the Pthreads API has never been the biggest challenge in designing thread-safe code. Claims like “GCD doesn't require developers to think about how best to split the work of their application into multiple concurrent threads” are dangerous.

I don’t mean to pick on Siracusa. ArsTechnica isn’t the only one claiming that the process of making your application multithreaded becomes almost trivial with GCD. It’s the same all over with everybody saying, “Look at all you can do with just two additional lines of code!”
 
GCD's appeal from my point of view is how extremely cheap it is. It doesn't make existing parallelization opportunities that much easier to exploit, but it lowers the "worth it" bar for new ones.
 
I didn't want to agree with you, but after thinking about it for a few minutes, you are very correct. Siracusa makes it sound easier than it is, and his own example betrays the fallacy of thinking that GCD does anything to solve concurrency problems. Hopefully his [myDoc analyze] method locks the document properties it will be measuring before it executes, and unlocks those properties when it's done. That's clearly glossed over.

But GCD still solves a very important problem -- pooling threads and sizing the pool to the available hardware.
 
Interesting stuff here...
I agree with what you have said to a certain extent. As mentioned below, I think one of the main points of GCD was to centralize the control of thread pools. It puts the OS in control instead of making the developer figure out what is going on for every kind of hardware that might be available.

It's not as if the developer could escape the same user-interface issues with POSIX threads anyway. I think what Apple is trying to avoid, and rightly so, is the GUI freezing/beachballing at ANY point in time. If there is a chance that something is going to take any noticeable amount of time, then it needs to be forked off using GCD.

I didn't want to agree with you, but after thinking about it for a few minutes, you are very correct.
...
But GCD still solves a very important problem -- pooling threads and sizing the pool to the available hardware.

This is probably one of the greatest parts about GCD. It removes the guessing and checking part of setting up threading on each and every system. Taking that variable away from devs will be a good thing in the long run, and as GCD is expanded and improved it will lead to better Mac OS X applications.
 
It seems to me that GCD might make it easy to screw up in new ways.
...

I don't think too many programmers will mistake John Siracusa's review for a full-fledged developer's guide to GCD. Hopefully. He's mostly describing what GCD + blocks can do, not what it doesn't do. GCD+blocks takes care of some issues involved in making efficient use of available computational resources, but hardly all.
 
The fundamental problem...

... is that there is a process (in this example analyzing the document) that takes a noticeable amount of time to the user (in this example several seconds).

So the "old way" of doing that calculation in the main UI thread and locking up the interface is one way of handling this. The "new way" of using threads (I'll treat GCD as a special case of the "new way") allows the interface to respond during this analysis, but doesn't necessarily speed up the analysis.

However, when coupled with multi-core hardware, the analysis time is indeed sped up. It's not the thread/GCD that speeds it up, it's the combination of software (threads) and hardware (multiple cores).

In the above example, assume the analysis takes eight seconds on one core and that you've got an eight-core machine. In the old way, you hit the button, the UI freezes for eight seconds, then you get your answer and you go on with your life (possibly complaining about how slow your Mac is). In the new way, you hit the button and get the answer back in approximately one second. Not enough time there to hit the request button multiple times.

But what if the analysis takes eighty seconds the old way? Then the new way takes about ten seconds (probably longer), which gives the user plenty of time to hit the button multiple times.

And yes, you're right, giving everyone a shiny new power saw is going to result in more sawed off thumbs.
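To be clear, the speedup in the eight-core scenario only happens if the analysis itself is split up; dispatching one big block doesn't do it. A sketch of one way to split it with GCD, assuming the work decomposes into independent chunks (`analyzeChunk:` is a hypothetical per-chunk method):

```objc
// Only valid if the chunks are truly independent of each other.
dispatch_apply(8, dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0),
               ^(size_t i) {
    [myDoc analyzeChunk:i];   // hypothetical per-chunk analysis
});
// dispatch_apply() blocks until every iteration has finished,
// so it should itself be called off the main thread.
```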
 
... is that there is a process (in this example analyzing the document) that takes a noticeable amount of time to the user (in this example several seconds).
...

Really the "correct" way of doing business is to spin the work off to a separate thread while you use the main thread to give the user some idea of the progress, usually through a progress bar. Even before GCD, spinning off another thread with a selector was one line of code; however, the OS had to allocate that thread (and, like the article says, OS X threads are pretty heavy) and worry about how it might be scheduled. With GCD, allocating the thread is a lot cheaper and the scheduling a lot more multiprocessor-aware.
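That pattern might be sketched like this (the `progressBar` outlet, the `totalSteps` count, and the step-wise `analyzeStep:` method are assumptions for illustration):

```objc
[progressBar setIndeterminate:NO];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (NSUInteger step = 0; step < totalSteps; step++) {
        [myDoc analyzeStep:step];   // hypothetical unit of work
        // Report progress back on the main thread; AppKit is not thread-safe.
        dispatch_async(dispatch_get_main_queue(), ^{
            [progressBar setDoubleValue:100.0 * (step + 1) / totalSteps];
        });
    }
});
```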
 
I see the use of progress bars becoming more of a standard thing in the future. Some devs would not fork off the work and would just let the beachball indicate to users that something is being done, which is not how it should be done. With GCD and the centralization of the threads available to the system, I think it will be much easier for devs to utilize everything that is available.

I don't think anyone that read that article took his explanations of what GCD does as the full explanation of what needs to be done.
 
I don't think this is going to be a real problem.

First, real developers don't adopt technology based solely on John Siracusa's "Look how easy it is!" article. Writers who comment on technology may criticize developers based on "Look how easy it is" articles, but that doesn't seem like a new problem to me. I remember similar responses for OpenGL, VRML, XML, Java, and so on. The sequence is pretty predictable if you've been around the industry for maybe 10 years or so.

Second, developers who didn't use NSThread or NSOperationQueue before, and are thus the ones whose apps beachball when clicking the "Analyze" button, aren't suddenly going to snap their fingers and be able to push a block onto a dispatch queue. Seriously, a developer who doesn't already know how to use NSThread or NSOperationQueue is going to be just as stymied by GCD. Maybe they'll try it, watch their app fail because it doesn't handle concurrency, then go back to having it beachball. GCD doesn't give anyone any better knowledge about how to solve concurrency problems in their app.

Third, any real developer is going to have to RTFM before doing anything non-trivial with GCD. NSOperationQueue was present all through Leopard, and did you hear about it (except when it had bugs)? No, it was just another tool that developers could use, along with NSThread and other things.

There's a nice summary that outlines transitioning from threads to GCD, and one of the first things it says is that GCD can't solve every concurrency problem.

http://developer.apple.com/mac/libr...ingGuide/ThreadMigration/ThreadMigration.html

First paragraph, 2nd sentence, "Although moving away from threads may not be possible in all cases ...".

Note that this is only a summary, and a real developer should read all the other things that came before, in order to really understand the benefits.

Personally, I'm not expecting GCD by itself to make a huge difference one way or the other, but GCD in conjunction with OpenCL might make the "Analyze" button run in 100 ms on a GPU instead of 30 seconds on a multicore CPU.
 
I see the use of progress bars becoming more of a standard thing in the future.
...

Which is why Apple really should come up with a convenience method for creating progress bars, i.e. you just hand it a selector and a couple of parameters to control the style and Cocoa takes care of the rest (sort of like NSRunAlertPanel). I know it's not THAT much code to load the separate window and fire off an NSTimer, but it is still a decent amount of code for an operation that, as you said, will become increasingly commonplace.
 
I don't think anyone that read that article took his explanations of what GCD does as the full explanation of what needs to be done.
That is certainly the claim he was making, but I guess any decent coder should know better and lousy coders are going to write lousy code no matter what anybody tells them.

I think the upshot is that if you’re already writing multithreaded code, GCD will make it a little easier, more pleasant, and more efficient, but if you have serial code that you want to parallelize, you have much bigger hurdles to overcome than the ones GCD helps you with.
 
That is certainly the claim he was making, but I guess any decent coder should know better and lousy coders are going to write lousy code no matter what anybody tells them.

I don't see where he makes that claim. Look, his article is an extremely detailed review of SL. However, it's not an application developer's guide to GCD and he does not present it as such. I don't see any deception going on here, I think you've just misunderstood the scope of the article.

The first problem you raise is a UI problem, so it has a UI solution: display some kind of "working..." indicator to the user and disable the button while the analysis is taking place.
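A minimal sketch of that UI fix, building on the review's example (the `spinner` outlet is an assumed NSProgressIndicator; `myDoc` and `myModel` follow the review):

```objc
- (IBAction)analyzeDocument:(NSButton *)sender
{
    [sender setEnabled:NO];          // no repeat clicks while we work
    [spinner startAnimation:self];   // "working..." indicator
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        NSDictionary *stats = [myDoc analyze];
        dispatch_async(dispatch_get_main_queue(), ^{
            [spinner stopAnimation:self];
            [sender setEnabled:YES];
            [myModel setDict:stats];
        });
    });
}
```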

The next problem, editing the document while the analysis is taking place, has a few possible solutions: (1) make a copy of the document to analyze, if that's a cheap operation; (2) serialize operations on the shared portions of the document using either traditional locking or, better yet, a GCD serial queue.
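Option (2) might look like this (the `applyEdit:` method is hypothetical; the point is that all access to the shared document state funnels through one serial queue):

```objc
// Created once; owns all access to the shared document state.
dispatch_queue_t docQueue = dispatch_queue_create("com.example.doc", NULL);

// Edits from the UI...
dispatch_async(docQueue, ^{
    [myDoc applyEdit:edit];   // hypothetical edit operation
});

// ...and the analysis both run on docQueue, so they can never overlap.
dispatch_async(docQueue, ^{
    NSDictionary *stats = [myDoc analyze];
    dispatch_async(dispatch_get_main_queue(), ^{
        [myModel setDict:stats];
    });
});
```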
 