
mdeh

macrumors 6502
Original poster
Jan 3, 2009
345
2
I know Kochan will be covering this in much greater detail later, but I want to clarify one thing now :)

Firstly, here is the code:

Code:
Fraction *aFraction = [[Fraction alloc] init];
Fraction *sum = [[Fraction alloc] init], *sum2;
int i, n, pow2;

[sum setTo: 0 over: 1];
NSLog(@"Enter your value for n");
scanf("%i", &n);

pow2 = 2;
for (i = 1; i <= n; i++) {
    [aFraction setTo: 1 over: pow2];   /* A */
    sum2 = [aFraction add: sum];
    [sum release];
    sum = sum2;                        /* B */
    pow2 *= 2;
}


So, here is the question.
There are 2 circumstances in the above code where a new value is assigned to a Fraction object, A and B as marked.
In "A" it seems that the new values are being assigned to the same object, so no "memory leak"?
But in B, sum2 is a different object from the one "sum" currently points to. Here is my question: if the above scenario is correct and sum were **not** freed, why would the memory of "sum" simply not be handed back to the system? Does the system not intuitively know when memory is no longer being "used"? If the words are rather inarticulate, it's probably because I am missing something quite basic here, but I would appreciate some input.
Thanks in advance.
 

OlyaGracheva

macrumors newbie
Oct 17, 2008
13
0
Hi there!

I hope I'm able to answer your question. I'm still learning myself but I hope I can be of assistance.

Mdeh, you asked:

If the above scenario is correct and sum were **not** freed, why would the memory of "sum" simply not be handed back to the system? Does the system not intuitively know when memory is no longer being "used"?

On page 39 of Steve Kochan's book, he writes:

"The last message in the program

[myFraction free];

frees the memory that was used for the fraction object. This is a critical part of good programming style. Whenever you create a new object, you are asking for memory to be allocated for that object. Also, when you're done with the object, you are responsible for releasing the memory it uses."

Steve goes on to say:

"Although it's true that the memory will be released when your program terminates anyway, after you start developing more sophisiticated applications, you can end up working with hundreds (or thousands) of objects that consume a lot of memory. Waiting for the program to terminate for the memory to be released is wasteful of memory, can slow your program's execution, and is not good programming style."

Adding to this, I think it would be nice for the program to know dynamically when memory is no longer needed, but I'd much rather be in a position where I can release the memory myself. Why? Well, I wouldn't want the system to hold onto memory for longer than necessary, and I also wouldn't want it to release the memory too early. So, yes, perhaps one day it may be possible for the program to release memory dynamically, but it would need to be a bullet-proof program.

Hope that helps.

Olya.

PS From what I understand there are 4 instances where a value is assigned to a Fraction object.

The first one:

[sum setTo: 0 over: 1];

This initialises the Fraction object that sum points to with a value of 0 (that is, 0/1).

The second one is as you mention:

[aFraction setTo: 1 over: pow2];

However, I like to think of this as the setTo: method being invoked with the arguments "1" and "pow2" passed to it. Since the same object is simply being given new values each time through the loop, there is no chance of a memory leak here.

The third one:

sum2 =[aFraction add: sum];

This assigns the result of the addition of the two fractions. And actually in Steve's book it is written as follows (well at least in my book anyway):

sum2 = [sum add: aFraction];

The fourth place where a value is assigned to a Fraction object is:

sum = sum2;

Here the address stored in sum2 is assigned to sum. In other words, sum2 isn't really an object, and neither is sum; they are both pointers that point to objects.
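
If it helps, here is a rough sketch of what an add: method along those lines might look like. This is only my guess at an implementation (Steve's actual code may differ), assuming the Fraction class has numerator and denominator accessor methods in addition to the setTo:over: used above. The key point is that add: allocates and returns a brand-new Fraction, which the caller then owns.

Code:
// Hypothetical sketch -- not necessarily the book's implementation.
// add: builds and returns a *new* Fraction; the caller owns it and
// is responsible for releasing it.
- (Fraction *) add: (Fraction *) f
{
    Fraction *result = [[Fraction alloc] init];   // fresh object on the heap

    // a/b + c/d = (a*d + b*c) / (b*d)
    [result setTo: numerator * [f denominator] + denominator * [f numerator]
             over: denominator * [f denominator]];

    return result;   // a different object from both receivers
}

So after the add: call there are two distinct objects around: the one the old sum pointed to and the new one sum2 points to. Releasing the old one before doing sum = sum2; is what keeps the loop from leaking.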
 

kpua

macrumors 6502
Jul 25, 2006
294
0
Does the system not intuitively know when memory is no longer being "used"?

In general, no. Computers, of course, don't have intuition about anything. In regular C or Objective-C, once you allocate memory the operating system hands off all responsibility for that memory to you, the programmer. You must free it. The OS can't just take it back.
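
To make that concrete, in plain C the contract looks roughly like this (a generic sketch, not tied to the Fraction example):

Code:
#include <stdlib.h>

void example(void)
{
    char *buffer = malloc(100);   /* ask for 100 bytes; you now own them */
    if (buffer == NULL)
        return;                   /* allocation can fail                 */

    /* ... use buffer ... */

    free(buffer);                 /* hand the memory back yourself --    */
                                  /* nothing reclaims it for you until   */
                                  /* the whole program exits             */
}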

Of course, for humans it is intuitive that when you no longer need something, it can be disposed of. This is why automatic garbage collection was invented. Many languages, including Ruby, Python, Java, and Obj-C 2.0, have this feature, so you can hand responsibility for memory back to the computer (again, not the OS, but the language runtime).
 

mdeh

macrumors 6502
Original poster
Jan 3, 2009
345
2
In general, no. Computers, of course, don't have intuition about anything. In regular C or Objective-C, once you allocate memory the operating system hands off all responsibility for that memory to you, the programmer. You must free it. The OS can't just take it back.

The thing about asking questions is that quite often one is not sure exactly what it is that is troubling one. Your answer focuses in on the issue that is my puzzle, and I am not exactly sure whether it is an Obj-C issue or one in general. It is clear now that, **if one wishes**, one **can** be responsible for freeing memory. So, having said that, I can finally frame my question better. In using Obj-C, what is the best argument for controlling the freeing of memory as the programmer, versus the best argument for allowing this to be done automatically (garbage collection, assuming I understand garbage collection correctly)?
 

lee1210

macrumors 68040
Jan 10, 2005
3,182
3
Dallas, TX
<snip>In using ObjC, what is the best argument for controlling the freeing of memory as the programmer versus the best argument for allowing this to be done automatically ( garbage collection...assuming I understand garbage collection correctly).

Some arguments for doing things yourself:
1) If you ever want to move to the iPhone (which is popular with the cool kids these days), you won't have GC at your disposal, so you'll have to manage things yourself.
2) Learning can sometimes be an end unto itself. Even though Obj-C 2.0 has GC, it doesn't mean it will hurt you to learn manual memory management. Someday you may find yourself wishing to use straight C, C++, or another language that is not GC'd, and knowing manual memory management techniques in advance will be of use.
3) Having some inkling of when things will be deallocated can be very important in performance-intensive situations. With GC, who knows when your unused objects will be deallocated? Maybe never (probably not, but that's not up to you).
4) You can target older versions of OS X that don't support Obj-C 2.0.

Some arguments for letting the Garbage Collector handle things:
1) It's easier. Keeping track of all those *things*! What a hassle.
2) Your code may be more concise and readable without all of the overhead of memory management.
3) You may spend less time thinking of creative ways to keep track of pointers you need to free/release, and more time actually solving the problem at hand.
4) You'll have to learn that leaking references in GC'd languages is leaking memory, which will serve you well in other GC'd languages like C# and Java, if you ever make a foray into such things.

There are more arguments (many more) on both sides. I don't feel very strongly either way; it's nice to be used to both styles so you can implement them when you're working in a language that only supports one.

-Lee
 

mdeh

macrumors 6502
Original poster
Jan 3, 2009
345
2
Some arguments for doing things yourself:

........snip..........

Some arguments for letting the Garbage Collector handle things:

.......snip...........

-Lee

Thanks, Lee. At least that puts it in perspective for me. Kochan presents it as being very important, which it clearly is when you are doing your own management, but he has not (so far, at least) gone into the pros and cons. I would assume that discussion occurs later in the book.
 

4409723

Suspended
Jun 22, 2001
2,221
0
Another consideration is that GC algorithms take time. You don't want your runtime's GC system kicking in and delaying calculations in your car's ABS system. You want deterministic performance in time-critical systems like that!
 

GorillaPaws

macrumors 6502a
Oct 26, 2003
932
8
Richmond, VA
I've been wondering for a while now why there isn't a way to implement a manual GC. What I mean by this is that the programmer would have a method to tell the GC that you're pretty sure you're done with that memory, and the GC would check whether it can actually be freed. It wouldn't free the memory if there were still a reference to that object out there that you may have forgotten about, but it would help the garbage collector along by giving it cues on when to do its cleanup.

In a system like this the GC would act like a safety net that catches memory leaks, but wouldn't be as hands-off as the current implementation. I'm guessing there are good reasons why this either isn't possible or isn't desirable, but I can't think of what they might be.
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
I've been wondering for a while now why there isn't a way to implement a manual GC. What I mean by this is that the programmer would have a method to tell the GC that you're pretty sure you're done with that memory, and the GC would check whether it can actually be freed. It wouldn't free the memory if there were still a reference to that object out there that you may have forgotten about, but it would help the garbage collector along by giving it cues on when to do its cleanup.

What you describe is a system that Cocoa already has: a retain/release mechanism. Instead of doing a malloc/free or new/delete (or new/free in the case of Obj-C), you alloc a new object, and then use retain/release after that to control when you are done with an object.

This has the advantage of simplifying your memory management while still leaving it in the hands of the programmer. When you get passed a pointer to an object that you want to keep in a member variable, you call [obj retain] on it. When you are done with that reference, you call [obj release] on it. The runtime figures out when all the references are gone and deallocs the object (usually by just checking whether the reference count has reached 0).
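
As a rough illustration of that pattern (a made-up Car/Engine pair, not anything from the book or from Cocoa itself):

Code:
#import <Foundation/Foundation.h>

@interface Engine : NSObject
@end
@implementation Engine
@end

@interface Car : NSObject
{
    Engine *engine;   // reference we want to keep in a member variable
}
- (void) setEngine: (Engine *) newEngine;
@end

@implementation Car

- (void) setEngine: (Engine *) newEngine
{
    [newEngine retain];   // claim ownership of the incoming object first
    [engine release];     // then give up ownership of the old one
    engine = newEngine;   // (retain-before-release also handles newEngine == engine)
}

- (void) dealloc
{
    [engine release];     // drop our last reference when the Car goes away
    [super dealloc];
}

@end

The Engine instance only gets dealloc'd once every retain on it has been balanced by a release and its count drops to 0.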

But really, compared to a modern garbage collector, the only positive is that you have better control over when an object is deallocated. Beyond that, the modern garbage collector will still leak less. Both the Obj-C 2.0 GC and the .NET GC can collect objects that would normally leak under a retain/release model (circular references for example).
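
For what it's worth, here's a minimal made-up example of the circular-reference case (hypothetical Parent/Child classes, each retaining the other):

Code:
#import <Foundation/Foundation.h>

@interface Child : NSObject { id parent; }
- (void) setParent: (id) p;
@end

@interface Parent : NSObject { Child *child; }
- (void) setChild: (Child *) c;
@end

@implementation Child
- (void) setParent: (id) p { [p retain]; [parent release]; parent = p; }
- (void) dealloc { [parent release]; [super dealloc]; }
@end

@implementation Parent
- (void) setChild: (Child *) c { [c retain]; [child release]; child = c; }
- (void) dealloc { [child release]; [super dealloc]; }
@end

// ... somewhere in the program ...
Parent *parent = [[Parent alloc] init];
Child  *child  = [[Child alloc] init];
[parent setChild: child];    // parent retains child
[child setParent: parent];   // child retains parent

[parent release];            // we drop our own references, but each object
[child release];             // still retains the other, so neither count ever
                             // reaches 0 and neither dealloc runs: the pair leaks.
                             // A tracing collector sees the cycle is unreachable
                             // and can reclaim both objects.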
 

GorillaPaws

macrumors 6502a
Oct 26, 2003
932
8
Richmond, VA
What you describe is a system that Cocoa already has: a retain/release mechanism. Instead of doing a malloc/free or new/delete (or new/free in the case of Obj-C), you alloc a new object, and then use retain/release after that to control when you are done with an object.

Right, but if the programmer makes an error by releasing something when they shouldn't, or adding a retain when they shouldn't, that memory can be deallocated prematurely (or not deallocated when it should be) and cause issues. What I was suggesting was a hybrid where you get the performance benefit of the programmer telling the GC about retains/releases, but have the GC double-check to make sure the programmer isn't forgetting something.

I know you can set the value of an object to null when you're done with it, and that's a cue to the GC to free the memory when it gets around to doing its thing. As I understand it, however, the GC does its cleanup periodically, so that object pointing to a null value could potentially hang around for a while--but I could very well be wrong about this.
 

lee1210

macrumors 68040
Jan 10, 2005
3,182
3
Dallas, TX
<snip>
I know you can set the value of an object to null when you're done with it, and that's a cue to the GC to free the memory when it gets around to doing its thing. As I understand it, however, the GC does its cleanup periodically, so that object pointing to a null value could potentially hang around for a while--but I could very well be wrong about this.

I felt like I should respond to this, because what you seem to be describing is setting a pointer to an object to null, not the object itself. In a GC'd system, this would simply alter the object graph such that the GC may find that the object pointed to previously by that pointer cannot be accessed and can be deallocated. The object itself, in the meantime, is still out there on the heap, happily storing its values, etc. I just wanted to make it clear that an object doesn't point to anything, but a pointer might point to an object, and nulling that pointer is simply going to change the reference count/accessibility of the object it was pointing to.
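
To put that in code (a trivial sketch, assuming a garbage-collected Obj-C 2.0 program):

Code:
Fraction *f = [[Fraction alloc] init];   // f is a pointer; the object lives on the heap
// ... use the object ...
f = nil;   // only the pointer changes -- the object itself is untouched.
           // If nothing else can reach it, the collector *may* reclaim it on
           // some later pass; exactly when (or whether) is up to the runtime.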

Re: your last point, about when the GC is going to run... that's completely at the discretion of the runtime/VM/etc. An object that is inaccessible/has a 0 reference count/etc. could technically never be deallocated. It seems somewhat unlikely, but there is no guarantee that the GC will ever run.

-Lee
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
Right, but if the programmer makes an error by releasing something when they shouldn't, or adding a retain when they shouldn't, that memory can be deallocated prematurely (or not deallocated when it should be) and cause issues. What I was suggesting was a hybrid where you get the performance benefit of the programmer telling the GC about retains/releases, but have the GC double-check to make sure the programmer isn't forgetting something.

I know you can set the value of an object to null when you're done with it, and that's a cue to the GC to free the memory when it gets around to doing its thing. As I understand it, however, the GC does its cleanup periodically, so that object pointing to a null value could potentially hang around for a while--but I could very well be wrong about this.

What would be the point of managing memory yourself, just to have a GC check everything anyway? Save the hassle and just run a full GC. It'd actually have less overhead than this hybrid design you propose. That is probably why it was never done.

And yes, you are right that an object you aren't using will sit around for a little while before it goes poof. We are talking on the order of seconds though. In most average applications, this is just fine.
 

GorillaPaws

macrumors 6502a
Oct 26, 2003
932
8
Richmond, VA
It'd actually have less overhead than this hybrid design you propose. That is probably why it was never done.

And yes, you are right that an object you aren't using will sit around for a little while before it goes poof. We are talking on the order of seconds though. In most average applications, this is just fine.

Thanks for the reply. As you can tell, I'm still learning, and what you say does make sense now that I think about it. Sorry for any confusion I may have caused by my poorly chosen wording. I appreciate you clarifying what I was trying to say, lee1210.
 

Krevnik

macrumors 601
Sep 8, 2003
4,101
1,312
No problem. I'm actually glad when someone asks questions like this. It means they are interested in how/why something works, rather than just how to 'do X task'. I find that those who are interested in the why in programming, and search for the answer, tend to write better code and are able to explain to others why it is better. This is especially important as software gets more and more complex, and members of a development team may not all know the entire product from head to toe.

Don't ignore your curiosity about why something works; it will help you if you decide to work in the industry, and IMO it is a good quality to have as a developer. At the very least, it keeps you valuable and more able to move around between languages/platforms in the long run.
 