
iOrbit

macrumors 6502a
Original poster
Mar 8, 2012
569
30
It's likely his only symptom is dry eyes from staring at Activity Monitor for too long.

If he is truly experiencing massive paging, then it must be from something he has installed. I expect that anyone who goes to the trouble of installing an app to "free memory" is also the type who will have installed other questionable software. This behaviour goes all the way back to Windows 98, when it was usually the software intended to "speed things up" that caused the most problems.

and you're another person who probably wears apple coloured glasses.
 

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71
False.

As of 2012, LLVM-Clang is as fast as GCC, actually *FASTER* for certain code, slower for others.

Overall they are equal: each is faster on certain tests and slower on others, and the average difference is within 1%.

In the future, it's quite probable, given LLVM-Clang's more modern and flexible approach, that we will see LLVM-Clang improve at a faster pace than GCC, as it has over the last 4-5 years, catching up with GCC while GCC struggled to improve as fast as LLVM has.

But even *TODAY* GCC is not faster than LLVM/Clang; they are the same.

References:
http://www.phoronix.com/scan.php?page=news_item&px=MTA5Nzc
http://openbenchmarking.org/result/1204215-SU-LLVMCLANG23

Please stop spreading nonsense about LLVM-Clang as a compiler.

It's a compiler; it has nothing to do with VMs. It compiles to native code, and the only things that matter are how fast it compiles and how fast the code it produces runs.

And LLVM-Clang already produces code that is as fast as GCC's, while compiling a lot faster and using less disk/memory.

Even major performance-oriented projects such as FreeBSD are switching from GCC to LLVM-Clang. It's simply the future :)

Finally someone posts something worthy of mention.
Look at the link:

http://openbenchmarking.org/result/1204215-SU-LLVMCLANG23


Now before I continue, what I want to say is that I am not trying to criticize LLVM. If that is your thought, you are missing the point. The main topic is the slowness of operating systems and games and how the least common denominator affects them. LLVM can be useful in certain areas, as can Java and virtual machines. You have to look at case scenarios and use appropriate tools. Now with that out of the way...

You will find the link shows Clang is generally 10% slower than GCC, with the gap increasing to 100% or more in certain cases. The ONLY case where Clang beats GCC by over 10% is compile time, which is irrelevant to runtime native code. I would spend a whole week compiling and optimizing final code so it runs (runtime) 400% faster in a shipped product, rather than gloat about being able to compile (prepare) the code 10% faster. The users SEE the runtime; THAT is what is important. An engineer can spend 1 year making an F1 car or 1 week. The speed of the car is important, not how long it takes him to create the car.

Here are the relevant tests where Clang's baggage affects performance:

Timed HMMer Search v2.3.2 (database lookup of objects)
20% slower.

Smallpt v1.0 (3D graphical display)
400% slower.


John The Ripper v1.7.9 (Blowfish algorithm)
400% slower.

Why is Clang close to 400% slower in both cases? Now this is Clang, mind you, trying to do C! The only way it can be slower is if it has interpretation baggage like Java, C#, and Python. This is like comparing Java to C! Blowfish is encryption where you need very fast loops modifying tables over and over again. 3D display requires fast lookup and display algorithms, also requiring fast loops.

How can Clang increase its speed? Allow a direct path to native code without the required intermediate representation that carries baggage from supporting other languages and virtual machines. If Clang can do that, then you won't see things like the above. Now having said this, I think it IS their priority to fix the above, or it will carry over into OSX, since Apple uses it! In fact, swap out the slow parts and code things in assembly or C. Is Lion 400% slower than Snow Leopard because Clang was used in certain parts where it was very weak? Was something left so that it needs to run in a virtual machine?

I'd say go the Sony route in console OS. Make the operating system more efficient each release, taking a smaller footprint for the same functions, running faster. This means putting more and more pieces into assembly, getting rid of interpretation, virtualization, and things that slow performance. Most importantly, choose performance over other criteria, because it is the lowest common denominator (the bottleneck) of all programs that run on top of it.


Unfortunately, good benchmarks are missing for Clang:
BYTE Unix Benchmark v3.6 (can show basic operating system performance)
TTSIOD 3D Renderer v2.2w (can show basic 3D performance)

The first shows basic operating system performance.
The second shows again 3D games performance.

I am hoping the second case is not also 400% slower.
 
Last edited:

ElectricSheep

macrumors 6502
Feb 18, 2004
498
4
Wilmington, DE
Why is Clang close to 400% slower in both cases? Now this is Clang, mind you, trying to do C! The only way it can be slower is if it has interpretation baggage like Java, C#, and Python. This is like comparing Java to C! Blowfish is encryption where you need very fast loops modifying tables over and over again. 3D display requires fast lookup and display algorithms, also requiring fast loops.

I don't know where you got the notion that anything is running in some "Virtual Machine" or is being "Virtualized". The 'Virtual' in LLVM is in name only. This has been repeated several times in this thread, but you continue to ignore it.

How can CLang increase its speed? Allow a direct path to native code without the required intermediate representation that carries baggage from supporting other languages and virtual machines. If Clang can do that, then
you won't see things like above.

I think your understanding of how compilers work is lacking. Pretty much every compiler produces an intermediate representation of the high-level language that is being compiled. This representation is then optimized and passed to a machine code emitter for the specified target architecture (where it could be further optimized). LLVM simply splits the intermediate representations and the back-end machine-code emitter into a separate, open, and well-documented entity. Anyone is now free to write their own front-end to interpret whatever language they want—even one of their own creation—and can produce a full-fledged compiler without having to be experts in operating systems, CPU-architectures, and object-graphs.

Once again, there is no virtual machine or virtual environment involved.
 

SlCKB0Y

macrumors 68040
Feb 25, 2012
3,431
557
Sydney, Australia
Now before I continue...

This guy has to be trolling, nobody can be this retarded.

Either way, I can no longer stand to read your posts as the stupidity is actually causing my brain to hurt. Welcome to my block list.

I have no problem with people not knowing stuff or people getting stuff wrong. What I have a problem with is people who refuse to recognise when they might be wrong, even in the face of overwhelming evidence that contradicts their viewpoint. No intelligent person would display that kind of absolute inflexibility in thinking.

It demonstrates an inability to process novel information, critically analyse and integrate that information, and adapt one's existing understanding.
 
Last edited:

ender land

macrumors 6502a
Oct 26, 2010
876
0
OP, I agree regarding memory management - I've often had problems (especially with Virtual Machines) with inactive memory not being freed up correctly, though I'm using Snow Leopard.

I can totally empathize with feeling frustrated that NO one else seems to acknowledge these sorts of problems :)
 

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71

I don't know where you got the notion that anything is running in some "Virtual Machine" or is being "Virtualized". The 'Virtual' in LLVM is in name only. This has been repeated several times in this thread, but you continue to ignore it.



I think your understanding of how compilers work is lacking. Pretty much every compiler produces an intermediate representation of the high-level language that is being compiled. This representation is then optimized and passed to a machine code emitter for the specified target architecture (where it could be further optimized). LLVM simply splits the intermediate representations and the back-end machine-code emitter into a separate, open, and well-documented entity. Anyone is now free to write their own front-end to interpret whatever language they want—even one of their own creation—and can produce a full-fledged compiler without having to be experts in operating systems, CPU-architectures, and object-graphs.

Once again, there is no virtual machine or virtual environment involved.

I will ignore the flame posts... ad hominem is a standard technique for some posters here. If you can't refute the facts, calling someone names doesn't make you right.

But I'll answer this post because it has some legitimate argument.
"The only way it can be slower is if it has interpretation baggage like Java, C#, and Python." The key word is baggage. A program compiled with Clang may not be using a virtual machine in its final running state, but during compilation it is turned into LLVM IR, which SUPPORTS interpreted languages. That intermediate form is more distantly removed than a standard GCC compiler intermediate state that you can compile and link to the target machine code. It is more abstract, carrying more baggage because of its support for the interpreted-language features mentioned earlier (garbage collection, reference counting, etc.).

Please explain to me how a C program can run 400% slower, if the intermediate step is basically parsing a token tree and substituting symbols with CPU instructions from a lookup table? 400% is not a small percentage. A program running at 30fps is only going to run at 8fps. Most CPU manufacturers compete over 3 to 4 fps in hardware. If you have software that is slowing things down by 22fps, you fix the software first.

Also, stop taking my words out of context. When I mentioned virtualization I was talking about operating systems, not LLVM. Virtualization means not allowing direct access to hardware: you put another layer between the program and the hardware, and it slows programs down because of this middle layer. Here is where I mention virtualization:

"I'd say go the Sony route in console OS. Make the operating system more efficient each release, taking a smaller footprint for the same functions, running faster. This means putting more and more pieces into assembly, getting rid of interpretation, virtualization, and things that slow performance. Most importantly, choose performance over other criteria, because it is the lowest common denominator (the bottleneck) of all programs that run on top of it."
 
Last edited:

ElectricSheep

macrumors 6502
Feb 18, 2004
498
4
Wilmington, DE
I will ignore the flame posts... ad hominem is a standard technique for some posters here. If you can't refute the facts, calling someone names doesn't make you right.

But I'll answer this post because it has some legitimate argument.
"The only way it can be slower is if it has interpretation baggage like Java, C#, and Python." The key word is baggage. A program compiled with Clang may not be using a virtual machine in its final running state, but during compilation it is turned into LLVM IR, which SUPPORTS interpreted languages. That intermediate form is more distantly removed than a standard GCC compiler intermediate state that you can compile and link to the target machine code. It is more abstract, carrying more baggage because of its support for the interpreted-language features mentioned earlier (garbage collection, reference counting, etc.).

Really what you are talking about is Abstraction. Abstraction is not a bad thing. It's the reason why we can write programs in higher-level languages like Objective-C instead of handwriting machine code. It's the reason why applications can crash and not take out the entire machine. It is the fundamental reason why we can take for granted many of the great features of modern software that, without abstraction, would have been insanely difficult if not impossible to bring to reality.

As far as 'baggage' is concerned, that is debatable. LLVM is certainly capable of emitting executable binaries which are smaller than those produced by other compilers, so there isn't any extra baggage going into the final program.

Please explain to me how a C program can run 400% slower, if the intermediate step is basically parsing a token tree and substituting symbols with CPU instructions from a lookup table? 400% is not a small percentage. A program running at 30fps is only going to run at 8fps. Most CPU manufacturers compete over 3 to 4 fps in hardware. If you have software that is slowing things down by 22fps, you fix the software first.

That is a gross oversimplification of what compilers do. Only the most basic, trivial compiler "basically parses a token tree and substitutes symbols with CPU instructions from a lookup table", and the resulting executable code would be extremely inefficient.

The reality is that turning a high-level language like C into efficient, optimized machine code is an NP-hard problem. Mature compilers use a lot of tricks and apply a number of different heuristics to optimize code as best they can. The difference between one heuristic and another can account for a 4x performance factor in a given case.

To take such few cases, however, and use them to generalize across the vast landscape of code is extremely shortsighted.

Also, stop taking my words out of context. When I mentioned virtualization I was talking about operating systems, not LLVM. Virtualization means not allowing direct access to hardware: you put another layer between the program and the hardware, and it slows programs down because of this middle layer. Here is where I mention virtualization:

"I'd say go the Sony route in console OS. Make the operating system more efficient each release, taking a smaller footprint for the same functions, running faster. This means putting more and more pieces into assembly, getting rid of interpretation, virtualization, and things that slow performance. Most importantly, choose performance over other criteria, because it is the lowest common denominator (the bottleneck) of all programs that run on top of it."

That is good for consoles, but not good for PCs. Consoles are built around a single uniform hardware definition that does not change. Consoles are designed to focus on a single task at a time: play a game, watch a movie. Yes, underneath there are other tasks running, but they are all there to support the lead task. In order to perform this task as efficiently as possible, you allow direct access to the metal. Given that every console is identical, this isn't really much of a problem. Developers are free to cut corners and make assumptions. But if the game crashes, the whole box goes down and you have to reset. I had enough of doing that to my PC back in the nineties.
 

bitsoda

macrumors member
Mar 23, 2011
47
0
Lion is a miserable experience for me.

I know exactly what OP is talking about because my computer suffers from the same affliction. There are times when I want to clutch my MacBook Pro, spiral around three times, and release it at a high velocity just to see it smash against a nice, concrete wall. I own an early 2011 MacBook Pro running 10.7.4 and this thing performs like a walrus on a tar floor. If I don't restart the machine at least twice a day, the machine is rendered unusable. Something as simple as opening a new tab in Chrome will bring about the ******** beachball for a good ~10 seconds before I can do anything else.

Originally, I thought my problem was related to the fact that I upgraded from SL to Lion. But after a clean install, the problem persists. Right now I have about 900 MB of inactive memory and my swap is 800 MB. I only open Activity Monitor once I notice sluggish performance to confirm my suspicions of pure failure to manage memory by the OS. Snow Leopard -- or any OS I've used in the past decade -- never behaved like this.

I'm not sure what to do. Running iTunes, Chrome (with ~15 tabs), Dictionary, Transmission, Spotify, Terminal, and SublimeText 2 is ostensibly too much for my MacBook Pro to handle. I'm at my wit's end with Lion, and nobody has been able to offer a solution.
 

luigi.lauro

macrumors member
Jun 18, 2012
81
48
Milan, Italy
You will find the link shows Clang is generally 10% slower than GCC, with the gap increasing to 100% or more in certain cases. The ONLY case where Clang beats GCC by over 10% is compile time, which is irrelevant to runtime native code. I would spend a whole week compiling and optimizing final code so it runs (runtime) 400% faster in a shipped product, rather than gloat about being able to compile (prepare) the code 10% faster. The users SEE the runtime; THAT is what is important. An engineer can spend 1 year making an F1 car or 1 week. The speed of the car is important, not how long it takes him to create the car.

Again, false.

This was probably true some months/years ago, but now LLVM/Clang is as good and as fast as GCC. Actually *FASTER*, in several scenarios.

And I'm not talking about speed in compilation, but speed of the COMPILED application, which is what matters.

I showed you a RECENT unbiased open-source benchmark of the latest GCC vs the latest Clang, which shows that neither is faster than the other; they are on par, with negligible speed differences in all cases.

Show me a RECENT unbiased benchmark (Clang 3.1+ vs GCC 4.7+) that shows that 100%/400% difference, and then I'll second what you say, but until you have provided one (like I did), you are just a troll with very little knowledge about the CURRENT state of the compilers.

But the truth is that you WILL NOT FIND ANY, because it's a simple fact that Clang 3.1 is AS FAST as GCC 4.7, in compiled application speed.

And I'm not talking about corner cases such as a single application with very badly written code that behaves correctly only with GCC idiosyncrasies (such as smallpt); I mean seeing this 400% in at least 5-10% of the cases.

You will ALWAYS find corner cases that will not behave 'well' with a new-generation compiler. Heck, if they did a GCC 5.0 with a new architecture, you can be 100000% sure you would find applications going 100 times slower before the compiler settles down and the applications fix the issues they have with it.

But this has nothing to do with the performance of the compiler: it's just a 'compatibility' issue, which will be solved by the compiler or the application code sooner or later.

You would never say a certain Nvidia GPU is 400% slower in certain games because you found 2 games out of 300 where, due to a compatibility issue, the GPU runs at much reduced performance. You would flag that as a bug/compatibility problem and work around it.

Full stop.
 
Last edited:

ElectricSheep

macrumors 6502
Feb 18, 2004
498
4
Wilmington, DE
I've seen a few links to the general Apple Support article on Activity Monitor and Memory Usage; [URL="https://developer.apple.com/library/mac/#documentation/performance/conceptual/managingmemory/articles/aboutmemory.html"]Apple's own developer documentation[/URL] contains deeper insight as to how the virtual memory subsystem works and what these page lists actually represent.

(Quoted from the above)
Page Lists in the Kernel

The kernel maintains and queries three system-wide lists of physical memory pages:

The active list contains pages that are currently mapped into memory and have been recently accessed.
The inactive list contains pages that are currently resident in physical memory but have not been accessed recently. These pages contain valid data but may be removed from memory at any time.
The free list contains pages of physical memory that are not associated with any address space or VM object. These pages are available for immediate use by any process that needs them.
When the number of pages on the free list falls below a threshold (determined by the size of physical memory), the pager attempts to balance the queues. It does this by pulling pages from the inactive list. If a page has been accessed recently, it is reactivated and placed on the end of the active list. In Mac OS X, if an inactive page contains data that has not been written to the backing store recently, its contents must be paged out to disk before it can be placed on the free list. (In iOS, modified but inactive pages must remain in memory and be cleaned up by the application that owns them.) If an inactive page has not been modified and is not permanently resident (wired), it is stolen (any current virtual mappings to it are destroyed) and added to the free list. Once the free list size exceeds the target threshold, the pager rests.

The kernel moves pages from the active list to the inactive list if they are not accessed; it moves pages from the inactive list to the active list on a soft fault (see “Paging In Process”). When virtual pages are swapped out, the associated physical pages are placed in the free list. Also, when processes explicitly free memory, the kernel moves the affected pages to the free list.



Paging Out Process

In Mac OS X, when the number of pages in the free list dips below a computed threshold, the kernel reclaims physical pages for the free list by swapping inactive pages out of memory. To do this, the kernel iterates all resident pages in the active and inactive lists, performing the following steps:

If a page in the active list is not recently touched, it is moved to the inactive list.
If a page in the inactive list is not recently touched, the kernel finds the page’s VM object.
If the VM object has never been paged before, the kernel calls an initialization routine that creates and assigns a default pager object.
The VM object’s default pager attempts to write the page out to the backing store.
If the pager succeeds, the kernel frees the physical memory occupied by the page and moves the page from the inactive to the free list.

Note: In iOS, the kernel does not write pages out to a backing store. When the amount of free memory dips below the computed threshold, the kernel flushes pages that are inactive and unmodified and may also ask the running application to free up memory directly. For more information on responding to these notifications, see “Responding to Low-Memory Warnings in iOS.”
Paging In Process



The final phase of virtual memory management moves pages into physical memory, either from the backing store or from the file containing the page data. A memory access fault initiates the page-in process. A memory access fault occurs when code tries to access data at a virtual address that is not mapped to physical memory. There are two kinds of faults:

A soft fault occurs when the page of the referenced address is resident in physical memory but is currently not mapped into the address space of this process.
A hard fault occurs when the page of the referenced address is not in physical memory but is swapped out to backing store (or is available from a mapped file). This is what is typically known as a page fault.
When any type of fault occurs, the kernel locates the map entry and VM object for the accessed region. The kernel then goes through the VM object’s list of resident pages. If the desired page is in the list of resident pages, the kernel generates a soft fault. If the page is not in the list of resident pages, it generates a hard fault.

For soft faults, the kernel maps the physical memory containing the pages to the virtual address space of the process. The kernel then marks the specific page as active. If the fault involved a write operation, the page is also marked as modified so that it will be written to backing store if it needs to be freed later.

For hard faults, the VM object’s pager finds the page in the backing store or from the file on disk, depending on the type of pager. After making the appropriate adjustments to the map information, the pager moves the page into physical memory and places the page on the active list. As with a soft fault, if the fault involved a write operation, the page is marked as modified.

It is important to understand that pages on the inactive list are still mapped to valid VM objects. The kernel cannot simply move them on a whim to the free-page list; applications must explicitly release their memory. If an inactive page has been changed (dirty) and not yet written to the backing store, it must be swapped out before it can be freed. Failure to do so destroys valid memory objects in userland, and could cause application crashes as well as lost data or data corruption.

Additionally, the kernel will not actively traverse page lists looking to move inactive pages to free until the free-page count has dropped below a certain threshold, which you can view with sysctl. The default target free-page list size is 2000. At 4 KB per page, this works out to 8 megabytes.

While that number seems stupidly low, it echoes the understanding that 1) unused memory is wasted memory, and 2) applications, not the kernel, know best which of their memory objects should be in memory and which should not. So the kernel takes the approach that unless more memory than is available on the free-page list is being requested, it leaves the pages alone. The fact that a page is on the inactive list because it has not been accessed in some time does not guarantee that it will not be accessed in the near future. In such a case, it is better to have a soft fault (reactivate the page) than a hard fault (re-read the page back from disk).
 

mabaker

macrumors 65816
Jan 19, 2008
1,215
580
Not sure that this means anything, but I just opened a butt ton of applications, several games, and so on running Mountain Lion GM with 12 GB of RAM and I still have 6 GB free.

That is nice. Thanks.
 

djrod

macrumors 65816
Sep 16, 2008
1,012
33
Madrid - Spain
im tired of having to monitor my system and grab screenshots. i've seen this answer several times before, but it simply doesn't ring true for performance.

os x doesnt do what its supposed to do,

once i have no free memory left, it does not free the inactive ram; instead it goes to page outs, and everything becomes absolutely awful.

if it was managing properly, it would 'use the inactive memory', but it doesn't; it lets things become terrible, like a system with no more memory.

i always run app store, mail, safari, address book, ical, itunes, iphoto and sometimes imovie.

in addition i will run steam (which is a memory leaker itself)

other times i will run photoshop cs5

i've found so many discussions on google with people who seem to know what they are talking about more, backing up my experience. i don't know why others don't experience it. is it the way they use their machines? ssds? faults in ours? i don't know.

photoshop working with a file with quite a few layers at 6000 pixel images, will eat up to 2.5 gigs of ram.

steam up to a gig or even a little more.

generally though, my system can run my apps with 4 gigs or even nearly 5 gigs ram free when they are opened.

it's after using them for a while that all free memory is exhausted and is then described as inactive memory, which is never freed unless apps are quit.

if i don't purge, then i can't get my memory back without quitting everything or restarting.


Are you quitting the apps (cmd Q) or just closing the windows (cmd W or the red light button)? Photoshop, for example, eats all the ram it can, and the memory remains used until you completely quit the app; it does not go to inactive memory.
 

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71
Again, false.

This was probably true some months/years ago, but now LLVM/Clang is as good and as fast as GCC. Actually *FASTER*, in several scenarios.

And I'm not talking about speed in compilation, but speed of the COMPILED application, which is what matters.

I showed you a RECENT unbiased open-source benchmark of the latest GCC vs the latest Clang, which shows that neither is faster than the other; they are on par, with negligible speed differences in all cases.

Show me a RECENT unbiased benchmark (Clang 3.1+ vs GCC 4.7+) that shows that 100%/400% difference, and then I'll second what you say, but until you have provided one (like I did), you are just a troll with very little knowledge about the CURRENT state of the compilers.

But the truth is that you WILL NOT FIND ANY, because it's a simple fact that Clang 3.1 is AS FAST as GCC 4.7, in compiled application speed.

And I'm not talking about corner cases such as a single application with very badly written code that behaves correctly only with GCC idiosyncrasies (such as smallpt); I mean seeing this 400% in at least 5-10% of the cases.

You will ALWAYS find corner cases that will not behave 'well' with a new-generation compiler. Heck, if they did a GCC 5.0 with a new architecture, you can be 100000% sure you would find applications going 100 times slower before the compiler settles down and the applications fix the issues they have with it.

But this has nothing to do with the performance of the compiler: it's just a 'compatibility' issue, which will be solved by the compiler or the application code sooner or later.

You would never say a certain Nvidia GPU is 400% slower in certain games because you found 2 games out of 300 where, due to a compatibility issue, the GPU runs at much reduced performance. You would flag that as a bug/compatibility problem and work around it.

Full stop.

I used YOUR link. None of the Clang results were faster than GCC by 10% except the one "compiling" test.

In YOUR link, it showed GCC faster than Clang by 10% on average.

In YOUR link, it ALSO showed GCC faster than Clang by 20%.

In YOUR link, it also showed GCC faster than Clang by 400%, not on just one but two benchmarks.

Again, NONE of the benchmarks in YOUR link showed Clang 10% or more faster than GCC EXCEPT the "compiling" one.

I will accept 10% as maybe timing differences or errors on either side. But 20% and 400% are no laughing matter. Obviously I'm not here to argue with you. Perhaps you are part of LLVM, and if you feel you must have the last say on this, go ahead. I am sure others can look at the benchmarks themselves. I am just one of the OSX users with no ulterior motive other than to have a faster operating system. What you say won't change the fact that Lion is damn slow, and if you wish to push the blame elsewhere, at least read my earlier responses and acknowledge the problem exists. 400% is not a minor problem. It is game-breaking, a problem that sends people looking elsewhere for another platform.

And before you start blaming the slowness on other things (like memory manager), note this fact:

On Snow Leopard, the default compiler is GCC 4.2 (WITH NO LLVM).
On Lion, the default compiler is GCC-LLVM, then later Clang-LLVM (because GCC-LLVM was actually half broken).

So the major change from Snow Leopard to Lion is the mandatory use of LLVM. In the case of the LLVM backend producing code running in a virtual machine, it has support for grabbing chunks of memory to do memory allocation (it needs to in order to do automatic reference counting and garbage collection). Kernel bloat? Could it be that LLVM, in its support of interpreted-language features, carried this baggage, which resulted in bloated and slow code even if you are compiling static Clang or GCC code? Remember, LLVM's intermediate IR (like bytecode in Java) is VERY FAR REMOVED from standard GCC intermediate code. It is more abstract, to the point where you can actually run the LLVM IR inside a virtual machine (no different than C# or Java). So the process from LLVM IR to a regular .o or regular binary executable is not as clean-cut as GCC's.

In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.na...me-consumption-for-larger-files-td683717.html

Remember how in OpenGL some code was left in an intermediate state? When that kicks in, the LLVM compiler starts up. We don't know what other parts of OS X were left in this state that REQUIRE compilation at runtime, i.e. JIT compilation (like Java's JIT). Perhaps more and more pieces in Snow Leopard, culminating in a full-blown LLVM requirement in Lion. Only in Lion was full LLVM required everywhere. This could lead to kernel bloat, because OpenGL (the driver) runs near the kernel level when this non-compiled code needs to be JIT compiled. It could be other low-level pieces too. Remember, this is 5 TIMES the required memory. So something that normally requires 1GB would now require 5GB at runtime. A Mac Mini only has 4GB, and so do lots of earlier Mac machines. A lot of disk thrashing will occur as more things are moved back and forth to the hard drive to accommodate the startup of the LLVM virtual machine backend just to compile, and if a virtual machine is used, that memory never dissipates.
 

theosib

macrumors member
Aug 30, 2009
71
8
Also experiencing OS X memory management problems

I have an early 2011 MacBook Pro with 8GB of RAM, and I too have been plagued by Lion's memory management bugs.

I'll typically have a handful of apps open, including Safari, Mail, Smultron (a lightweight code editor), Terminal, and MS Word. Sometimes I'll also have open a news reader, and maybe an IRC client. It takes a few days, but eventually, my computer would just grind to a crawl. It would be completely unusable. Just using any application required a lot of patience because it would start beach balling while I was typing. Switching apps could take 30 seconds to a minute.

And when Time Machine would start backing up... time to walk away, because the computer basically grinds to a halt. The simplest things would take 5 to 10 minutes. I'm not exaggerating this.

Sometimes, I'd like to run Windows in Parallels. I assign 2GB to the VM. If I want to do that, I have to have NO other Mac apps running. For instance, if I want to look things up in a web browser, I have to run IE in Windows instead of Safari on the Mac host, otherwise, everything will slow down. If I run anything on the Mac host, everything slows down.

The only explanation I've been able to find for this is that the kernel is swapping out anonymous pages, favoring disk caching. And it does this even if there is only one or two apps running.

I've noticed some strange things. The OS X kernel will typically reach a gigabyte and hover around there. Safari will often go well over a GB, even if there aren't that many pages open. So those are eating up memory like it's water.

Just to emphasize this: I'm not saying that the system gets slightly slow. I'm saying that it will stop responding to user input for minutes at a time. If I'm lucky enough to get the dock to respond, I can alt-tab all I want, and the only app that will quickly take focus is MAYBE Terminal.

And you're not going to convince me that I'm "holding it wrong" by running too many apps, because when I was running Snow Leopard, I could have a LOT more apps open at once with no performance problems. Although I've seen people complain about this as far back as Leopard, the problems for me started with Lion. Others complaining about this with Lion have tried doing clean installs to no avail, BTW.

About a week ago, I broke down and bought a 16GB memory upgrade. The effects have been dramatic. I can run Parallels and all my apps at once. The system slows down noticeably while Time Machine is running, but it's usable. So far.

I've reported this to Apple, and I've been asked to provide various information and run various tools. Hopefully they're taking it seriously. For me, this problem was so easily reproducible that I think they found my computer to be a good source of information. One tool they had me run captured I/O activity. The performance problem is caused in part by a massive amount of swapping activity, and as a result, this tracing tool ended up with huge gaps in its trace while logging to the internal drive. I had to connect a USB drive just to get a workable trace. The trace was massive, and I had to get Apple to give me a temporary FTP account just to upload it.


BTW, the guy comparing the Java VM to LLVM has no clue what he's talking about. LLVM plays a role similar to GIMPLE, in that it is an intermediate representation of code being compiled, between the source code and the target machine language. Among the major advantages of LLVM is that LLVM code has well-defined textual and binary representations, allowing the front end and back end of the compiler to be run separately. You can compile to LLVM and then compile later from LLVM to the target machine. Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented. Because LLVM is a well-defined intermediate language, it has facilitated research in optimizing compilers, leading to better results, in many cases, than GCC. The reason that Java is memory-hungry has to do with the garbage-collected memory management. And while it's certainly true that interpreted languages will be slower than compiled languages, comparing C, C++, Assembly, and even Java isn't nearly so straightforward.
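To make the front-end/back-end separation concrete, here is a minimal sketch of the split pipeline described above, assuming clang and llc are on your PATH (the file names are arbitrary):

```shell
# Front end and back end run as separate steps over LLVM IR.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { puts("hello"); return 0; }
EOF

clang -S -emit-llvm hello.c -o hello.ll   # front end: C -> textual LLVM IR
llc hello.ll -o hello.s                   # back end (can run later): IR -> native assembly
clang hello.s -o hello                    # assemble and link an ordinary binary
./hello
```

Nothing in the final `hello` binary depends on a runtime VM; the IR was just an intermediate artifact between the two halves of the compiler.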
 

Puevlo

macrumors 6502a
Oct 21, 2011
633
1
Apple have already admitted that Lion lacked proper memory management. It should be fixed for Mountain Lion.
 

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71
I have an early 2011 MacBook Pro with 8GB of RAM, and I too have been plagued by Lion's memory management bugs.

I'll typically have a handful of apps open, including Safari, Mail, Smultron (a lightweight code editor), Terminal, and MS Word. Sometimes I'll also have open a news reader, and maybe an IRC client. It takes a few days, but eventually, my computer would just grind to a crawl. It would be completely unusable. Just using any application required a lot of patience because it would start beach balling while I was typing. Switching apps could take 30 seconds to a minute.

And when Time Machine would start backing up... time to walk away, because the computer basically grinds to a halt. The simplest things would take 5 to 10 minutes. I'm not exaggerating this.

Sometimes, I'd like to run Windows in Parallels. I assign 2GB to the VM. If I want to do that, I have to have NO other Mac apps running. For instance, if I want to look things up in a web browser, I have to run IE in Windows instead of Safari on the Mac host, otherwise, everything will slow down. If I run anything on the Mac host, everything slows down.

The only explanation I've been able to find for this is that the kernel is swapping out anonymous pages, favoring disk caching. And it does this even if there is only one or two apps running.

I've noticed some strange things. The OS X kernel will typically reach a gigabyte and hover around there. Safari will often go well over a GB, even if there aren't that many pages open. So those are eating up memory like it's water.

Just to emphasize this: I'm not saying that the system gets slightly slow. I'm saying that it will stop responding to user input for minutes at a time. If I'm lucky enough to get the dock to respond, I can alt-tab all I want, and the only app that will quickly take focus is MAYBE Terminal.

And you're not going to convince me that I'm "holding it wrong" by running too many apps, because when I was running Snow Leopard, I could have a LOT more apps open at once with no performance problems. Although I've seen people complain about this as far back as Leopard, the problems for me started with Lion. Others complaining about this with Lion have tried doing clean installs to no avail, BTW.

About a week ago, I broke down and bought a 16GB memory upgrade. The effects have been dramatic. I can run Parallels and all my apps at once. The system slows down noticeably while Time Machine is running, but it's usable. So far.

I've reported this to Apple, and I've been asked to provide various information and run various tools. Hopefully they're taking it seriously. For me, this problem was so easily reproducible that I think they found my computer to be a good source of information. One tool they had me run captured I/O activity. The performance problem is caused in part by a massive amount of swapping activity, and as a result, this tracing tool ended up with huge gaps in its trace while logging to the internal drive. I had to connect a USB drive just to get a workable trace. The trace was massive, and I had to get Apple to give me a temporary FTP account just to upload it.


BTW, the guy comparing the Java VM to LLVM has no clue what he's talking about. LLVM plays a role similar to GIMPLE, in that it is an intermediate representation of code being compiled, between the source code and the target machine language. Among the major advantages of LLVM is that LLVM code has well-defined textual and binary representations, allowing the front end and back end of the compiler to be run separately. You can compile to LLVM and then compile later from LLVM to the target machine. Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented. Because LLVM is a well-defined intermediate language, it has facilitated research in optimizing compilers, leading to better results, in many cases, than GCC. The reason that Java is memory-hungry has to do with the garbage-collected memory management. And while it's certainly true that interpreted languages will be slower than compiled languages, comparing C, C++, Assembly, and even Java isn't nearly so straightforward.


If you feel you have something to contribute, feel free to state what you feel is not correct. Otherwise, your statements are basically a rehash of what I said, but not contradicting anything. The ONLY thing that may seem different is this line:

"Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented."

But you are not even sure yourself. It is pretty funny the way you write it...

"IF I recall correctly...". "can (BUT NEEDN'T) use...". "I BELIEVE llvm...". "no... although it COULD..."

So you are not contributing any facts, just your opinions. I'll answer them
for you. Running the back end DOES have something to do with a virtual
machine. You obviously didn't look at the whole thread. In case you
missed it:

http://lists.cs.uiuc.edu/pipermail/l...st/006492.html

Now, in case you are not technically inclined. I'll pull the documentation for you:

"Code that is available in LLVM IR can have a wide variety of tools applied to it. For example, you can run optimizations on it (as we did above), you can dump it out in textual or binary forms, you can compile the code to an assembly file (.s) for some target, or you can JIT compile it."

See that? Binary... OR JIT compile it. Either you create a binary, OR you JIT compile it. Let's continue...


"In order to do this, we first declare and initialize the JIT. This is done by adding a global variable and a call in main:

...
let main () =
...
(* Create the JIT. *)
let the_execution_engine = ExecutionEngine.create Codegen.the_module in
...
This creates an abstract "Execution Engine" which can be either a JIT compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler for you if one is available for your platform, otherwise it will fall back to the interpreter."

See that? The ExecutionEngine is either the JIT or the interpreter (the exact same thing as in the Java and C# world). We are now inside a virtual machine, either just-in-time compiled or interpreted on the fly.
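For what it's worth, both paths can be tried from the command line with the stock LLVM tools (assuming clang and lli are installed; the file name is invented here): the same IR file can either be linked into a static binary or handed to lli, which JIT-compiles when the platform supports it and otherwise falls back to the interpreter (`-force-interpreter` forces the latter):

```shell
cat > answer.c <<'EOF'
int main(void) { return 42; }
EOF
clang -S -emit-llvm answer.c -o answer.ll   # C -> textual LLVM IR

lli answer.ll                               # execute the IR via the JIT
echo "JIT exit status: $?"
lli -force-interpreter answer.ll            # execute the IR via the interpreter
echo "interpreter exit status: $?"
```

Both runs exit with status 42, the program's return value, regardless of which execution engine was used.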

Virtual machines are memory hogs due to supporting garbage collection and automatic reference counting, in addition to implementing a whole CPU virtually. In addition, the LLVM backend IS a virtual machine. It needs to be in order to do JIT compilation and interpretation of the LLVM IR. So any time that LLVM backend runs, IT IS IN VIRTUAL MACHINE mode.

In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.nab...-td683717.html

See that? 5 TIMES the required memory. The kernel pulls drivers into itself, and if a driver needs to run inside a virtual machine, it is going to eat up memory fast. If something takes 1GB to compile with GCC but now takes 5GB when going with LLVM, how is a Mac that only has 4GB of memory going to come up with that memory?

No amount of memory management will work if there is no memory to manage. Why? Because it is all EATEN UP by the compiler! The system is going to go to the hard drive to offload some stuff so it has some real main memory to work with.

Now the main point of this post is the baggage Clang left in the LLVM IR. It is more abstracted than an efficient C compiler's intermediate state. Trying to support all that garbage collection, reference counting, etc. removes you so far from the CPU instructions that by the time you get to CPU machine code generation, the result ends up NOT faster. Which is what the benchmarks show. 400% IS NOT a small problem. It shows up in games, which are a VERY IMPORTANT criterion when people buy computers (especially ones running Windows or OS X).

Lastly, when did Apple say they goofed up on the memory management? Provide references. It looks like the only one providing facts and references is me; the rest is just guessing and trolling. So what should Apple do? Dump LLVM? It can simply start moving more and more pieces away from the JIT or interpreter. Start with all-static binaries. Get away from Objective-C and use C; later, start moving pieces into assembly for things that are not going to be changed. Objective-C is too slow for performance-critical areas like operating systems (message passing is just plain slower than procedural calls). Start treating performance as a higher criterion when selecting languages, compilers, etc. in the kernel and operating system, to start moving away from SLOW stuff. This includes dumping LLVM if LLVM's goal is starting to go after C#'s multiple-languages, multiple-targets approach rather than performance. There are tradeoffs when you try to be everything to everyone. Xbox third-party games using C# were a failure. Battlefield and Call of Duty and all AAA games run in low-level C/C++ or assembly on PS3, PC, and Xbox. Imagine Apple putting a slow layer between games and the hardware. If that happens, no amount of coding on top of OS X is going to reach AAA games, because the OS is slowing them down! This is why Windows games run faster than OS X games. It's the operating system's fault.

Remember NeXT? There was a period when they were all excited about writable optical disks instead of hard drives. Guess what happened? Yep, they dumped it. It was just plain too slow. NeXT also failed as a company: overpriced and slow. Moving to hard drives in later models didn't save them, and they had to be merged into Apple. Similar with the CPU (Motorola not being able to keep up in performance with Intel). So instead of making mistakes again and again, just plain put performance in as a criterion from the beginning. Corel failed trying to move to Java (too slow). Android lacks AAA games because of the Java requirement, which is so sad for the game developers using C/C++ on it. They not only need to deal with two languages (the slow Java OS/wrappers and C/C++), but also do all of the operating system's job of maintaining compatibility between different devices.

Here is the post again; please state what you think is wrong and provide references:

I used YOUR link. None of the Clang results were faster than GCC by 10% except the one "compiling" benchmark.

In YOUR link, it showed GCC faster than Clang by 10% on average.

In YOUR link, it ALSO showed GCC faster than Clang by 20%.

In YOUR link, it also showed GCC faster than Clang by 400%, not on just one but on two benchmarks.

Again, NONE of the benchmarks in YOUR link showed Clang 10% or more faster than GCC EXCEPT the "compiling" one.

I will accept 10% as maybe timing differences or errors on either side, but 20% and 400% are no laughing matter. Obviously I'm not here to argue with you. Perhaps you are part of LLVM, and if you feel you must have the last say on this, go ahead; I am sure others can look at the benchmarks themselves. I am just one of the OS X users, with no ulterior motive other than wanting a faster operating system. What you say won't change the fact that Lion is damn slow, and if you wish to push the blame elsewhere, at least read my earlier responses and acknowledge the problem exists. 400% is not a minor problem. It is a game-breaking, people-looking-elsewhere-for-another-platform kind of problem.

And before you start blaming the slowness on other things (like memory manager), note this fact:

On Snow Leopard, the default compiler is GCC 4.2 (WITH NO LLVM).
On Lion, the default compiler is LLVM-GCC, then later Clang-LLVM (because LLVM-GCC was actually half broken).

So the major change from Snow Leopard to Lion is the mandatory use of LLVM. In the case of the LLVM backend producing code that runs in a virtual machine, it has support for grabbing chunks of memory to do memory allocation (it needs to in order to do automatic reference counting and garbage collection). Kernel bloat? Could it be that LLVM, in its support of interpreted-language features, carried this baggage, which resulted in bloated and slow code even if you are compiling static Clang or GCC code? Remember, the LLVM intermediate IR (the byte code, in Java terms) is VERY FAR REMOVED from standard GCC intermediate code. It is more abstract, to the point where you can actually run the LLVM IR inside a virtual machine (no different than C# or Java). So the process from LLVM IR to a regular .o or a regular binary executable is not as clean-cut as GCC's.

In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.nab...-td683717.html

Remember how in OpenGL some code was left in an intermediate state? When that kicks in, the LLVM compiler starts up. We don't know what other parts of OS X were left in this state that REQUIRE compilation at runtime, i.e. JIT compilation (like Java's JIT). Perhaps more and more pieces in Snow Leopard, culminating in a full-blown LLVM requirement in Lion. Only in Lion was full LLVM required everywhere. This could lead to kernel bloat, because OpenGL (the driver) runs near the kernel level when this non-compiled code needs to be JIT compiled. It could be other low-level pieces too. Remember, this is 5 TIMES the required memory. So something that normally requires 1GB would now require 5GB at runtime. A Mac Mini only has 4GB, and so do lots of earlier Mac machines. A lot of disk thrashing will occur as more things are moved back and forth to the hard drive to accommodate the startup of the LLVM virtual machine backend just to compile, and if a virtual machine is used, that memory never dissipates.
 

a3vr

macrumors member
Jun 28, 2012
33
11
I've also experienced this memory issue; it tends to happen with Lightroom open while doing large imports and processing. Lion doesn't release the inactive memory, and when that happens it basically all goes to page outs, gigs' worth in a matter of minutes. A quick purge and everything goes back to normal. With that said, it's only happened on a couple of occasions and is rarely an issue, but it's still a memory problem that needs to be fixed.
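For reference, the "quick purge" above is the stock macOS purge(8) command; a minimal before/after check might look like this (macOS only; on older releases purge ships with the developer tools, and newer releases may require sudo):

```shell
vm_stat | grep -E 'free|inactive|Pageouts'   # before: note free vs inactive pages
purge                                        # flush the disk cache and inactive pages
vm_stat | grep -E 'free|inactive|Pageouts'   # after: free page count should grow
```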
 

Michaelgtrusa

macrumors 604
Oct 13, 2008
7,900
1,821
Then why did Apple sell Lion to the public? Well, for the same reason they sold the 2009 27" iMac and the old Time Capsules, etc.
 

ElectricSheep

macrumors 6502
Feb 18, 2004
498
4
Wilmington, DE
I've also experienced this memory issue; it tends to happen with Lightroom open while doing large imports and processing. Lion doesn't release the inactive memory, and when that happens it basically all goes to page outs, gigs' worth in a matter of minutes. A quick purge and everything goes back to normal. With that said, it's only happened on a couple of occasions and is rarely an issue, but it's still a memory problem that needs to be fixed.

This is exactly the behavior you will see if an application is leaking memory. As I have said before, inactive memory is still mapped to valid objects allocated by running applications. They have not been accessed recently, but the kernel cannot simply throw them out without destroying the integrity of the application runtime. They must be paged out to disk before the memory can be moved to the free list.
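As a toy, platform-neutral illustration of the point above: a process that keeps references to buffers it no longer uses (the classic leak pattern) leaves those pages mapped to valid objects, so the kernel can only page them out, never simply discard them. The names here are invented for the sketch:

```python
leaked = []  # references retained "accidentally" -- the classic leak pattern

def process_batch(size):
    buf = bytearray(size)   # working memory for one batch
    leaked.append(buf)      # bug: the reference is kept after the batch is done
    return len(buf)

for _ in range(100):
    process_batch(1 << 20)  # 1 MiB per batch

# All 100 MiB is still mapped to valid objects; the kernel cannot just
# reclaim these pages as free -- it can only page them out to disk.
print(len(leaked), "buffers still referenced")
```

Running this prints `100 buffers still referenced`: from the OS's point of view the memory is idle but live, which is exactly why "inactive" does not mean "freeable".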
 

nontroppo

macrumors 6502
Mar 11, 2009
430
22
Why only some people?

And when Time Machine would start backing up... time to walk away, because the computer basically grinds to a halt. The simplest things would take 5 to 10 minutes. I'm not exaggerating this.

I do wonder if Time Machine is behind a lot of these problems. I've never seen swapping in Lion on an 8GB 2010 MBP or a large block of different Mac Pros (from 4 to 12GB RAM), running heavy computational analyses in interpreted Matlab (a Java-based behemoth), Parallels, Creative Suite, Office, etc. -- but we never use Time Machine.

No one has actually discovered what has changed in Mountain Lion; it would be great to understand the technical changes that seem to have alleviated the problems for some of you...
 

Paradoxally

macrumors 68000
Feb 4, 2011
1,987
2,898
ML is definitely better. Look at my MB Pro 13" mid-2009, I just upgraded to 8 GB last week because 4 was just not enough for anything after SL, and opened a ton of apps just to check how memory was doing.

It's pretty amazing.



----------

I'm not sure what to do. Running iTunes, Chrome (with ~15 tabs), Dictionary, Transmission, Spotify, Terminal, and SublimeText 2 is ostensibly too much for my MacBook Pro to handle. I'm at my wit's end with Lion, and nobody has been able to offer a solution.

There is, it's called Mountain Lion. :) You can get it tomorrow (most likely). Be sure to have 8 GB of RAM (as I said before, 4 GB is not enough for anything above SL because you'll page out a lot).
 

RoelJuun

macrumors 6502
Aug 31, 2010
450
209
Netherlands
Wirelessly posted

Paradoxally said:
ML is definitely better. Look at my MB Pro 13" mid-2009, I just upgraded to 8 GB last week because 4 was just not enough for anything after SL, and opened a ton of apps just to check how memory was doing.

It's pretty amazing.



----------

I'm not sure what to do. Running iTunes, Chrome (with ~15 tabs), Dictionary, Transmission, Spotify, Terminal, and SublimeText 2 is ostensibly too much for my MacBook Pro to handle. I'm at my wit's end with Lion, and nobody has been able to offer a solution.

There is, it's called Mountain Lion. :) You can get it tomorrow (most likely). Be sure to have 8 GB of RAM (as I said before, 4 GB is not enough for anything above SL because you'll page out a lot).

You do realize that it's ridiculous to need more than 4 gigs of RAM for basic functionality? And Apple still sells computers with 2 gigs of RAM and a 5400 rpm disk...
 

nuckinfutz

macrumors 603
Jul 3, 2002
5,542
406
Middle Earth
Wirelessly posted



You do realize that it's ridiculous to need more than 4 gigs of RAM for basic functionality? And Apple still sells computers with 2 gigs of RAM and a 5400 rpm disk...

It is ridiculous. Luckily, we don't need 4 GB of RAM for Mountain Lion. Of course it is recommended, but not necessary.
 