I have an early 2011 MacBook Pro with 8GB of RAM, and I too have been plagued by Lion's memory management bugs.
I'll typically have a handful of apps open, including Safari, Mail, Smultron (a lightweight code editor), Terminal, and MS Word. Sometimes I'll also have a news reader open, and maybe an IRC client. It takes a few days, but eventually my computer would just slow to a crawl. It would be completely unusable. Just using any application required a lot of patience, because it would start beach-balling while I was typing. Switching apps could take 30 seconds to a minute.
And when Time Machine would start backing up... time to walk away, because the computer would basically grind to a halt. The simplest things would take 5 to 10 minutes. I'm not exaggerating.
Sometimes I'd like to run Windows in Parallels, with 2GB assigned to the VM. If I want to do that, I have to have NO other Mac apps running. For instance, if I want to look things up in a web browser, I have to run IE in Windows instead of Safari on the Mac host; otherwise, everything slows down. If I run anything on the Mac host, everything slows down.
The only explanation I've been able to find for this is that the kernel is swapping out anonymous pages in favor of the disk cache. And it does this even if there are only one or two apps running.
I've noticed some strange things. The OS X kernel's memory footprint will typically reach a gigabyte and hover around there. Safari will often go well over a GB, even if there aren't that many pages open. So those two are eating up memory like it's water.
Just to emphasize this: I'm not saying that the system gets slightly slow. I'm saying that it will stop responding to user input for minutes at a time. If I'm lucky enough to get the dock to respond, I can alt-tab all I want, and the only app that will quickly take focus is MAYBE Terminal.
And you're not going to convince me that I'm "holding it wrong" by running too many apps, because when I was running Snow Leopard, I could have a LOT more apps open at once with no performance problems. Although I've seen people complain about this as far back as Leopard, the problems for me started with Lion. Others complaining about this with Lion have tried doing clean installs to no avail, BTW.
About a week ago, I broke down and bought a 16GB memory upgrade. The effects have been dramatic. I can run Parallels and all my apps at once. The system slows down noticeably while Time Machine is running, but it's usable. So far.
I've reported this to Apple, and I've been asked to provide various information and run various tools. Hopefully they're taking it seriously. For me, this problem was so easily reproducible that I think they found my computer to be a good source of information. One tool they had me run captured I/O activity. The performance problem is caused in part by a massive amount of swapping activity, and as a result, this tracing tool ended up with huge gaps in its trace while logging to the internal drive. I had to connect a USB drive just to get a workable trace. The trace was massive, and I had to get Apple to give me a temporary FTP account just to upload it.
BTW, the guy comparing the Java VM to LLVM has no clue what he's talking about. LLVM plays a role similar to GIMPLE, in that it is an intermediate representation of the code being compiled, sitting between the source code and the target machine language. Among the major advantages of LLVM is that LLVM code has well-defined textual and binary representations, allowing the front end and back end of the compiler to be run separately: you can compile to LLVM and then compile later from LLVM to the target machine. Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented. Because LLVM is a well-defined intermediate language, it has facilitated research in optimizing compilers, leading in many cases to better results than GCC.

The reason Java is memory-hungry has to do with its garbage-collected memory management. And while it's certainly true that interpreted languages will be slower than compiled languages, comparing C, C++, assembly, and even Java isn't nearly so straightforward.
If you feel you have something to contribute, feel free to state what you feel is not correct. Otherwise, your statements are basically a rehash of what I said, but not contradicting anything. The ONLY thing that may seem different is this line:
"Running the back end later is essentially JIT, but that has nothing to do with using a virtual machine. IIRC, unlike Java, which CAN (but needn't) use an on-demand JIT compiler, I believe LLVM finishes the whole compilation step just before running it. There is no real-time compiling, although it could probably be implemented."
But you are not even sure yourself. It is pretty funny the way you write it...
"IF I recall correctly...". "can (BUT NEEDN'T) use...". "I BELIEVE llvm...". "no... although it COULD..."
So you are not contributing any facts, just your opinions. I'll answer them for you. Running the back end DOES have something to do with a virtual machine. You obviously didn't look at the whole thread. In case you missed it:
http://lists.cs.uiuc.edu/pipermail/l...st/006492.html
Now, in case you are not technically inclined, I'll pull up the documentation for you:
"Code that is available in LLVM IR can have a wide variety of tools applied to it. For example, you can run optimizations on it (as we did above), you can dump it out in textual or binary forms, you can compile the code to an assembly file (.s) for some target, or you can JIT compile it."
See that? Binary... OR JIT compile it. Either you create a binary, OR you JIT compile it. Let's continue...
"In order to do this, we first declare and initialize the JIT. This is done by adding a global variable and a call in main:
...
let main () =
...
(* Create the JIT. *)
let the_execution_engine = ExecutionEngine.create Codegen.the_module in
...
This creates an abstract "Execution Engine" which can be either a JIT compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler for you if one is available for your platform, otherwise it will fall back to the interpreter."
See that? ExecutionEngine is either the JIT or the interpreter (the exact same thing as in the Java and C# world). We are now inside a virtual machine, either just-in-time compiled or interpreted on the fly.
Virtual Machines are memory hogs due to supporting garbage collection and automatic reference counting, in addition to implementing a whole CPU virtually. In addition, the LLVM backend IS a virtual machine. It has to be, in order to do JIT compilation and interpretation of the LLVM IR. So anytime that LLVM backend runs, IT IS IN VIRTUAL MACHINE mode.
In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.nab...-td683717.html
See that? 5 TIMES the required memory. The kernel pulls drivers into itself, and if a driver needs to run inside a virtual machine, it is going to eat up memory fast. If something takes 1GB to compile in GCC but now takes 5GB to compile when going with LLVM, how is a Mac that only has 4GB of memory going to come up with that memory?
No amount of memory management will work if there is no memory to manage. Why? Because it is all EATEN UP by the compiler! The system is going to go to the hard drive to offload some stuff so it has some real main memory to work with.
Now, the main point of this post is the baggage Clang leaves in the LLVM IR. It is more abstract than an efficient C compiler's intermediate state. Trying to support all that garbage collection, reference counting, etc. takes you so far from the CPU instructions that by the time you get to machine code generation, it ends up NOT faster, which the benchmarks show. 400% IS NOT a small problem. It shows up in games, which are a VERY IMPORTANT criterion when people buy computers (especially ones running Windows or OS X).
Lastly, when did Apple say they goofed up on the memory management? Provide references. It looks like the only one providing facts and references is me; the rest is just "guesses" and trolling. So what should Apple do? Dump LLVM? It can simply start moving more and more pieces away from the JIT and interpreter.
Start with all-static binaries. Get away from Objective-C and use C, and later start moving pieces into assembly for things that are not going to change. Objective-C is too slow for performance-critical areas like operating systems (message passing is just plain slower than procedural calls). Start treating performance as a higher criterion when selecting the languages, compilers, etc. used in the kernel and operating system, and start moving away from SLOW stuff. That includes dumping LLVM if LLVM's goals are starting to chase C#'s "multiple languages, multiple targets" instead of performance. There are tradeoffs when you try to be everything to everyone. Third-party Xbox games using C# are a failure.
Battlefield, Call of Duty, and all the AAA games run in low-level C/C++ or assembly on PS3, PC, and Xbox. Imagine Apple putting a slow layer between games and the hardware. If that happens, no amount of coding on top of OS X is going to reach AAA games, because the OS is slowing them down! This is why Windows games run faster than OS X games. It's the operating system's fault.
Remember NeXT? There was a period where they were all excited about writable optical disks instead of hard drives. Guess what happened? Yep, they dumped it. It was just plain too slow. NeXT also failed as a company: overpriced and slow. Moving to hard drives in later models didn't save them, and they ended up being merged into Apple. Similar story with the CPU (Motorola not being able to keep up with Intel in performance). So instead of making the same mistakes again and again, just plain put performance in as a criterion from the beginning. Corel failed trying to move to Java (too slow). Android lacks AAA games because of the Java requirement, which is so sad for the game developers using C/C++ on it: they not only have to deal with two languages (the slow Java OS/wrappers plus C/C++), but also do the operating system's job of maintaining compatibility between different devices.
Here is the post again. Please state what you think is wrong, and provide references:
I used YOUR link. Clang was not faster than GCC by 10% in any benchmark except the "compiling" one.
In YOUR link, GCC was faster than Clang by 10% on average.
In YOUR link, GCC was ALSO faster than Clang by 20%.
In YOUR link, GCC was faster than Clang by 400%, and not on just one but two benchmarks.
Again, NONE of the benchmarks in YOUR link showed Clang 10% or more faster than GCC EXCEPT the "compiling" one.
I will accept 10% as maybe timing differences or errors on either side. But 20% and 400% are no laughing matter. Obviously I'm not here to argue with you. Perhaps you are part of the LLVM project, and if you feel you must have the last say on this, go ahead. I am sure others can look at the benchmarks themselves. I am just one of the OS X users, with no ulterior motive other than wanting a faster operating system. What you say won't change the fact that Lion is damn slow, and if you wish to push the blame elsewhere, at least read my earlier responses and acknowledge the problem exists. 400% is not a minor problem. It is game-breaking, the kind of problem that sends people looking elsewhere for another platform.
And before you start blaming the slowness on other things (like the memory manager), note this fact:
On Snow Leopard, the default compiler is GCC 4.2 (WITH NO LLVM).
On Lion, the default compiler is GCC-LLVM, and later Clang-LLVM (because GCC-LLVM was actually half broken).
So the major change from Snow Leopard to Lion is the mandatory use of LLVM.
In the case of the LLVM backend producing code that runs in a virtual machine, it has support for grabbing chunks of memory to do its own memory allocation (it needs to in order to do automatic reference counting and garbage collection). Kernel bloat? Could it be that LLVM, in supporting interpreted-language features, carried this baggage along, resulting in bloated and slow code even when you are compiling static Clang or GCC code? Remember, the LLVM IR (like bytecode in Java) is VERY FAR REMOVED from standard GCC intermediate code. It is more abstract, to the point where you can actually run the LLVM IR inside a virtual machine (no different from C# or Java). So the process from LLVM IR to a regular .o or a regular binary executable is not as clean-cut as GCC's.
In addition, LLVM takes about 5 times more main memory than GCC:
http://clang-developers.42468.n3.nab...-td683717.html
Remember how in OpenGL some code was left in an intermediate state? When that kicks in, the LLVM compiler starts up. We don't know what other parts of OS X were left in this state that REQUIRE compilation at runtime, i.e. JIT compilation (like Java's JIT). Perhaps more and more pieces were left that way in Snow Leopard, culminating in the full-blown LLVM requirement in Lion; only in Lion was LLVM required everywhere. This could lead to kernel bloat, because OpenGL (a driver) runs near the kernel level when this non-compiled code needs to be JIT compiled. It could be other low-level pieces too. Remember, this is 5 TIMES the required memory, so something that normally requires 1GB would now require 5GB at runtime. A Mac Mini only has 4GB, and so do lots of earlier Macs. A lot of disk thrashing will occur as things are moved back and forth to the hard drive to accommodate the startup of the LLVM virtual machine backend just to compile, and if the virtual machine stays in use, that memory never dissipates.