
VinegarTasters

macrumors 6502
Nov 20, 2007
278
71

There are a lot of inherently bad things happening underneath it all.

First of all, you need to understand why OSX gets into these "laggy" situations in the first place. One is the move to LLVM, where the compiler is not optimized for performance but for supporting multiple languages. It even has "virtual machine" in its name! It is a byproduct of competition with .NET, which was a competitor to Java. These technologies are BAD for performance. They force slow-as-hell garbage collection and automatic reference counting on programmers to save noobs from leaking memory: you just create objects without worrying about when to release them (free up the memory they use). The garbage collector runs when you eventually take up all the main memory. What it does is GO THROUGH ALL allocated memory and release whatever is no longer held by any active program. It is a slow process, and it takes a lot of CPU time. No AAA game can survive it, so no AAA games will use Java or .NET languages (like C#). Automatic reference counting is supposed to be a faster variation of garbage collection, but it is the SAME THING. The compiler will insert code "thinking" it is the right time to create and release memory. To be safe, it will usually only release when the program exits.
That is what most Java programs do anyway: grab all the memory and don't run the garbage collector until all virtual memory has been used up. It never runs. When it does, the game crawls and people notice the bad performance of Java, so the garbage collector essentially does nothing until ALL memory is used up.
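For what it's worth, here is a minimal sketch of the "noob leak" being described, in plain Objective-C under manual reference counting (compile with clang -fno-objc-arc -framework Foundation); garbage collection and ARC both exist to make the missing release below unnecessary:

#import <Foundation/Foundation.h>

int main(void) {
    for (int i = 0; i < 1000; i++) {
        NSMutableArray *a = [[NSMutableArray alloc] init]; // refcount 1
        [a addObject:@(i)];
        // No [a release] here: each pass through the loop strands one
        // array -- exactly the leak a collector would eventually sweep up.
    }
    return 0;
}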

Now that you have read this far, how does it relate to Mountain Lion? OSX uses Objective-C, which has object-oriented features patched onto regular C. Instead of using C++-style method calls, Objective-C uses message passing. SLOW! It needs to parse the message to find out which method on which object to call, whereas C++ just has pointers to the actual object: no parsing. And the biggest bummer? Garbage collection is the default in Objective-C. You don't release objects; you just eat up all the memory until the garbage collector runs (the same Java/.NET-style technology that is bad for performance). To save themselves, Apple is trying to get away from garbage collection by using ARC (Automatic Reference Counting) in Mountain Lion, where garbage collection is deprecated (in Lion it is not deprecated). Here the situation gets bad. Now that garbage collection is no longer the default, the model has changed. Under ARC, programmers need to explicitly tell the OS when it is OK to release memory, or Mountain Lion will assume they want to keep using it. If your program never tells OSX that memory can be released, it is NOT going to be released. So all programs that were written for Lion and earlier (using ARC only) will keep leaking memory on Mountain Lion, because they don't even have code telling Mountain Lion it is OK to free memory. BINGO! Why is the disk swapping so much? Why am I out of memory?
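To make the dispatch point concrete, here is a small sketch of what an Objective-C message send compiles to. Strictly speaking it is a runtime selector lookup through objc_msgSend rather than literal text parsing; whether that indirection is meaningfully slow is exactly what's being argued:

#import <Foundation/Foundation.h>

@interface Greeter : NSObject
- (void)greet;
@end

@implementation Greeter
- (void)greet { NSLog(@"hello"); }
@end

int main(void) {
    @autoreleasepool {
        Greeter *g = [[Greeter alloc] init];
        // The next line compiles to objc_msgSend(g, @selector(greet)):
        // the method is found in the class's method table at runtime.
        // A non-virtual C++ call would instead jump to a fixed address.
        [g greet];
    }
    return 0;
}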

Now this is not the main problem. The main problem is that garbage collection and ARC are still supported, and the fact that OSX still uses Objective-C, which is stuck with such slow technology from a bygone era. Message passing is too slow. Garbage collection is slow. ARC is slow. Only C++ and C with manual allocation and release are fast. Do you know why garbage collection is not supported in iOS? Yep: bad for mobile battery life, with the CPU draining all the juice, and low main memory. Instead of fixing the problem (bad technology), they are trying to patch the technology. The move from garbage collection to ARC shifts more of the responsibility for memory management back onto programmers, back toward the original way programmers did it in the first place (manual management using C/C++). But the problem is that Objective-C is stuck with this ARC that is supposed to be an improvement, yet is still not as fast or as good at memory management as plain programmer-created/released memory. The only way is to go to the lower level and use C/C++, where you can actually touch the memory and malloc/free it yourself. Since LLVM is supposed to support C and C++, there is still hope if they start moving chunks of the operating system to C/C++ and remove all the Objective-C code that relies on ARC or garbage collection, which keeps around a virtual machine handling the memory management.
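For comparison, this is the manual C-style management the post is advocating (valid C, and therefore also valid Objective-C): the programmer alone decides the moment memory is returned, with no collector or compiler-inserted counting involved.

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    double *samples = malloc(1024 * sizeof *samples); // explicit allocation
    if (samples == NULL) return 1;
    samples[0] = 3.14;
    printf("first sample: %f\n", samples[0]);
    free(samples); // explicit, immediate release -- no GC, no ARC
    return 0;
}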

.NET and these interpreted technologies are so bad for business that XNA is being dumped and Windows Phone 8 no longer requires it. You can now do C++ directly on top of Direct3D, instead of that SLOW .NET C# layer that destroyed their third-party gaming business on the Xbox 360. Yes, it is that bad. Apple will try to cover it up, but eventually the technology will show itself in ugly places. All these complaints about performance are a byproduct of band-aid fixes.
 
Last edited:

GGJstudios

macrumors Westmere
May 16, 2008
44,556
950
If inactive memory is indeed available to anyone, OS X should never create virtual memory (and thus increase the page out count) in the first place.
That's not true because there isn't always sufficient free or inactive memory available, thus page outs occur. If memory demands exceed all available free and inactive memory, paging is to be expected.
I never used all the free memory, but after the update my Mac was running so slow I checked, and I had 75MB of free memory and 4GB inactive? I think there's a problem...
No, that doesn't represent a problem. It simply shows that you used most of your free memory at some time and those apps have been closed, leaving the memory available to other apps. It's marked as inactive to improve performance, in case you re-launch the same apps. If you don't, your inactive memory is just like free memory.
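If you want to watch these pools yourself, here is a small sketch using the Mach host_statistics call, which is where tools like vm_stat and Activity Monitor get these numbers. It simply prints the page counts for each state:

#include <mach/mach.h>
#include <stdio.h>

int main(void) {
    vm_statistics_data_t vm;
    mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
    kern_return_t kr = host_statistics(mach_host_self(), HOST_VM_INFO,
                                       (host_info_t)&vm, &count);
    if (kr != KERN_SUCCESS) {
        fprintf(stderr, "host_statistics failed: %d\n", kr);
        return 1;
    }
    printf("free: %u  active: %u  inactive: %u  wired: %u  (pages)\n",
           vm.free_count, vm.active_count, vm.inactive_count, vm.wire_count);
    printf("page outs so far: %u\n", vm.pageouts);
    return 0;
}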
 

dyn

macrumors 68030
Aug 8, 2009
2,708
388
.nl
Actually, it swaps when it runs out of physical memory, not virtual.
Actually, no, because you are quoting only half the story:

Virtual memory allows an operating system to escape the limitations of physical RAM. The virtual memory manager creates a logical address space (or “virtual” address space) for each process and divides it up into uniformly-sized chunks of memory called pages. The processor and its memory management unit (MMU) maintain a page table to map pages in the program’s logical address space to hardware addresses in the computer’s RAM. When a program’s code accesses an address in memory, the MMU uses the page table to translate the specified logical address into the actual hardware memory address. This translation occurs automatically and is transparent to the running application.
This is what precedes your quote.

Maybe you should go read up on how it really works.

The system never runs out of "virtual memory addresses" as you call them.
Ah, in that case you should start reading that link, because with that last sentence you show you have not done so at all... It is not me who calls it that; it is Apple. Big difference!

That documentation also says there is a limitation:
Both OS X and iOS include a fully-integrated virtual memory system that you cannot turn off; it is always on. Both systems also provide up to 4 gigabytes of addressable space per 32-bit process. In addition, OS X provides approximately 18 exabytes of addressable space for 64-bit processes. Even for computers that have 4 or more gigabytes of RAM available, the system rarely dedicates this much RAM to a single process.

To give processes access to their entire 4 gigabyte or 18 exabyte address space, OS X uses the hard disk to hold data that is not currently in use. As memory gets full, sections of memory that are not being used are written to disk to make room for data that is needed now. The portion of the disk that stores the unused data is known as the backing store because it provides the backup storage for main memory.


Each process has a logical (virtual) address space created for it by the virtual memory manager. This space is chopped up into 4KB pages. This logical address space is always available to the process.

What the system can do is run out of physical RAM. That is when these pages can get swapped out of memory onto disk and vice versa.

So this sentence of yours:

The OS will swap when it runs out of virtual memory addresses.

Is completely incorrect.

S-
Nope, that is not how it works, as you can clearly read in the documentation. When the system is out of physical memory, it will shift memory around, which will eventually lead to swapping. This is explained a couple of times (!) in the documentation.
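The distinction is easy to demonstrate. A 64-bit process can map far more address space than the machine has RAM; physical pages only materialize when touched, and it is those resident pages that get written to the backing store under pressure. A minimal sketch in plain C:

#include <sys/mman.h>
#include <stdio.h>

int main(void) {
    size_t len = (size_t)64 << 30; // reserve 64 GB of address space
    void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    // The mapping succeeds on a machine with far less RAM: no physical
    // page exists yet. The write below faults in a single 4 KB page;
    // under memory pressure that page can later be paged out to disk.
    ((char *)p)[0] = 1;
    printf("mapped %zu bytes at %p\n", len, p);
    munmap(p, len);
    return 0;
}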
 

sidewinder

macrumors 68020
Dec 10, 2008
2,425
130
Northern California
dyn,

Admit you are wrong!!! Here is what you said one more time:

"The OS will swap when it runs out of virtual memory addresses."

Please note that you said the OS runs out of virtual memory addresses, not a process. Also note you said nothing about physical memory.

The OS does not "run out" of virtual address space. The OS can assign virtual address space to as many processes as can be run.

Each process is limited to the size of virtual address space available to it. Each 32-bit process has 4 gigabytes of virtual address space. 64-bit processes have ~18 exabytes of virtual address space. It is logical address space but it has a finite size. If a process were to use up all its virtual address space, that's it. There would be no more to assign to that process.

Let's take a 32-bit process. It is assigned 4GB of virtual address space. No matter what, a 32-bit process cannot access more than 4GB of address space. If a 32-bit process uses up its 4GB of virtual address space, whether the pages are in real memory or paged out, it cannot get any more address space.

The OS swaps (pages) when physical memory limits come into play. Not when a process utilizes the entire virtual address space assigned to it.
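A quick sketch of that limit, intended for a 32-bit build (-m32 on a toolchain of that era): allocation fails at the address-space ceiling regardless of how much RAM or swap the machine has.

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    size_t total = 0;
    // Deliberately leak 1 MB chunks until allocation fails. In a 32-bit
    // process this stops near the 4 GB ceiling; a 64-bit process would
    // page heavily long before exhausting its ~18 EB address space.
    while (malloc(1 << 20) != NULL) {
        total += 1 << 20;
    }
    printf("allocation failed after %zu bytes\n", total);
    return 0;
}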

S-
 

Jenni8

macrumors member
Apr 27, 2011
87
0
CA
There are a lot of inherently bad things happening underneath it all.

First of all, you need to understand why OSX gets into these "laggy" situations in the first place. One is the move to LLVM, where the compiler is not optimized for performance but for supporting multiple languages. ... All these complaints about performance are a byproduct of band-aid fixes.

To get this straight, you are saying that the operating system isn't automatically dumping its inactive memory as it used to, unless the program is closed or is set up to dump its own memory when needed. And that's why a lag occurs: the inactive memory doesn't "appear" available, because the software isn't written to release it.

Now here is a question: how do we get around this without using some app to free all the inactive memory? That isn't really something I like to do, because it makes the rest of the system run slow when it comes to Launchpad and such. I think I've fixed some of my memory issues until I'm ready to get more RAM. But even if I get more RAM, will I still have issues running Photoshop and Lightroom together at 12GB?
 

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71
To get this straight, you are saying that the operating system isn't automatically dumping its inactive memory as it used to, unless the program is closed or is set up to dump its own memory when needed. And that's why a lag occurs: the inactive memory doesn't "appear" available, because the software isn't written to release it.

Now here is a question: how do we get around this without using some app to free all the inactive memory? That isn't really something I like to do, because it makes the rest of the system run slow when it comes to Launchpad and such. I think I've fixed some of my memory issues until I'm ready to get more RAM. But even if I get more RAM, will I still have issues running Photoshop and Lightroom together at 12GB?


They are trying to move the iOS methodology into Mountain Lion, I think. In iOS, when you close programs their state is saved to flash. So in Mountain Lion, every program will not actually be cleared, but stays stuck in memory/virtual memory. So soon everything will fill up to the brim.

Well, if you want, you can always turn off virtual memory (though things may crash when you run out of memory). That way you interrupt the "iOS" behavior. Or you can get a program to act as a sort of garbage collector, grabbing memory until all the other programs get dumped from virtual memory, and then freeing itself.
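For what it's worth, the "program acting as a garbage collector" idea is roughly what the various "free memory" utilities do, and what the purge(8) command bundled with the developer tools does more politely. A rough sketch:

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

int main(void) {
    size_t len = (size_t)2 << 30; // grab 2 GB; tune to your machine
    char *p = malloc(len);
    if (p == NULL) { fprintf(stderr, "malloc failed\n"); return 1; }
    // Touching every page forces real allocation, pressuring the kernel
    // to evict inactive pages held by other programs...
    memset(p, 1, len);
    // ...then hand the whole block back as free memory.
    free(p);
    puts("done; check free memory in Activity Monitor");
    return 0;
}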
 

Ledgem

macrumors 68020
Jan 18, 2008
2,042
936
Hawaii, USA
They are trying to move the iOS methodology into Mountain Lion, I think. In iOS, when you close programs their state is saved to flash. So in Mountain Lion, every program will not actually be cleared, but stays stuck in memory/virtual memory. So soon everything will fill up to the brim.
If you monitor your memory usage regularly, you'll notice that the behavior is not as you are describing it.
 

Ledgem

macrumors 68020
Jan 18, 2008
2,042
936
Hawaii, USA
Err... the keyword is "trying" to be like iOS. They get stuck with large caches, though. Here is another way of restating the "problem":

http://www.bechte.de/tag/inactive-memory/
There's a lot of hysteria about Apple porting features between iOS and OS X, and I'm a bit worried that's where you're going with this. There is no reason to compare iOS and OS X in terms of memory management, because the hardware each is designed for differs quite significantly, as do the multitasking expectations. iOS seems content to keep programs loaded in memory until the memory fills up, given the expectation that you're only using one at a time; as of OS X 10.7, the default behavior was that programs would automatically be closed (taken out of memory) if the system detected that they weren't being used. I'm not sure if it also required that all documents be closed; I never had that behavior enabled. Suffice it to say, when a program is closed, it's closed. Some remnants will remain as inactive memory, but otherwise much is recycled back to "free" memory.
 

petsounds

macrumors 65816
Jun 30, 2007
1,493
519
Well, I updated from Snow Leopard to Mountain Lion a few weeks ago, and so far ML seems more competent at memory management than SL. Not a lot, but I haven't had to do a purge from the command line like I often did under SL. Though I find it strange that I'm looking at Activity Monitor right now and see 170MB of swap used but no page outs. Usually those went hand in hand.

I think for me I suffer from RAM problems because OS X seems to hold on to app memory a lot longer than it should. This is fine if you only run two or three applications, but I use a wide range of memory-intensive programs each day -- Photoshop, Illustrator, Xcode, Logic Pro, et al. As I use more applications, it fills up the Used RAM until it teeters on the edge of using all my physical RAM (10 GB), and often this eventually results in Page Outs and Swap being used.

This is all compounded by applications that either have memory leaks or just never give up RAM. Java apps are terrible at this. PS3 Media Server (which I believe is Java-based) can quickly burn through my physical RAM streaming a couple of 720p movies, and OS X will never reclaim it until I run a manual purge.
 

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71
Well, I have read this entire thread and everything is as clear as mud :eek:

Don't worry, half of the posters are on a paid agenda to prettify the problems whenever you reveal negative things about Apple products. Just focus on the people who describe the problems, and ignore those who just seem to want to explain the problem away. That way you will know the truth.
 

AlexJaye

macrumors 6502a
Jul 13, 2010
613
1,091
As I said before, the after picture shows that page outs occurred since the last restart, but does not prove that they occurred at a time when there was inactive memory available. I would be happy to concede that this is happening, but I've never seen any proof.

You're being difficult. The other poster is correct. OSX sucks at memory management. I've seen page outs and beach balls on my Mac with 0 free RAM but a gig of inactive RAM available.
 

Mr. Retrofire

macrumors 603
Mar 2, 2010
5,064
519
www.emiliana.cl/en
There are a lot of inherently bad things happening underneath it all.

First of all, you need to understand why OSX gets into these "laggy" situations in the first place. One is the move to LLVM, where the compiler is not optimized for performance but for supporting multiple languages. It even has "virtual machine" in its name!
-1

You are misinformed. LLVM does not run on the target computer (i.e. your Mac). The LLVM project is a code translation, code generation, and code optimization project. Its goal is one compiler and one optimizer for all programming languages. LLVM generates highly optimized code, which does NOT run inside a VM. Apple used LLVM for all kernel extensions and system frameworks in OS X 10.7 and 10.8, and this code does NOT run inside a VM.

Benchmark GCC v4.8 vs. Clang (part of LLVM):
http://www.phoronix.com/scan.php?page=article&item=gcc48_llvm32_svn1&num=3
(see how fast LLVM is)

How the LLVM Compiler Infrastructure Works
http://www.informit.com/articles/article.aspx?p=1215438
 

Kashsystems

macrumors 6502
Jul 23, 2012
358
1
Automatic reference counting is supposed to be a faster variation of garbage collection, but it is the SAME THING.

I quoted both passages because the poster is very misinformed.

ARC is not garbage collection nor has it ever been garbage collection.

All ARC does is the reference counting for you when the program is compiled, so you do not have to figure it out manually yourself. It calculates and inserts the retain, release, and autorelease calls. It does not make those decisions during runtime, nor is it slower.
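A minimal sketch of that point, assuming clang's documented ARC behavior: the reference-counting operations are decided at compile time and run as plain function calls, with no collector thread and no heap scan.

#import <Foundation/Foundation.h>

// Built with ARC (the default), clang inserts the retain/release
// traffic itself at fixed points in the code.
NSMutableArray *makeList(void) {
    NSMutableArray *list = [[NSMutableArray alloc] init];
    [list addObject:@"hello"];
    return list; // ARC emits objc_autoreleaseReturnValue here...
}

int main(void) {
    @autoreleasepool {
        NSMutableArray *l = makeList(); // ...balanced by a retain at the call site
        NSLog(@"%@", l);
    } // l's last strong reference dies here, so ARC releases it deterministically
    return 0;
}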


Don't worry, half of the posters are on a paid agenda to prettify the problems whenever you reveal negative things about Apple products. Just focus on the people who describe the problems, and ignore those who just seem to want to explain the problem away. That way you will know the truth.

So far what you have revealed is a lack of understanding of how this really works.

.NET and these interpreted technologies are so bad for business that XNA is being dumped and Windows Phone 8 no longer requires it. You can now do C++ directly on top of Direct3D, instead of that SLOW .NET C# layer that destroyed their third-party gaming business on the Xbox 360.

Once again, misinformed. Let me quote the head of Windows graphics development for Windows Phone and lead XNA developer, Shawn Hargreaves.

http://xboxforums.create.msdn.com/forums/p/91616/549344.aspx#549344
It is correct that XNA is not supported for developing the new style Metro applications in Windows 8.

But XNA remains fully supported and recommended for developing on Xbox and Windows Phone, not to mention for creating classic Windows applications (which run on XP, Vista, Win7, and also Win8 in classic mode).

So basically XNA is not dead, and if you use Windows 8 classic mode, it is still fully supported.

Also, I do not see how the Xbox 360 killed their third-party game business when 99.99 percent of its games are third party.

Your information just seems really off, and I cannot imagine where you got all this misinformation from.
 

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71
-1

You are misinformed. LLVM does not run on the target computer (i.e. your Mac). The LLVM project is a code translation, code generation, and code optimization project. Its goal is one compiler and one optimizer for all programming languages. LLVM generates highly optimized code, which does NOT run inside a VM. Apple used LLVM for all kernel extensions and system frameworks in OS X 10.7 and 10.8, and this code does NOT run inside a VM.


LLVM does use a VM on the Mac.

Here, look at this:

http://webcache.googleusercontent.c...-August/006492.html+&cd=2&hl=en&ct=clnk&gl=us


[LLVMdev] A cool use of LLVM at Apple: the OpenGL stack

Chris Lattner sabre at nondot.org
Tue Aug 15 15:52:19 CDT 2006
[I just got official okay to mention this in public. This was previously
announced at Apple's WWDC conference last week.]

For those who are interested, Apple announced that they are using the LLVM
optimizer and JIT within their Mac OS 10.5 'Leopard' OpenGL stack (which
was distributed in beta form to WWDC attendees).

LLVM is used in two different ways, at runtime:

1. Runtime code specialization within the fixed-function vertex-processing
pipeline. Basically, the OpenGL pipeline has many parameters (is fog
enabled? do vertices have texture info? etc) which rarely change:
executing the fully branchy code swamps the branch predictors and
performs poorly. To solve this, the code is precompiled to LLVM .bc
form, from which specializations of the code are made, optimized,
and JIT compiled as they are needed at runtime.

2. OpenGL vertex shaders are small programs written using a family of
programming languages with highly domain-specific features (e.g. dot
product, texture lookup, etc). At runtime, the OpenGL stack translates
vertex programs into LLVM form, runs LLVM optimizer passes and then JIT
compiles the code.

Both of these approaches make heavy use of manually vectorized code using
SSE/Altivec intrinsics, and they use the LLVM x86-32/x86-64/ppc/ppc64
targets. LLVM replaces existing special purpose JIT compilers built by
the OpenGL team.

LLVM is currently used when hardware support is disabled or when the
current hardware does not support a feature requested by the user app.
This happens most often on low-end graphics chips (e.g. integrated
graphics), but can happen even with the high-end graphics when advanced
capabilities are used.

Like any good compiler, the only impact that LLVM has on the OpenGL stack
is better performance (there are no user-visible knobs). However, if you
sample a program using shark, you will occasionally see LLVM methods in
the stack traces. :)

[ENDQUOTE]


The technical manual states:


"Code that is available in LLVM IR can have a wide variety of tools applied to it. For example, you can run optimizations on it (as we did above), you can dump it out in textual or binary forms, you can compile the code to an assembly file (.s) for some target, or you can JIT compile it."

Therefore, it is either a binary OR JIT compiled. Either you create a binary, or you JIT compile it. Let's continue...


"In order to do this, we first declare and initialize the JIT. This is done by adding a global variable and a call in main:

...
let main () =
...
(* Create the JIT. *)
let the_execution_engine = ExecutionEngine.create Codegen.the_module in
...
This creates an abstract "Execution Engine" which can be either a JIT compiler or the LLVM interpreter. LLVM will automatically pick a JIT compiler for you if one is available for your platform, otherwise it will fall back to the interpreter."


ExecutionEngine is either the JIT or the interpreter (the exact same thing as in the Java and C# world). We are now inside a virtual machine, either just-in-time compiled or interpreted on the fly.

Virtual machines are memory hogs, due to supporting garbage collection and automatic reference counting in addition to implementing a whole CPU virtually. In addition, the LLVM backend IS a virtual machine. It needs to be in order to do JIT compilation and interpretation of the LLVM IR. So anytime that LLVM backend runs, IT IS IN VIRTUAL MACHINE mode.

In addition, LLVM takes about 5 times more main memory than GCC:
http://webcache.googleusercontent.c...arger-files-td683717.html+&cd=3&hl=en&ct=clnk

See that? 5 TIMES the required memory. The kernel pulls drivers into itself, and if a driver needs to run inside a virtual machine, it is going to eat up memory fast. If something takes 1 GB to compile in GCC but now takes 7GB when going with LLVM, how is a Mac that only has 4GB of memory going to come up with that memory?

No amount of memory management will work if there is no memory to manage. Why? Because it is all EATEN UP by the compiler! The system is going to go to the hard drive to offload some stuff so it has some real main memory to work with.

Now the main point of this post is the baggage Clang leaves in the LLVM IR. It is more abstracted than an efficient C compiler's intermediate state. Trying to support all that garbage collection, reference counting, etc. removes you so far from the CPU instructions that by the time you get to CPU machine-code generation, it ends up NOT faster. Which is what the benchmark shows. 400% IS NOT a small problem. It shows up in games, which are a VERY IMPORTANT criterion when people buy computers (especially ones running Windows or OSX).


Some posts seem to disappear (perhaps because they are negative?), so you need to use the Google cache. For example, here is the GOOGLE CACHE of the post showing LLVM needing 8GB compared to GCC's 2.6GB. People like to hide these things, but I feel they are better exposed, so you know the tradeoffs and negative aspects instead of being force-fed only what they want you to see.

Mar 29, 2010; 11:37pm Memory and time consumption for larger files

Hello,
recently I encountered one rather unpleasant issue that
I would like to share with you.
Me and few colleagues are working on a project where we want to
create a development environment for application-specific
processors (web pages, currently not much up-to-date, are here:
http://merlin.fit.vutbr.cz/Lissom/).
One part of this project is a compiler generator.
To generate instruction selection patterns from our
architecture description language ISAC, one function that describes
semantics of each
instruction is generated. File that contains these functions is then
compiled and contents of functions are optimized, so I get something quite
close
to instruction selection patterns.
For some architectures like ARM, the count of generated functions
is huge (e.g. 50000) and the resulting C file is huge too.

The problem here is that the compilation to LLVM IR using frontend takes
enormous amount of time and memory.

---------------------------------------------------------------------------------

Experiments are shown for C file or size 12 MB, functions have approx. 30
lines each,
preprocessed file can be downloaded here:
http://lissom.aps-brno.cz/tmp/clang-large-source.c.zip

Tests were run on Fedora 11 64-bit, Pentium Quad Core, 2.83GHz, 4GB of
memory.
Latest llvm and clang from llvm, rev. 99810, configured and compiled with
--enable-optimized (uses -O2).
clang version 1.0
(https://llvm.org/svn/llvm-project/cfe/branches/release_26 exported)

Using GCC, gcc (GCC) 4.4.1 20090725 (Red Hat 4.4.1-2), time is only
illustrative,
because also compilation into object file is included:
The top memory is just approximation observed from output of the top
command.


1) g++ -DLISSOM_SEM -O0 -c -o tst.o clang-large-source.c
(time is only illustrative, because object code file is generated)
time: 12m17.064s
top memory approx: 2.6 GB


2) llvm-g++ -DLISSOM_SEM -O0 -c --emit-llvm -o tst.bc clang-large-source.c
time: 6m28.518s
top memory approx: 8 GB

3a) clang -DLISSOM_SEM -DCLANG -c -O0 -o tst.bc clang-large-source.c
time: 11m15.332s
top memory approx 8 GB


Resulting file tst.bc with debug info has 250 MB.
Without debug info (-g0), compilation seems to be even slower, but it was
maybe because
some swapping collision occurred, I was not patient enough to let it
finish,
resulting file for llvm-g++ had 181 MB.

Note also that on a 32-bit machine, the compilation would fail because
of lack
of memory space.


If I run then the opt -O3 on bc file generated with debug info,
it also consumes 7GB of memory and finishes in 9 minutes.

In my opinion, 12 MB of source code is not so much and the compilation
could be almost
immediate (or at least to be less than one minute), because no
optimizations are made.
Especially, what i don't understand, is the big difference between code
size and
the needed memory. After preprocessing, the C file has still 12MB, so
roughly, each
byte from source file needs 660 bytes in memory, 20 bytes in resulting
bytecode
and 100 bytes in disassembled bytecode.

-------------------------------------------------------------------------------------

Maybe there could be some batch mode that would parse the file by
smaller pieces, so the top memory usage would be lower.

If I divide the file into smaller files, compilation takes much less
time.
The question is, whether it is necessary, for example when -O0
is selected, to keep the whole program representation in memory.

time clang -DLISSOM_SEM -DCLANG -c -O0 -o tst.bc cg_instrsem_incl.c

for 2,5 MB file:
g++ (with obj. code generation): 1m 6s
llvm-g++: 7 s
clang: 2m2.501s

for 1 MB file:

g++ (with obj. code generation): 23 secs
llvm-g++: 2.5 s
clang time: 42 secs

Here I do not much understand, why is clang so much slower than llvm-g++.
I checked, that it was configured with --enable-optimized more than once
(does this affect also the clang?).
Testing files can be found here:
http://lissom.aps-brno.cz/tmp/clang_test.zip

------------------------------------------------------------------------------

Probably should this text go into bugzilla, but I thought it would be
better
that more people would see it and maybe would be interested in the reason,
why
clang behaves this way.
Anyway, clang is a great piece of software and I am very looking forward
to
see it replace gcc frontend with its cryptic error messages.
However as the abstraction level of program description is
moving higher and higher, I am afraid it will not be uncommon to generate
such
huge files from other higher-level languages that will use C as some kind
of
universal assembler (as currently is done with Matlab or
some graphical languages).
Such high memory and time requirements could pose problem for using
clang as
compiler for generated C code.


Or, do you have any ideas, when I would like to use clang, how to
make the compilation faster? (and of course, I already ordered more memory
for my computer:).
Also, if anyone would be more interested, what do I need to do with
these files i need
to compile, you can write me an email.

Have a nice day
Adam H.

_______________________________________________
cfe-dev mailing list
[hidden email]
http://lists.cs.uiuc.edu/mailman/listinfo/cfe-dev
 
Last edited:

VinegarTasters

macrumors 6502
Nov 20, 2007
278
71
I did both quotes because the poster is very misinformed.

ARC is not garbage collection nor has it ever been garbage collection.

All ARC does is do the reference counting for you when the program is compiled so you do not have to manual figure it out yourself. It calculates and inserts code for retain, release, and auto release. It does not do this during runtime nor is it slower.




So far what you have revealed is a lack of understanding of how this really works.



Once again, misinformed. Let me quote the head of windows graphics dev for windows phone and XNA head developer Shawn Hargreaves.

http://xboxforums.create.msdn.com/forums/p/91616/549344.aspx#549344


So basically XNA is not dead and if you use windows 8 classic mode, still fully supported.

Also I do not see how xbox 360 has killed their 3rd party game business when 99.99 percent of their games are 3rd party.

Your information just seems really off and I can not imagine where you got all this misinformation from.

Instead of pulling my words out of context, you could try quoting the whole thing. Here you go:

"Automatic reference counting is supposed to be a faster variation of garbage collection, but it is the SAME THING. The compiler will insert code "thinking" it is the right time to create and release memory. To be safe, it will only release usually when the program exits.
That is what most Java programs do anyways, grab all the memory, and don't run garbage collector until you have used up all virtual memory. It never runs. When it does the game crawls, people notice the bad performance of Java, so the garbage collector essentially does nothing until ALL memory is used up."

How is that misinformed? You just repeated what I said; you pulled one line out of my statement and tried to criticize it by repeating what I said.

About XNA: the head guy just repeated what I said. The "NEW" Metro-style interface doesn't use the XNA library (C# code); it uses C++. Sure, you can fall back on "classic" old deprecated SLOW XNA technology (C#), but that is not the future.

Yeah OK. There are also lurkers on these forums from Microsoft and LLVM.
 
Last edited:

Mr. Retrofire

macrumors 603
Mar 2, 2010
5,064
519
www.emiliana.cl/en
LLVM does use a VM on Mac.

[LLVMdev] A cool use of LLVM at Apple: the OpenGL stack
This is a different use (the LLVM JIT, or LLVM just-in-time compiler), which is not comparable with the LLVM/Clang toolchain Apple uses for applications, kernel extensions, system frameworks and so on. And I doubt that the memory management of OS X has anything to do with the LLVM JIT/OpenGL (see topic).
 

Puevlo

macrumors 6502a
Oct 21, 2011
633
1
Sure it does. Inactive memory is not directly available to applications. OS X will use free memory for disk cache, which then becomes inactive memory. At its discretion (e.g., when free memory is running low), it will release inactive memory back to the free memory pool by doing things like flushing the disk cache.

Unfortunately, Lion and (to a lesser extent) Mountain Lion do not release inactive memory very well. Murphy's Law is that as inactive memory rises, free memory will decline. You just can't have both.

When you run out of free memory, OS X will start swapping memory (page outs) to disk (virtual memory). And when you start paging out, you will start to get things like beach balls (less severe on flash storage/SSDs). The only workaround at this point is to reboot your Mac.

Here's before:
Image
Here's after:
Image
In theory, this after state shouldn't happen.

Nice photoshop, but no. Page outs cannot occur when there is inactive memory.
 

GGJstudios

macrumors Westmere
May 16, 2008
44,556
950
lol oooook. Both of us very long time members just get a kick out of trolling I guess
They're not photoshopped and you're definitely not trolling, but Puevlo has repeatedly posted nonsense in various threads. I would ignore anything posted by them.
 

Jenni8

macrumors member
Apr 27, 2011
87
0
CA
They are trying to move the iOS methodology into Mountain Lion, I think. In iOS, when you close programs their state is saved to flash. So in Mountain Lion, every program will not actually be cleared, but stays stuck in memory/virtual memory. So soon everything will fill up to the brim.

Well, if you want, you can always turn off virtual memory (though things may crash when you run out of memory). That way you interrupt the "iOS" behavior. Or you can get a program to act as a sort of garbage collector, grabbing memory until all the other programs get dumped from virtual memory, and then freeing itself.

So what kind of program would work like that without resetting the icons on Launchpad or other OS X areas? I did find an app to "free memory", as there are apparently many, but I don't like how it resets EVERYTHING.
 