
wallysb01

macrumors 68000
Jun 30, 2011
1,589
809
It's quite possible that the iMac will get a Pro model once they can source a decent wide-gamut display suitable for all workflows, plus Thunderbolt 3 expansion capability to make everyone happy. A three-slot PCIe mini tower with drive bays connected to an iMac? Not bad.

Throw in an E3 Xeon and 64GB of RAM and I could be convinced.
 

wallysb01

macrumors 68000
Jun 30, 2011
1,589
809
The worst thing about my E5 v2 Xeon is the 128 GiB RAM limit. I keep pricing the 32 GiB DIMMs to go to 256 GiB, but haven't pulled the trigger yet.

Exactly. If you're using E5s, it's a real waste to be limited to 128GB. I actually went down from 128 to 64 in my machine recently, and consolidated RAM in a coworker's machine (so at least one would be high-ish RAM, at 256), and I haven't really had too many memory issues yet. Basically, my jobs are either small, low-RAM and numerous (so my machine is fine), or huge, high-RAM and infrequent (then I run them one desk over instead). So I'd certainly take a hit going from 12 cores to 4 cores, but as long as I have the bigger high-RAM machine anyway, I'd be fine. And if the current nMP can't be that high-RAM machine, then welp, it might be worthwhile getting the iMac "Pro".

But much would depend on pricing, of course. If I can get a 20-core/128GB HP Z for, say, $5.5K, while the E3 iMac with 64GB of ECC might run nearly $4K with the 64GB of DDR3L SO-DIMMs, then the iMac Pro might have priced itself out of usefulness: I could get maybe 4 times the machine for 50% more cost. The iMac Pro is still going to need to be cheap enough to have significant price separation from just getting another big machine.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
RAM is a tricky question. At my operation we had to use a third-party app that was single-threaded and required huge amounts of memory to output our final product after hours or days of processing. This app was a bottleneck, so we decided to build our own version, specific to our product (I have an input file generated from XYZ calculations, and this app processes it into an industry-digestible ".???" file we can deliver to production facilities).

The result had a real impact. It took about 9 intense weeks of programming in Python (pyCL), and we spent a lot (counting my team's time dedicated to this endeavor, it cost much more than all our hardware), but we achieved our objective: our app does what we expected from the other, more general-purpose app, with huge savings in time. Before, we typically needed about 90GB of RAM and 4 to 9 hours of single-core CPU time (no way to multithread); now we need just 16GB and about 10 minutes on our Mac Pro (using all 8 Xeon cores and a D700 GPU). The best part is that we can also run it on the iMac, where it burns only 27 minutes. We did this because our customers were concerned about the response time, since waiting for the required usable output costs thousands of dollars in downtime. The app will pay for itself in accumulated savings, breaking even quickly, and we can even run it on a relatively cheap dedicated Linux box. Now our workstations are free for development, with this work off-loaded to a much cheaper machine, or run as a background process using only 4 cores on the Mac Pro, still under 20 minutes.

We experienced something similar with Logic Pro. In the short term this means we no longer need a Mac Pro for this kind of processing. It was a very specific situation, though we might need a similarly time-consuming app again in the future.

We've gotten used to relying on CPU power to overcome code inefficiency; this has become a culture we should fight.
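
For the curious, a minimal sketch of the general shape of such a port (assuming "pyCL" above means PyOpenCL): a per-element transform moved onto a GPU such as the D700. The kernel body, array size, and names are placeholders, since the post gives no details of the actual computation:

Code:
# Minimal PyOpenCL sketch; the kernel math and sizes are placeholders,
# not the poster's actual workload.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()        # picks the D700 on a Mac Pro, if available
queue = cl.CommandQueue(ctx)

src = """
__kernel void transform(__global const float *in_buf,
                        __global float *out_buf)
{
    int gid = get_global_id(0);
    out_buf[gid] = in_buf[gid] * in_buf[gid];  // stand-in for the real math
}
"""
prog = cl.Program(ctx, src).build()

data = np.random.rand(1_000_000).astype(np.float32)
mf = cl.mem_flags
in_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, data.nbytes)

prog.transform(queue, data.shape, None, in_buf, out_buf)  # one work-item per element
result = np.empty_like(data)
cl.enqueue_copy(queue, result, out_buf)                   # read results back to host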
 

Zarniwoop

macrumors 65816
Aug 12, 2009
1,038
760
West coast, Finland
At Computex on June 1st, AMD will launch the Bristol Ridge and Stoney Ridge APUs with DDR4 support, and that's about it. Probably no Polaris, just some more info. At least that's the impression I got from the press release. Perhaps Polaris at WWDC?

http://www.amd.com/en-us/press-releases/Pages/computex-2016-2016may19.aspx

And as with Nvidia, Europeans get the awesome 3:00am CET viewing time for the live broadcast. I think China has shown us our place in the food chain...
 

H2SO4

macrumors 603
Nov 4, 2008
5,828
7,103
We've gotten used to relying on CPU power to overcome code inefficiency; this has become a culture we should fight.
You'll never beat this status quo. Why? Because the people who write code balance the time it takes to debug and optimise against the return they get from shipping as-is more quickly.
It's no different from any other product. They ask themselves how much it will cost to do it properly and compare that with the support cost of dealing with problem customers.
 

linuxcooldude

macrumors 68020
Mar 1, 2010
2,480
7,232
You'll never beat this status quo. Why? Because the people who write code balance the time it takes to debug and optimise against the return they get from shipping as-is more quickly.
It's no different from any other product. They ask themselves how much it will cost to do it properly and compare that with the support cost of dealing with problem customers.

That's the sad truth sometimes. That's why specs don't always matter, and why things like the iPhone often outperform on less hardware: better software optimization. Of course, Android is in a different position than iOS.
 

H2SO4

macrumors 603
Nov 4, 2008
5,828
7,103
That's the sad truth sometimes. That's why specs don't always matter, and why things like the iPhone often outperform on less hardware: better software optimization. Of course, Android is in a different position than iOS.
Not sure that's what I meant, although it may be true.
Why? Because manufacturers are very clever at selling you the illusion of something, like quality. The amount of margin Apple has on its products means they should be able to afford to spend more time debugging.
 

wallysb01

macrumors 68000
Jun 30, 2011
1,589
809
RAM is a tricky question. At my operation we had to use a third-party app that was single-threaded and required huge amounts of memory to output our final product after hours or days of processing. [...] It took about 9 intense weeks of programming in Python (pyCL), and we spent a lot (counting my team's time dedicated to this endeavor, it cost much more than all our hardware). [...] Before, we typically needed about 90GB of RAM and 4 to 9 hours of single-core CPU time (no way to multithread); now we need just 16GB and about 10 minutes on our Mac Pro (using all 8 Xeon cores and a D700 GPU). [...]

We've gotten used to relying on CPU power to overcome code inefficiency; this has become a culture we should fight.

I'd love to agree with you, but I just can't. Many of the programs I use are already very efficient. Tremendous gains have been made in memory usage and multithreading, even in some cases using GPUs, but some problems just need to store a lot of data. You're never going to do things like de novo genome assembly, or analyze 1,000 human genomes at the same time (and yes, doing them all at the same time is actually very useful), with 16GB of RAM. Certain parts of the pipeline, maybe, but in many key steps the only way to make things more memory-efficient is to make assumptions that can compromise the final output.

And look at your own example: you had a team of people working for 2 months to improve something. Worthwhile as it may be in your case, not everyone has the disposable resources to do that. They'd rather just put up with slow runtimes while they do something else, or just buy more machines to get around the problem. Adding computers is really a very trivial cost compared to multiple people working full time for a couple of months. If you had 6 devs at $120K/year working on this for 2 months, you have $120K in salary sunk into it, and with benefits plus the opportunity cost to the company (since presumably their profitability to the company outstrips their salary), you're talking something like $200K. Even if it was just two people, you're looking at, what, $30K in total cost in man-hours? So while this project might have been worth it for you, I'd rather just buy another couple of computers. $30K would buy you a very nice little mini-cluster....

Investing in coding can be risky too. What happens if you spend all this time and money, and then a couple of months later someone else does something similar, even better, and you could have just bought the program from them (or in my case it would typically even be free)? All that time coding is now more or less wasted; but if you had just bought computers instead, they could be repurposed or sold.

I'm not saying just dumping cash at computers is the best way around every problem, but let's be honest, you can buy a TON of computing power relatively cheap these days, especially compared to people's salaries. So let's not pretend that fighting inefficient code is the end-all-be-all, or somehow the moral thing to do here.
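
The arithmetic above as a quick back-of-the-envelope in Python; the salary figure comes from the post, while the per-node price is a made-up assumption:

Code:
# Back-of-the-envelope for the dev-time vs. hardware trade-off above.
# The salary figure comes from the post; the node price is an assumption.
ANNUAL_SALARY = 120_000                      # $ per developer per year

def dev_cost(devs: int, months: float) -> float:
    """Salary cost alone; benefits and opportunity cost come on top."""
    return devs * ANNUAL_SALARY * months / 12

print(dev_cost(6, 2))   # 120000.0 -> roughly $200K once benefits etc. are added
print(dev_cost(2, 2))   # 40000.0  -> the post's rough $30K ballpark, give or take

NODE_PRICE = 7_500      # assumption: one well-specced compute node
print(dev_cost(2, 2) // NODE_PRICE, "nodes for the same money")   # 5.0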
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
Fresh leaks from the darknet (take with a huge grain of salt): the new MacBooks will have a "target mode" enabling them to act as display/keyboard/touchpad/storage for other Macs with TB3 (mini, Pro, iMac)...

Other leaks talk about TB3 external compute accelerators on an emulated "network"; nothing specific, but they seem to refer to the Xeon Phi, which emulates an NE2000.

Again, take it with a giant grain of salt. There were more leaks, but most are consistent with existing rumours, so I won't recount them; only these two struck me as interesting and unexpected. I hope both are true.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
I'd love to agree with you, but I just can't. [...] but let's be honest, you can buy a TON of computing power relatively cheap these days, especially compared to people's salaries. So let's not pretend that fighting inefficient code is the end-all-be-all, or somehow the moral thing to do here.

As I said, it's just a culture we have to fight. I agree with you, my case was special, and we were aware of that application's bottleneck even before using it, but it saved us a lot of time in development; only when our customers became concerned about the delay it caused did they commission us to finish the solution.

It's like airplanes: most airplanes fly as fast as they can even if they burn twice the fuel on a leg, because fuel is cheaper than maintenance. This doesn't mean airframe developers won't make the effort to make those planes more efficient.

Android is suffering from this. From the beginning, Android phones have required at least twice the RAM of an iOS device, because their Dalvik machine (Java JIT) was never as efficient as GCC-compiled code on iOS. This was a design choice, to enable the tons of code available through Java and to try to make the OS SoC-agnostic. The price, at the beginning at least, was that Android phones used twice or more the RAM on the same apps, and seemed to have half the CPU speed of comparable iPhones (the latest have improved a lot on Android 6/7, but they're still memory hogs).

It's just a culture we have to fight.
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Fresh leaks from the darknet (take with a huge grain of salt): the new MacBooks will have a "target mode" enabling them to act as display/keyboard/touchpad/storage for other Macs with TB3 (mini, Pro, iMac)...

Other leaks talk about TB3 external compute accelerators on an emulated "network"; nothing specific, but they seem to refer to the Xeon Phi, which emulates an NE2000.

Again, take it with a giant grain of salt. There were more leaks, but most are consistent with existing rumours, so I won't recount them; only these two struck me as interesting and unexpected. I hope both are true.

This is a fun one to think about. Connect any 2+ Macs and all of a sudden you have a mini cluster. I wonder if this could be done seamlessly, or whether it would require specific APIs for apps to take advantage of the additional resources. I imagine a scenario where you could connect a MacBook to a Mac Pro and grab all of its resources but still work with files and applications on the MacBook. This could work with an external graphics card as well.

This could turn the Mac Pro into a physical shared resource. Imagine a team of people who could physically walk up to a Mac Pro and connect their laptops when they need more computing resources.

It's like airplanes: most airplanes fly as fast as they can even if they burn twice the fuel on a leg, because fuel is cheaper than maintenance. This doesn't mean airframe developers won't make the effort to make those planes more efficient.

This is a bad metaphor, and it's not true. Airplanes have a very specific cruising speed that minimizes fuel consumption. Airlines very much prefer to fly slower and more efficiently. Flying faster would also increase maintenance costs, because running engines at full throttle increases their wear.

A better metaphor might be this: you look at a Toyota Prius and say, look, this car can do 50 miles per gallon; any car that can't is just lazy engineering. The reality is that the Prius is more efficient, but it comes with a higher up-front cost and more expensive maintenance. The difference between a Prius and a Camry might not be worth it, depending on your use case.

As others have said, whether software optimization is worth it is very case-specific. If your application is Final Cut Pro, then it makes sense to optimize as much as possible, because speedups in users' workflows are very much worth it and users will pay for them. If your app is not very expensive and the potential optimizations are minor, it's probably not worth it.

Apple is hoping that the Mac Pro, with its dual GPUs, pushes people to use GPU compute. It turns out GPU programming is hard, and it's something most programmers are not familiar with. Not to mention that speedups are possible in only a few specific cases. It will take time before this approach sees broader support.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
This is a fun one to think about. Connect any 2+ Macs and all of a sudden you have a mini cluster. I wonder if this could be done seamlessly, or whether it would require specific APIs for apps to take advantage of the additional resources. I imagine a scenario where you could connect a MacBook to a Mac Pro and grab all of its resources but still work with files and applications on the MacBook. This could work with an external graphics card as well.

This could turn the Mac Pro into a physical shared resource. Imagine a team of people who could physically walk up to a Mac Pro and connect their laptops when they need more computing resources.

Actually, that's what the Xeon Phi (Knights Landing) does: from a software perspective it is seen as a network node on the fabric. In any case, this would be a nice improvement; we could use an MBP as an emergency display/keyboard, or travel with the Mac Pro and an MBP on short field deployments. I hope it's true.

This is a bad metaphor, and it's not true. Airplanes have a very specific cruising speed that minimizes fuel consumption. Airlines very much prefer to fly slower and more efficiently. Flying faster would also increase maintenance costs, because running engines at full throttle increases their wear.

A better metaphor might be this: you look at a Toyota Prius and say, look, this car can do 50 miles per gallon; any car that can't is just lazy engineering. The reality is that the Prius is more efficient, but it comes with a higher up-front cost and more expensive maintenance. The difference between a Prius and a Camry might not be worth it, depending on your use case.

As others have said, whether software optimization is worth it is very case-specific. If your application is Final Cut Pro, then it makes sense to optimize as much as possible, because speedups in users' workflows are very much worth it and users will pay for them. If your app is not very expensive and the potential optimizations are minor, it's probably not worth it.

Apple is hoping that the Mac Pro, with its dual GPUs, pushes people to use GPU compute. It turns out GPU programming is hard, and it's something most programmers are not familiar with. Not to mention that speedups are possible in only a few specific cases. It will take time before this approach sees broader support.

Sadly, it's not a metaphor. Flight planners calculate fuel/speed/flight time to minimize Direct Operating Cost, and speed usually wins as a factor. Airframe wear is by far more expensive than engine wear: a typical bizjet has a 10,000-hour airframe and most jetliners are good for 40-50K hours, while engines are overhauled every 3,000-6,000 flight hours or the equivalent cycles, which is always much cheaper than an airframe overhaul and doesn't mean keeping the aircraft on the ground (it can fly on leased engines). A C-check is usually scheduled around 4,000 hours, depending on airframe and age, and involves taking apart and checking every component; needless to say, it's hugely expensive. I'm very familiar with planes.
 

Stacc

macrumors 6502a
Jun 22, 2005
888
353
Sadly, it's not a metaphor. Flight planners calculate fuel/speed/flight time to minimize Direct Operating Cost, and speed usually wins as a factor. Airframe wear is by far more expensive than engine wear: a typical bizjet has a 10,000-hour airframe and most jetliners are good for 40-50K hours, while engines are overhauled every 3,000-6,000 flight hours or the equivalent cycles, which is always much cheaper than an airframe overhaul and doesn't mean keeping the aircraft on the ground (it can fly on leased engines). A C-check is usually scheduled around 4,000 hours, depending on airframe and age, and involves taking apart and checking every component; needless to say, it's hugely expensive. I'm very familiar with planes.

I don't want to derail the conversation, but check out the Wikipedia page on the Boeing 787. Initially Boeing proposed an aircraft 15% faster with the same fuel economy as similar-sized aircraft, but concerns over fuel efficiency and operating costs led them to scrap the project and propose a more efficient design that flies at the same speed. If speed and reducing flight times were the most important factors in operating costs, we would all be flying around in Concordes.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
I don't want to derail the conversation, but check out the Wikipedia page on the Boeing 787.

The best I can say is that my source isn't Wikipedia; I'm very close to this topic. All I can tell you is that operating a plane is one thing and designing it is another: aircraft are designed to be as efficient as possible, but they are operated at the lowest possible cost, and the latter isn't always achieved at their most efficient speed. That's a fact.

If you want to know the reality of aircraft operating costs, go to www.conklindd.com (Conklin & de Decker); they know a couple of things about this subject.

If efficiency were the determining factor, all planes would fly on diesel piston engines at 200 knots; such planes (e.g. the Diamond DA62) actually use less fuel than an average SUV transporting the same people the same distance.

Even turboprops use less fuel, and we have fast, efficient turboprops like the Avanti P180. Why didn't it sell as well as much thirstier, similar-sized jets? Because turboprop maintenance costs more than the fuel savings; until fuel skyrockets to $150 a barrel, the Avanti has no future (check Conklin).
 

flat five

macrumors 603
Feb 6, 2007
5,580
2,657
newyorkcity
This is a fun one to think about. Connect any 2+ Macs and all of a sudden you have a mini cluster. I wonder if this could be done seamlessly, or whether it would require specific APIs for apps to take advantage of the additional resources. I imagine a scenario where you could connect a MacBook to a Mac Pro and grab all of its resources but still work with files and applications on the MacBook. This could work with an external graphics card as well.
hey Stacc..
this is already possible with os x.. not so much as system-wide handling, but on a per-application basis it's entirely possible.

OS X does have some very user-friendly networking capabilities (like: all i do to join up my computers is have them within wi-fi range of each other and have them both turned on)..

then the render networking is handled by each particular application (generally, when you license a rendering application, you'll get a couple of free nodes with it.. a node is basically a smaller app you install on the other computers in your network in order to utilize their resources for the processes occurring on the master computer)

anyway, with os x 10.11 in conjunction with my rendering application (indigo), all i have to do for the mini cluster you speak of is open my laptop and select 'network rendering' in indigo (i.e. push a button)
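
A minimal sketch of what that per-application node pattern can look like in plain Python (standard library only); the port, authkey, and frame-number "jobs" are made-up stand-ins, not indigo's actual protocol:

Code:
# master.py -- publish a job queue that any machine on the LAN can join.
# Hypothetical stand-in for app-level "render node" clustering.
from multiprocessing.managers import BaseManager
import queue

jobs, results = queue.Queue(), queue.Queue()

class NodeManager(BaseManager):
    pass

NodeManager.register("jobs", callable=lambda: jobs)
NodeManager.register("results", callable=lambda: results)

for frame in range(64):              # stand-in payload: frame numbers to render
    jobs.put(frame)

server = NodeManager(address=("", 50000), authkey=b"indigo-demo").get_server()
server.serve_forever()               # workers connect, pull jobs, push results

# A worker on the other Mac would do roughly:
#   NodeManager.register("jobs"); NodeManager.register("results")
#   m = NodeManager(address=("master.local", 50000), authkey=b"indigo-demo")
#   m.connect()
#   frame = m.jobs().get()
#   m.results().put(render(frame))   # render() is the app's own work function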
 

ShadovvMoon

macrumors 6502
May 22, 2015
376
1,074
Brisbane, Queensland, Australia
As I said, it's just a culture we have to fight. I agree with you, my case was special, and we were aware of that application's bottleneck even before using it, but it saved us a lot of time in development; only when our customers became concerned about the delay it caused did they commission us to finish the solution.
I too dislike this new culture that's developing, and unfortunately it has started to invade Swift :mad: In the proposal thread http://thread.gmane.org/gmane.comp.lang.swift.evolution/9744

"In a quick and dirty test, the second [new approach] is approximately 34% slower. I'd say that'€™s more than acceptable for the readability gain."
"IMO, we should design languages around performance concerns only when a construct has an _inherent_
performance limitation"

It seems the new thinking is that performance is the least important aspect of a language. I blame Python :p
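
That quoted trade-off is easy to reproduce in the blamed language itself; a throwaway micro-benchmark in the same quick-and-dirty spirit (the two variants are illustrative, not taken from the Swift thread):

Code:
# Illustrative only: measuring the price of the "readable" variant,
# the same kind of quick-and-dirty test quoted above.
import timeit

data = list(range(100_000))

def readable() -> int:
    return sum(x * x for x in data)      # clear one-liner, generator overhead

def hand_rolled() -> int:                # uglier, but fewer allocations
    total = 0
    for x in data:
        total += x * x
    return total

for fn in (readable, hand_rolled):
    print(fn.__name__, timeit.timeit(fn, number=100))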
 

koyoot

macrumors 603
Jun 5, 2012
5,939
1,853
This is a fun one to think about. Connect any 2+ macs and all of a sudden you have a mini cluster. I wonder if this could be done seamlessly or would it require specific APIs for apps to take advantage of the additional resources. I imagine a scenario where you could connect a macbook to a mac pro and grab all of its resources but still work with files and applications on the macbook. This could work with an external graphics card as well.
All of what you discussed with Mago is an implementation of HSA 2.0, and one of the ideas about computing that I have been writing about on this forum for a long time. The API with HSA 2.0 capabilities is already in OS X, and it will allow a whole cluster of compute units to be seen as one big computer, regardless of the underlying hardware.

And yes, this will work with external graphics cards.

P.S. Now you know why there will be AI in the next macOS ;).
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
I too dislike this new culture that's developing, and unfortunately it has started to invade Swift :mad: In the proposal thread http://thread.gmane.org/gmane.comp.lang.swift.evolution/9744

"In a quick and dirty test, the second [new approach] is approximately 34% slower. I'd say that's more than acceptable for the readability gain."
"IMO, we should design languages around performance concerns only when a construct has an _inherent_ performance limitation"

It seems the new thinking is that performance is the least important aspect of a language. I blame Python :p
Python certainly fails the "readability" aspect!

But are there many commercial applications written in Python? It's certainly popular with end-user programmers (e.g. science, AI, ML, ...).
 

tomvos

macrumors 6502
Jul 7, 2005
345
119
In the Nexus.
I too dislike this new culture that's developing, and unfortunately it has started to invade Swift :mad: In the proposal thread http://thread.gmane.org/gmane.comp.lang.swift.evolution/9744

"In a quick and dirty test, the second [new approach] is approximately 34% slower. I'd say that's more than acceptable for the readability gain."
"IMO, we should design languages around performance concerns only when a construct has an _inherent_ performance limitation"

It seems the new thinking is that performance is the least important aspect of a language. I blame Python :p

I think the remark that "we should design languages around performance concerns only when a construct has an inherent performance limitation" is the key to this issue. As long as there are ways to avoid or deal with a performance limitation, it's OK. Whether you deal with it by throwing more hardware at it or by optimizing the code doesn't matter: if it's relevant to you, you can write fast applications ... if you accept a certain cost.

On the other hand, an inherent performance problem cannot be solved by a good coder (at least not in reasonable time), nor can it be solved by adding more hardware. These are the issues that have to be avoided in a language.
Besides, Swift is a young language, and I assume there is lots of potential to optimize its performance. As a young language, its features and syntax are in a fluid state at the moment. I'd prefer that the features and syntax mature first, and that performance be addressed after that.
Simply because changing the language will most likely break your code, while optimizing the speed of a mature, well-defined language should not.
 

Mago

macrumors 68030
Aug 16, 2011
2,789
912
Beyond the Thunderdome
check out the Wikipedia page on the Boeing 787. Initially Boeing proposed an aircraft 15% faster with the same fuel economy as similar-sized aircraft, but concerns over fuel efficiency and operating costs led them to scrap the project and propose a more efficient design that flies at the same speed

I didn't need to check it; I was very involved in aviation at the time. Not at Boeing, but I was a partner in a now-defunct aircraft maintenance and upgrade operation, and I used to read the Sonic Cruiser news. Boeing never showed anything more than small mock-ups of the Sonic Cruiser. At the same time, Airbus was expected to decide between building a two-story super jumbo or an A300 successor, a direct competitor to the B767 in the most profitable aircraft class (twin-aisle).

Boeing created the hype about the Sonic Cruiser and B767 upgrades, and actually seemed very involved in the Sonic Cruiser. Then Airbus, mostly due to public (politicians') pressure to build an uber-plane pride flagship like the American 747, committed to the A380. As soon as the A380 had confirmed orders and could no longer be frozen or cancelled, Boeing unveiled the secretly developed B787, a quantum leap in the category. Airbus had neither the capital nor the engineers to develop two all-new aircraft families at the same time, ceding the twin-aisle market to Boeing.

In the end, Boeing booked about 1,200 planes at $250M each, while Airbus booked about 200 of the $450M A380 over the same period (Boeing should have had many more B787s booked, but program challenges delayed it beyond what was foreseeable). The plane Airbus should have developed instead of the A380, the A350 (the true B787 competitor), only entered service two years ago. The market for super jumbos has shrunk to the point that Boeing is considering ending the B747 program, while making room to speed up the B787/B767/B737.

Boeing has studies to replace the B747 at some point with a smaller, two-decker, A380-like twinjet ("B797"), but this program won't start until 2020, with deliveries in 2025 at the earliest.
 