
Hyper-X

macrumors 6502a
Jul 1, 2011
581
1
Sorry, but please enlighten me as to which Mac hardware/software is unsupported 1 year after its release and has had only a 1 year life cycle (if you can't, then I will accept your explanation of which OS X product has had only a 2 year life cycle).
It depends on when the product was purchased. If you were one of the unfortunate ones who bought it in the latter half of the cycle, just before the next release of OS X, the lifecycle has effectively ended. A great example is Snow Leopard: SL 10.6.8 is arguably one of the best-performing releases of OS X thus far.

Now it is true that Mountain Lion does require 2009 or later Mac Minis, but OTOH, it will also run on 2007 iMacs, and the reason it will not run on older machines is that the transition to a 64-bit kernel cannot support the 32-bit graphics drivers used by older video cards.
Then that would suggest that Apple's incapable of writing 64-bit drivers for older cards. That is an admission that Apple can't do what others have done, including major Linux distros and Microsoft. You're discounting the possibility that the decision to drop support for older hardware is to force people to buy a newer machine.

The single biggest difference between SL and Lion is ASLR: with Lion, Apple cut support for Rosetta and implemented full ASLR. Compare that to Windows XP, launched in Oct 2001 (and still supported today): XP did not support full ASLR, and neither did the initial release of Windows Vista. Still, users could upgrade older machines that used to run XP to Vista, install SP2, and get full ASLR support. I think you're making excuses for Apple and not recognizing when they deserve harsh criticism.
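To illustrate what full ASLR actually buys, here's a minimal sketch (Python, assuming a Unix-like OS). Run it twice; with ASLR active, the printed addresses should differ from run to run, which is what makes exploit targets harder to hit.

Code:
# Minimal ASLR illustration: run twice and compare the output.
# On an OS with full ASLR the heap and library addresses are
# randomized per process, so the values change between runs.
# Assumes a Unix-like OS (CDLL(None) is not available on Windows).
import ctypes

buf = bytearray(64)                   # a fresh heap allocation
print("heap object at:", hex(id(buf)))

libc = ctypes.CDLL(None)              # handle to the loaded C library
print("library handle:", hex(libc._handle))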

This is the price of transitioning to a completely 64-bit kernel for improved memory addressing and performance, and it is not the kind of transition that is frequently made (the only previous transition of equivalent scale was the move from PowerPC to Intel Core 2 Duo). Prior to Mountain Lion, new versions of OS X could usually run on 5-year-old hardware (and in the case of the iMac and MBP, they still can), and I anticipate that this will again be the case now that the transition is complete.
Pure 64-bit kernels have been around for a very long time; Apple is simply late to the party.

Apple isn't guilty of being the first to stop support altogether for the sole purpose of generating more revenue/sales, and I'm certain they won't be the last. I didn't have to upgrade from SL to ML to get 8GB of RAM support on my MBP; a laptop's RAM limit is less about what OS it's running and more about the limitations of the hardware, unless you're using a dinosaur of a computer.

I do agree that Thunderbolt has had a very disappointing launch, although I'm still hoping that wider adoption will eventually drive down prices.
No matter what good things people have to say about Apple computers, when it comes to their laptops all I can say is adapters, adapters, adapters. When you buy a MBP you think you can simply take it home like a PC, connect it to an external monitor (VGA, DVI, or HDMI), and plug your peripherals into both USB ports, but it's not quite that simple.

The majority of PCs don't require adapters for external monitors, and the USB ports on the MBP are so close together that only the slimmest of connectors work side by side. God forbid you have a DVI monitor at home and then go on the road for a presentation only to find you need an HDMI connection; you'd need 2-3 adapters in your bag to cover all the common monitor/projector connections. And if you're going to pack a Thunderbolt external HDD, how are you going to connect the external monitor and the HDD at the same time? The only two ways I know of are to buy that expensive Matrox/Belkin unit that acts like a MM box, or an expensive external Thunderbolt HDD with a pass-thru feature.

Even older, cheaper PCs are often limited to USB 2.0, but a lot of them include an ExpressCard slot, which can take a USB 3.0 card plus dongle.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
It just doesn't work like that.

T-Bolt is not a network - it's a private peripheral bus (PCIe). You can't simply put two processors on a private peripheral bus - two masters only works in kinky porn videos, it doesn't work for a peripheral bus.

You would need to overlay a protocol to handle it, I guess. This has already been done with nodes in Logic Pro over FireWire.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
The new Mac Pro - "pretty", but "glacial"

You would need to overlay a protocol to handle it, I guess. This has already been done with nodes in Logic Pro over FireWire.

Even if possible, you'd still be several orders of magnitude slower than the shared memory system of a multi-socket computer.

"Works, but glacially slow" is not a good slogan for the ads.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Even if possible, you'd still be several orders of magnitude slower than the shared memory system of a multi-socket computer.

"Works, but glacially slow" is not a good slogan for the ads.

It doesn't necessarily matter, because the need for more compute power may indicate that your problem is not I/O bound. The same is true of HPC clusters: the data is divided and distributed to the nodes where the computation happens. In this case the computation is what takes time, hence the need for several computers to solve the problem. In the case of Logic you have a latency determined by your buffer size; that is the time you have to move data around, and in CPU terms that latency amounts to a very long time. The same is true for graphics cards: often the expensive operation is moving the data to and from the card.
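To put numbers on that buffer-size budget (the sample rate and buffer size below are illustrative values, not anything specific to Logic):

Code:
# The audio buffer size sets the time budget for shipping a block
# of samples to a node and back. Values below are illustrative.
sample_rate = 44_100        # Hz
buffer_size = 512           # samples per processing block

budget_ms = buffer_size / sample_rate * 1000
print(f"budget per block: {budget_ms:.1f} ms")        # ~11.6 ms

# At ~1 ns per cycle on a 1 GHz CPU, that budget is millions of
# cycles: an eternity in CPU terms, so a network round trip fits.
print(f"= {budget_ms * 1e6:,.0f} cycles at 1 GHz")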
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
You would need to overlay a protocol to handle it, I guess. This has already been done with nodes in Logic Pro over FireWire.

PCIe is a much lower level bus than 1394 - and networking over 1394 is an advertised, supported feature. 1394 can be "peer to peer", not so easy with PCIe (although there's a "slave to slave" feature in PCIe).

Anyway, a lot of work that's very difficult to get right for limited benefit. That's not Apple's modus operandi.


It doesn't necessarily matter, because the need for more compute power may indicate that your problem is not I/O bound. The same is true of HPC clusters: the data is divided and distributed to the nodes where the computation happens. In this case the computation is what takes time, hence the need for several computers to solve the problem. In the case of Logic you have a latency determined by your buffer size; that is the time you have to move data around, and in CPU terms that latency amounts to a very long time. The same is true for graphics cards: often the expensive operation is moving the data to and from the card.

For an embarrassingly parallel application, sure. For the rest, however, clustering simply can't compete with a multi-threaded application running on a fast shared memory multiprocessing system - especially considering the relative ease of writing multi-thread code vs. clustered code.
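A minimal sketch of why the shared-memory side is the easy one: every worker indexes the same data in one address space, with no distribution step. (A cluster version of the same sum would first have to serialize and ship each chunk over the wire. The example itself is embarrassingly parallel, and CPython's GIL means no real speedup here; it's the programming model being illustrated, not performance.)

Code:
# Shared-memory parallelism: workers read the same in-process data.
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000_000))       # lives once, visible to all threads

def partial_sum(lo, hi):
    return sum(data[lo:hi])         # no data shipping, just indexing

chunks = [(i, i + 250_000) for i in range(0, 1_000_000, 250_000)]
with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(lambda c: partial_sum(*c), chunks))
print(total)                        # 499999500000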
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
PCIe is a much lower level bus than 1394 - and networking over 1394 is an advertised, supported feature. 1394 can be "peer to peer", not so easy with PCIe (although there's a "slave to slave" feature in PCIe).

Anyway, a lot of work that's very difficult to get right for limited benefit. That's not Apple's modus operandi.

That's the second time you've answered that quote now. Thunderbolt isn't strictly PCIe, though: it already handles daisy-chaining of several devices. And keep in mind that although Ethernet is used for networking, Ethernet itself connects a source and a destination; the networking is built on top. Apple already has Xgrid for this purpose, and the use of nodes is also a feature built into Logic. As for whether this is Apple's modus operandi: why did they add Thunderbolt at all if it isn't? It's specifically a high-end feature. Now that USB 3 has arrived, perhaps people can stop whining about Thunderbolt, as it's clear it's not there to replace USB.

For an embarrassingly parallel application, sure. For the rest, however, clustering simply can't compete with a multi-threaded application running on a fast shared memory multiprocessing system - especially considering the relative ease of writing multi-thread code vs. clustered code.

That goes for anything that can be solved on a cluster or grid; my example may be simple, but it's an example. Anything non-"realtime" that can be solved in parallel: rendering, for example.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
That's the second time you've answered that quote now. Thunderbolt isn't strictly PCIe, though: it already handles daisy-chaining of several devices. And keep in mind that although Ethernet is used for networking, Ethernet itself connects a source and a destination; the networking is built on top. Apple already has Xgrid for this purpose, and the use of nodes is also a feature built into Logic. As for whether this is Apple's modus operandi: why did they add Thunderbolt at all if it isn't? It's specifically a high-end feature. Now that USB 3 has arrived, perhaps people can stop whining about Thunderbolt, as it's clear it's not there to replace USB.

Can you rephrase this - it makes no sense to me.

T-Bolt is a PCIe extender - how does adding daisy-chaining make it not PCIe?

PCIe is not networking....

Does Apple still support Xgrid? Wikipedia says that "The Xgrid client was not included in Mac OS X 10.8". Ouch.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Can you rephrase this - it makes no sense to me.

T-Bolt is a PCIe extender - how does adding daisy-chaining make it not PCIe?

PCIe is not networking....

Eh, daisy-chaining is not a feature of PCIe. Thunderbolt carries PCIe and DisplayPort and handles daisy-chaining and hot-swapping even though neither is a native PCIe feature. Therefore you cannot derive what is possible with Thunderbolt from PCIe directly. The daisy-chaining support means it is already possible to communicate with different entities on one cable. I know that PCIe is not networking; the point I was trying to make was that Ethernet, for example, even though it is used in networking, only handles source to destination; the rest is layered on top with TCP/IP.
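A toy sketch of that layering, with invented names: the link layer just moves bytes from one end to the other, and everything "network" lives in a header that a higher layer adds on top.

Code:
# Layering illustration (names invented for the sketch): the link
# moves bytes point to point; addressing is added by the layer above.
def link_send(frame: bytes) -> bytes:
    """Stand-in for any point-to-point physical link."""
    return frame                      # just delivers bytes to the far end

def network_wrap(src: str, dst: str, payload: bytes) -> bytes:
    header = f"{src}>{dst}|".encode() # the 'network' exists in this header
    return header + payload

received = link_send(network_wrap("nodeA", "nodeC", b"hello"))
header, payload = received.split(b"|", 1)
print(header.decode(), payload)       # nodeA>nodeC b'hello'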

Does Apple still support Xgrid? Wikipedia says that "The Xgrid client was not included in Mac OS X 10.8". Ouch.

It also says it's a separate download. I was trying to address your "modus operandi" comment; it's clearly know-how they have. Also, the feature built into Logic doesn't need a client application, so I'm not sure it's related to Xgrid at all.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
It also says it's a separate download. I was trying to address your "modus operandi" comment; it's clearly know-how they have. Also, the feature built into Logic doesn't need a client application, so I'm not sure it's related to Xgrid at all.

Gee. I remember the arguments that when Apple made Rosetta a "separate download" (or "optional install" - essentially the same thing) everyone should have realized that it would soon be dead.

Xgrid is on life support - and the plug will soon be pulled.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
I see you have now abandoned your PCIe argument and gone in a different direction. Well, I'm not playing.
 

KnightWRX

macrumors Pentium
Jan 28, 2009
15,046
4
Quebec, Canada
I see you have now abandoned your PCIe argument and gone in a different direction. Well, I'm not playing.

He abandoned it because you will argue ad nauseam even though you are wrong about it. Daisy-chaining is a feature of all bus topologies... which PCIe is. The fact that Thunderbolt also carries DisplayPort 1.1a doesn't mean Thunderbolt can be used to network two computers.

At this point, I don't know how it can be made much clearer to you. Thunderbolt is not Firewire or Ethernet.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
He abandoned it because you will argue ad nauseam even though you are wrong about it. Daisy-chaining is a feature of all bus topologies... which PCIe is.

The original comment:

Code:
T-Bolt is not a network - it's a private peripheral bus (PCIe). You can't simply put two processors on a private peripheral bus - two masters only works in kinky porn videos, it doesn't work for a peripheral bus.

Your comment only supports that it may be possible... The fact that you can daisy-chain with Thunderbolt, regardless of whether it's possible on pure PCIe, should make networking possible with the help of a network protocol. The daisy-chaining means that the devices can address each other and communicate on the same line.

The fact that Thunderbolt also carries DisplayPort 1.1a doesn't mean Thunderbolt can be used to network two computers.

At this point, I don't know how it can be made much clearer to you. Thunderbolt is not Firewire or Ethernet.

I never said anything remotely like it.
 

KnightWRX

macrumors Pentium
Jan 28, 2009
15,046
4
Quebec, Canada
Your comment only supports that it may be possible... The fact that you can daisy-chain with Thunderbolt, regardless of whether it's possible on pure PCIe, should make networking possible with the help of a network protocol. The daisy-chaining means that the devices can address each other and communicate on the same line.

Look, if I plug two Ethernet adapters into two computers' PCIe buses, link them up using a Cat 5e cable, and set up a proper software network stack... is it the PCIe bus that is networking both computers? Of course not, it's the Ethernet network I just built.

That's the point Aiden is trying to make. Thunderbolt, a pure external PCIe bus, cannot network two computers. You would need to plug in a device that is able to do host-to-host communication. But then it's not Thunderbolt that's doing the networking, it's the device, be it IEEE 1394 or Ethernet adapters, etc.

Are you really so dense that you can't understand this? Saying you can do host-to-host communication over Thunderbolt would be like me saying I can simply solder wires one-to-one from the pins in a PCI slot to another computer's PCI slot... It doesn't work that way; it doesn't even start to make sense.

And again: this is why I think it's best to abandon the conversation at this point. You always do this: post wrong information and argue and argue until people just plain give up, then claim to have won the argument. In the end you're still wrong; we just don't care to argue it with you after a while.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Look, if I plug two Ethernet adapters into two computers' PCIe buses, link them up using a Cat 5e cable, and set up a proper software network stack... is it the PCIe bus that is networking both computers? Of course not, it's the Ethernet network I just built.

Of course, but Ethernet can be replaced by another physical link. As I have tried to explain, Ethernet itself knows nothing about networks; it connects a source and a destination. The networking capability is added on top of the physical layer (which may or may not be Ethernet).

That's the point Aiden is trying to make. Thunderbolt, a pure external PCIe bus, cannot network two computers. You would need to plug in a device that is able to do host-to-host communication. But then it's not Thunderbolt that's doing the networking, it's the device, be it IEEE 1394 or Ethernet adapters, etc.

First, a daisy chain is a network. Secondly, Thunderbolt is not a pure external PCIe bus: PCIe and DisplayPort are mapped onto a Thunderbolt transport layer that sends the content in packets to a destination. Each Thunderbolt controller in the chain contains a switch and either passes a packet forward or accepts it if it is the destination.

From Intel:

DisplayPort and PCI Express protocols are mapped onto the transport layer. The mapping function is provided by a protocol adapter which is responsible for efficient encapsulation of the mapped protocol information into transport layer packets. Mapped protocol packets between a source device and a destination device may be routed over a path that may cross multiple Thunderbolt controllers. At the destination device, a protocol adapter recreates the mapped protocol in a way that is indistinguishable from what was received by the source device.

http://www.intel.com/content/dam/doc/technology-brief/thunderbolt-technology-brief.pdf
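As a toy model of that description (invented names, no resemblance to the real Thunderbolt packet format): each controller in the chain looks at a packet's destination and either accepts it or passes it along.

Code:
# Toy daisy-chain routing in the spirit of the Intel brief: each
# controller accepts packets addressed to it, else forwards them.
# Purely illustrative; not the actual Thunderbolt protocol.
def route(packet, chain):
    for controller in chain:            # walk down the daisy chain
        if controller == packet["dst"]:
            return f"{controller} accepted {packet['payload']!r}"
    return "destination not on this chain"

chain = ["host", "display", "disk"]
print(route({"dst": "disk", "payload": b"\x01\x02"}, chain))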

Also, two Macs can already communicate over Thunderbolt in target disk mode.
 


KnightWRX

macrumors Pentium
Jan 28, 2009
15,046
4
Quebec, Canada
Of course, but Ethernet can be replaced by another physical link. As I have tried to explain, Ethernet itself knows nothing about networks; it connects a source and a destination. The networking capability is added on top of the physical layer (which may or may not be Ethernet).

Ethernet is not a physical layer. Ethernet is a networking protocol, a fairly low-level one. It comprises host addresses (known as MAC addresses) and has a frame definition for sending frames over the network with a standardized header. It's not a simple source-to-destination interconnect; it supports many different topologies for interconnecting multiple destinations.
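For concreteness, a sketch of an Ethernet II frame laid out as described (the MAC addresses below are made up, and the FCS/CRC is omitted):

Code:
# Ethernet II frame sketch: destination MAC, source MAC, EtherType,
# then payload. MAC addresses are made up; FCS/CRC omitted.
import struct

def mac(s):
    return bytes(int(part, 16) for part in s.split(":"))

dst = mac("aa:bb:cc:dd:ee:ff")          # destination host address
src = mac("11:22:33:44:55:66")          # source host address
ethertype = struct.pack("!H", 0x0800)   # 0x0800 = IPv4 payload

frame = dst + src + ethertype + b"payload bytes here"
print(frame.hex())
print("header length:", len(dst + src + ethertype), "bytes")  # 14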

And you're forgetting that transport layers aren't just a networking feature. Thunderbolt has a transport layer; that doesn't mean it can be used for host-to-host communication. By your definition, a computer by itself is a network (and frankly it is, a Host Area Network if you will: a network built within a single node for its peripherals to communicate with each other).

Heck, by that token, SATA is networking and you could basically build a cluster of computers through purely SATA connections. It. Makes. No. Sense.

You're not making sense. You're just trying not to be wrong. I'm done with you; you're never going to get it. Keep arguing. I'll do you one better at this point: I'll simply add you to ignore. Nothing good ever comes of someone who keeps arguing in circles anyhow.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Ethernet is not a physical layer. Ethernet is a networking protocol, a fairly low-level one. It comprises host addresses (known as MAC addresses) and has a frame definition for sending frames over the network with a standardized header. It's not a simple source-to-destination interconnect; it supports many different topologies for interconnecting multiple destinations.

It's not really important to the argument; Ethernet is obviously part of both the physical layer and the data link layer. Routing of packets is done higher up in the protocol stack, independent of what physical link is used. That is my point, for example using TCP/IP over FireWire. I'm not talking about TCP/IP here, however; Ethernet was mentioned as an example, and because the physical layer can be exchanged, I hoped it would make the point.

And you're forgetting that transport layers aren't just a networking feature. Thunderbolt has a transport layer; that doesn't mean it can be used for host-to-host communication. By your definition, a computer by itself is a network (and frankly it is, a Host Area Network if you will: a network built within a single node for its peripherals to communicate with each other).

Heck, by that token, SATA is networking and you could basically build a cluster of computers through purely SATA connections. It. Makes. No. Sense.

No, that's not what I'm doing; you're missing the point. You would need to add software to support it by overlaying a new protocol, unsurprisingly. The question is whether Thunderbolt provides the necessary means to do it.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
shot yourself in the foot here....

Also, two Macs can already communicate over Thunderbolt in target disk mode.

This factoid actually harms your argument.

Why can't two Apples communicate over T-Bolt while running Apple OSX - why does one Apple have to be in a catatonic BIOS-level "target disk mode" state in order to communicate?

Simply because the target disk mode computer has to emulate a PCIe "slave" for the "master". I'd love to see the "pcimap" from an Apple connected to an Apple in target disk mode.

If "shared PCIe" were practical, the market would be full of multi-socket systems using cheaper Core i* chips on shared PCIe.

But it's not, and it isn't.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
This factoid actually harms your argument.

Why can't two Apples communicate over T-Bolt while running Apple OSX - why does one Apple have to be in a catatonic BIOS-level "target disk mode" state in order to communicate?

That's how target disk mode works, and has always worked.

If "shared PCIe" were practical, the market would be full of multi-socket systems using cheaper Core i* chips on shared PCIe.

But it's not, and it isn't.

First, it requires that a fast interconnect (like Thunderbolt) is available. The practice is supported with non-transparent bridging. I don't know if this is used; I just don't think a flat "not possible" is warranted when someone throws out pure speculation about possible future uses.

The PCI Express specification has been silent with regards to implementing multi processor systems. Because of this, many have assumed that distributed processing cannot be implemented using PCI Express. This, of course, is incorrect; given that PCI Express is software compatible with PCI, and PCI systems have long implemented distributed processing.


http://www.plxtech.com/files/pdf/technical/expresslane/NontransparentBridging.pdf

http://www.ge-ip.com/userfiles/file...eer_interconnect_wp_gft804.pdf?cid=GlobalSpec

http://www.hpcwire.com/hpcwire/2011..._a_high-performance_cluster_interconnect.html
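As a toy model of the non-transparent bridging idea from the PLX paper (purely illustrative; real NTB windows are configured by the chipset and driver, not application code): each host keeps its own address domain, and the bridge translates accesses that fall inside a window from one domain to the other.

Code:
# Toy non-transparent bridge: two address domains, one translated
# window between them. Illustrative only; real NTB setup happens
# in hardware/driver configuration.
class NTBridge:
    def __init__(self, base_a, base_b, size):
        self.base_a, self.base_b, self.size = base_a, base_b, size

    def translate(self, addr_in_a):
        offset = addr_in_a - self.base_a
        if not 0 <= offset < self.size:
            raise ValueError("address outside the NTB window")
        return self.base_b + offset      # same offset, other domain

ntb = NTBridge(base_a=0x8000_0000, base_b=0x4000_0000, size=0x10_0000)
print(hex(ntb.translate(0x8000_1234)))   # -> 0x40001234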
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
First, it requires that a fast interconnect (like Thunderbolt) is available. The practice is supported with non-transparent bridging. I don't know if this is used; I just don't think a flat "not possible" is warranted when someone throws out pure speculation about possible future uses.

So, is non-transparent bridging implemented in T-Bolt?

After all, the original topic was using T-Bolt as a network.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
So, is non-transparent bridging implemented in T-Bolt?

After all, the original topic was using T-Bolt as a network.

It would be implemented in the host, or the controller? The conflict would appear on the host as far as I can tell; a schematic overview shows that the controller has a PCIe switch as well as a Thunderbolt switch.

Both would work, I think, but the answer is that you and I don't know, since that isn't publicly available information AFAIK. I only reacted to your assertive "not possible"; it certainly looks possible on the surface, and I wouldn't rule it out.
 

gnasher729

Suspended
Nov 25, 2005
17,980
5,566
This factoid actually harms your argument.

Why can't two Apples communicate over T-Bolt while running Apple OSX - why does one Apple have to be in a catatonic BIOS-level "target disk mode" state in order to communicate?

Simply because the target disk mode computer has to emulate a PCIe "slave" for the "master". I'd love to see the "pcimap" from an Apple connected to an Apple in target disk mode.

If "shared PCIe" were practical, the market would be full of multi-socket systems using cheaper Core i* chips on shared PCIe.

But it's not, and it isn't.

If you had "shared PCIe", then every computer sharing it would think that it controls the PCIe bus. So unless someone made major changes to the operating system, you would just have two or more computers fighting each other for control of the PCIe bus.

Now I suppose you could create a network similar to Gigabit Ethernet but faster (let's call it Xnet) and create Thunderbolt-to-Xnet adapters to build a very, very fast network. But that would still be a network, with multiple independent computers communicating through a fast network. And that kind of thing is available already.
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
It would be implemented in the host, or the controller? The conflict would appear on the host as far as I can tell; a schematic overview shows that the controller has a PCIe switch as well as a Thunderbolt switch.

Both would work, I think, but the answer is that you and I don't know, since that isn't publicly available information AFAIK. I only reacted to your assertive "not possible"; it certainly looks possible on the surface, and I wouldn't rule it out.

Please reread my comment:

It just doesn't work like that.

T-Bolt is not a network - it's a private peripheral bus (PCIe). You can't simply put two processors on a private peripheral bus - two masters only works in kinky porn videos, it doesn't work for a peripheral bus.

A dual socket (e.g. 2*4 = 8 core) machine has tremendous bandwidth to a single shared memory realm. Even if you managed to solve the issues with dual masters on the PCIe, you'd have two separate memory realms with horrifically slow bandwidth between the realms (and that assumes some huge amount of magic to create an illusion that the two memory spaces are really one).

Two Mini-Macs on T-Bolt can never approach the capabilities of a dual-socket workstation. Never.
(emphasis added)

I said that it wasn't simple, not that it wasn't possible.
 

subsonix

macrumors 68040
Feb 2, 2008
3,551
79
Please reread my comment:


(emphasis added)

I said that it wasn't simple, not that it wasn't possible.

Hm, that's subtle. But never mind; it is possible, and the prospect of using PCIe for distributed computing is discussed elsewhere, as you can see.
 

mex4eric

macrumors 6502
Original poster
Jun 23, 2009
263
0
Ottawa, Canada
Apple has shown us some add-on strategies, such as the MacBook Air plus a Thunderbolt Display. Perhaps the Mac Pro can be replaced with a 27" iMac plus a Thunderbolt expander box where you can add special-purpose cards and more drives. Or a Thunderbolt cable joining two 27" iMacs together to get an 8-core machine with more memory?

The big Mac Pro box is dated.

But will this increase overall sales???
 

AidenShaw

macrumors P6
Feb 8, 2003
18,667
4,677
The Peninsula
Apple has shown us some add-on strategies, such as the MacBook Air plus a Thunderbolt Display. Perhaps the Mac Pro can be replaced with a 27" iMac plus a Thunderbolt expander box where you can add special-purpose cards and more drives. Or a Thunderbolt cable joining two 27" iMacs together to get an 8-core machine with more memory?

It just doesn't work like that.

T-Bolt is not a network - it's a private peripheral bus (PCIe). You can't simply put two processors on a private peripheral bus - two masters only works in kinky porn videos, it doesn't work for a peripheral bus.

A dual socket (e.g. 2*4 = 8 core) machine has tremendous bandwidth to a single shared memory realm. Even if you managed to solve the issues with dual masters on the PCIe, you'd have two separate memory realms with horrifically slow bandwidth between the realms (and that assumes some huge amount of magic to create an illusion that the two memory spaces are really one).

Two Mini-Macs on T-Bolt can never approach the capabilities of a dual-socket workstation. Never.
 