
digitalcuriosity

macrumors 6502a
Original poster
Aug 6, 2015
It seems Apple has well-known dates for product releases; stockholders know these dates and depend on them for returns on their investments.

In the rush to hit these dates, could the quality of parts and the thoroughness of testing be somewhat reduced?

In the past, we have seen some products suffer quality and operational problems when first released.
 
No. Apple plans product launches far ahead and has been known to delay them when they aren't ready (HomePod, AirPower, etc.).
 
The iPhone schedule has pretty much settled in, but launches have still been delayed, like last year and this year. iPads aren't released at any set time; it could be spring, it could be fall. So no, I don't think that at all.
 
I think we would be kidding ourselves if we thought schedule pressures don't exist or have absolutely no impact on QA/QC. I know from experience in my own engineering work that schedule and budget go hand in hand, so while we may believe Apple's deep coffers somehow translate into looser schedules and budgets, there have to be limits on that for them too.
 
No. Apple plans product launches far ahead and has been known to delay them when they aren't ready (HomePod, AirPower, etc.).
So you are implying that the large Apple stockholders don't know when a product is going to be released or delayed.
When an iPhone or iPad is sold and people find damaged housings and faulty screens, is that not a sign of units being rushed out without an operational test and a build-quality inspection?
When people buy a product costing $800 to $2,000 US, they expect a quality product when they open the box.
 
So you are implying that the large Apple stockholders don't know when a product is going to be released or delayed.
When an iPhone or iPad is sold and people find damaged housings and faulty screens, is that not a sign of units being rushed out without an operational test and a build-quality inspection?
When people buy a product costing $800 to $2,000 US, they expect a quality product when they open the box.

Every product ever made can have issues like that, especially when produced in vast quantities.

I would actually go the other way and say how incredible it is to have as few issues as these do when you consider the scale.
 
Every product ever made can have issues like that, especially when produced in vast quantities.

I would actually go the other way and say how incredible it is to have as few issues as these do when you consider the scale.
I am already reading reports of quality problems, which I feel is not a good sign.
 
So you are implying that the large Apple stockholders don't know when a product is going to be released or delayed.
When an iPhone or iPad is sold and people find damaged housings and faulty screens, is that not a sign of units being rushed out without an operational test and a build-quality inspection?
When people buy a product costing $800 to $2,000 US, they expect a quality product when they open the box.

To build on what Lobwedgephil mentioned, the problem is more that when you are building tens of millions of a thing, QA gets a little complicated. Do I fully test every single one? That adds to the cost of the product, and may not be terribly helpful if the defect rate is something like 1-3% on average.

QA for mass production isn't checking 100% of the devices coming off the line unless the check can be quick and automatic. Instead it is done via random sampling of batches. So say I produce 10,000 widgets a day; I may test 100 of them at random, or 1%. Given enough samples, a true random sampling will get you quite a bit of information on how manufacturing is actually going. This is generally the "front end" of the quality checks: a company that needs to produce 1 million widgets for launch will randomly sample, say, 1% of them (10,000) for QA. Those 10,000 should not show anything abnormal, and should themselves have no more than ~100 failures. Spikes in failure rates, specific defects showing up repeatedly, etc., all suggest something systemic that is fixable. However, if the failures are few, say 50 failed for a rate of 0.5%, and they are seemingly random rather than systemic, it becomes a question of just how much effort you expend to find the other ~4,950 likely defective widgets out of a million. And as a customer, realize that you would be the one paying that cost.
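To make the sampling math above concrete, here's a toy simulation. All the numbers (production size, a 0.5% true defect rate, a 1% sample) are hypothetical illustrations from this discussion, not anyone's actual process:

```python
import random

random.seed(42)  # deterministic for the example

TOTAL = 1_000_000          # widgets produced for launch (hypothetical)
TRUE_DEFECT_RATE = 0.005   # assume 0.5% are truly defective (hypothetical)
SAMPLE_FRACTION = 0.01     # QA randomly tests 1% of production

# Mark each widget defective (True) or good (False)
widgets = [random.random() < TRUE_DEFECT_RATE for _ in range(TOTAL)]

# QA pulls a true random sample of 1% and counts failures
sample = random.sample(widgets, int(TOTAL * SAMPLE_FRACTION))
found = sum(sample)

# The sample rate estimates the whole run's defect rate,
# which implies how many defective units QA never touched.
estimated_rate = found / len(sample)
implied_total_defects = estimated_rate * TOTAL

print(f"defects found in sample: {found} of {len(sample)}")
print(f"estimated defect rate:   {estimated_rate:.2%}")
print(f"implied defects missed:  {implied_total_defects - found:.0f}")
```

The point of the sketch: the sample tells you the rate quite accurately, but it never tells you *which* of the remaining ~990,000 units are the bad ones, which is exactly the "how much effort do you spend finding the other ~4,950" problem.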

The other part is tracking failure rates: devices that are DOA, are exchanged or returned due to defects, or develop problems during their lifespan. Getting that rate to 0% is surprisingly difficult. This data is used to drive investigations, manufacturing improvements, stocking of repair parts, etc. It's helpful for cases where random sampling misses a bad batch, or where something systemic simply can't be caught by QA because it takes, say, 1,000 hours of use before it shows up.

One thing the internet does is make it super easy to share information about the failures that can and will make it through QA. This isn't a bad thing, per se, because it makes it harder for a real problem to be kept quiet, say a bad batch of produce with salmonella. But it's worth also pointing out that the plural of anecdote isn't data. Anecdotes of people having issues are a useful tool, but the inherent bias of self-reporting means they're not an ideal way to gauge how things really are.

Especially in the case of the iPad: I would bet there have been at least ~5k devices that were bad out of the box for every single iPad launch dating back to 2010, if I assume an early failure rate of about 1-2%. Without knowing how many of those buyers are early adopters who would post on a forum like this one, what percentage of those affected actually report, etc., it becomes hard to figure out the odds that you would get a dud device.
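A quick back-of-envelope shows why a handful of forum reports is compatible with thousands of bad units. Every number here is a made-up assumption for illustration (launch volume, forum participation, reporting rate), not real data:

```python
# Back-of-envelope: how many forum reports might a launch generate?
# All figures below are hypothetical assumptions, not real statistics.
launch_units = 500_000        # assumed units sold at launch
early_failure_rate = 0.01     # assumed 1% bad out of the box
forum_user_fraction = 0.02    # assumed share of buyers active on forums
report_fraction = 0.5         # assumed share of affected forum users who post

defective = launch_units * early_failure_rate
expected_reports = defective * forum_user_fraction * report_fraction

print(f"defective units at launch: {defective:.0f}")
print(f"expected forum reports:    {expected_reports:.0f}")
```

Under these assumptions, 5,000 defective units produce only about 50 visible forum posts, which is why a thread full of complaints can't, by itself, tell you whether a launch is unusually bad.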
 
From what you have posted, and I believe you're correct, for many people getting their new iPads, turning them on is the first time the device has been started as a complete unit.
I'm sure that in the first lot of, say, 25,000 iPads, someone in quality pulls, say, 25 units (that's a guess) and checks whether they boot up, whether their screens look clear with good color, and whether the build meets the quality expected.
 