
Ansari
macrumors newbie, Original poster (Jun 6, 2008, Cambridge, Ontario)
How large can you scale a 300 DPI image before it begins to deteriorate if the file is to be output at 1200 dpi? 133 lpi?



I think I'm off in my calculation but here's what I came up with.

To convert lpi to dpi, I multiply it by 1.5 to 2, yes? So 133 x 2 = 266, which means I can reduce my image to 266 dpi to print it at 133 lpi? I'm not sure where the 1200 dpi comes into play.
 
A 300 dpi image would have to be quadrupled in resolution to achieve an effective 1200 dpi... which of course means, unless some sort of intelligent scaling is employed, a corresponding drop in image sharpness.

What is lpi anyway? If it's "lines per inch" like I think it is, then... the conversion factor will depend on the width/height of the media being rendered on, I would think. I'm not sure exactly how the conversion works.
 
What is lpi anyway? If it's "lines per inch" like I think it is, then... the conversion factor will depend on the width/height of the media being rendered on, I would think. I'm not sure exactly how the conversion works.

It is "Lines Per Inch" and I was taught to double the lpi (or screen frequency) to get the optimum dpi.

So ... if your printed image is going to be 100mm x 150mm, printed with a 75 lpi screen, you want the image file to be the same dimensions at 150 dpi. Preferably a TIFF.

Cheers

Jim
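Jim's 2x rule above can be sketched as a quick calculation (the helper name is my own, just for illustration): given a print size in millimetres and a screen frequency, it works out the target image resolution and the pixel dimensions the file needs.

```python
# Sketch of the "double the lpi" rule: print size + screen frequency
# -> required image resolution and pixel dimensions.
MM_PER_INCH = 25.4

def required_pixels(width_mm, height_mm, lpi, factor=2.0):
    ppi = lpi * factor                        # e.g. 75 lpi -> 150 ppi
    w_px = round(width_mm / MM_PER_INCH * ppi)
    h_px = round(height_mm / MM_PER_INCH * ppi)
    return ppi, w_px, h_px

# Jim's example: 100mm x 150mm at a 75 lpi screen
print(required_pixels(100, 150, 75))  # (150.0, 591, 886)
```

So the 100mm x 150mm image needs to be roughly 591 x 886 pixels to hit 150 dpi at print size.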
 
It is "Lines Per Inch" and I was taught to double the lpi (or screen frequency) to get the optimum dpi.

This applies to converting from analog to digital. Say you have a negative that you know holds about 80 lines per unit of length; you would need to scan it at a minimum of 160 pixels per unit of length. Use inches, mm or yards for the unit of length.

This rule you were taught comes from the Nyquist-Shannon sampling theorem, but it is a very inexact interpretation of it. You should be using a factor closer to three.
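The sampling factor in this post can be sketched as a one-liner (the function name is my own): the bare Nyquist minimum is 2x the detail frequency, while the post recommends closer to 3x in practice.

```python
def scan_ppi(lines_per_unit, factor=2.0):
    # Nyquist minimum is 2x the line frequency; a factor near 3x
    # is safer in practice, per the post above.
    return lines_per_unit * factor

print(scan_ppi(80))       # 160.0 (bare Nyquist minimum)
print(scan_ppi(80, 3.0))  # 240.0 (safer factor)
```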
http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
 
lpi dpi ppi spi ???

Whoa! We've got some confusing terms here! LPI is the specification for a line screen in traditional halftone printing (sometimes called amplitude-modulated screening), and yes, the rule of thumb is 1.5-2 times the LPI for the appropriate PPI to print with best quality.

PPI=pixels-per-inch - how your camera/scanner/printer measures "resolution"
DPI=dots-per-inch - same as PPI, but applied to a binary device, i.e. an imagesetter or b/w laser printer that can only produce two colours - black or white. Imagesetters create halftoned images from continuous tone images, either on film or paper from which printing plates will be made, or more often these days - directly to plate. Sometimes referred to as spots-per-inch (SPI!)
LPI=lines-per-inch - the number of halftone dots per inch in a halftone reproduction. The higher this number is, the better the quality of reproduction. Newspapers generally print in the 85-110 LPI range, magazines print at 120-175, and some fine art or other specialty prints are printed even higher - 200-300 LPI.

You can enlarge a 300 PPI image by a factor of about 113% (no resampling in photoshop) to yield approximately 266 PPI, the so-called ideal resolution to produce a 133 LPI reproduction halftone. The maximum you should increase the image size would be about 225%, which gives you 133 DPI, the so-called minimum resolution to produce a 133 LPI screen. If your original image has no sharp details, the maximum enlargement is often sufficient. It depends on the original just how much enlargement is okay, and yes, do not send JPEG files for printing by this method, use TIFFs.
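The enlargement limits in the paragraph above follow from a simple ratio (the helper below is my own sketch, not anyone's tool): the maximum scale factor is the original resolution divided by the target resolution, where the target is the LPI times a quality factor (2x for "ideal", 1x for bare minimum).

```python
def max_enlargement(original_ppi, lpi, quality_factor):
    # quality_factor: 2.0 for the "ideal" 2x-lpi resolution,
    # 1.0 for the bare-minimum 1x-lpi resolution.
    target_ppi = lpi * quality_factor
    return original_ppi / target_ppi

# 300 PPI original, 133 LPI screen:
print(round(max_enlargement(300, 133, 2.0) * 100))  # ~113 (%)
print(round(max_enlargement(300, 133, 1.0) * 100))  # ~226 (%), the "about 225%" above
```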

A digital imagesetter is used to produce halftones for printing, replacing the old vertical cameras and photo-mechanical "screens" that were used in the pre-digital era. The imagesetter must print at very high resolution in order to create the illusion of continuous shades of gray. To produce a 133 line screen, each dot in the screen must be able to vary in size from nothing to fully black. In your example, a 1200 DPI imagesetter can produce a 133 LPI screen, but each dot would only be able to represent about 82 levels of gray - not bad, but not enough to produce the full range of grays, so you would likely see "stepping" or "banding" in the final image. To determine how many greys an imagesetter can produce, use the following formula:

(output resolution / screen frequency)² + 1

And the answer should be 250 or greater for best results.
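That formula is easy to check for the resolutions mentioned in this thread (the function name is my own):

```python
def grey_levels(output_dpi, lpi):
    # (output resolution / screen frequency)^2 + 1
    # In practice an 8-bit pipeline caps usable greys at 256.
    return int((output_dpi / lpi) ** 2) + 1

for dpi in (1200, 1800, 2400):
    print(dpi, grey_levels(dpi, 133))
# 1200 -> 82, 1800 -> 184, 2400 -> 326 (more than the 256 an 8-bit file holds)
```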

Imagesetters can image up to 4000 DPI, so you may just have to ask your service provider to up his resolution to either 1800 (184 greys) or 2400 (>255 greys) DPI to achieve the best result for a 133 LPI screen. Higher resolutions allow the imagesetter to more accurately reproduce a traditional optical screen at the correct angle, which becomes more important when printing with more than one colour, for example with four-colour process printing.

As far as the original image goes - if your image is 8x10 at 300 PPI and you double it to 16x20, the PPI is halved, i.e. 150 PPI - and the file is the same size, as you haven't really changed anything. Of course Photoshop can "upsample" it to 300 PPI, but nothing is gained: pixels are simply doubled, and file size is quadrupled, as is the processing/transmission time, and there is no benefit to doing this. The opposite is also true: halving the dimensions doubles the resolution, i.e. an 8x10 at 300 PPI becomes a 4x5 at 600 PPI. Not having enough resolution in the image will produce "pixellation", i.e. you can clearly make out the individual "squares" the image is made of. Having too much resolution has no real bad side effect, but wastes processing time on unnecessary pixels!
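The inverse relationship described above can be sketched in a few lines (the helper is my own illustration): scaling the print dimensions by some factor divides the effective PPI by the same factor, and the pixel count stays the same unless you resample.

```python
def rescale(width_in, height_in, ppi, scale):
    # Scaling print dimensions by `scale` divides effective PPI by `scale`;
    # total pixels are unchanged unless the image is resampled.
    return width_in * scale, height_in * scale, ppi / scale

print(rescale(8, 10, 300, 2.0))  # (16.0, 20.0, 150.0) - doubled size, halved PPI
print(rescale(8, 10, 300, 0.5))  # (4.0, 5.0, 600.0)  - halved size, doubled PPI
```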

I hope this helps. You will find a similar relationship with non-traditional screening techniques, like those used on most modern inkjet printers. These devices, and a lot of more modern printing presses use an opposite technique to the above-mentioned method called frequency-modulated (yes, FM and AM!) or stochastic (random) screening. While AM screening uses a fixed grid of varying spot sizes to represent different levels of grey, FM screening uses a single spot size on a variable grid.

The sampling/re-sampling algorithms are the same, and you can see this whenever you print at a lower resolution than your output device is printing at, i.e. you print a 180 PPI image on a 720 PPI printer - it looks soft and blurry and may even show the pixel grid... though FM screening is much more forgiving of this problem, it still requires at least one pixel in the original image for each pixel in the output. Printers add their own "dithering" to smooth out colour, and that helps with the problem, but details are still soft.

Get a loupe and check out some magazines and books to better see and understand these concepts - they're important to anyone who works in the graphic arts.

Good luck!

dmz
 