How to Transform & Optimize PDF


We're having a slight problem with some PDF files we're receiving from
our publishers. Is there an easy way to downsize the PDF files using
the PDFNet library, i.e. transform files suitable for printing into
files suitable for display on the web?

You can use PDFNet to implement various optimizations on existing PDF
documents such as removal of embedded fonts, image sub-sampling, image
recompression, etc.
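If you decide to implement image sub-sampling yourself, the core pixel operation is independent of PDFNet. As a rough illustration only (plain Java with no PDFNet calls; the `boxFilter2x` helper and the one-int-per-pixel layout are assumptions of this sketch), a 2x box filter halves each dimension by averaging every 2x2 block:

```java
// Self-contained sketch of 2x box-filter sub-sampling for an 8-bit
// grayscale image stored row-major, one int per pixel. Illustrative
// only; in a real PDFNet workflow the pixel data would come from the
// image stream instead.
public class Subsample {
    // Assumes width and height are even, for simplicity.
    public static int[] boxFilter2x(int[] pixels, int width, int height) {
        int outW = width / 2, outH = height / 2;
        int[] out = new int[outW * outH];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                int sum = pixels[2 * y * width + 2 * x]
                        + pixels[2 * y * width + 2 * x + 1]
                        + pixels[(2 * y + 1) * width + 2 * x]
                        + pixels[(2 * y + 1) * width + 2 * x + 1];
                out[y * outW + x] = sum / 4; // average of the 2x2 block
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] img = {10, 20, 30, 40};          // a 2x2 test image
        int[] small = boxFilter2x(img, 2, 2);  // -> single pixel
        System.out.println(small[0]);          // 25, the block average
    }
}
```

Halving both dimensions this way cuts the raw image data to a quarter of its size before any recompression is applied.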

There are many possible ways to optimize existing PDF documents, and the
implementation may require advanced knowledge of the PDF format.

For example, to remove embedded fonts you could use the algorithm
suggested in the following FAQ:

If you would like to recompress (and/or sub-sample) all images in a PDF
document, you may want to take a look at the JBIG2 sample project:

The optimizations you choose to implement will usually depend on the
files you are dealing with.

The JBIG2 sample project only handles grayscale images. Is there, by any
chance, a code example that handles colour images (4 components per image,
8 bits per component)?

Currently, we do not have sample code that optimizes every type of PDF
image (in part because JBIG2 compression only applies to 1 bpc
monochrome images). Still, it should be relatively easy to extend the
JBIG2 example to handle other image types and compression algorithms,
image sub-sampling, etc.

For example:

int cnum = input_image.GetComponentNum();
if (cnum == 1) {
   // Process grayscale images ...
}
else if (cnum == 3) {
   // Process RGB images ...
}
else if (cnum == 4) {
   // Process CMYK images ...
}

Or you could simply normalize all images to RGB format using Image2RGB:

pdftron.PDF.Image input_image = new pdftron.PDF.Image(obj);
Image2RGB conv = new Image2RGB(input_image);
FilterReader reader = new FilterReader(conv);
// ... read the RGB image data, sub-sample, optimize, etc.

Then replace the old image with the new one.
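The "sub-sample" step on the RGB data read back from the filter can be done with plain array arithmetic. A minimal sketch (plain Java, no PDFNet dependency; the `RgbSubsample` class and the interleaved RGBRGB... byte layout are assumptions of this example) that keeps every `factor`-th pixel:

```java
// Sketch: nearest-neighbour sub-sampling of interleaved 8-bit RGB data
// by an integer factor, i.e. keeping one pixel out of every
// factor x factor block. Assumes row-major layout with 3 bytes
// (R, G, B) per pixel.
public class RgbSubsample {
    public static byte[] subsample(byte[] rgb, int width, int height, int factor) {
        int outW = width / factor, outH = height / factor;
        byte[] out = new byte[outW * outH * 3];
        for (int y = 0; y < outH; y++) {
            for (int x = 0; x < outW; x++) {
                int src = ((y * factor) * width + (x * factor)) * 3;
                int dst = (y * outW + x) * 3;
                out[dst] = rgb[src];         // R
                out[dst + 1] = rgb[src + 1]; // G
                out[dst + 2] = rgb[src + 2]; // B
            }
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] img = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12}; // 2x2 RGB image
        byte[] half = subsample(img, 2, 2, 2); // keeps only pixel (0, 0)
        System.out.println(half.length);       // 3 bytes: one RGB pixel
    }
}
```

A box filter that averages each block (as in the grayscale case) generally gives better visual quality than nearest-neighbour, at the cost of a little more code per component.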