How do I control memory use when rendering PDFs in Java?

Q: We plan to use PDFTron to create snapshots of, and parse, a large
number of PDF files. To get acquainted with the product we tried the
trial version, and we ran into the following issue:

When processing a large number of PDF files consecutively (one by
one), the memory allocated by the process grows steadily.

Please find attached a Java project created by our team that
demonstrates the behavior. We used the snapshot-creation samples from
the PDFTron distribution. The memory allocated by the javaw.exe
process grows until the operating system hangs.

Test environment:
OS: Windows 7 (x64);
Memory (RAM): 2 Gb;
Java version: 1.6.0_21;
IDE: Eclipse;
PDFTron version: 5.0.2

Could you please help us understand the cause of the problem, and
whether we are using your product correctly?

package test;

import pdftron.Common.PDFNetException;
import pdftron.PDF.PDFDoc;
import pdftron.PDF.PDFDraw;
import pdftron.PDF.PDFNet;
import pdftron.PDF.Page;

public class PdfTronTest {
  public static void main(String[] args) throws PDFNetException {
    System.out.println("==Start initialize PDFTron==");
    PDFNet.initialize();
    System.out.println("==End initialize PDFTron==");

    String filename = "ph2.pdf";

    for (int i = 0; i < 1000; i++) {
      System.out.println("=Start #" + i + "=");
      example(filename);
      System.out.println("=Complete #" + i + "=");
    }
  }

  public static void example(String filename) {
    String input_path = "./TestFiles/";
    String output_path = "./TestFiles/Output/";

    // A new PDFDraw is created on every call.
    PDFDraw draw = new PDFDraw();
    String str92png = filename + ".92dpi.png";

    try {
      PDFDoc doc = new PDFDoc(input_path + filename);

      Page pg = doc.getPage(1);
      draw.export(pg, output_path + str92png);

      System.out.println("Example 1: " + output_path + str92png
          + ". Done.");
      doc.close();
    } catch (PDFNetException e) {
      e.printStackTrace();
    } catch (Exception e) {
      e.printStackTrace();
    }
  }
}
A: It seems the problem is that in the loop you are creating a new
PDFDraw that is never explicitly disposed. Unfortunately, the Java
garbage collector is not always smart enough to track native
resources. To fix this, call draw.destroy() (Dispose() in C#/VB)
after each iteration, or reuse the same PDFDraw object.
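The effect of the fix can be sketched without the PDFTron SDK. In the snippet below, NativeRenderer is a hypothetical stand-in for PDFDraw: it counts simulated native allocations, so the try/finally destroy pattern's effect is visible. The point is the pattern, not the PDFTron API itself:

    public class DestroyPattern {
        // Simulated count of live native allocations.
        static int liveNativeHandles = 0;

        // Hypothetical stand-in for a native-backed object like PDFDraw.
        static class NativeRenderer {
            NativeRenderer() { liveNativeHandles++; }    // simulated native alloc
            void export(String page) { /* render... */ }
            void destroy() { liveNativeHandles--; }      // simulated native free
        }

        public static void main(String[] args) {
            for (int i = 0; i < 1000; i++) {
                NativeRenderer draw = new NativeRenderer();
                try {
                    draw.export("page-" + i);
                } finally {
                    // Without this call, 1000 native handles would stay live,
                    // even though the Java wrappers become garbage.
                    draw.destroy();
                }
            }
            System.out.println("live handles after loop: " + liveNativeHandles);
        }
    }

The try/finally guarantees the native handle is released even if export throws, which is why it is preferable to a bare destroy() call at the end of the loop body.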

Q: Thank you for your quick response. The solution you provided fixed
our problem.

However, we have run into another issue, similar to the first: while
parsing a PDF file, the memory allocated by the process grows.
Please find attached a Java example that shows the parsing process:
all pages, lines, and words are iterated over one by one. No actual
word processing takes place. We took the parsing method from the
samples in the PDFTron distribution (example4).

A: Just like in the PDF rendering sample above, you need to call
Destroy() (Dispose() in C#/.NET) on the TextExtractor to release the
allocated resources promptly. Please let me know if this helps.
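An alternative to destroying the extractor per page is to create it once and reuse it across pages, destroying it once at the end. The sketch below uses a hypothetical stand-in class (TextExtractorLike) rather than PDFTron's real pdftron.PDF.TextExtractor, so the resource accounting is visible:

    public class ExtractorReuse {
        // Simulated count of live native allocations.
        static int liveNativeHandles = 0;

        // Hypothetical stand-in for a native-backed extractor.
        static class TextExtractorLike {
            TextExtractorLike() { liveNativeHandles++; }  // simulated native alloc
            void begin(int page) { /* re-target extractor to this page... */ }
            void destroy() { liveNativeHandles--; }       // simulated native free
        }

        public static void main(String[] args) {
            // One extractor for the whole document: allocate once, free once.
            TextExtractorLike txt = new TextExtractorLike();
            try {
                for (int page = 1; page <= 500; page++) {
                    txt.begin(page);
                    // ... walk lines and words of this page here ...
                }
            } finally {
                txt.destroy(); // single native handle, released exactly once
            }
            System.out.println("live handles: " + liveNativeHandles);
        }
    }

Reuse keeps the number of native allocations constant regardless of page count, which also avoids per-page allocation overhead; either way, the handle must be released explicitly rather than left to the garbage collector.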