Does the Java SDK native library do any memory management?

We are using the Java SDK to merge annotations in multiple PDFs. We containerised the app, and under load testing it consumes all the memory allocated to the container. The JVM heap stays at 700 MB, but the Java process consumes everything available: when we gave the container 4 GB, it consumed 3.9 GB of it. Even if we apply more load, it stays at 3.9 GB and doesn't break. So, just wondering, does the native library do any form of memory management?
We are properly calling PDFDoc.close() and FDFDoc.close(). Is there anything else we need to do to free up the memory? We are using Java 1.8.0_121 on Ubuntu 16.04.
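For reference, this is the shape of the close pattern we follow. NativeDoc is a made-up stand-in for com.pdftron.pdf.PDFDoc, since the real class wraps native (off-heap) memory and close() must run even if the merge throws:

```java
// Minimal sketch of the close-in-finally pattern. NativeDoc is a
// hypothetical stand-in for a native-backed document object like PDFDoc;
// its close() represents freeing the native-side memory, not just the
// Java wrapper.
public class CloseDemo {
    static class NativeDoc implements AutoCloseable {
        boolean closed = false;
        @Override public void close() { closed = true; } // real close() frees native memory
    }

    public static void main(String[] args) {
        NativeDoc doc = new NativeDoc();
        try {
            // ... open the PDF and merge annotations here ...
        } finally {
            doc.close(); // guaranteed to run, even on exceptions
        }
        System.out.println("closed=" + doc.closed);
    }
}
```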

What version of PDFNet are you on? You can check with PDFNet.getVersion().

Also are you able to reproduce using our SDK sample?
Can you post the code to reproduce?

Hi Ryan,

We are using version 6.7.1 (both the Java library and the native library).
I have a small, reduced test case with two variants:
- The serial variant takes up to 120 MB of RAM. After completion the JVM heap is at 3 MB, with no objects from the com.pdftron package in memory, yet memory consumption stays at 120 MB even after all runs have been processed.
- The parallel variant (done to simulate the actual server environment) takes up to 870 MB of RAM, and memory stays at 870 MB.
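The gap we see (small JVM heap, large process footprint) is what we are trying to explain. As a baseline, this is roughly how we read the Java-heap side of the numbers; native memory allocated by a JNI library never shows up here, so to see it we compare against the OS view (e.g. VmRSS in /proc/self/status on Linux):

```java
// Prints the JVM's own view of heap memory. Off-heap allocations made by
// a native (JNI) library are invisible to these counters, which is why
// the process RSS can be far larger than the reported heap.
public class HeapStats {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        long maxMb = rt.maxMemory() / (1024 * 1024);
        System.out.println("heapUsedMB=" + usedMb + " heapMaxMB=" + maxMb);
    }
}
```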

Thanks. The attachment contains five PDF files with 328 pages; each file is 5.8 MB in size.

After all processing had been done, I took a process dump using gcore, which gave a dump file of 7.8 GB. I viewed it using the glogg tool and could see a lot of PDF file content inside the process dump: I got hits when I searched for “%PDF-”, the PDF start signature. If we have closed all the PDFTron PDFDoc objects, why would they still be stored in memory? Are there any destructors to be called as part of the process?
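For completeness, the signature search I ran in glogg amounts to the following (sketched in plain Java on a made-up byte sample; on a real gcore dump you would stream the file rather than hold it in memory):

```java
// Counts occurrences of the "%PDF-" start signature in a byte buffer,
// the same check I ran against the gcore dump. The sample bytes below
// are fabricated for illustration.
public class SigScan {
    static int countSig(byte[] data, byte[] sig) {
        int hits = 0;
        for (int i = 0; i + sig.length <= data.length; i++) {
            boolean match = true;
            for (int j = 0; j < sig.length; j++) {
                if (data[i + j] != sig[j]) { match = false; break; }
            }
            if (match) hits++;
        }
        return hits;
    }

    public static void main(String[] args) throws Exception {
        byte[] sig = "%PDF-".getBytes("US-ASCII");
        byte[] sample = "junk %PDF-1.4 ... %PDF-1.7 junk".getBytes("US-ASCII");
        System.out.println("hits=" + countSig(sample, sig));
    }
}
```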

The attachment didn’t go through. If you can’t attach to your post in the google group, then contact support at

However, does it matter on the PDF files, or can you reproduce with any PDF file?

Hi Ryan,

Attaching two java files to the post.

Thanks. (Attachments: 1.87 KB and 1.56 KB)

It works on any PDF file.

Could you clarify exactly what the issue is? Are you running out of memory? Does memory consumption keep going up?

I’m a bit confused as the first post was that the memory consumption went up to match how much was available, but then a later post is about multi-threading.

The issue is memory consumption. It consumes more whenever any sort of parallelism is involved: the first program, reading PDF files serially, consumes far less memory than the second one doing it in parallel. Our main suspicion is that the native library might be leaking memory.
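Some increase under parallelism is expected regardless of any leak, since each concurrent task holds its own working set live at the same time. A toy sketch of what the parallel variant does (processOne is a made-up stand-in for the real merge; the pool size and buffer are illustrative only):

```java
// Running the same task on an 8-thread pool means up to 8 per-task
// working sets are resident at once, so peak memory naturally exceeds
// the serial case even with no leak.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelDemo {
    static void processOne(int id) {
        byte[] workingSet = new byte[1 << 20]; // pretend per-task buffer (1 MB)
        workingSet[0] = (byte) id;             // touch it so it isn't optimized away
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int i = 0; i < 40; i++) {
            final int id = i;
            pool.submit(() -> processOne(id));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println("done");
    }
}
```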

As long as the memory stabilizes when running the same task in multiple threads, there is no leak. If memory kept increasing indefinitely, that would be a sign of a problem.
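One simple way to apply that test: run the task repeatedly and sample used heap after a GC; a flat trend over runs suggests no leak, a steadily rising one suggests a problem. (task() here is a trivial stand-in, System.gc() is only a hint to the JVM, and for suspected native leaks you would watch process RSS instead, but the idea is the same.)

```java
// Repeatedly runs a task and samples used heap after each batch.
// A leak shows up as a monotonically rising trend across runs.
public class LeakCheck {
    static void task() {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 10_000; i++) sb.append(i); // throwaway allocations
    }

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        for (int run = 1; run <= 5; run++) {
            for (int i = 0; i < 100; i++) task();
            System.gc(); // hint only; not guaranteed to collect
            long usedMb = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
            System.out.println("run " + run + ": heapUsedMB=" + usedMb);
        }
    }
}
```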

We have not had any reports of a leak. Since running the same task multiple times at once results in more objects being in memory at once, some increase in memory over the serialized case is expected and, by itself, does not indicate a leak. That is, a leak would normally be visible in a non-concurrent test as well.