AWS Fargate bb-memory issues

Product: WebViewer Server

Product Version: pdftron/webviewer-server:2.4
Server built: Nov 6 2025 01:31:11 UTC
Server version number: 10.1.49.0
OS Name: Linux
OS Version: 5.10.245-243.979.amzn2.x86_64
Architecture: amd64
Java Home: /usr/lib/jvm/java-17-amazon-corretto
JVM Version: 17.0.17+10-LTS
JVM Vendor: Amazon.com Inc.
CATALINA_BASE: /home/tomcat/wv-server
CATALINA_HOME: /home/tomcat/wv-server
Server version name: Apache Tomcat/10.1.49

Please give a brief summary of your issue:
Incorrect memory readings and usage when running WebViewer Server on AWS Fargate.

Please describe your issue and provide steps to reproduce it:
When provisioning WebViewer Server on AWS Fargate according to the CloudFormation template from the "Deploy WebViewer JavaScript PDF Viewer to AWS" Apryse documentation, I am observing strange behaviour: the reported memory usage is extremely low at all times.

Surprisingly, in the logs after startup I am seeing:

BlackboxContextListener - {"PDFNet_version":"11.9.0-670bded97c","build_date":"2025-11-22T09:48:25+0000","num_cpu":4,"os_arch":"amd64","os_name":"Linux","os_version":"5.10.245-243.979.amzn2.x86_64","server_id":"s:dec75689-8b73-4dfb-9910-4eab723fcd54","server_version":"2.4.1-072eee4","total_memory":15704}

Moreover, the Java memory seems fine too:

[pool-2-thread-4] INFO Monitor - STATUS - Free [14758MB] Total [15704MB] CPU [0.200904%]
[pool-2-thread-4] INFO Monitor - DISK - Free [15GB] Total [31GB]
[pool-2-thread-4] INFO Monitor - JAVA MEMORY - Used [137MB] Max [2048MB]
[pool-2-thread-4] INFO Monitor - QUEUE SIZES - Convert [0] Fetch [0] Main [0]

In JAVA_OPTS (the default value; I am not setting it) I am seeing:

JAVA_OPTS= -Djava.library.path=/home/tomcat/wv-server/../libs:/usr/local/tcnative/lib:/usr/lib/x86_64-linux-gnu -Xmx4G -XX:MaxDirectMemorySize=4G   -Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager   -Djava.io.tmpdir="/home/tomcat/wv-server/temp"   -Djava.library.path="/home/tomcat//libs"   -Dcatalina.base="/home/tomcat/wv-server"   -Dcatalina.home="/home/tomcat/wv-server"   --add-opens=java.base/java.lang=ALL-UNNAMED   --add-opens=java.base/java.io=ALL-UNNAMED   --add-opens=java.base/java.util=ALL-UNNAMED   --add-opens=java.base/java.util.concurrent=ALL-UNNAMED   --add-opens=java.rmi/sun.rmi.transport=ALL-UNNAMED   -Dlog.base="/home/tomcat/wv-server/logs"

So total_memory matches the host's total memory, but the way MemoryManager.cpp determines total/free memory (I assume based on cgroups) seems unsuitable for AWS Fargate. Even without any document processing, I am constantly seeing the following logs:

bb-worker: (BlackBoxWorker.cpp) Completed task: ea8f8838-a1ae-452d-a900-13a1725c7c7a
bb-worker: (BlackBoxWorker.cpp) ea8f8838-a1ae-452d-a900-13a1725c7c7a: {"jtype":"pdf","src":"/home/tomcat/wv-server/static_data/Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf","id":"ea8f8838-a1ae-452d-a900-13a1725c7c7a","job":{"op":"image","args":{"p":0,"size":320,"dest":"/home/tomcat/wv-server/static_data/Image/Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf_dir/pageimg0_320.jpg","isAPI":false}}}
bb-memory: (MemoryManager.cpp) CGROUP MAX DETECTED 368
bb-memory: (MemoryManager.cpp) CGROUP FREE DETECTED 11
bb-memory: (MemoryManager.cpp) Found 368 MB total memory
bb-memory: (MemoryManager.cpp) Found 11 MB free memory
bb-memory: (MemoryManager.cpp) CONTEXT MAP: 1
bb-memory: (MemoryManager.cpp) STARTING ITERATION
bb-memory: (MemoryManager.cpp) looking at entries
bb-memory: (MemoryManager.cpp) deleting doc: /home/tomcat/wv-server/static_data/Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf
bb-memory: (MemoryManager.cpp) STARTING ITERATION
bb-memory: (MemoryManager.cpp) Context map empty.
bb-memory: (MemoryManager.cpp) Cleaned 1 entries from cache.
bb-worker: (ZMsg.cpp) {"jtype":"pdf","src":"/home/tomcat/wv-server/static_data/Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf","id":"fbb0f652-2c83-45cb-ab35-a77a7b4eba63","job":{"op":"image","args":{"p":0,"size":320,"dest":"/home/tomcat/wv-server/static_data/Image/Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf_dir/pageimg0_320.jpg","isAPI":false}}}
2025-12-09/10:13:15.323/UTC [pool-2-thread-1] INFO Monitor - STATUS - Free [14759MB] Total [15704MB] CPU [0.150905%]
2025-12-09/10:13:15.324/UTC [pool-2-thread-1] INFO Monitor - DISK - Free [15GB] Total [31GB]
2025-12-09/10:13:15.324/UTC [pool-2-thread-1] INFO Monitor - JAVA MEMORY - Used [137MB] Max [2048MB]
2025-12-09/10:13:15.324/UTC [pool-2-thread-1] INFO Monitor - QUEUE SIZES - Convert [0] Fetch [0] Main [0]
2025-12-09/10:13:15.324/UTC [pool-5-thread-5] INFO DocManagement - Copying http://0.0.0.0:8090/test/sample.pdf to /home/tomcat/wv-server/static_data/Fetched/download15365184642157255892.tmp
2025-12-09/10:13:15.324/UTC [pool-5-thread-5] INFO DocManagement - Moving temp file /home/tomcat/wv-server/static_data/Fetched/download15365184642157255892.tmp to /home/tomcat/wv-server/static_data/Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf[0]
2025-12-09/10:13:15.324/UTC [pool-5-thread-5] DEBUG DocManagement - fetch complete, no errors
2025-12-09/10:13:15.324/UTC [pool-5-thread-5] INFO DocReference - Rendering thumbnail for /home/tomcat/wv-server/static_data/Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf
2025-12-09/10:13:15.321/UTC [http-nio2-0.0.0.0-8090-exec-6] INFO DocReference - http://0.0.0.0:8090/test/sample.pdf: detected file type of .pdf
2025-12-09/10:13:15.321/UTC [http-nio2-0.0.0.0-8090-exec-6] INFO DocReference - Setting local path for http://0.0.0.0:8090/test/sample.pdf to Fetched/pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf...
2025-12-09/10:13:15.321/UTC [http-nio2-0.0.0.0-8090-exec-6] INFO DocReference - Fetching http://0.0.0.0:8090/test/sample.pdf to location pdftron_tester_239cnmia1odmwi9c9mcasocj902dnau2.pdf
2025-12-09/10:13:15.322/UTC [http-nio2-0.0.0.0-8090-exec-6] INFO ServerConfig - convQ - 0 - adding job
2025-12-09/10:13:15.322/UTC [pool-5-thread-5] INFO DocManagement - Kicking off potential fetch of http://0.0.0.0:8090/test/sample.pdf
2025-12-09/10:13:15.319/UTC [pool-4-thread-6] INFO Monitor - queue-status - 0.250000 0.133790 0.062482 0.011354 0.005726
2025-12-09/10:13:15.320/UTC [pool-2-thread-3] INFO Monitor - Running health check...
2025-12-09/10:13:15.319/UTC [pool-2-thread-4] INFO Monitor - Sending out new queue probe: Tue Dec 09 10:13:15 UTC 2025

I constantly see items being purged from the cache, I assume because of the low total/free memory being detected. Is this expected, or is something wrong? When running locally with plain Docker, those same log lines show much larger values, so I don't think this is expected. It looks like the way the internal worker determines available memory does not cope well with whatever Docker/cgroup configuration Fargate enforces under the hood.
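For context, this is a rough sketch of how a containerized process typically detects its memory limit, assuming (as the logs suggest) the detection is cgroup-based. The file paths are the standard kernel interfaces; the actual MemoryManager.cpp internals are my guess, and where the 368 MB figure comes from on Fargate is still unclear.

```shell
# Sketch: detect the container memory limit via cgroup v2, then v1,
# then fall back to /proc/meminfo (which reports the HOST total --
# likely why total_memory shows 15704 MB here).
get_mem_limit_mb() {
  for path in /sys/fs/cgroup/memory.max \
              /sys/fs/cgroup/memory/memory.limit_in_bytes; do
    if [ -r "$path" ]; then
      raw=$(cat "$path")
      # cgroup v2 reports the literal string "max" when unlimited
      if [ "$raw" = "max" ]; then
        break
      fi
      echo $(( raw / 1024 / 1024 ))
      return 0
    fi
  done
  # Neither cgroup file readable (or no limit set): host total in MB
  awk '/^MemTotal/ {print int($2 / 1024)}' /proc/meminfo
}
get_mem_limit_mb
```

On Fargate, where neither file is present (see below in this thread), this fallback would report the shared host's memory rather than the 8 GB task limit.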

Please provide a link to a minimal sample where the issue is reproducible:

You can spin up the AWS Fargate task below to observe the behaviour (just make sure you replace the masked settings):

{
    "cpu": 4096,
    "environment": [
        {
            "name": "TRN_DISABLE_VALIDATION",
            "value": "false"
        },
        {
            "name": "TRN_FORCE_URL_RECHECK",
            "value": "false"
        },
        {
            "name": "SHOULD_HIDE_PRODUCT_KEY",
            "value": "true"
        },
        {
            "name": "TRN_DISABLE_CLIENT_SIDE_RENDERING",
            "value": "false"
        },
        {
            "name": "TRN_ENABLE_SESSION_AUTH",
            "value": "false"
        },
        {
            "name": "TRN_FETCH_TIMEOUT_MS",
            "value": "20000"
        },
        {
            "name": "TRN_MAX_CACHED_MB",
            "value": "12902"
        },
        {
            "name": "TRN_DEBUG_MODE",
            "value": "true"
        },
        {
            "name": "TRN_HTML2PDF_TIMEOUT",
            "value": "300000"
        },
        {
            "name": "TRN_ALLOWED_ORIGINS",
            "value": "https://my_secret_domain.com"
        },
        {
            "name": "TRN_MAX_EXCEL_CELL_COUNT",
            "value": "200000"
        },
        {
            "name": "TRN_FETCH_REQUIRED_URL_ROOTS",
            "value": "my_secret_domain.com"
        },
        {
            "name": "TRN_BALANCER_COOKIE_NAME",
            "value": "HAPROXID"
        },
        {
            "name": "TRN_LOG_LEVEL",
            "value": "DEBUG"
        },
        {
            "name": "TRN_FORCE_LOSSLESS_IMAGES",
            "value": "true"
        },
        {
            "name": "TRN_MAX_CACHE_AGE_MINUTES",
            "value": "120"
        },
        {
            "name": "TRN_MAX_MEMORY_CACHE_AGE_MINUTES",
            "value": "5"
        },
        {
            "name": "TRN_ENABLE_PER_SESSION_CACHING",
            "value": "false"
        },
        {
            "name": "INCLUDE_DEMO",
            "value": "false"
        },
        {
            "name": "TRN_PDFNET_KEY",
            "value": "my_secret_key"
        }
    ],
    "essential": true,
    "healthCheck": {
        "command": [
            "CMD",
            "python3",
            "-c",
            "import sys,urllib.request; sys.exit(0 if 200<=urllib.request.urlopen('http://localhost:8090/blackbox/health', timeout=3).getcode()<400 else 1)"
        ],
        "interval": 30,
        "retries": 5,
        "startPeriod": 60,
        "timeout": 5
    },
    "image": "pdftron/webviewer-server:2.4",
    "interactive": false,
    "linuxParameters": {
        "initProcessEnabled": true
    },
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "/aws/ecs/web-viewer",
            "awslogs-region": "my-region",
            "awslogs-stream-prefix": "ecs"
        }
    },
    "memory": 8192,
    "mountPoints": [],
    "name": "web-viewer",
    "portMappings": [
        {
            "containerPort": 8090,
            "hostPort": 8090,
            "name": "web-viewer",
            "protocol": "tcp"
        }
    ],
    "privileged": false,
    "pseudoTerminal": false,
    "readonlyRootFilesystem": false,
    "restartPolicy": {
        "enabled": true
    },
    "startTimeout": 30,
    "stopTimeout": 10,
    "systemControls": [],
    "versionConsistency": "enabled",
    "volumesFrom": []
}

I have run out of ideas at this point and would love to have this resolved/answered, so that my server performs as expected and takes advantage of the 8 GB of memory assigned to it.

I also tried the ARM version of the 2.4 image, but I am seeing the same behaviour. (I would love to use ARM instead of amd64, but it looks like the x86 image is updated more often.)

Please let me know if I can provide more info to get this resolved. If it isn't possible, I will have to resort to EC2 instances managed by ECS, which would hopefully allow proper available-memory discovery.

PS

I think your CloudFormation template is a bit outdated and could use an update.

Hello, I’m AiRon, an automated tech support bot :robot:
While you wait for one of our customer support representatives, I can help you with your question.


It seems you’re experiencing issues with memory detection and usage when running WebViewer Server on AWS Fargate. This is likely due to how memory is reported in Docker containers and how Fargate enforces its settings. The internal memory manager might not be accurately detecting the available memory within the Fargate environment, leading to incorrect readings and cache purging.

Some things you can try:

  • Ensure that the Fargate task definition has the correct memory settings. The memory limit should match the expected environment settings.

  • Consider adjusting the JAVA_OPTS parameters to better align with the container’s memory limits, especially -Xmx and -XX:MaxDirectMemorySize to ensure they are within the constraints of the Fargate task.

  • Investigate if Fargate’s task memory settings are being correctly interpreted by the JVM and the application. This might involve checking the container’s environment for any discrepancies.

  • Look into whether there are any updates or patches for WebViewer Server that address memory detection issues in containerized environments.

  • Consider using EC2 instances if Fargate’s abstraction layer is causing too many issues with memory reporting and you require precise memory management.
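The JAVA_OPTS suggestion above can be sketched as follows. The 2G/2G split is purely illustrative, not an Apryse recommendation, and whether this image honors a JAVA_OPTS environment override is an assumption; note that the shipped defaults (-Xmx4G plus -XX:MaxDirectMemorySize=4G) already sum to the entire 8 GB task, leaving no headroom for the native PDF workers.

```shell
# Illustrative only: size the JVM well below the 8 GB task limit so the
# native (non-JVM) components have headroom. Values are an assumption.
export JAVA_OPTS="-Xmx2G -XX:MaxDirectMemorySize=2G"
echo "$JAVA_OPTS"
```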



Here are some resources that might help you:

  • AWS Deployment Guide - This guide provides instructions on deploying WebViewer Server on AWS, which might help in ensuring your setup aligns with recommended practices.

  • WebViewer Server Performance - Offers insights into expected performance and might help you evaluate your deployment.

  • AWS Fargate Documentation - Provides detailed information on how Fargate manages resources, which might be useful in understanding memory reporting issues.

  • ECS Task Definition Parameters - Useful for ensuring your task definitions are correctly set up with the appropriate memory and CPU resources.



Hopefully, this helps you solve your problem while you wait for a human to get back to you.

Hello Leszek,

Thank you for contacting Apryse Forums.

There seems to be a memory limit based on your logs.

bb-memory: (MemoryManager.cpp) CGROUP MAX DETECTED 368
bb-memory: (MemoryManager.cpp) CGROUP FREE DETECTED 11

The cache appears to be purging aggressively because the internal logic detects only 368 MB of available memory.

You can confirm the container’s actual memory limit by running:

cat /sys/fs/cgroup/memory.max

If the output shows roughly 368M, then the container really is restricted to that amount, and the ECS/Fargate configuration will need to be adjusted.

If the output shows roughly 8192M, then the container limit is correct, and the issue may lie in how WebViewer Server interprets that limit.
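For completeness, checking both the cgroup v2 and v1 locations covers images on either hierarchy (the paths are the standard kernel interfaces; Fargate may expose neither inside the task):

```shell
# Print both cgroup memory-limit files, noting which ones exist
check_cgroup_limits() {
  for f in /sys/fs/cgroup/memory.max \
           /sys/fs/cgroup/memory/memory.limit_in_bytes; do
    if [ -r "$f" ]; then
      printf '%s -> %s\n' "$f" "$(cat "$f")"
    else
      printf '%s -> not present\n' "$f"
    fi
  done
}
check_cgroup_limits
```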

Best Regards,
Darian

1 Like

Hey,

So I know that Fargate does something special with cgroups, since tasks run on shared infrastructure, and this probably prevents WebViewer Server from determining the limit properly.

The output of cat /sys/fs/cgroup/memory.max is "cat: /sys/fs/cgroup/memory.max: No such file or directory". After doing some more digging, only the entries below are available:

# ls /sys/fs/cgroup | grep mem
cpuset.mems.effective
memory.numa_stat
memory.pressure
memory.reclaim
memory.stat

I can’t recall whether Fargate uses cgroups v1 or v2, but I know they launch tasks on larger shared instances and limit task resources via cgroups.

I launched the exact same task definition from my initial post on Amazon ECS Managed Instances, and there it picked up the available memory correctly. It would be good to have WebViewer Server figure out the memory limit properly in both cases.
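One possible Fargate-friendly fallback (an assumption on my part, not something WebViewer Server currently does) would be the ECS task metadata endpoint v4, which reports the task's memory limit in MiB even when the cgroup files are hidden. The ECS_CONTAINER_METADATA_URI_V4 variable is injected by the ECS agent and is unset outside ECS:

```shell
# Sketch: query the ECS task metadata endpoint (v4) for the task's
# memory limit; fails cleanly when not running under ECS.
fargate_mem_limit_mb() {
  if [ -z "$ECS_CONTAINER_METADATA_URI_V4" ]; then
    echo "not running under ECS" >&2
    return 1
  fi
  curl -s "${ECS_CONTAINER_METADATA_URI_V4}/task" |
    python3 -c 'import json,sys; print(json.load(sys.stdin)["Limits"]["Memory"])'
}
fargate_mem_limit_mb || true
```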

1 Like

Thank you for your reply,

We will investigate this issue and will let you know when we have an update.

Best regards,
Kevin

1 Like

This is a classic example of AWS Fargate weirdness: bb-memory sees an artificially low limit because the cgroup information is hidden or virtualized.

It’s not your configuration, since the same task works on ECS managed instances. WebViewer Server needs either an explicit app-level memory override or Fargate-specific memory detection.

1 Like